Canonical Voices

Prakash

OPEN SOURCE is key for humanity to preserve its history in the digital age, Vatican Library CIO Luciano Ammenti has argued.

“The Vatican Library is a conservation library. We try to preserve our history. We tried to expand the number of reading rooms available for people that want to use our library,” he said.

“But we realised that reading rooms will never be enough. We have 82,000 manuscripts in total, and at any one time only 20 percent of them can be read in the library.

Read More: http://www.theinquirer.net/inquirer/news/2407221/open-source-is-only-reliable-way-to-preserve-human-history-argues-vatican

Read more
Shuduo

In case you want to play snappy but don’t have a Raspberry Pi 2 or other hardware…

1. sudo apt-get install virtualbox
2. Download the Snappy image: http://cdimage.ubuntu.com/ubuntu-snappy/15.04/20150423/ubuntu-15.04-snappy-amd64-generic.img.xz
3. unxz ubuntu-15.04-snappy-amd64-generic.img.xz
4. VBoxManage convertdd ubuntu-15.04-snappy-amd64-generic.img snappy.vdi --format VDI
5. Launch the VirtualBox GUI and create a new VM: OS type Linux, version Ubuntu (64-bit), 512 MB of memory; for the hard disk, choose "Use an existing virtual hard disk file" and select the snappy.vdi we just converted from the image.
6. In Settings -> Network, change the network adapter from NAT to Bridged Adapter.
7. Start the VM. You can reach the Snappy app store in a browser at "webdm.local:4200", or log in from the console or over ssh (username/password: ubuntu/ubuntu) and try fun Snappy things like update/rollback.

Read more

Just Say It!

While I love typing on the small on-screen keyboard on my phone, it is much easier to just talk. When we did the HUD we added speech recognition there, processing the audio on the device, which gave the great experience of controlling your phone with your voice. That worked well with the limited command set exported by the application, but doing generic voice recognition today requires more processing power than a phone can reasonably provide. Which made me pretty excited to find out about HP's IDOL OnDemand service.

I made a small application for Ubuntu Phone that records the audio you speak at it and sends it up to the HP IDOL OnDemand service. The HP service then does the speech recognition on it and returns the text back to us. Once I have the text (with help from Ken VanDine) I set it up to use Content Hub to export the text to any other application that can receive it. This way you can use speech recognition to write your Telegram notes, without Telegram having to know anything about speech at all.
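Conceptually, the round trip looks something like the sketch below. To be clear, the endpoint path and the "apikey"/"file" parameter names here are my assumptions for illustration, not HP's documented API, and the recording and Content Hub steps are only described in comments:

```shell
# Hedged sketch of the "send audio, get text back" step described above.
# The endpoint URL and the "apikey"/"file" parameter names are ASSUMPTIONS;
# consult the IDOL OnDemand documentation for the real API.
APIKEY="MY-KEY"
AUDIO="/tmp/note.wav"
URL="https://api.idolondemand.com/1/api/sync/recognizespeech/v1?apikey=${APIKEY}"
# The app records a clip to $AUDIO, then uploads it, conceptually:
#   curl -F "file=@${AUDIO}" "$URL"
# The JSON response carries the recognized text, which the app then hands to
# Content Hub so any receiving application (e.g. Telegram) can use it.
echo "$URL"
```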

The application is called Just Say It! and is in the Ubuntu App Store right now. It isn't beautiful, but definitely shows what can be done with this type of technology today. I hope to make it prettier and add additional features in the future. If you'd like to see how I did it you can look at the source.

As an aside: I can't get any of the non-English languages to work. This could be because I'm not a native speaker of those languages. If people could try them I'd love to know if they're useful.


Read more
Colin Ian King

Static code analysis on kernel source

Since 2014 I have been running static code analysis with tools such as cppcheck and smatch against the Linux kernel source on a regular basis to catch bugs that creep into the kernel. After each cppcheck run I diff the logs to get a list of deltas in the error and warning messages, and I periodically review these to filter out false positives, ending up with a list of bugs that need some attention.
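The diff-the-logs step can be sketched as follows. This is my own reconstruction of the workflow described above, not the author's actual script, and the sample log lines are fabricated in cppcheck's general format:

```shell
# Reconstruction of the log-diffing workflow (not the author's actual script).
# A real run would first generate the logs, e.g.:
#   cppcheck --force --enable=warning,portability <kernel-src>/ 2> new.log
new_findings() {
    # lines present in the new log but not in the old one: the deltas to triage
    diff "$1" "$2" | sed -n 's/^> //p'
}

# Two fabricated logs standing in for consecutive cppcheck runs:
printf 'drivers/a.c:10: (error) Memory leak: p\n' > old.log
printf 'drivers/a.c:10: (error) Memory leak: p\ndrivers/b.c:22: (error) Null pointer dereference: q\n' > new.log

new_findings old.log new.log   # prints only the new drivers/b.c finding
```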

Bugs such as allocations returning NULL pointers without checks, memory leaks, duplicate memory frees and uninitialized variables are easy to find with static analyzers and generally just require one- or two-line fixes.

So what are the overall trends like?

Warnings and error messages from cppcheck have been dropping over time, while "portability warnings" have been steadily increasing. "Portability warnings" mainly come from arithmetic on void * pointers (which GCC treats as byte-sized, but which is not legal C). Note that there is some variation in the results, as I use the latest version of cppcheck; occasionally it produces a lot of false positives, which then get fixed in later versions.

Compared to the growth in kernel size, the overall downward trend in warning and error messages from cppcheck isn't bad, considering the kernel has grown by nearly 11% over the time I have been running the static analysis.

Kernel source growth over time
Since each warning or error reported has to be carefully scrutinized to determine whether it is a false positive (which takes a lot of effort and time), I have not yet been able to determine the exact false positive rate for these stats. Compared to the actual lines of code, cppcheck is finding roughly one error per 15K lines of source.

It would be interesting to run this analysis on commercial static analyzers such as Coverity and see how the stats compare. As it stands, cppcheck is doing its bit in detecting errors and helping engineers improve code quality.

Read more
bmichaelsen

I would walk 500 miles and I would walk 500 more
The Proclaimers, 500 Miles

So I recently noted that GitHub reports I have made 1337 commits to LibreOffice since I joined Canonical in February 2011. Looking at those stats, it seems I have also deleted a net 155,634 lines from the codebase over that time.

LibreOffice commits

Even though I can't find the mail, I seem to remember that Michael Stahl, when joining the LibreOffice project, proclaimed his goal to be to contribute 'a net negative number of lines of code.'1) Now, I have not looked into the details of the above stats; they might very well turn out to be caused by some bulk change. Which would be lame, unless it's the killing of the old build system, for which I think I can claim some credit. But in general I really love the idea of 'contributing a net negative number of lines of code'.

So, at the last LibreOffice Hackfest in Cambridge 2), I pushed a set of commits refactoring the UNO bindings of Writer tables. It all started so innocently. I was actually aiming to do something completely different: namely, give the UNO cursors in Writer (SwUnoCrsr) somewhat saner resource management and drag them screaming and kicking out of the 1980s. However, once in unotbl.cxx, I found more of "a determined Real Programmer can write FORTRAN programs in any language" and copypasta there than I could bear. I thought: "This UNO stuff has decent test coverage, you could refactor it a bit quickly."

Of course I was wrong on both counts: on the one hand, when I started, the coverage was 70.1% LOC on that file, which is not really as high as I expected. On the other hand, it was not done "a bit quickly"; instead, I went on to refactor away:
dc -e "`git log --author Michaelsen -p dc8697e554417d31501a0d90d731403ede223370^..HEAD sw/source/core/unocore/unotbl.cxx|grep ^+|wc -l` `git log --author Michaelsen -p dc8697e554417d31501a0d90d731403ede223370^..HEAD sw/source/core/unocore/unotbl.cxx|grep ^-|wc -l` - p"
-1015

… a thousand lines. On discovering the lacking test coverage, I quickly added some more tests, bringing coverage to 77.52% LOC at least for now.3) And yes, I also silently fixed the one regression I discovered I had thereby introduced, which nobody seemed to have noticed so far. One thing I noticed in this little refactoring spree is that while C++11's features might look tame compared to more modern programming languages on metrics like avoiding boilerplate, it still outclasses what we had before. Beyond the simplifying refactoring, features like lambdas are really nice for non-interactive (test-driven) debugging, including quickly asserting on the state of variables some 10 stack frames up or down without going into major contortions in test code.
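For readability, the dc one-liner above just subtracts the count of removed diff lines from the count of added ones. The same arithmetic can be restated as a small shell function (my restatement, not the author's actual tooling):

```shell
# Net lines an author contributed: diff lines starting with '+' minus lines
# starting with '-'. The +++/--- file headers pair up per commit and thus
# cancel in the subtraction, just as in the dc one-liners above.
net_lines() {
    author="$1"; shift
    added=$(git log --author "$author" -p "$@" | grep -c '^+')
    removed=$(git log --author "$author" -p "$@" | grep -c '^-')
    echo $((added - removed))
}
```

Running `net_lines Michaelsen dc8697e554417d31501a0d90d731403ede223370^..HEAD -- sw/source/core/unocore/unotbl.cxx` in a LibreOffice checkout performs the same computation as the dc expression, yielding the -1015 above.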

1) By the way, a quick:
dc -e "`git log --author Stahl -p |grep ^+|wc -l` `git log --author Stahl -p |grep ^-|wc -l` - p"
-108686

confirms Michael is more than living up to his personal goals.

2) Speaking of the Hackfest: the other thing I did there was helping/observing Sam Tuke getting set up for his first code contribution. While we have made great progress in making this easier than it used to be, we could still be a lot better there. Sadly, though, I didn't see a shortcut or simplification we could implement right away.

3) And along the way this brought coverage of unochart.cxx from an abysmal 4.4% LOC to at least 35.31% LOC as collateral damage.

Addendum: note that coverage of the Writer tables core also increased quite a bit, from 54.6% LOC to 65% LOC.


Read more
Louis

Introduction

Once in a while, I get to tackle issues that have little or no documentation other than the product's official documentation and its source code. You may know from experience that product documentation is not always sufficient to get a complete configuration working. This article intends to flesh out a solution for customizing disk configurations using Curtin.

This article takes for granted that you are familiar with MAAS install mechanisms and that you already know how to customize installations and deploy workloads using Juju.

While my colleagues in the MAAS development team have done a tremendous job at keeping the MAAS documentation accurate (see the MAAS documentation), it only covers the basics when it comes to MAAS's preseed customization, especially Curtin's customization.

Curtin is MAAS's fastpath installer, which is meant to replace the Debian installer (familiarly known as d-i). It performs a complete machine installation much faster than the standard Debian method. But while d-i is well known and it is easy to find examples of its use on the web, Curtin does not have the same notoriety and, hence, not as much documentation.

Theory of operation

When the fastpath installer is used to install a MAAS unit (which is now the default), MAAS sends the content of the files prefixed with curtin_ to the unit being installed. The curtin_userdata file contains cloud-config type commands that will be applied by cloud-init when the unit is installed. If we want to apply a specific partitioning scheme to all of our units, we can modify this file and every unit will have those commands applied to it at install time.

But what if only one or a few servers have a specific disk layout that requires custom partitioning? In the following example, I will suppose that we have one server, named curtintest, which has a one terabyte (1 TB) disk, and that we want to partition this disk with the following partition table:

  • Partition #1 has the /boot file system and is bootable
  • Partition #2 has the root (/) file system
  • Partition #3 has a 31 GB file system
  • Partition #4 has 32 GB of swap space
  • Partition #5 has the remaining disk space

Since only one server has such a disk, the partitioning should be specific to that curtintest server only.

Setting up Curtin development environment

To get a working MAAS partitioning setup, it is preferable to use Curtin's development environment to test the Curtin commands; using a MAAS deployment to test each command quickly becomes tedious and time consuming. The setup is described in Curtin's README.txt, but here are some more details.

Aside from putting all the files under one single directory, the steps described here are the same as those in the README.txt file:

$ mkdir -p download
$ DLDIR=$(pwd)/download
$ rel="trusty"
$ arch=amd64
$ burl="http://cloud-images.ubuntu.com/$rel/current/"
$ for f in $rel-server-cloudimg-${arch}-root.tar.gz $rel-server-cloudimg-${arch}-disk1.img; do wget "$burl/$f" -O $DLDIR/$f; done
$ ( cd $DLDIR && qemu-img convert -O qcow2 $rel-server-cloudimg-${arch}-disk1.img $rel-server-cloudimg-${arch}-disk1.qcow2 )
$ BOOTIMG="$DLDIR/$rel-server-cloudimg-${arch}-disk1.qcow2"
$ ROOTTGZ="$DLDIR/$rel-server-cloudimg-${arch}-root.tar.gz"
$ mkdir src
$ bzr init-repo src/curtin
$ (cd src/curtin && bzr  branch lp:curtin trunk.dist )
$ (cd src/curtin && bzr  branch trunk.dist trunk)
$ cd src/curtin/trunk

You now have an environment you can use with Curtin to automate installations. You can test it with the following command, which will start a VM and run "curtin install" in it. Once you get the prompt, log in with:

username: ubuntu
password: passw0rd

$ sudo ./tools/launch $BOOTIMG --publish $ROOTTGZ -- curtin install "PUBURL/${ROOTTGZ##*/}"

Using Curtin in the development environment

To test Curtin in its environment, simply remove -- curtin install "PUBURL/${ROOTTGZ##*/}" from the end of the statement. Once logged in, you will find the Curtin executable in /curtin/bin:

ubuntu@ubuntu:~$ sudo -s
root@ubuntu:~# /curtin/bin/curtin --help
usage: main.py [-h] [--showtrace] [--verbose] [--log-file LOG_FILE]
               {block-meta,curthooks,extract,hook,in-target,install,net-meta,pack,swap}
               ...

positional arguments:
  {block-meta,curthooks,extract,hook,in-target,install,net-meta,pack,swap}

optional arguments:
  -h, --help            show this help message and exit
  --showtrace
  --verbose, -v
  --log-file LOG_FILE

Each of Curtin's commands has its own help:

ubuntu@ubuntu:~$ sudo -s
root@ubuntu:~# /curtin/bin/curtin install --help
usage: main.py install [-h] [-c FILE] [--set key=val] [source [source ...]]

positional arguments:
  source                what to install

optional arguments:
  -h, --help            show this help message and exit
  -c FILE, --config FILE
                        read configuration from cfg
  --set key=val         define a config variable


Creating MAAS's Curtin preseed commands

Now that we have our Curtin development environment available, we can use it to come up with the set of commands that MAAS will feed to Curtin when a unit is created.

MAAS uses preseed files located in /etc/maas/preseeds on the MAAS server. The curtin_userdata preseed file is the one that we will use as a reference to build our set of partitioning commands. During the testing phase, we will use the -c option of curtin install along with a configuration file that mimics the behavior of curtin_userdata.

We also need to add a fake 1 TB disk to Curtin's development environment to use as a partitioning target. So, in the development environment, issue the following command:

$ qemu-img create -f qcow2 boot.disk 1000G
Formatting 'boot.disk', fmt=qcow2 size=1073741824000 encryption=off cluster_size=65536 lazy_refcounts=off

$ sudo ./tools/launch $BOOTIMG --publish $ROOTTGZ

username: ubuntu
password: passw0rd

ubuntu@ubuntu:~$ sudo -s
root@ubuntu:~# cat /proc/partitions

major minor  #blocks  name

 253        0    2306048 vda
 253        1    2305024 vda1
 253       16        426 vdb
 253       32 1048576000 vdc
  11        0    1048575 sr0

We can see that the 1000G /dev/vdc is indeed present. Let's now start to craft the conffile that will receive our partitioning commands. To test the syntax, we will use two simple commands:

root@ubuntu:~# cat << EOF > conffile
partitioning_commands:
  builtin: []
  01_partition_make_label: ["/sbin/parted", "/dev/vdc", "-s", "'","mklabel","msdos","'"]
  02_partition_make_part: ["/sbin/parted", "/dev/vdc", "-s", "'","mkpart","primary","1049K","538M","'"]
sources:
  01_primary: http://192.168.0.13:9923//trusty-server-cloudimg-amd64-root.tar.gz
EOF

The sources: statement is only there to avoid having to repeat the SOURCE portion of the curtin command; it is not to be used in the final MAAS configuration. The URL is the address of the server from which you are running the Curtin development environment.

WARNING

The builtin: [] statement is VERY important. It overrides Curtin's native builtin behavior, which is to partition the disk using "block-meta simple". If it is removed, Curtin will overwrite the partitioning with its default configuration. This comes straight from Scott Moser, the main developer behind Curtin.

Now let's run the Curtin command:

root@ubuntu:~# /curtin/bin/curtin install -c conffile

Curtin will run its installation sequence and you will see a display that should be familiar if you have installed units with MAAS previously. The command will most probably exit on an error, complaining that install-grub received an argument that was not a block device. We do not need to worry about that at the moment.

Once completed, have a look at the partitioning of the /dev/vdc device:

root@ubuntu:~# parted /dev/vdc print
Model: Virtio Block Device (virtblk)
Disk /dev/vdc: 1074GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system  Flags
1      1049kB  538MB   537MB   primary   ext4

The partitioning commands were successful and we have the /dev/vdc disk properly configured. Now that we know the mechanism works, let's try a complete configuration file. I have found it preferable to start over with a fresh 1 TB disk:

root@ubuntu:~# poweroff

$ rm -f boot.disk

$ qemu-img create -f qcow2 boot.disk 1000G
Formatting ‘boot.disk’, fmt=qcow2 size=1073741824000 encryption=off cluster_size=65536 lazy_refcounts=off

$ sudo ./tools/launch $BOOTIMG --publish $ROOTTGZ

ubuntu@ubuntu:~$ sudo -s

root@ubuntu:~# cat << EOF > conffile 
partitioning_commands:
  builtin: [] 
  01_partition_announce: ["echo", "'### Partitioning disk ###'"]
  01_partition_make_label: ["/sbin/parted", "/dev/vda", "-s", "'","mklabel","msdos","'"]
  02_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","primary","1049k","538M","'"]
  02_partition_set_flag: ["/sbin/parted", "/dev/vda", "-s", "'","set","1","boot","on","'"]
  04_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","primary","538M","4538M","'"]
  05_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","extended","4538M","1000G","'"]
  06_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","25.5G","57G","'"]
  07_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","57G","89G","'"]
  08_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","89G","1000G","'"]
  09_partition_announce: ["echo", "'### Creating filesystems ###'"]
  10_partition_make_fs: ["/sbin/mkfs", "-t", "ext4", "/dev/vda1"]
  11_partition_label_fs: ["/sbin/e2label", "/dev/vda1", "cloudimg-boot"]
  12_partition_make_fs: ["/sbin/mkfs", "-t", "ext4", "/dev/vda2"]
  13_partition_label_fs: ["/sbin/e2label", "/dev/vda2", "cloudimg-rootfs"]
  14_partition_mount_fs: ["sh", "-c", "mount /dev/vda2 $TARGET_MOUNT_POINT"]
  15_partition_mkdir: ["sh", "-c", "mkdir $TARGET_MOUNT_POINT/boot"]
  16_partition_mount_fs: ["sh", "-c", "mount /dev/vda1 $TARGET_MOUNT_POINT/boot"]
  17_partition_announce: ["echo", "'### Filling /etc/fstab ###'"]
  18_partition_make_fstab: ["sh", "-c", "echo 'LABEL=cloudimg-rootfs / ext4 defaults 0 0' >> $OUTPUT_FSTAB"]
  19_partition_make_fstab: ["sh", "-c", "echo 'LABEL=cloudimg-boot /boot ext4 defaults 0 0' >> $OUTPUT_FSTAB"]
  20_partition_make_swap: ["sh", "-c", "mkswap /dev/vda6"]
  21_partition_make_fstab: ["sh", "-c", "echo '/dev/vda6 none swap sw 0 0' >> $OUTPUT_FSTAB"]
sources:
  01_primary: http://192.168.0.13:9923//trusty-server-cloudimg-amd64-root.tar.gz
EOF

You will note that I have added a few statements like ["echo", "'### Partitioning disk ###'"] that display progress messages during the execution. Those are not necessary.
Now let's try a second test with the complete configuration file:

root@ubuntu:~# /curtin/bin/curtin install -c conffile

root@ubuntu:~# parted /dev/vdc print
Model: Virtio Block Device (virtblk)
Disk /dev/vdc: 1074GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system  Flags
1      1049kB  538MB   537MB   primary   ext4         boot
2      538MB   4538MB  4000MB  primary   ext4
3      4538MB  1000GB  995GB   extended               lba
5      25.5GB  57.0GB  31.5GB  logical
6      57.0GB  89.0GB  32.0GB  logical
7      89.0GB  1000GB  911GB   logical

We now have a correctly partitioned disk in our development environment. All we need to do now is to carry that over to Maas to see if it works as expected.

Customization of Curtin execution in Maas

The section "How preseeds work in MAAS" gives a good outline of how to choose the name of a preseed file to restrict its usage to a specific subgroup of nodes. In our case, we want our partitioning to apply to only one node: curtintest. So, following the description in the section "User provided preseeds", we need to use the following template:

{prefix}_{node_arch}_{node_subarch}_{release}_{node_name}

The filename that we need to choose must end with our hostname, curtintest. The other elements are:

  • prefix: curtin_userdata
  • node_arch: amd64
  • node_subarch: generic
  • release: trusty
  • node_name: curtintest

According to that, our filename must be curtin_userdata_amd64_generic_trusty_curtintest.
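As a quick sanity check, expanding the template from above with our node's values in the shell gives the same name:

```shell
# Expanding {prefix}_{node_arch}_{node_subarch}_{release}_{node_name}
# with the values listed above for the curtintest node.
prefix=curtin_userdata
node_arch=amd64
node_subarch=generic
release=trusty
node_name=curtintest
preseed="${prefix}_${node_arch}_${node_subarch}_${release}_${node_name}"
echo "$preseed"
```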

On the MAAS server, we do the following:

root@maas17:~# cd /etc/maas/preseeds

root@maas17:~# cp curtin_userdata curtin_userdata_amd64_generic_trusty_curtintest

We now edit this newly created file and add our previously crafted partitioning commands just after the following block:

{{if third_party_drivers and driver}}
  early_commands:
  {{py: key_string = ''.join(['\\x%x' % x for x in map(ord, driver['key_binary'])])}}
  driver_00_get_key: /bin/echo -en '{{key_string}}' > /tmp/maas-{{driver['package']}}.gpg
  driver_01_add_key: ["apt-key", "add", "/tmp/maas-{{driver['package']}}.gpg"]
  driver_02_add: ["add-apt-repository", "-y", "deb {{driver['repository']}} {{node.get_distro_series()}} main"]
  driver_03_update_install: ["sh", "-c", "apt-get update --quiet && apt-get --assume-yes install {{driver['package']}}"]
  driver_04_load: ["sh", "-c", "depmod && modprobe {{driver['module']}}"]
  {{endif}}

The complete section should look like this:

{{if third_party_drivers and driver}}
  early_commands:
  {{py: key_string = ''.join(['\\x%x' % x for x in map(ord, driver['key_binary'])])}}
   driver_00_get_key: /bin/echo -en '{{key_string}}' > /tmp/maas-{{driver['package']}}.gpg
   driver_01_add_key: ["apt-key", "add", "/tmp/maas-{{driver['package']}}.gpg"]
   driver_02_add: ["add-apt-repository", "-y", "deb {{driver['repository']}} {{node.get_distro_series()}} main"]
   driver_03_update_install: ["sh", "-c", "apt-get update --quiet && apt-get --assume-yes install {{driver['package']}}"]
   driver_04_load: ["sh", "-c", "depmod && modprobe {{driver['module']}}"]
  {{endif}}
  partitioning_commands:
   builtin: []
   01_partition_announce: ["echo", "'### Partitioning disk ###'"]
   01_partition_make_label: ["/sbin/parted", "/dev/vda", "-s", "'","mklabel","msdos","'"]
   02_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","primary","1049k","538M","'"]
   02_partition_set_flag: ["/sbin/parted", "/dev/vda", "-s", "'","set","1","boot","on","'"]
   04_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","primary","538M","4538M","'"]
   05_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","extended","4538M","1000G","'"]
   06_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","25.5G","57G","'"]
   07_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","57G","89G","'"]
   08_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","89G","1000G","'"]
   09_partition_announce: ["echo", "'### Creating filesystems ###'"]
   10_partition_make_fs: ["/sbin/mkfs", "-t", "ext4", "/dev/vda1"]
   11_partition_label_fs: ["/sbin/e2label", "/dev/vda1", "cloudimg-boot"]
   12_partition_make_fs: ["/sbin/mkfs", "-t", "ext4", "/dev/vda2"]
   13_partition_label_fs: ["/sbin/e2label", "/dev/vda2", "cloudimg-rootfs"]
   14_partition_mount_fs: ["sh", "-c", "mount /dev/vda2 $TARGET_MOUNT_POINT"]
   15_partition_mkdir: ["sh", "-c", "mkdir $TARGET_MOUNT_POINT/boot"]
   16_partition_mount_fs: ["sh", "-c", "mount /dev/vda1 $TARGET_MOUNT_POINT/boot"]
   17_partition_announce: ["echo", "'### Filling /etc/fstab ###'"]
   18_partition_make_fstab: ["sh", "-c", "echo 'LABEL=cloudimg-rootfs / ext4 defaults 0 0' >> $OUTPUT_FSTAB"]
   19_partition_make_fstab: ["sh", "-c", "echo 'LABEL=cloudimg-boot /boot ext4 defaults 0 0' >> $OUTPUT_FSTAB"]
   20_partition_make_swap: ["sh", "-c", "mkswap /dev/vda6"]
   21_partition_make_fstab: ["sh", "-c", "echo '/dev/vda6 none swap sw 0 0' >> $OUTPUT_FSTAB"]

Now that MAAS is properly configured for curtintest, complete the test by deploying a charm in a Juju environment where curtintest is properly commissioned. In this example, curtintest is the only available node, so MAAS will systematically pick it up:

caribou@avogadro:~$ juju status
environment: maas17
machines:
  "0":
    agent-state: started
    agent-version: 1.24.0
    dns-name: state-server.maas
    instance-id: /MAAS/api/1.0/nodes/node-2555c398-1bf9-11e5-a7c4-525400214658/
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=1024M
    state-server-member-status: has-vote
services: {}
networks:
  maas-eth0:
    provider-id: maas-eth0
    cidr: 192.168.100.0/24

caribou@avogadro:~$ juju deploy mysql
Added charm "cs:trusty/mysql-25" to the environment.

Once the mysql charm has been deployed, connect to the unit to confirm that the partitioning was successful:

caribou@avogadro:~$ juju ssh mysql/0
ubuntu@curtintest:~$ sudo -s
root@curtintest:~# parted /dev/vda print
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 1074GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
 
Number  Start   End     Size    Type      File system  Flags
1      1049kB  538MB   537MB   primary   ext4         boot
2      538MB   4538MB  4000MB  primary   ext4
3      4538MB  1000GB  995GB   extended               lba
5      25.5GB  57.0GB  31.5GB  logical
6      57.0GB  89.0GB  32.0GB  logical
7      89.0GB  1000GB  911GB   logical
ubuntu@curtintest:~$ swapon -s
Filename    Type       Size      Used  Priority
/dev/vda6   partition  31249404  0     -1

Conclusion

Customizing disks and partitions using Curtin is possible, but it is currently not sufficiently documented. I hope this write-up will be helpful. Sustained development on Curtin is under way to improve these functionalities, so things will definitely get better.

Read more
April Wang

On June 11, 2015, Pawel Stolowski published a post titled "Cleaning up scopes settings"; here is a brief translation to share with everyone.

Unity 7 (which provides the Ubuntu shell and the default UX on today's desktop) and Unity 8 (which powers the phone and will soon power the converged desktop) differ considerably in the visibility of their data sources. Future releases of Unity 8 will drop the legacy privacy flag in favour of a clearer approach that lets users decide for themselves where their data is sent.

Scope search and privacy in Unity 7

By default, a regular Dash search in Unity 7 first contacts Canonical's Smart Scopes server, which recommends the scopes that best match the search terms. The query is then passed on to those scopes to retrieve the actual results, which are then displayed.

However, this approach means users do not necessarily know in advance which scopes are queried for their search, and the search terms are sent to the Smart Scopes server. Although the data sent to the server is anonymized, we learned that some users were still concerned about data privacy. It is for this reason that we introduced the privacy flag: a scope setting that blocks access to the Smart Scopes server.

Scope search in Unity 8

The scope architecture in Unity 8 is completely different: the Smart Scopes server is not involved in the search process at all.

Instead, each search query is sent only to the scope currently in use (i.e. the scope that is currently visible), so users always know where their search data is sent.

In the case of an aggregator scope, which aggregates child scopes with different data sources, its settings page lists all the aggregated child scopes, and users can choose to disable the data source of each individual child scope.

Unity 8 drops the legacy privacy flag

Since in Unity 8 you can clearly see, and easily disable, the data sources of a scope and its child scopes, the privacy flag has become a redundant setting. For that reason, we have decided to remove this legacy setting in an upcoming update of the phone system / Unity 8.

If you have been using this flag in Unity 8, you can achieve the same effect by unfavouriting a scope (unstarring it on the phone) or by disabling the corresponding child-scope data sources in an aggregator scope. You can also uninstall standalone scopes.

Protecting privacy in Unity 8

In the shell you will see two kinds of scopes: regular standalone/branded scopes and aggregator scopes. A branded/standalone scope can access its own data source, but not other data sources outside its brand. A scope called "My Music", for example, will only query the music files stored locally on your phone, while a "BBC News" scope will only query news content from bbc.co.uk. If you do not want to use the "BBC News" scope, simply do not invoke it (via the manage dash) and do not favourite it (similar to not invoking a web app).

Aggregator scopes are different from standalone/branded scopes: they aggregate various child-scope data sources (through keyword tagging), without distinguishing whether those sources are local or remote. If you are not comfortable with the content of a particular child scope, you can disable it on the aggregator scope's settings page.

About the author

Pawel Stolowski works in the Unity API team on the scopes middleware and APIs that implement search in the Unity shell. He also works on actual scopes (such as Music, Video and Apps) and on other projects related to Ubuntu Linux. You can follow Pawel on Twitter: @pstolowski

Read more
Ben Howard

With Ubuntu 12.04.2, the kernel team introduced the idea of the "hardware enablement kernel" (HWE), originally intended to support new hardware for bare-metal server and desktop. In fact, the documentation indicates that HWE images are not suitable for virtual or cloud computing environments. The thought was that cloud and virtual environments provide stable hardware and that the newer kernel features would not be needed.

Time has proven this assumption painfully wrong. Take, for example, the need for drivers in virtual environments. Several of the cloud providers that we have engaged with have requested the use of the HWE kernel by default. On GCE, the HWE kernels provide support for NVMe disks and multiqueue NICs. Azure has benefited from an updated Hyper-V driver stack, resulting in better performance. When we engaged with VMware Air, the 12.04 kernel lacked the necessary drivers.

Perhaps more germane to our cloud users is that containers rely on kernel features: 12.04 users need the HWE kernel in order to make use of Docker, and the new Ubuntu Fan project will be enabled for 14.04 via the HWE-V kernel in Ubuntu 14.04.3. If you use Ubuntu as your container host, you will likely want to consider an HWE kernel.

And with that there has been a steady chorus of people requesting that we provide HWE image builds for AWS. The problem has never been the base builds; building the base bits is fairly easy. The hard part is that adding HWE builds takes each daily and release build from 96 images for AWS to 288 (needless to say, quite a problem). Over the last few weeks, largely in my spare time, I've been working out what it would take to deliver HWE images for AWS.

I am happy to announce that as of today, we are now building HWE-U (3.16) and HWE-V (3.19) Ubuntu 14.04 images for AWS. To be clear, we are not making any behavioral changes to the standard Ubuntu 14.04 images. Unless users opt into using an HWE image on AWS they will continue to get the 3.13 kernel. However, for those who want newer kernels, they now have the choice.

For the time being, only amd64 and i386 builds are being published. Over the next few weeks, we expect the HWE images to reach full feature parity, including release promotions and indexing. And I fully expect that the HWE-V version of 14.04 will include our recent Fan project once the SRUs complete.

Check them out at http://cloud-images.ubuntu.com/trusty/current/hwe-u and http://cloud-images.ubuntu.com/trusty/current/hwe-v.

As always, feedback is welcome.

Read more
Sergio Schvezov

The github or launchpad dilemma

We wanted to start a migration path from Bazaar to git, given how ubiquitous it is and the fact that most of our team prefer it. A few months ago the decision was easy: since Launchpad did not support git, we would just switch to GitHub given its popularity. That's not true anymore…

Today Launchpad supports git, so the comparison becomes finer grained and we have to break it down a bit more.

So here are the things I like about GitHub:

  • Code is presented first.
  • Documentation is easy to write and very nice to read.
  • Non technical people can make edits and propose pull requests all from the web.
  • It's a bit more social (e.g. you have mentions).
  • Web hooks, and many things embracing them.
  • A big user base; almost everyone is already on GitHub.
  • The code review interface.
  • The UI layout in general.
  • The API.

The things I like about Launchpad:

  • Direct link between the source and ubuntu.
  • A very nice bug tracking system.
  • Given that we work on Ubuntu, a very big existing user base: every other team working on Ubuntu uses Launchpad already.
  • Very product oriented.
  • A nice language translation system.

Most of the things I like about one are probably things that I don't like about, or that are missing in, the other.

snappy

Given that we work on lp:snappy most of the time now, I want to look at what the workflow would be on Launchpad and on GitHub.

The launchpad workflow

First of all, if the codebase were moved to launchpad’s git support we’d be missing proper support to query merge proposal status and linking bug reports to commits.

The flow with git would be as follows:

  1. cd $GOPATH/src/launchpad.net/snappy.
  2. git checkout -b <feature>
  3. edit/create/fix
  4. git commit -s -m '...'
  5. git push git+ssh://USER@git.launchpad.net/~USER/snappy
  6. Create merge proposal.
  7. Manually merge.
  8. Manually invoke test run.
  9. git push git+ssh://USER@git.launchpad.net/snappy

It is an improvement over bzr (especially since branches are colocated and go likes that), but we miss:

  • unit test runs.
  • unit test coverage tracking.
  • automatic merging, launchpad support required and a new tarmac implementation.
  • translation support, only supported for bazaar.
  • package recipe to push latest trunk to a PPA, also requires launchpad support.

That said, things are coming along and most of this would be solved by either launchpad API enhancements to understand git or webhooks.

The github workflow

Given github’s popularity, almost everything is already done for you, and since they have webhooks, the chain of events that follows an action on github makes for a very neat experience.

This is what would happen:

  1. cd $GOPATH/src/launchpad.net/snappy.
  2. git checkout -b <feature>
  3. edit/create/fix
  4. git commit -s -m '...' (if the issue number is part of the commit message, github links it automatically).
  5. git push git@github.com:USER/snappy.git
  6. Create pull request.
  7. travis is triggered by the event and runs everything we tell it to:
    1. Runs a test build.
    2. Runs unit tests.
    3. Runs sanity checks (go vet, lint, …)
    4. Pushes the unit test coverage to coveralls.io.
    5. Builds the deb.
  8. The reviewer uses the data, updated in real time, alongside their own judgement to determine if the PR should be merged. This data includes travis unit test results and the coverage increase or decrease, among others, with nice badges.
  9. Click on Merge PR.
  10. The master branch has its status/sanity presented with badges as well.
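The travis side of this chain could be wired up with a .travis.yml along these lines (a sketch, not our actual configuration; the coverage upload assumes the mattn/goveralls helper):

```yaml
language: go

install:
  - go get -t ./...                  # project and test dependencies
  - go get github.com/mattn/goveralls

script:
  - go build ./...                   # test build
  - go vet ./...                     # sanity checks
  - go test -covermode=count -coverprofile=cover.out ./...  # unit tests

after_success:
  - goveralls -coverprofile=cover.out -service=travis-ci    # coverage badge data
```

With webhooks enabled, every pull request event triggers this pipeline and the results feed back into the PR as status badges.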

Closing thoughts

It is no secret that I’ve been wanting to move to github for a while: it solves many problems we have that we don’t want to go off and solve ourselves. It is not a panacea, but it does seem to fit most of the things we need.

Given that both launchpad and github now support git, we can ping-pong between them as we see fit (not out of spite though).

The biggest hurdle we’d face on every change is our go import paths, which are absolute to make go get straightforward (even if we don’t take much other advantage of it); one solution I’ve been wanting to try is http://getgb.io.

In some sense I sometimes get the feeling that github is like vim and launchpad is like emacs, and I am a vim person.

Read more
Ben Howard

[UPDATE] The Image IDs have been updated with the latest builds, which now include Docker 1.6.2, the latest LXD and of course the Ubuntu Fan driver.

This week, Dustin Kirkland announced the Ubuntu Fan Project.  To steal from the description, "The Fan is not a software-defined network, and relies on neither distributed databases nor consensus protocols.  Rather, routes are calculated deterministically and traffic carries no additional overhead beyond routine IP tunneling.  Canonical engineers have already demonstrated The Fan operating at 5Gbps between two Docker containers on separate hosts."

My team at Canonical is responsible for the production of these images. Once the official SRUs land, I anticipate that we will publish an official stream over at cloud-images.ubuntu.com. But until then, check back here for images and updates. As always, if you have feedback, please hop into #server on FreeNode or send email.

GCE Images

Images for GCE have been published to the "ubuntu-os-cloud-devel" project.

The Images are:
  • daily-ubuntu-docker-lxd-1404-trusty-v20150620
  • daily-ubuntu-docker-lxd-1504-vivid-v20150621
To launch an instance, you might run:
$ gcloud compute instances create \
    --image-project ubuntu-os-cloud-devel \
    --image <IMAGE> <NAME>

You need to make sure that IPIP traffic (protocol 4) is enabled:
$ gcloud compute firewall-rules create fan2 --allow 4 --source-ranges 10.0.0.0/8

Amazon AWS Images

The AWS images are HVM-only, AMD64 builds. 


Version    Region          HVM-SSD       HVM-Instance
14.04-LTS  eu-central-1    ami-7e94ac63  ami-8e93ab93
           sa-east-1       ami-f943c1e4  ami-e742c0fa
           ap-northeast-1  ami-543c9b54  ami-b4298eb4
           eu-west-1       ami-4ae2a73d  ami-48e7a23f
           us-west-1       ami-fbd126bf  ami-6bd3242f
           us-west-2       ami-63585c53  ami-875357b7
           ap-southeast-2  ami-7de69c47  ami-1de19b27
           ap-southeast-1  ami-aca4a0fe  ami-2a9b9f78
           us-east-1       ami-95877efe  ami-e58b728e
15.04      eu-central-1    ami-9a94ac87  ami-ae93abb3
           sa-east-1       ami-1340c20e  ami-0743c11a
           ap-northeast-1  ami-9c3c9b9c  ami-42379042
           eu-west-1       ami-a2e2a7d5  ami-e4e7a293
           us-west-1       ami-4bd0270f  ami-1dd32459
           us-west-2       ami-f9585cc9  ami-1dd32459
           ap-southeast-2  ami-5de69c67  ami-01e19b3b
           ap-southeast-1  ami-74a5a126  ami-c89b9f9a
           us-east-1       ami-29f90042  ami-8d8a73e6

It is important to note that these images are only usable inside of a VPC. Newer AWS users are in VPC by default, but older users may need to create and update their VPC. For example:
$ ec2-authorize --cidr <CIDR_RANGE> --protocol 4 <SECURITY_GROUP>


Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20150623 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kt-meeting.txt


Status: Wily Development Kernel

Our wily kernel remains rebased on 4.0.5. We have cleaned up some
config discrepancies and plan to upload to our canonical-kernel-team
ppa today. We'll then hopefully get that copied out to the archive
sometime this week or next. Also, with 4.1 final having just been
released, we'll get our master branch in
git://kernel.ubuntu.com/ubuntu/unstable.git rebased. We will then plan
on rebasing Wily to 4.1 and uploading as well.
-----
Important upcoming dates:

  • https://wiki.ubuntu.com/WilyWerewolf/ReleaseSchedule
    Thurs June 25 – Alpha 1 (~2 days away)
    Thurs July 30 – Alpha 2 (~5 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kernel-cves.html


Status: Stable, Security, and Bugfix Kernel Updates – Precise/Trusty/Utopic/Vivid

Status for the main kernels, as of today:

  • Precise – Verification & Testing
  • Trusty – Verification & Testing
  • Utopic – Verification & Testing
  • Vivid – Verification & Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html
    For SRUs, SRU report is a good source of information:
  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 13-Jun through 04-Jul
    ====================================================================
    12-Jun Last day for kernel commits for this cycle
    14-Jun – 20-Jun Kernel prep week.
    21-Jun – 04-Jul Bug verification; Regression testing; Release


Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

Read more
Anthony Dillon

Why we needed a new framework

Some time ago the web team at Canonical developed a CSS framework that we called ‘Guidelines’. Guidelines helped us maintain our online visual language across all our sites and comprised a number of base and component Sass files, which were combined and served as a monolithic CSS file from our asset server.

We began to use Guidelines as the baseline styles for a number of our sites; www.ubuntu.com, www.canonical.com, etc.

This worked well until we needed to update a component or base style. With each edit we had to check it wasn’t going to break any of the sites we knew used it, and hope it didn’t break the sites we were not aware of.

Another deciding factor was the feedback we started receiving as internal teams adopted Guidelines. We received a resounding request to break the components into modular parts so teams could customise which ones they included. Another request we heard a lot was for the ability to pull the Sass files locally for offline development while keeping the styling up to date.

Therefore, we set out to develop a new and improved build and delivery system, which led us to develop a whole new architecture and completely refactor the Sass infrastructure.

This gave birth to Vanilla: our new and improved CSS framework.

Building Vanilla

The first decision we made was to remove the “latest” version target, so sites could no longer directly link to the bleeding-edge version of the styles. Instead, sites should target a specific version of Vanilla and manually upgrade as new versions are released. This helps twofold: first, shifting the testing and QA to the maintainers of each particular site allows for staggered updates rather than a sweeping update to all sites at once; secondly, it allows us to modify current modules without affecting sites until they apply the update.

We knew that we needed to make the update process as easy as possible to help other teams keep their styles up to date. We decided against using Bower as our package manager and chose NPM to reduce the number of dependencies required to use Vanilla.

We knew we needed a build system and, as it was a greenfield project, the world was our oyster. Really it came down to Gulp vs Grunt. We had a quick discussion and decided to run with Gulp as we had more experience with it. Gulp had all the plugins we required and we all preferred the Gulp syntax instead of the Grunt spaghetti.

We had a number of JavaScript functions in Guidelines to add simple dynamic functionality to our sites, such as equal heights or tabbed content. The team decided we wanted to try to remove the JS dependency for Vanilla and make it a pure CSS framework. So we stepped through each function and worked out, first of all, whether we required it at all. If so, we tried to develop a CSS replacement with an acceptable degradation for less modern browsers. We managed to cover all required functions with CSS and removed some older functionality we did not want any more.

Using Vanilla

Importing Vanilla

To start using Vanilla, simply run $ npm install vanilla-framework --save in the root of your site. Then in your main stylesheet, simply add:


@import "../path/to/node_modules/vanilla-framework/build/scss/build.scss";
@include vanilla;

The first line in the code above imports the main build file of the vanilla-framework. The second line then includes it, as the framework is entirely controlled with mixins, which will be explained in a future post.

Now that you have Vanilla imported correctly, you should see some default styling applied to your site. To take full advantage of the framework, a small amount of markup changes is required.

Markup amendments

There are a number of classes used by Vanilla to set up the site wrappers. Please refer to the source for our demo site.


Conclusion

This is still a work-in-progress project, but we are close to releasing www.ubuntu.com and www.canonical.com based on Vanilla. Please do use Vanilla; any feedback would be very much appreciated.

For more information please visit the Vanilla project page.

Read more
Michael

I tend to do all my work in the cloud (and work physically from a useful but comparatively powerless chromebook running Ubuntu). So creating a Mir Snap on my local machine wasn’t an option I was keen to try.

Mainly for my own documentation, the first attempt at creating a Mir snap (a display server for experimenting with Kodi on Ubuntu Snappy), went like this:

First spin up an Ubuntu Wily development instance in a(ny) openstack cloud (I’m using a Canonical internal one):

$ nova boot --flavor cpu2-ram4-disk100-ephemeral20 --image 1461273c-af73-4839-9f64-3df00446322a --key-name ******** wily_dev

Less than a minute later, ssh in and use the existing snappy packaging branch for mir:


$ sudo add-apt-repository ppa:snappy-dev/tools
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install snappy-tools bzr cmake gcc-multilib ubuntu-snappy-cli mir-demos mir-graphics-drivers-desktop dpkg-dev

$ bzr branch lp:~mir-team/mir/snappy-packaging && cd snappy-packaging && make

The last step uses deb2snap to create the snap package from the installed deb packages. So, the (76M) snap is for the amd64 architecture – as the instance I created is amd64:


$ ls -lh mir_snap1_amd64.snap
-rw-rw-r-- 1 ubuntu ubuntu 76M Jun 21 11:20 mir_snap1_amd64.snap

Next up… using either QEMU or an ARM cloud instance (I’m not sure that the latter is available) to create an ARM Mir snap for my Raspberry Pi, and testing it out…



Read more
facundo


These last few days saw the release of new versions of two projects I am actively involved in.

At the beginning of the month I released Encuentro 3.1 (as you may know, this program lets you search, download and watch content from Canal Encuentro, Paka Paka, BACUA, Educ.ar and others).

Version 3.1 brings the following changes with respect to the previous version:

  • It works again after the backend changes in Encuentro and Conectate
  • CTRL-F now jumps straight to the filter field (thanks Emiliano)
  • The episode list handling was redone: viewing and filtering episodes is now much faster
  • Packaging improvements; it should work on many (all?) Debian/Ubuntu versions (thanks Adrián Alves)
  • Several improvements in finding new episodes from the different backends, and fixes in general

More info, and how to download and install it, on the official page.

On the other hand, fades 3 was released yesterday (a project aimed at Python developers, in contrast to Encuentro, which is meant for end users), developed mainly by Nico Demarchi and me.

fades (FAst DEpendencies for Scripts) is a system that automatically handles virtualenvs in the simple cases one normally runs into when writing scripts or small programs. It automatically creates a new virtualenv (or reuses a previously created one), installs the necessary dependencies, and runs the script inside that virtualenv.

What's new in this release?

  • You can use different interpreter versions: just pass --python=python2 or whatever suits you.
  • Dependencies can be specified on the command line: no need to change the script for a quick test, just state the required dependency with --dependency.
  • Interactive mode: the quickest way to try a new library. Just run fades -d <dependency> and it opens an interactive interpreter inside a venv with that dependency.
  • It supports taking arguments from the shebang. This way you can create a script and put at the top of it something like: #!/usr/bin/env fades -d <dependency> --python=python2.7
  • It can parse requirements from a file. No changes needed if you already have a requirements.txt: just point at it with --requirement.
  • If no repo is specified, it defaults to PyPI, which makes for cleaner and simpler code.
  • It has a built-in database of typical name conversions: this way fades can handle an "import bs4" even if that is not the package's name on PyPI.
  • Other minor changes and fixes.

All the info is on the project's PyPI page.
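The shebang support described above can be sketched as follows (the script name is hypothetical, and actually running it requires fades to be installed):

```shell
# create a script that declares its own dependency in the fades shebang;
# on execution, fades builds (or reuses) a virtualenv containing requests
cat > fetch_status.py <<'EOF'
#!/usr/bin/env fades -d requests
import requests
print(requests.get("http://example.com").status_code)
EOF
chmod +x fetch_status.py
```

Running ./fetch_status.py then executes the script inside that virtualenv, with no manual venv management.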

Read more
Colin Ian King

Powerstat and thermal zones

Last night I was mulling over an overheating laptop issue reported by a user, which turned out to be fluff and dust clogging up the fan rather than the intel_pstate driver being broken.

While it is a relief that the kernel driver is not at fault, it still bothered me that this kind of issue should be very simple to diagnose but I overlooked the obvious.   When solving these issues it is very easy to doubt that the complex part of a system is working correctly (e.g. a kernel driver) rather than the simpler part (e.g. the fan not working efficiently).  Normally, I try to apply Occam's Razor which in the simplest form can be phrased as:

"when you have two competing theories that make exactly the same predictions, the simpler one is the better."

..e.g. in this case, the fan is clogged up.

Fortunately, laptops invariably provide Thermal Zone information that can be monitored and hence one can correlate CPU activity with the temperature of various components of a laptop.  So last night I added Thermal Zone sampling to powerstat 0.02.00 which is enabled with the new -t option.

 
powerstat -tfR 0.5
Running for 60.0 seconds (120 samples at 0.5 second intervals).
Power measurements will start in 0 seconds time.

Time User Nice Sys Idle IO Run Ctxt/s IRQ/s Watts x86_pk acpitz CPU Freq
11:13:15 5.1 0.0 2.1 92.8 0.0 1 7902 1152 7.97 62.00 63.00 1.93 GHz
11:13:16 3.9 0.0 2.5 93.1 0.5 1 7168 960 7.64 63.00 63.00 2.73 GHz
11:13:16 1.0 0.0 2.0 96.9 0.0 1 7014 950 7.20 63.00 63.00 2.61 GHz
11:13:17 2.0 0.0 3.0 94.5 0.5 1 6950 960 6.76 64.00 63.00 2.60 GHz
11:13:17 3.0 0.0 3.0 93.9 0.0 1 6738 994 6.21 63.00 63.00 1.68 GHz
11:13:18 3.5 0.0 2.5 93.6 0.5 1 6976 948 7.08 64.00 63.00 2.29 GHz
.... 

..the -t option now shows x86_pk (x86 CPU package temperature) and acpitz (ACPI thermal zone) temperature readings in degrees Celsius.
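These readings come from the kernel's thermal sysfs interface, so one can also inspect the zones by hand; a minimal sketch for a typical Linux system:

```shell
# print each thermal zone's type and temperature; the kernel exposes
# temperatures in millidegrees Celsius under /sys/class/thermal
for tz in /sys/class/thermal/thermal_zone*; do
    [ -e "$tz" ] || continue   # no thermal zones exposed (e.g. some VMs)
    printf '%s: %d C\n' "$(cat "$tz/type")" "$(($(cat "$tz/temp") / 1000))"
done
```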

Now this is where the fun begins.  I ran powerstat for 60 seconds at 2 samples per second and then imported the data into LibreOffice.  To easily show correlations between CPU load, power consumption, temperature and CPU frequency I normalized the data so that the lowest values were 0.0 and the highest were 1.0 and produced the following graph:

One can see that the CPU frequency (green) scales with the CPU load (blue) and so does the CPU power (orange).   CPU temperature (yellow) jumps up quickly when the CPU is loaded and then steadily increases.  Meanwhile, the ACPI thermal zone (purple) trails the CPU load because it takes time for the machine to warm up and then cool down (it takes time for a fan to pump out the heat from the machine).
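The normalization used for the graph is a straightforward min-max scaling; a small awk sketch over one column of samples (here, the Watts values from above):

```shell
# min-max normalize a column of numbers so the lowest maps to 0.0 and the
# highest to 1.0, matching how the metrics were overlaid on one graph
printf '7.97\n7.64\n7.20\n6.76\n6.21\n7.08\n' |
awk '{ v[NR] = $1
       if (NR == 1 || $1 < min) min = $1
       if (NR == 1 || $1 > max) max = $1 }
     END { for (i = 1; i <= NR; i++)
               printf "%.3f\n", (v[i] - min) / (max - min) }'
```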

So, next time a laptop runs hot, running powerstat will capture the activity and correlating temperature with CPU activity should allow one to see if the overheating is related to a real CPU frequency scaling issue or a clogged up fan (or broken heat pipe!).

Read more
Prakash

E-commerce was supposed to simplify things, but in reality it is getting more complicated.

While purchasing online is getting easier, making payments is painful.

First, these sites give you a dozen options for making payments; for example, they will say that if you pay using one third-party wallet you get 2% off, another wallet gets you 5% off, and a third wallet 7%!

Now you first have to go and register with these wallets. If a wallet were popular, why would it offer discounts? They are offering discounts to capture customers, hence you have to register with them first. With leading banks such as ICICI and HDFC jumping into the wallet business, I think this will only get more complicated.

You end up spending time registering with different wallets. After registering, they will still ask you for your credit card credentials.

In case you are already registered, they at least ask you for your login password.

If that’s not enough, the credit card company will again ask you for a password to complete the transaction!

I am tired of e-commerce now, so I just choose cash on delivery :) but that’s not available every time.

 

Read more
Colin Ian King

Snooping on I/O using iosnoop

A while ago I blogged about Brendan Gregg's excellent book for tracking down performance issues titled "Systems Performance, Enterprise and the Cloud".   Brendan has also produced a useful I/O diagnostic bash script iosnoop that uses ftrace to gather block device I/O events in real time.

The following example snoops on I/O for 1 second:

$ sudo iosnoop 1
Tracing block I/O for 1 seconds (buffered)...
COMM PID TYPE DEV BLOCK BYTES LATms
kworker/u16:2 650 W 8,0 441077032 28672 1.46
kworker/u16:2 650 W 8,0 441077024 4096 1.45
kworker/u16:2 650 W 8,0 364810624 462848 1.35
kworker/u16:2 650 W 8,0 364810240 69632 1.34

And the next example snoops and shows start and end time stamps:
$ sudo iosnoop -ts  
Tracing block I/O. Ctrl-C to end.
STARTs ENDs COMM PID TYPE DEV BLOCK BYTES LATms
35253.062020 35253.063148 jbd2/sda1-211 211 WS 8,0 29737200 53248 1.13
35253.063210 35253.063261 jbd2/sda1-211 211 FWS 8,0 18446744073709551615 0 0.05
35253.063282 35253.063616 <idle> 0 WS 8,0 29737304 4096 0.33
35253.063650 35253.063688 gawk 551 FWS 8,0 18446744073709551615 0 0.04
35253.766711 35253.767158 kworker/u16:0 305 W 8,0 433580264 4096 0.45
35253.766778 35253.767258 kworker/0:1H 321 FWS 8,0 18446744073709551615 0 0.48
35253.767289 35253.767635 <idle> 0 WS 8,0 273358464 4096 0.35
35253.767309 35253.767654 <idle> 0 W 8,0 118371312 4096 0.35
35253.767648 35253.767741 <idle> 0 FWS 8,0 18446744073709551615 0 0.09
^C
Ending tracing...
One needs to run the tool as root as it uses ftrace. There is a selection of filtering options, such as showing I/O from a specific device, I/O of a specific type, or selecting I/O from a specific PID or process name. iosnoop can also display I/O completion times, start times and queue-insertion start times.

On Ubuntu, iosnoop can be installed using:

sudo apt-get install perf-tools-unstable

A useful I/O analysis tool indeed. For more details, install the tool and read the iosnoop man page.

Read more
Prakash

One of the drivers toward the cloud is that it is supposed to be green, but is Amazon Web Services itself green?

Amazon Web Services has been under fire in recent weeks from a group of activist customers who are calling for the company to be more transparent in its usage of renewable energy.

In response, rather than divulge additional details about the source of power for its massive cloud infrastructure, the company has argued that using the cloud is much more energy efficient than customers powering their own data center operations.

But the whole discussion has raised the question: How green is the cloud?

Let's find out: http://www.networkworld.com/article/2936654/iaas/how-green-is-amazon-s-cloud.html

Read more
Shuduo

1, components on the desk

2, bag

3, assembling

4, and…

5, and…

6, piglow plugged

7, finish

8, Ubuntu Snappy is running… \o/

Read more
Prakash

The latest Kilo release of the OpenStack software, made available Thursday, sports new identity (ID) federation capability that, in theory, will let a customer in California use her local OpenStack cloud for everyday work, but if the load spikes, allocate jobs to other OpenStack clouds either locally or far, far away.

“With Kilo, for the first time, you can log in on one dashboard and deploy across multiple clouds from many vendors worldwide,” Mark Collier, COO of the OpenStack Foundation, said in an interview.

Read More: http://fortune.com/2015/04/30/openstack-federation-cloud/

Read more