Canonical Voices

Posts tagged with 'ubuntu'

Daniel Holbach

Building a great community

Xubuntu
In last night’s Community Council meeting, we met up with the Xubuntu team. They have done a great job inviting new members into their part of the community. Just read this:

<pleia2> elfy notes all contributors in his announcements
<dholbach> that's really really nice
<pleia2> we do blog posts, emails directly to all the testing members and to -devel list
<dholbach> wow
<pleia2> this cycle we're giving out stickers to some of our top testers
<elfy> if we get that sorted
<pleia2> share on social media too

This is just fantastic. I’m very happy with what the Xubuntu folks are doing and I’m glad to be part of such an open and welcoming community as well.

If you think that’s great too and want to get involved, have a look at their “Get involved” page. They particularly need testers for the new release.

Xubuntu team: keep up the great work! :-)

Read more
bmichaelsen

No-no-no, light speed is too slow!
Yes, we’ll have to go right to… ludicrous speed!

– Dark Helmet, Spaceballs

So, I recently brought up the topic of Writer’s nodes in the LibreOffice ESC call. More specifically: the SwNodeIndex class, which is, if one broadly simplifies, an iterator over the container holding all the paragraphs of a text document. Before my modifications, the SwNodes container class kept all these SwNodeIndices in a homegrown intrusive doubly linked list, to be able to ensure they stay valid e.g. if a SwNode gets deleted/removed. Still — as usual with performance topics — wild guesses aren’t helpful, and measurements should trump intuition. I used valgrind for that, and measured the number of instructions needed for loading the ODF spec. Since I did the same years and years ago in the old OpenOffice.org performance project, I just checked if we had regressed against that. It’s comforting that we did not at all — we were much faster, but that measurement has to be taken with a few pounds of salt, as a lot of other things differ between these two measurements (e.g. we now have a completely new build system, compiler versions etc.). But it’s good that we are moving in the right direction.

implementation      SwNodes      SwNodeIndex    total instructions  performance               linedelta
DEV300_m45          71,727,655   73,784,052      9,823,158,471      ?                         ?
master@fc93c17a     84,553,232   60,987,760      6,170,762,825      0%                        0
std::list           18,461,317   103,461,317    14,502,230,571      -5,725% (-235% of total)  +12/-70
std::vector         18,986,848   3,707,286,032   9,811,541,380      -2,502%                   +22/-70
std::unordered_map  18,984,984   82,843,000      7,083,620,244      -627% (-15% of total)     +16/-70
std::vector rbegin  18,986,848   143,851,229     6,214,602,532      -30% (-0.7% of total)     +23/-70
sw::Ring<>          23,447,256   inlined         6,154,660,709      11% (2.6% of total)       +108/-229
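For reference, here is a sketch of how such per-implementation instruction counts can be pulled out of a callgrind run (the full valgrind invocation and build configuration are in the note at the end of this post; the document path below is just a placeholder):

# collect instruction counts while loading the document
valgrind --tool=callgrind "--toggle-collect=*LoadOwnFormat*" \
    --callgrind-out-file=load.cg ./instdir/program/soffice.bin /path/to/ODF-spec.odt
# sum up the instructions attributed to the node-handling classes
callgrind_annotate load.cg | grep -E 'SwNodes::|SwNodeIndex::'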

With that comforting knowledge, I started to play around with the code. The first thing I did was to replace the handcrafted intrusive list with a std::list of pointers to the SwNodeIndex instances as a member of the SwNodes class. This was expected to slow things down, as two allocations are now needed: one for the SwNodeIndex instance and one for its node entry in the std::list. To be honest though, I didn’t expect this to slow down the node-handling code by a factor of ~57 for the loading of the example document. The whole document loading time (not just the node handling) slows down by a factor of ~2.4. So ok, this establishes for certain that this part of the code is highly performance sensitive.

The next thing I tried, to get a feel for how the performance reacts, was using a std::vector in the SwNodes class. When reserving some memory early, this should severely reduce the number of allocations needed. And indeed this was quicker than the std::list, even with a naive approach just doing a push_back() for insertion and a std::find() plus erase() for removal. However, the node indices are often created temporarily and quickly destroyed again, so adding new indices at the end and searching from the start is certainly not ideal: this was still slower than the intrusive list on master by a factor of ~25 for the node-handling code.

Searching for a SwNodeIndex from the end of the vector, where we likely just inserted it, and then swapping it with the last entry before removal makes the std::vector almost competitive: still 30% slower than the original implementation for the node-handling code. (The total loading time would only have increased by 0.7% using the vector like this.)

For completeness, I also had a look at a std::unordered_map. It did a bit better than I expected, but it would still have slowed down loading of the example document by 15%.

Having ruled out that the standard containers would do much good here without lots of tweaking, I tried the sw::Ring<> class that I had recently rewritten based on Boost.Intrusive as an inline header-only class. This was 11% quicker than the old implementation for the node-handling code, resulting in 2.6% quicker loading for the whole document. Not exactly a heroic achievement, but also not too bad for just some 200 lines touched. So this is now on master.

Why does this linked list outperform the old linked list? Inlining. In the old implementation, the non-inlined constructors and the destructor called a trivial non-inlined member function, and on top of that, the constructors and the function called by the destructor called two non-inlined friend functions from a different compilation unit, making it extra hard for a compiler to optimize. Now, link time optimization (LTO) could maybe do something about that someday. However, with LTO being in different states on different platforms, and with developers possibly building without LTO for build-time performance for some time to come, requiring the compiler/linker to be extra clever might be a mixed blessing: developers might run into “the map is not the territory” problems.

My personal take-aways:

  • The SwNodeIndex has quite a relevant impact on performance. If you touch it, handle with care (and with valgrind).
  • The current code has decent performance; further improvements likely need deeper structural work (see e.g. Kendy’s bplustree work).
  • Intrusive linked lists might be cumbersome, but for some scenarios, they are really fast.
  • Inlining can really help (doh).
  • LTO might help someday (or not).
  • friend declarations for non-inline functions across compilation units can be a code smell for possible performance optimization.

Please excuse the extensive writing for a meager 2.6% performance improvement — the intention is to save somebody (including me) from redoing some or all of the work above just to come to the same conclusions.


Note: Here is how this was measured:

  • gcc 4.8.3
  • boost 1.55.0
  • test document: ODF spec
  • valgrind --tool=callgrind "--toggle-collect=*LoadOwnFormat*" --callgrind-out-file=somefilename.cg ./instdir/program/soffice.bin
  • ./autogen.sh --disable-gnome-vfs --disable-odk --disable-postgresql-sdbc --disable-report-builder --disable-scripting-beanshell --enable-gio --enable-symbols --with-external-tar=... --with-junit=... --with-hamcrest=... --with-system-libs --without-doxygen --without-help --without-myspell-dicts --without-system-libmwaw --without-system-mdds --without-system-orcus --without-system-sane --without-system-vigra --without-system-libodfgen --without-system-libcmis --disable-firebird-sdbc --without-system-libebook --without-system-libetonyek --without-system-libfreehand --without-system-libabw --disable-gnome-vfs --without-system-glm --without-system-glew --without-system-librevenge --without-system-libcdr --without-system-libmspub --without-system-libvisio --without-system-libwpd --without-system-libwps --without-system-libwpg --without-system-libgltf --without-system-libpagemaker --without-system-coinmp --with-jdk-home=...


Read more
Ben Howard


Snappy Launches


When we launched Snappy, we introduced it on Microsoft Azure [1], Google’s GCE [2], Amazon’s AWS [3] and our KVM images [4]. Immediately our developers were asking questions like “where are the Vagrant images?”; those we launched yesterday [5].

The one final remaining question was “where are the images for <insert hypervisor>?” We had inquiries about VirtualBox, VMware Desktop/Fusion, interest in VMware Air, Citrix XenServer, etc.

OVA to the rescue

OVA is an industry standard for cross-hypervisor image support. The OVA spec [6] allows you to import a single image to:

  • VMware products
    • ESXi
    • Desktop
    • Fusion
    • vSphere
  • VirtualBox
  • Citrix XenServer
  • Microsoft SCVMM
  • Red Hat Enterprise Virtualization
  • SuSE Studio
  • Oracle VM

Okay, so where can I get the OVA images?

You can get the latest OVA image from here [7]. From there, follow your hypervisor’s instructions for importing OVA images (one concrete VirtualBox example is sketched below).

Or if you want a short URL, http://goo.gl/xM89p7
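As a concrete example, here is roughly what the import looks like with VirtualBox’s VBoxManage tool; other hypervisors have their own import mechanisms, and the VM name below depends on what your import assigns:

wget http://cloud-images.ubuntu.com/snappy/devel/core/current/devel-core-amd64-cloud.ova
VBoxManage import devel-core-amd64-cloud.ova
VBoxManage list vms                  # note the name of the freshly imported VM
VBoxManage startvm "<imported-vm-name>"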


---

[1] http://www.ubuntu.com/cloud/tools/snappy#snappy-azure
[2] http://www.ubuntu.com/cloud/tools/snappy#snappy-google
[3] http://www.ubuntu.com/cloud/tools/snappy#snappy-amazon
[4] http://www.ubuntu.com/cloud/tools/snappy#snappy-local
[5] https://blog.utlemming.org/2015/01/snappy-images-for-vagrant.htm
[6] https://en.wikipedia.org/wiki/Open_Virtualization_Format
[7] http://cloud-images.ubuntu.com/snappy/devel/core/current/devel-core-amd64-cloud.ova

Read more
Ben Howard

I am pleased to announce initial Vagrant images for Snappy [1, 2]. These images are bit-for-bit the same as the KVM images, but have a cloud-init configuration that allows Snappy to work within the Vagrant workflow.

Vagrant enables a cross-platform developer experience on MacOS, Windows or Linux [3].

Note: due to the way that Snappy works, shared file systems within Vagrant are not possible at this time. We are working on getting shared file system support enabled, but it will take us a little while to get it going.
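In the meantime, if Vagrant tries to set up its default synced folder, you can explicitly turn it off in the Vagrantfile that vagrant init generates (this is stock Vagrant configuration, not anything Snappy-specific):

Vagrant.configure("2") do |config|
  config.vm.box = "snappy"
  # shared file systems are not supported on Snappy yet,
  # so disable the default /vagrant synced folder
  config.vm.synced_folder ".", "/vagrant", disabled: true
end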

If you want to use the Vagrant packaged in the Ubuntu archives, run the following in a terminal:

  • sudo apt-get -y install vagrant
  • cd <WORKSPACE>
  • vagrant box add snappy http://goo.gl/6eAAoX
  • vagrant init snappy
  • vagrant up
  • vagrant ssh
If you use Vagrant from [4] (i.e. on Windows or Mac, or to get the latest Vagrant), then you can run:
  • vagrant init ubuntu/ubuntu-core-devel-amd64
  • vagrant up
  • vagrant ssh

These images are a work in progress. If you encounter any issues, please report them to snappy-devel@lists.ubuntu.com or ping me (utlemming) on Launchpad.net.

---

[1] http://cloud-images.ubuntu.com/snappy/devel/core/current/devel-core-amd64-vagrant.box
[2] https://atlas.hashicorp.com/ubuntu/boxes/ubuntu-core-devel-amd64
[3] https://docs.vagrantup.com/v2/why-vagrant/index.html
[4] https://www.vagrantup.com/downloads.html

Read more
Nicholas Skaggs

PSA: Community Requests

As you plan your ubuntu-related activities this year, I wanted to highlight an opportunity for you to request materials and funds to help make your plans a reality. The funds are donations made by other ubuntu enthusiasts to support ubuntu, and specifically to enable community requests. In other words, if you need help traveling to a conference to support ubuntu, planning a local event, holding a hackathon, etc., the community donations fund can help.

Check out the funding page for more information on how to apply and the requirements. In short, if you are a ubuntu member and want to do something to further ubuntu, you can request materials and funding to help. Global Jam is less than a month away; is your LoCo ready? Flavors, are you trying to plan events or hold other activities? I'd encourage all of you to submit requests if money or materials can help enable or enhance your efforts to spread ubuntu. Here's to sharing the joy of ubuntu this year!

Read more
Michael Hall

Whenever a user downloads Ubuntu from our website, they are asked if they would like to make a donation, and if so how they want their money used. When the “Community” option is chosen, that money is made available to members of our community to use in ways that they feel will benefit Ubuntu.

I’m a little late getting this report published, but it’s finally done. You can read the report here: https://docs.google.com/a/canonical.com/document/d/1HrBqGjqKe-THdK7liXFDobDU2LVW9JWtKxoa8IywUU4/edit#heading=h.yhstkxnvuk7s

We pretty consistently spend less than we get in each quarter, which means we have money sitting around that could be used by the community. If you want to travel to an event, would like us to sponsor an event, need hardware for development or testing, or anything else that you feel will make Ubuntu the project and the community better, please go and fill out the request form.

 

Read more
bmichaelsen

Auld Lang Syne

we’ll tak’ a cup o’ kindness yet,
for auld lang syne.

– Die Roten Rosen, Auld Lang Syne

Eike already greeted the Year of Our Lady of Discord 3181 four days ago, but I’d like to take this opportunity to have a look at the state of the LibreOffice project — the bug tracker status that is.

By the end of 2014:

[Chart: unconfirmed bugs in the LibreOffice bug tracker]

And a special “Thank You!” goes out to everyone who created one of the over 100 Easy Hacks written for LibreOffice in 2014, and to everyone who helped, mentored or reviewed patches by new contributors to the LibreOffice project. Easy Hacks are a good way for someone curious about the code of LibreOffice to get started in the project with the help of more experienced developers. If you are interested in that, you can find more information on Easy Hacks on the TDF wiki. Note that there are also Easy Hacks about Design-related topics and on topics related to QA.

If “I should contribute to LibreOffice once in 2015” wasn’t part of your New Year’s resolutions yet, you are invited to add it, as Easy Hacks might convince you that it’s worthwhile and … easy.


Read more
Dustin Kirkland


Awww snap!

That's right!  Snappy Ubuntu images are now on AWS, for your EC2 computing pleasure.

Enjoy this screencast as we start a Snappy Ubuntu instance in AWS, and install the xkcd-webserver package.


And a transcript of the commands follows below.

kirkland@x230:/tmp⟫ cat cloud.cfg
#cloud-config
snappy:
  ssh_enabled: True
kirkland@x230:/tmp⟫ aws ec2 describe-images \
> --region us-east-1 \
> --image-ids ami-5c442634

{
    "Images": [
        {
            "ImageType": "machine",
            "Description": "ubuntu-core-devel-1418912739-141-amd64",
            "Hypervisor": "xen",
            "ImageLocation": "ucore-images/ubuntu-core-devel-1418912739-141-amd64.manifest.xml",
            "SriovNetSupport": "simple",
            "ImageId": "ami-5c442634",
            "RootDeviceType": "instance-store",
            "Architecture": "x86_64",
            "BlockDeviceMappings": [],
            "State": "available",
            "VirtualizationType": "hvm",
            "Name": "ubuntu-core-devel-1418912739-141-amd64",
            "OwnerId": "649108100275",
            "Public": false
        }
    ]
}
kirkland@x230:/tmp⟫
kirkland@x230:/tmp⟫ # NOTE: This AMI will almost certainly have changed by the time you're watching this ;-)
kirkland@x230:/tmp⟫ clear
kirkland@x230:/tmp⟫ aws ec2 run-instances \
> --region us-east-1 \
> --image-id ami-5c442634 \
> --key-name id_rsa \
> --instance-type m3.medium \
> --user-data "$(cat cloud.cfg)"
{
    "ReservationId": "r-c6811e28",
    "Groups": [
        {
            "GroupName": "default",
            "GroupId": "sg-d5d135bc"
        }
    ],
    "OwnerId": "357813986684",
    "Instances": [
        {
            "KeyName": "id_rsa",
            "PublicDnsName": null,
            "ProductCodes": [],
            "StateTransitionReason": null,
            "LaunchTime": "2014-12-18T17:29:07.000Z",
            "Monitoring": {
                "State": "disabled"
            },
            "ClientToken": null,
            "StateReason": {
                "Message": "pending",
                "Code": "pending"
            },
            "RootDeviceType": "instance-store",
            "Architecture": "x86_64",
            "PrivateDnsName": null,
            "ImageId": "ami-5c442634",
            "BlockDeviceMappings": [],
            "Placement": {
                "GroupName": null,
                "AvailabilityZone": "us-east-1e",
                "Tenancy": "default"
            },
            "AmiLaunchIndex": 0,
            "VirtualizationType": "hvm",
            "NetworkInterfaces": [],
            "SecurityGroups": [
                {
                    "GroupName": "default",
                    "GroupId": "sg-d5d135bc"
                }
            ],
            "State": {
                "Name": "pending",
                "Code": 0
            },
            "Hypervisor": "xen",
            "InstanceId": "i-af43de51",
            "InstanceType": "m3.medium",
            "EbsOptimized": false
        }
    ]
}
kirkland@x230:/tmp⟫
kirkland@x230:/tmp⟫ aws ec2 describe-instances --region us-east-1 | grep PublicIpAddress
"PublicIpAddress": "54.145.196.209",
kirkland@x230:/tmp⟫ ssh -i ~/.ssh/id_rsa ubuntu@54.145.196.209
ssh: connect to host 54.145.196.209 port 22: Connection refused
255 kirkland@x230:/tmp⟫ ssh -i ~/.ssh/id_rsa ubuntu@54.145.196.209
The authenticity of host '54.145.196.209 (54.145.196.209)' can't be established.
RSA key fingerprint is 91:91:6e:0a:54:a5:07:b9:79:30:5b:61:d4:a8:ce:6f.
No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '54.145.196.209' (RSA) to the list of known hosts.
Welcome to Ubuntu Vivid Vervet (development branch) (GNU/Linux 3.16.0-25-generic x86_64)

* Documentation: https://help.ubuntu.com/

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

Welcome to the Ubuntu Core rolling development release.

* See https://ubuntu.com/snappy

It's a brave new world here in snappy Ubuntu Core! This machine
does not use apt-get or deb packages. Please see 'snappy --help'
for app installation and transactional updates.

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@ip-10-153-149-47:~$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=1923976k,nr_inodes=480994,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=385432k,mode=755)
/dev/xvda1 on / type ext4 (ro,relatime,data=ordered)
/dev/xvda3 on /writable type ext4 (rw,relatime,discard,data=ordered)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,mode=755)
tmpfs on /etc/fstab type tmpfs (rw,nosuid,noexec,relatime,mode=755)
/dev/xvda3 on /etc/systemd/system type ext4 (rw,relatime,discard,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset,clone_children)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
tmpfs on /etc/machine-id type tmpfs (ro,relatime,size=385432k,mode=755)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=22,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/xvda3 on /etc/hosts type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/sudoers.d type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /root type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/click/frameworks type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /usr/share/click/frameworks type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/systemd/snappy type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/systemd/click type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/initramfs-tools type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/writable type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/ssh type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/tmp type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/apparmor type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/cache/apparmor type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/apparmor.d/cache type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/ufw type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/log type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/system-image type ext4 (rw,relatime,discard,data=ordered)
tmpfs on /var/lib/sudo type tmpfs (rw,relatime,mode=700)
/dev/xvda3 on /var/lib/logrotate type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/dhcp type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/dbus type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/cloud type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/apps type ext4 (rw,relatime,discard,data=ordered)
tmpfs on /mnt type tmpfs (rw,relatime)
tmpfs on /tmp type tmpfs (rw,relatime)
/dev/xvda3 on /apps type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /home type ext4 (rw,relatime,discard,data=ordered)
/dev/xvdb on /mnt type ext3 (rw,relatime,data=ordered)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=385432k,mode=700,uid=1000,gid=1000)
ubuntu@ip-10-153-149-47:~$ mount | grep " / "
/dev/xvda1 on / type ext4 (ro,relatime,data=ordered)
ubuntu@ip-10-153-149-47:~$ sudo touch /foo
touch: cannot touch ‘/foo’: Read-only file system
ubuntu@ip-10-153-149-47:~$ sudo apt-get update
Ubuntu Core does not use apt-get, see 'snappy --help'!
ubuntu@ip-10-153-149-47:~$ sudo snappy --help
Usage: snappy [-h] [-v]
{info,versions,search,update-versions,update,rollback,install,uninstall,tags,build,chroot,framework,fake-version,nap}
...

snappy command line interface

optional arguments:
-h, --help show this help message and exit
-v, --version Print this version string and exit

Commands:
{info,versions,search,update-versions,update,rollback,install,uninstall,tags,build,chroot,framework,fake-version,nap}
info
versions
search
update-versions
update
rollback undo last system-image update.
install
uninstall
tags
build
chroot
framework
fake-version ==SUPPRESS==
nap ==SUPPRESS==
ubuntu@ip-10-153-149-47:~$ sudo snappy info
release: ubuntu-core/devel
frameworks:
apps:
ubuntu@ip-10-153-149-47:~$ sudo snappy versions -a
Part Tag Installed Available Fingerprint Active
ubuntu-core edge 141 - 7f068cb4fa876c *
ubuntu@ip-10-153-149-47:~$ sudo snappy search docker
Part Version Description
docker 1.3.2.007 The docker app deployment mechanism
ubuntu@ip-10-153-149-47:~$ sudo snappy install docker
docker 4 MB [=============================================================================================================] OK
Part Tag Installed Available Fingerprint Active
docker edge 1.3.2.007 - b1f2f85e77adab *
ubuntu@ip-10-153-149-47:~$ sudo snappy versions -a
Part Tag Installed Available Fingerprint Active
ubuntu-core edge 141 - 7f068cb4fa876c *
docker edge 1.3.2.007 - b1f2f85e77adab *
ubuntu@ip-10-153-149-47:~$ sudo snappy search webserver
Part Version Description
go-example-webserver 1.0.1 Minimal Golang webserver for snappy
xkcd-webserver 0.3.1 Show random XKCD compic via a build-in webserver
ubuntu@ip-10-153-149-47:~$ sudo snappy install xkcd-webserver
xkcd-webserver 21 kB [=====================================================================================================] OK
Part Tag Installed Available Fingerprint Active
xkcd-webserver edge 0.3.1 - 3a9152b8bff494 *
ubuntu@ip-10-153-149-47:~$ exit
logout
Connection to 54.145.196.209 closed.
kirkland@x230:/tmp⟫ ec2-instances
i-af43de51 ec2-54-145-196-209.compute-1.amazonaws.com
kirkland@x230:/tmp⟫ ec2-terminate-instances i-af43de51
INSTANCE i-af43de51 running shutting-down
kirkland@x230:/tmp⟫

Cheers!
Dustin

Read more
Michael Hall

There’s a saying in American political debate that is as popular as it is wrong; it comes up when one side appeals to our country’s democratic ideals, and the other side immediately counters with “The United States is a Republic, not a Democracy”. I’ve noticed a similar misunderstanding happening in open source culture around the term “meritocracy” and the negatively-charged “oligarchy”. In both cases, though, these are not mutually exclusive terms. In fact, they don’t even describe the same thing.

Authority

One of these terms describes where the authority to lead (or govern) comes from. In US politics, that’s the term “republic”, which means that the authority of the government is given to it by the people (as opposed to divine right, force of arms, or inheritance). For open source, this is where “meritocracy” fits in: it describes the authority to lead and make decisions as coming from the “merit” of those invested with it. Now, merit is hard to define objectively, and in practice it’s the subjective opinion of those who can direct a project’s resources that decides who has “merit” and who doesn’t. But it is still an important distinction from projects where the authority to lead comes from ownership of the project (either by an individual or their employer).

Enfranchisement

History can easily provide a long list of Republics which were not representative of the people. That’s because even if authority comes from the people, it doesn’t necessarily come from all of the people. The USA can be accurately described as a democracy, in addition to a republic, because participation in government is available to (nearly) all of the people. Open source projects, even if they are in fact a meritocracy, will vary in what percentage of their community are allowed to participate in leading them. As I mentioned above, who has merit is determined subjectively by those who can direct a project’s resources (including human resource), and if a project restricts that to only a select group it is in fact also an oligarchy.

Balance and Diversity

One of the criticisms leveled against meritocracies is that they don’t produce diversity in a project or community. While this is technically true, it’s not a failing of meritocracy, it’s a failing of enfranchisement, which as has been described above is not what the term meritocracy defines. It should be clear by now that meritocracy is a spectrum, ranging from the democratic on one end to the oligarchic on the other, with a wide range of options in between.

The Ubuntu project is, in most areas, a meritocracy. We are not, however, a democracy where the majority opinion rules the whole. Nor are we an oligarchy, where only a special class of contributors have a voice. We like to use the term “do-ocracy” to describe ourselves, because enfranchisement comes from doing, meaning making a contribution. And while it is limited to those who do make contributions, being able to make those contributions in the first place is open to anybody. It is important for us, and part of my job as a Community Manager, to make sure that anybody with a desire to contribute has the information, resources, and access to do so. That is what keeps us from sliding towards the oligarchic end of the spectrum.

 

Read more
Dustin Kirkland


As promised last week, we're now proud to introduce Ubuntu Snappy images on another of our public cloud partners -- Google Compute Engine.
In the video below, you can join us walking through the instructions we have published here.
Snap it up!
:-Dustin

Read more
Daniel Holbach

For some time we have had training materials available for learning how to write Ubuntu apps.  We’ve had a number of folks organising App Dev School events in their LoCo team. That’s brilliant!

What’s new now are training materials for developing scopes!

It’s actually not that hard. If you have a look at the workshop, you can prepare yourself quite easily for giving the session at a local event.

As we are working on an updated developer site right now, for now take a look at the following pages if you’re interested in running such a session yourself:

I would love to get feedback, so please let me know how the materials work out for you!

Read more
Daniel Holbach

I’m very happy that folks took notes during and after the meeting to bring up their ideas, thoughts, concerns and plans. It got a bit unwieldy, so Elfy put up a pad which summarises it all and is meant for discussing actions and proposals.

Today we are going to have a meeting to discuss what’s on the “actions” pad. That’s why I thought it’d be handy to put together a bit of a summary of what people generally brought up. They’re not my thoughts, I’m just putting them up for further discussion.

Problem statements

  • Feeling that people innovate *with* Ubuntu, not *in* Ubuntu.
  • Perception of contributor drop in “older” parts of the community.
    • Less activity at UDS/vUDS/UOS events (this was discussed at UOS too; maybe we need a committee to find a new vision for Ubuntu Community Planning?).
    • Less activity in LoCos (lacking a sense of purpose?)
    • No drop in members/developers.
  • Less activity in Canonical-led projects.
  • We don’t spend marketing money on social media. Build a pavement online.
  • Downloading a CD image is too much of a barrier for many.
  • Our “community infrastructure” did not scale with the amount of users.
  • Some discussion about how hard it is to become a LoCo team. Bureaucracy from the LoCo Council.
  • We don’t have enough time to train newcomers.
  • Language barriers make it hard for some to get involved.
  • Canonical does a bad job announcing their presence at events.

Questions

  • Why are fewer people innovating in Ubuntu? Is Canonical driving too much of Ubuntu?
  • Why aren’t more folks stepping up into leadership positions? Mentoring? Lack of opportunities? More delegation? Do leaders just come in and lead because they’re interested?
  • Lack of planning? Do we re-plan things at UOS events, because some stuff never gets done? Need more follow-through? More assessment?

Proposals

  • community.ubuntu.com: More clearly indicate Canonical-led projects? Detail active projects, with point of contact, etc? Clean up moribund projects.
  • Make Ubuntu events more about “doing things with Ubuntu”?
  • Ubuntu Leadership Mentoring programme.
  • Form more of an Ubuntu ecosystem, allowing people to earn money with Ubuntu.

Join the hangout on ubuntuonair.com on Friday, 12th December 2014, 16 UTC.

Read more
Daniel Holbach

It’s fantastic that we have more discussion about where we want our community to go. We get ideas out of it, people communicate and reach a common understanding of issues. Jono’s blog post and the ubuntu-community-team mailing list have generated a lot of good stuff already. Last week we had an IRC meeting with the CC and discussed governance and leadership there.

We took quite a few notes, and Elfy set up a doc where we note down actions. I would like to suggest we have a follow-up meeting to work through them.

Please

  • use Elfy’s actions doc for submitting agenda items,
  • make sure your agenda item is a concrete proposal or something which could be turned into work items,
  • make sure you’re there, and
  • add your name to it!

Looking forward to seeing you there! :-)

Read more
Dustin Kirkland



A couple of months ago, I re-introduced an old friend -- Ubuntu JeOS (Just enough OS) -- the smallest, (merely 63MB compressed!) functional OS image that we can still call “Ubuntu”.  In fact, we call it Ubuntu Core.

That post was a prelude to something we’ve been actively developing at Canonical for most of 2014 -- Snappy Ubuntu Core!  Snappy Ubuntu combines the best of the ground-breaking image-based Ubuntu remix known as Ubuntu Touch for phones and tablets with the base Ubuntu server operating system trusted by millions of instances in the cloud.

Snappy introduces transactional updates and atomic, image based workflows -- old ideas implemented in databases for decades -- adapted to Ubuntu cloud and server ecosystems for the emerging cloud design patterns known as microservice architectures.

The underlying, base operating system is a very lean Ubuntu Core installation, running on a read-only system partition, much like your iOS, Android, or Ubuntu phone.  One or more “frameworks” can be installed through the snappy command, which is an adaptation of the click packaging system we developed for the Ubuntu Phone.  Perhaps the best sample framework is Docker.  Applications are also packaged and installed using snappy, but apps run within frameworks.  This means that any of the thousands of Docker images available in DockerHub are trivially installable as snap packages, running on the Docker framework in Snappy Ubuntu.

Take Snappy for a Drive


You can try Snappy for yourself in minutes!

You can download Snappy and launch it in a local virtual machine like this:

$ wget http://cdimage.ubuntu.com/ubuntu-core/preview/ubuntu-core-alpha-01.img
$ kvm -m 512 -redir :2222::22 -redir :4443::443 ubuntu-core-alpha-01.img

Then, SSH into it with password 'ubuntu':

$ ssh -p 2222 ubuntu@localhost

At this point, you might want to poke around the system.  Take a look at the mount points, and perhaps try to touch or modify some files.


$ sudo rm /sbin/init
rm: cannot remove ‘/sbin/init’: Permission denied
$ sudo touch /foo
touch: cannot touch ‘/foo’: Permission denied
$ apt-get install docker
apt-get: command not found

Rather, let's have a look at the new snappy package manager:

$ sudo snappy --help



And now, let’s install the Docker framework:

$ sudo snappy install docker

At this point, we can do essentially anything available in the Docker ecosystem!

Now, we’ve created some sample Snappy apps using existing Docker containers.  For one example, let’s now install OwnCloud:

$ sudo snappy install owncloud

This will take a little while to install, but eventually, you can point a browser at your own private OwnCloud image, running within a Docker container, on your brand new Ubuntu Snappy system.

We can also update the entire system with a simple command and a reboot:
$ sudo snappy versions
$ sudo snappy update
$ sudo reboot

And we can rollback to the previous version!
$ sudo snappy rollback
$ sudo reboot

Here's a short screencast of all of the above...


While the downloadable image is available for your local testing today, you will very soon be able to launch Snappy Ubuntu instances in your favorite public (Azure, GCE, AWS) and private clouds (OpenStack).


Enjoy!
Dustin

Read more
jdstrand

Ubuntu Core with Snappy was recently announced, and a key ingredient of snappy is security. Snappy applications are confined by AppArmor, and the confinement story for snappy is an evolution of the security model for Ubuntu Touch. The basic concepts for confined applications and the AppStore model pertain to snappy applications as well. In short, snappy applications are confined using AppArmor by default, and this is achieved through an easy-to-understand, easy-to-use and developer-friendly system. Read the snappy security specification for all the nitty-gritty details.
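If you are curious what that looks like on a running snappy system, a quick way to peek at the confinement is AppArmor’s standard status tool (a minimal sketch; the exact profile names vary by image and installed snaps):

# list the AppArmor profiles currently loaded and enforced;
# snappy apps each get their own per-app profile
sudo aa-status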

A developer doc will be published soon.



Read more
Nicholas Skaggs

I thought I would add a little festivity to the holiday season, quality style. In case your holidays just are not the same without a little quality in your life, allow me to share how you can get involved.

There are opportunities for every role listed on the QA wiki. Testers and test writers are both needed. Testing and writing manual tests can be learned by anyone; no coding is required. That said, if you have skills or interest in technical work, I would encourage you to help out. You will learn by doing and get help from others while you do it.

Now onto the good stuff! What can you do to help ubuntu this cycle from a quality perspective?

Dogfooding
There is an ever-present need for brave folks willing to simply run the development version of ubuntu and use it as a daily machine throughout the cycle. It's one of the best ways for us as a community to uncover bugs and issues, in particular things that regress from the previous release. Upgrade to vivid today and see what you can break! One way to do the upgrade is sketched below.
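Assuming you are on the current stable release and are comfortable with the usual risks of running a development version, the standard release upgrader with its development-release flag gets you there:

# upgrade the running system to the current development release (vivid)
sudo do-release-upgrade -d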

QATracker
This tool is written in Drupal 7 and runs the iso.qa.ubuntu.com and packages.qa.ubuntu.com sites. These sites are used to record and view the results of all of our manual testing efforts. Currently dkessel is leading the effort to implement some needed UI changes. The code and more information about the project can be found on Launchpad. The tracker is one of our primary tools and needs your help to become friendly for everyone to use.

In addition, a charm would be useful to simplify setting up a development environment. The charm can be based upon the existing drupal charm. At the moment this work is ready for someone to jump in.

Unity8
Running unity8 as a full-time desktop is a personal goal of mine for this cycle. I hope some others might also want to be early adopters and join me in this goal. For now you can help by testing the unity8 desktop. Have a look at running unity in lxc for an easy way to run unity8 today on your machine. Use it, test it, and offer feedback. I'll be talking more about unity8 as the cycle progresses and opportunities appear to test new features aimed at the desktop.

Core Apps
The core apps project is an excellent way to get involved. These applications have been lovingly developed by community members just like you. Many of the teams are looking for help in writing tests and for someone who can bring a testing mindset and eye to the work. As of this writing, the docviewer, terminal and calculator teams specifically would love your help. The core apps hackdays are happening this week; drop by and introduce yourself to get started!

Manual Tests
Like the sound of writing tests, but the idea of writing code turns you off? Manual tests are needed as well! They are written in English and are easy to understand and write. Manual tests include everything you see on the qatracker and are managed as a Launchpad project. This means you can pick a bug and "fix it" by submitting a merge request (a sketch of that workflow follows below). The bugs involve both fixing existing tests as well as requests for new testcases.
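To give a feel for that workflow, here is a sketch assuming the usual Launchpad bzr setup (the commit message and branch name are placeholders):

# grab the manual testcases, fix or add one, then propose a merge
bzr branch lp:ubuntu-manual-tests
cd ubuntu-manual-tests
# ... edit or add a testcase ...
bzr commit -m "Add testcase for the foo scenario"
bzr push lp:~your-launchpad-id/ubuntu-manual-tests/your-branch-name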

Images
As always there are images that need testing. Testing milestones occur later in the cycle and involve everyone helping to test a specific set of images. In the meantime, daily images are generated that have made it through the automated tests and are ready for manual testing. Booting an image in a live session is a great way to check for regressions on your machine. Doing this early in the cycle can help make sure your hardware and others like it experience a regression-free upgrade when the time comes.

Triaging
After subjecting software to testing, bugs are naturally found. These bugs then need to be verified and triaged. The bugsquadders, as they are called, would be happy to help you learn to categorize or triage bugs and do other tasks.

No matter how you choose to get involved, feel free to contact me for help if needed. Most of all, Happy Testing!


Read more
Daniel Holbach

The call for an Ubuntu Foundation has come up again. It has been discussed many times before, ever since an announcement made many years ago left a number of people confused about the state of things.

The way I understood the initial announcement was that a trust had been set up, so that if aliens ever kidnapped our fearless leader, or if he decided that beekeeping was more interesting than Ubuntu, we could still go on and bring the best flavour of linux to the world.

Ok, now back to the current discussion. An Ubuntu Foundation seems to have quite an appeal to some. The question to me is: which problems would it solve?

Looking at it from a very theoretical point of view, an Ubuntu Foundation could be a place where you separate “commercial” from “public” interests, but how would this separation work? Who would work for which of the entities? Would people working for the Ubuntu Foundation have to review Canonical’s paperwork before they could close deals? Would there be a board where decisions have to be pre-approved? Where exactly would the separation be drawn?

Right now, Ubuntu’s success is closely tied to Canonical’s success. I consider this a good thing. With every business win of Canonical, Ubuntu gets more exposure in the world. Canonical’s great work in the support team, in the OEM sector or when closing deals with governments benefits Ubuntu to a huge degree. It’s like two sides of a coin right now. Also: Canonical pays the bills for Ubuntu’s operations. Data centers, engineers, designers and others have to be paid.

In theory it all sounds fine: “you get to have a say”, “more transparency”, etc. I don’t think many realise, though, that this would mean additional people sifting through legal and other documents, more people busy writing reports and summarising discussions, more need for administration, customers having to wait longer, and that all of this would in general cost more time and money.

I believe that bringing in a new layer would create incredible amounts of work, open up endless possibilities for politics, and easily bring things to a stand-still.

Will this fix Ubuntu’s problems? I absolutely don’t think so. Could we be more open, more inspiring and more inviting? Sure, but demanding more transparency and more separation is not going to bring that.

Read more
bmichaelsen

To Win in Toulouse

Now the only thing a gambler needs
Is a suitcase and a trunk.
– The Animals, The House of the Rising Sun

So, like many others, I went to the LibreOffice Hackfest in Toulouse, which — unlike many of our other Hackfests — was part of a bigger event: Capitole du Libre. As we had our own area and were not 30+ hackers, this also had the advantage that we got to work quicker. And while I still had some boring administrative work to do, this was a Hackfest where I actually got to do some coding. I looked for some bookmark-related bugs in Writer, but the first bugs I looked at were just too well suited to be Easy Hacks: fdo#51741 (“Deleting bookmark is not seen as modification of document”) and fdo#56116 (“Names of bookmarks should allow all characters which are valid in HTML anchor names (missing: ‘:’ and ‘.’)”). Both were made Easy Hacks and both are fixed on master now. I then fixed fdo#85542 (“DOCX import of overlapping bookmarks”), which proved slightly more work than expected, and provided a unittest for it to never come back. I later learned that the second part was entirely non-optional, as Markus promised he would not have let me leave Toulouse without writing a unittest for committed code. I have to admit that that is a supportable position.

[Image: Toulouse Hackfest Room]

Scenes like the above were actually rather rare, as we were mostly working at our notebooks. One thing I came up with at the Hackfest, but didn’t finish there, was a set of clang plugins for finding cascading conditional operators, and conditional operators that have assignments as a side effect in their midst. While I found nothing in sw (Writer) as mindboggling as the tweet that inspired these plugins, I found some impressive expressions that certainly wouldn’t be a joy to step through in gdb (or even better: set a breakpoint in) when debugging, and fixed those. We could probably make a few Easy Hacks out of what these (or similar) plugins find outside of sw/ (I only looked there for now) — those are reasonably easy to refactor, but you don’t want to do that in the middle of a debugging session. While at it, I also looked at clang’s “value assigned, but never read” hints. Most were harmless, but also trivial to get rid of. On the other hand, some of them pointed to real logic errors that are otherwise hard to see. Like this one, which had been hiding — if git is to be believed — in plain sight ever since OpenOffice.org was originally open sourced in 2000. All in all, this experience is encouraging. Now that our Coverity defect density is just a rounding error above zero, getting more fancy clang plugins might be promising.

Just one week after the Hackfest in Toulouse, there was another event LibreOffice took part in: the Bug Squashing Party in Munich — it’s encouraging to see Jonathan Riddell now being a committer to LibreOffice too. But that is not all, we have more events coming up: The Document Foundation and LibreOffice will have an assembly at 31c3 in Hamburg, and you are most welcome to drop by there! And then there will be FOSDEM 2015 in Brussels, where LibreOffice will be present as usual.


Read more
Nicholas Skaggs

Creating multi-arch click packages

Click packages are one of the pieces of new technology driving the next version of ubuntu on the phone and desktop. In a nutshell, click packages allow application developers to easily package and deliver application updates independent of the distribution release or archive. Without going into the interesting technical merits and de-merits of click packages, this means the consumer can get faster application updates. But much of the discussion and usage of click packages until now has revolved around mobile. I wanted to talk about using click packages on the desktop and packaging clicks for multiple architectures.

The manifest file
Click packages follow a specific format. They contain a payload of an application's libraries, code, artwork and resources, along with its needed external dependencies. The description of the package is found in the manifest file, which is what I'd like to talk about. The file must contain a few keys, and one of the recognized optional keys is architecture. This key allows specifying the architectures the package will run on.

If an application contains no compiled code, simply use 'all' as the value for architecture. This accomplishes the goal of running on all supported architectures and many of the applications currently in the ubuntu touch store fall into this category. However, an increasing number of applications do contain compiled code. Here's how to enable support across architectures for projects with compiled code.

Fat packages
The click format, along with the ubuntu touch store, fully supports specifying one or more values for specific architecture support inside the application manifest file. Those values follow the same format as dpkg architecture names. Now, in theory, if a project containing compiled code lists the architectures to support, click build should be able to build one package for all of them. However, for now this process requires a little manual intervention. So let's talk about building a fat (or big boned!) package that contains support for multiple architectures inside a single click package.

Those who just want to skip ahead can check out the example package I put together using clock. This same package can be found in the store as multi-arch clock test. Feel free to install the click package on the desktop, the i386 emulator and an armhf device.

Building a click for a different architecture
To make a multi-arch package, a click package needs to be built for each desired architecture. Follow this tutorial on developer.ubuntu.com for more information on how to create a click target for each architecture. Once all the targets are set up, use the ubuntu sdk to build a click for each target. The end result is a click file specific to each architecture.

For example in creating the clock package above, I built a click for amd64, i386 and armhf. Three files were generated:

com.ubuntu.clock_3.2.176_amd64.click
com.ubuntu.clock_3.2.176_i386.click
com.ubuntu.clock_3.2.176_armhf.click

Notice the handy naming scheme allows for easy differentiation as to which click belongs to which architecture. Next, extract the compiled code from each click package. This can be accomplished with dpkg. For example,

dpkg -x com.ubuntu.clock_3.2.176_amd64.click amd64

Do this for each package. The result should be a folder corresponding to each package architecture.
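Since the filenames only differ in the architecture part, the same extraction can be done in one small loop:

# extract each architecture-specific click into its own folder
for arch in amd64 i386 armhf; do
    dpkg -x com.ubuntu.clock_3.2.176_${arch}.click ${arch}
done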

Next, copy one version of the package for use as the base of the multi-arch click package, and remove all the compiled code under its lib folder. This folder will be repopulated with the compiled code extracted from the architecture-specific click packages.

cp -r amd64 multi
rm -rf multi/lib/*

Now there is a folder for each click package, and a new folder named multi that contains the application, minus any compiled code.

Creating the multi-arch click
Inside the extracted click packages is a lib folder. The compiled modules are arranged inside, potentially within an architecture subfolder (depending on how the package was built).

Copy all of the compiled modules into a new folder inside the lib folder of the multi directory. The folder name should correspond to the architecture of the compiled code. Here's a list of the architecture triplets for ARM, i386, and amd64 respectively.


arm-linux-gnueabihf
i386-linux-gnu
x86_64-linux-gnu


You can check the naming from an intended device by looking in the application-click.conf file.

grep ARCH /usr/share/upstart/sessions/application-click.conf

To use the clock package as an example again, here's a quick look at the folder structure:

lib/arm-linux-gnueabihf/...
lib/i386-linux-gnu/...
lib/x86_64-linux-gnu/...

The contents of lib/* from each click package I built earlier are under a corresponding folder inside the multi/lib directory. So for example, the lib folder from com.ubuntu.clock_3.2.176_i386.click became lib/i386-linux-gnu/.
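In shell terms, populating multi/lib from the extracted packages looks roughly like this (treat the source paths as illustrative; as noted above, some clicks nest their compiled code in an extra subfolder):

mkdir -p multi/lib/x86_64-linux-gnu multi/lib/i386-linux-gnu multi/lib/arm-linux-gnueabihf
cp -r amd64/lib/* multi/lib/x86_64-linux-gnu/
cp -r i386/lib/*  multi/lib/i386-linux-gnu/
cp -r armhf/lib/* multi/lib/arm-linux-gnueabihf/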

Presto, magic package time! 
Finally, the manifest.json file needs to be updated to reflect support for the desired architectures. Inside the manifest.json file under the multi directory, edit the architecture key values to list all supported architectures for the new package. For example, to list support for ARM and x86 architectures:

"architecture": ["armhf", "i386", "amd64"],

To build the new package, execute click build multi. The resulting click should build and be named with a _multi.click suffix. This click can be installed on any of the specified architectures and is ready to be uploaded to the store.
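Following the naming scheme from earlier, the build step and its expected output look like this (the version string is from the clock example above):

click build multi
# produces: com.ubuntu.clock_3.2.176_multi.click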

Caveats, nibbly bits and bugs
So apart from click not automagically building these packages, there is one other bug as of this writing: the resulting multi-arch click will fail the automated store review and instead enter manual review. To work around this, request a manual review. Upon approval, the application will enter the store as usual.

Summary
In summary, to create a multi-arch click package: build a click for each supported architecture, pull the compiled library code from each click and place it into a single click package, modify the click manifest file to state all of the architectures supported, and finally rebuild the click package!

I trust this explanation and example provide encouragement to include support for x86 platforms when creating and uploading a click package to the store. Undoubtedly there are other ways to build a multi-arch click; simply ensure all the compiled code for each architecture is included inside the click package. Feel free to experiment!

If you have any questions as usual feel free to contact me. I look forward to seeing more applications in the store from my unity8 desktop!

Read more
Daniel Holbach

Despite being an “old” technology and having its problems, we still use mailing lists… a lot. Some of the lists were cleaned up by the Community Council some time ago, especially ones which were created and then forgotten some time later.

We do have a number of mailing lists, though, which are still active but suffer from not having enough (or enough active) moderators on board. What then happens is this:

[Image: List moderation]

… which sucks.

It’s not very nice if you have lots and lots of good discussion not happening just because you had no time to tend to the moderation queue.

Some mailing lists receive quite a bit of spam, others get a lot of mails from folks who are not subscribed yet, but this really shouldn’t be a problem. If you run a popular mailing list and moderation becomes too much of a hassle, please consider adding more moderators – if you ask nicely, a bunch of folks will be happy to help out.

So my advice:

  1. If you ever registered a mailing list, please have a look at its moderation queue and see if you need help.
  2. If yes, please add more moderators.
  3. If you don’t do it yet, use listadmin – it’s the best thing since sliced bread, and keeping up with moderation in the future will be no problem at all. (A small configuration sketch follows below.)
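For the curious, here is a minimal ~/.listadmin.ini sketch; the addresses and password are placeholders, and the listadmin man page documents the full syntax:

username listowner@example.com
password not-my-real-password
default skip

# one line per mailing list to moderate
discuss@lists.example.com
announce@lists.example.com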

Read more