Canonical Voices

Posts tagged with 'ubuntu'

pitti

Yesterday’s autopkgtest 3.2 release brings several changes and improvements that developers should be aware of.

Cleanup of CLI options, and config files

Previous adt-run versions had rather complex, confusing, and rarely (if ever?) used options for filtering binaries and building sources without testing them. All of those (--instantiate, --sources-tests, --sources-no-tests, --built-binaries-filter, --binaries-forbuilds, and --binaries-fortests) are now gone. Only -B/--no-built-binaries is left, which disables building/using binaries for the subsequent unbuilt tree or dsc arguments (by default they get built and their binaries used for tests), and I added its opposite --built-binaries for completeness (although you most probably will never need it).

The --help output is now a lot easier to read, both due to the above cleanup and because it now shows a paragraph for each group of related options, sorted by descending importance. The manpage got updated accordingly.

Another new feature is that you can now put arbitrary parts of the command line into a file (thanks to porting to Python’s argparse), with one option/argument per line. So you could e. g. create config files for options and runners which you use often:

$ cat adt_sid
--output-dir=/tmp/out
-s
---
schroot
sid

$ adt-run libpng @adt_sid

Shell command tests

If your test only contains a shell command or two, or you want to re-use an existing upstream test executable and just need to wrap it with some command like dbus-launch or env, you can use the new Test-Command: field instead of Tests: to specify the shell command directly:

Test-Command: xvfb-run -a src/tests/run
Depends: @, xvfb, [...]

This avoids having to write lots of tiny wrappers in debian/tests/. This was already possible for click manifests; this release brings the same capability to deb packages.

Click improvements

It is now very easy to define an autopilot test with extra package dependencies or restrictions, without having to specify the full command, using the new autopilot_module test definition. See /usr/share/doc/autopkgtest/README.click-tests.html for details.

If your test fails and you want to re-run it with additional dependencies or changed restrictions, you can now avoid rebuilding the .click by pointing --override-control (which previously only worked for deb packages) to the locally modified manifest. You can also (ab)use this to e. g. add the autopilot -v option to autopilot_module.
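
For example, a hypothetical invocation could look like this (the click and manifest file names are just placeholders, not real packages):

$ adt-run myapp_0.1_all.click --override-control=custom-manifest.json \
      --- ssh -s /usr/share/autopkgtest/ssh-setup/adb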

Unpacking of test dependencies was made more efficient by not downloading Python 2 module packages (which cannot be handled in “unpack into temp dir” mode anyway).

Finally, I made the adb setup script more robust and also faster.

As usual, every change in control formats, CLI, etc. has been documented in the manpages and the various READMEs. Enjoy!

Read more
Michael Hall

Technically a fork is any instance of a codebase being copied and developed independently of its parent.  But when we use the word it usually encompasses far more than that. Usually when we talk about a fork we mean splitting the community around a project, just as much as splitting the code itself. Communities are not like code, however; they don’t always split in consistent or predictable ways. Nor are all forks the same, and both the reasons behind a fork and the way it is done will have an effect on whether and how the community around it will split.

There are, by my observation, three different kinds of forks that can be distinguished by their intent and method.  These can be neatly labeled as Convergent, Divergent and Emergent forks.

Convergent Forks

Most often when we talk about forks in open source, we’re talking about convergent forks. A convergent fork is one that shares the same goals as its parent, seeks to recruit the same developers, and wants to be used by the same users. Convergent forks tend to happen when a significant portion of the parent project’s developers are dissatisfied with the management or processes around the project, but otherwise happy with the direction of its development. The ultimate goal of a convergent fork is to take the place of the parent project.

Because they aim to take the place of the parent project, convergent forks must split the community in order to be successful. The community they need already exists, both the developers and the users, around the parent project, so that is their natural source when starting their own community.

Divergent Forks

Less common than convergent forks, but still well known by everybody in open source, are the divergent forks.  These forks are made by developers who are not happy with the direction of a project’s development, even if they are generally satisfied with its management.  The purpose of a divergent fork is to create something different from the parent, with different goals and most often different communities as well. Because they are creating a different product, they will usually be targeting a different group of users, one that was not well served by the parent project.  They will, however, quite often target many of the same developers as the parent project, because most of the technology and many of the features will remain the same, as a result of their shared code history.

Divergent forks will usually split a community, but to a much smaller extent than a convergent fork, because they do not aim to replace the parent for the entire community. Instead they often focus more on recruiting those users who were not served well, or not served at all, by the existing project, and will grow a new community largely from sources other than the parent community.

Emergent Forks

Emergent forks are not technically forks in the code sense, but rather new projects with new code, but which share the same goals and target the same users as an existing project.  Most of us know these as NIH, or “Not Invented Here”, projects. They come into being on their own, instead of splitting from an existing source, but with the intention of replacing an existing project for all or part of an existing user community. Emergent forks are not the result of dissatisfaction with either the management or direction of an existing project, but most often a dissatisfaction with the technology being used, or fundamental design decisions that can’t be easily undone with the existing code.

Because they share the same goals as an existing project, these forks will usually result in a split of the user community around an existing project, unless they differ enough in features that they can target users not already being served by those projects. However, because they do not share much code or technology with the existing project, they most often grow their own community of developers, rather than splitting them from the existing project as well.

All of these kinds of forks are common enough that we in the open source community can easily name several examples of them. But they are all quite different in important ways. Some, while forks in the literal sense, can almost be considered new projects in a community sense.  Others are not forks of code at all, yet result in splitting an existing community none the less. Many of these forks will fail to gain traction, in fact most of them will, but some will succeed and surpass those that came before them. All of them play a role in keeping the wider open source economy flourishing, even though we may not like them when they affect a community we’ve been involved in building.

Read more
Dustin Kirkland

Transcoding video is a very resource intensive process.

It can take many minutes to process a small, 30-second clip, or even hours to process a full movie.  There are numerous, excellent, open source video transcoding and processing tools freely available in Ubuntu, including libav-tools, ffmpeg, mencoder, and handbrake.  Surprisingly, however, none of those support parallel computing easily or out of the box.  And disappointingly, I couldn't find any MPI support readily available either.

I happened to have an Orange Box for a few days recently, so I decided to tackle the problem and develop a scalable, parallel video transcoding solution myself.  I'm delighted to share the result with you today!

When it comes to commercial video production, it can take thousands of machines and hundreds of compute hours to render a full movie.  I had the distinct privilege some time ago to visit WETA Digital in Wellington, New Zealand and tour the render farm that processed The Lord of the Rings trilogy, Avatar, The Hobbit, and more.  And just a few weeks ago, I visited another quite visionary, cloud savvy digital film processing firm in Hollywood, called Digital Film Tree.

While Windows and Mac OS may be the first platforms that come to mind when you think about front-end video production, Linux is far more widely used for batch video processing, with Ubuntu in particular being used extensively at both WETA Digital and Digital Film Tree, among others.

While I could have worked with any of a number of tools, I settled on avconv (the successor(?) of ffmpeg), as it was the first one that I got working well on my laptop, before scaling it out to the cluster.

I designed an approach on my whiteboard, in fact quite similar to some work I did parallelizing and scaling the john-the-ripper password quality checker.

At a high level, the algorithm looks like this:
  1. Create a shared network filesystem, simultaneously readable and writable by all nodes
  2. Have the master node split the work into even sized chunks for each worker
  3. Have each worker process their segment of the video, and raise a flag when done
  4. Have the master node wait for each of the all-done flags, and then concatenate the result
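
As a rough, self-contained illustration of steps 2 to 4 (paths, variable names, and the duration probe are mine, not the charm's):

#!/bin/bash
# hypothetical master-node sketch: split a video into one even chunk per worker
FILENAME=/srv/shared/input.mp4          # lives on the shared network filesystem
WORKERS=8
DURATION=$(avprobe -show_format "$FILENAME" 2>/dev/null | \
           awk -F= '/^duration=/ {print int($2)}')
CHUNK=$(( (DURATION + WORKERS - 1) / WORKERS ))

for i in $(seq 0 $((WORKERS - 1))); do
    start=$(( i * CHUNK ))
    # each worker node transcodes [start, start+CHUNK) and then touches a
    # DONE flag on the shared filesystem for the master to wait on
    echo "worker $i: start=${start}s length=${CHUNK}s"
done
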
And that's exactly what I implemented in a new transcode charm and transcode-cluster bundle.  It provides linear scalability and performance improvements as you add additional units to the cluster.  A transcode job that takes 24 minutes on a single node is down to 3 minutes on 8 worker nodes in the Orange Box, using Juju and MAAS against physical hardware nodes.


For the curious, the real magic is in the config-changed hook, which has decent inline documentation.



The trick, for anyone who might make their way into this by way of various StackExchange questions and (incorrect) answers, is in the command that splits up the original video (around line 54):

avconv -ss $start_time -i $filename -t $length -s $size -vcodec libx264 -acodec aac -bsf:v h264_mp4toannexb -f mpegts -strict experimental -y ${filename}.part${current_node}.ts

And the one that puts it back together (around line 72):

avconv -i concat:"$concat" -c copy -bsf:a aac_adtstoasc -y ${filename}_${size}_x264_aac.${format}

I found this post and this documentation particularly helpful in understanding and solving the problem.

In any case, once deployed, my cluster bundle looks like this.  8 units of transcoders, all connected to a shared filesystem, and performance monitoring too.


I was able to leverage the shared-fs relation provided by the nfs charm, as well as the ganglia charm to monitor the utilization of the cluster.  You can see the spikes in the cpu, disk, and network in the graphs below, during the course of a transcode job.




For my testing, I downloaded the movie Code Rush, freely available under the CC-BY-NC-SA 3.0 license.  If you haven't seen it, it's an excellent documentary about the open source software around Netscape/Mozilla/Firefox and the dotcom bubble of the late 1990s.

Oddly enough, the stock, 746MB high quality MP4 video doesn't play in Firefox, since it's an mpeg4 stream, rather than H264.  Fail.  (Yes, of course I could have used mplayer, vlc, etc., that's not the point ;-)


Perhaps one of the most useful, intriguing features of HTML5 is its support for embedding multimedia, video, and sound into webpages.  HTML5 even supports multiple video formats.  Sounds nice, right?  If only it were that simple...  As it turns out, different browsers have, and lack, support for different formats.  While there is no one format to rule them all, MP4 is supported by the majority of browsers, including the two that I use (Chromium and Firefox).  This matrix from w3schools.com illustrates the mess.

http://www.w3schools.com/html/html5_video.asp

The file format, however, is only half of the story.  The audio and video contents within the file also have to be encoded and compressed with very specific codecs, in order to work properly within the browsers.  For MP4, the video has to be encoded with H264, and the audio with AAC.
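
If you're curious whether a given file already fits that profile, avprobe (from libav-tools) will report the container and codecs; the file name here is just an example:

# look at the "Stream" lines in the output for the video and audio codecs
$ avprobe input.mp4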

Among the various brands of phones, webcams, digital cameras, etc., the output format and codecs are seriously all over the map.  If you've ever wondered what's happening, when you upload a video to YouTube or Facebook, and it's a while before it's ready to be viewed, it's being transcoded and scaled in the background. 

In any case, I find it quite useful to transcode my videos to MP4/H264/AAC format.  And for that, a scalable, parallel computing approach to video processing would be quite helpful.

During the course of the 3 minute run, I liked watching the avconv log files of all of the nodes, using Byobu and Tmux in a tiled split screen format, like this:


Also, the transcode charm installs an Apache2 webserver on each node, so you can expose the service and point a browser to any of the nodes, where you can find the input, output, and intermediary data files, as well as the logs and DONE flags.



Once the job completes, I can simply click on the output file, Code_Rush.mp4_1280x720_x264_aac.mp4, and see that it's now perfectly viewable in the browser!


In case you're curious, I have verified the same charm with a couple of other OGG, AVI, MPEG, and MOV input files, too.


Beyond transcoding the format and codecs, I have also added configuration support within the charm itself to scale the video frame size, too.  This is useful to take a larger video, and scale it down to a more appropriate size, perhaps for a phone or tablet.  Again, this resource intensive procedure perfectly benefits from additional compute units.
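
With Juju, changing a charm option like that is a one-liner; the option name below is only illustrative, so check the charm's config.yaml (or juju get transcode) for the real one:

$ juju set transcode size=854x480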


File format, audio/video codec, and frame size changes are hardly the extent of video transcoding workloads.  There are hundreds of options and thousands of combinations, as the manpages of avconv and mencoder attest.  All of my scripts and configurations are free software, open source.  Your contributions and extensions are certainly welcome!

In the meantime, I hope you'll take a look at this charm and consider using it, if you have the need to scale up your own video transcoding ;-)

Cheers,
Dustin

Read more
pitti

We currently use completely different methods and tools of building test beds and running tests for Debian vs. Click packages, for normal uploads vs. CI airline landings vs. upstream project merge proposal testing, and keep lots of knowledge about Click package test metadata external and not easily accessible/discoverable.

Today I released autopkgtest 3.0 (and 3.0.1 with a few minor updates) which is a major milestone in unifying how we run package tests both locally and in production CI. The goals of this are:

  • Keep all test metadata, such as test dependencies, commands to run the test etc., in the project/package source itself instead of external. We have had that for a long time for Debian packages with DEP-8 and debian/tests/control, but not yet for Ubuntu’s Click packages.
  • Use the same tools for Debian and Click packages to simplify what developers have to know about and to reduce the amount of test infrastructure code to maintain.
  • Use the exact same testbeds and test runners in production CI as developers use locally, so that you can reproduce and investigate failures.
  • Re-use the existing autopkgtest capabilities for using various kinds of testbeds, and conversely, making all new testbed types immediately available to all package formats.
  • Stop putting tests into the Ubuntu archive as packages (such as mediaplayer-app-autopilot). This just adds packaging and archive space overhead and also makes updating tests a lot harder and slower than it should be.

So, let’s dive into the new features!

New runner: adt-virt-ssh

We want to run tests on real hardware such as a laptop of a particular brand with a particular graphics card, or an Ubuntu phone. We also want to restructure our current CI machinery to run tests on a real OpenStack cloud and gradually get rid of our hand-maintained QA lab with its test machines. While these use cases seem rather different, they both have in common that there is an already existing machine which is pretty much only accessible with ssh. Once you have an ssh connection, they look pretty much the same, you just need different initial setup (like fiddling with adb, calling nova boot, etc.) to prepare them.

So the new adt-virt-ssh runner factorizes all the common bits such as communicating with adt-run, auto-detecting sudo availability, doing SSH connection sharing etc., and delegates the target specific bits to a “setup script”. E. g. we could specify --setup-script ssh-setup-nova or --setup-script ssh-setup-adb which then gets called with “open” at the appropriate time by adt-run; the script calls the nova commands to create a VM, or runs a few adb commands to install/start ssh and install the public key. Then autopkgtest does its thing, and eventually calls the script with “cleanup” again. The actual protocol is a bit more involved (see manpage), but that’s the general idea.

autopkgtest now ships readymade scripts for these two use cases. So you could e. g. run the libpng tests in a temporary cloud VM:

# if you don't have one, create it with "nova keypair-create"
$ nova keypair-list
[...]
| pitti | 9f:31:cf:78:50:4f:42:04:7a:87:d7:2a:75:5e:46:56 |

# find a suitable image
$ nova image-list 
[...]
| ca2e362c-62c9-4c0d-82a6-5d6a37fcb251 | Ubuntu Server 14.04 LTS (amd64 20140607.1) - Partner Image                         | ACTIVE |  

$ nova flavor-list 
[...]
| 100 | standard.xsmall  | 1024      | 10   | 10        |      | 1     | 1.0         | N/A       |

# now run the tests: please be patient, this takes a few mins!
$ adt-run libpng --setup-commands="apt-get update" --- ssh -s /usr/share/autopkgtest/ssh-setup/nova -- \
   -f standard.xsmall -i ca2e362c-62c9-4c0d-82a6-5d6a37fcb251 -k pitti
[...]
adt-run [16:23:16]: test build:  - - - - - - - - - - results - - - - - - - - - -
build                PASS
adt-run: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ tests done.

Please see man adt-virt-ssh for details on how to use it and how to write setup scripts. There is also a commented /usr/share/autopkgtest/ssh-setup/SKELETON template for writing your own for your use cases. You can also skip the setup script entirely and just specify user and host name as options, but please remember that the ssh runner cannot clean up after itself, so never use this on important machines which you can’t reset/reinstall!

Test dependency installation without apt/root

Ubuntu phones with system images have a read-only file system where you can’t install test dependencies with apt. A similar case is using the “null” runner without root. When apt-get install is not available, autopkgtest now has a reduced fallback mode: it downloads the required test dependencies, unpacks them into a temporary directory, and runs the tests with $PATH, $PYTHONPATH, $GI_TYPELIB_PATH, etc. pointing to the unpacked temp dir. Of course this only works for packages which are relocatable in that way, i. e. libraries, Python modules, or command line tools; it will totally fail for things which look for config files, plugins etc. in hardcoded directory paths. But it’s good enough for the purposes of Click package testing such as installing autopilot, libautopilot-qt etc.
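
For example, assuming an unprivileged session where apt-get install is unavailable, something like this (the package name is purely illustrative) would exercise the unpack fallback with the null runner:

$ adt-run python-mock --- null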

Click package support

autopkgtest now recognizes click source directories and *.click package arguments, and introduces a new test metadata specification syntax in a click package manifest. This is similar in spirit and capabilities to DEP-8 debian/tests/control, except that it’s using JSON:

    "x-test": {
        "unit": "tests/unittests",
        "smoke": {
            "path": "tests/smoketest",
            "depends": ["shunit2", "moreutils"],
            "restrictions": ["allow-stderr"]
        },
        "another": {
            "command": "echo hello > /tmp/world.txt"
        }
    }

For convenience, there is also some magic to make running autopilot tests particularly simple. E. g. our existing click packages usually specify something like

    "x-test": {
        "autopilot": "ubuntu_calculator_app"
    }

which is enough to “do what I mean”, i. e. implicitly add the autopilot test depends and run autopilot with the specified test module name. You can specify your own dependencies and/or commands, and restrictions etc., of course.

So with this, and the previous support for non-apt test dependencies and the ssh runner, we can put all this together to run the tests for e. g. the Ubuntu calculator app on the phone:

$ bzr branch lp:ubuntu-calculator-app
# built straight from that branch; TODO: where is the "official" download URL?
$ wget http://people.canonical.com/~pitti/tmp/com.ubuntu.calculator_1.3.283_all.click
$ adt-run ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click --- \
      ssh -s /usr/share/autopkgtest/ssh-setup/adb
[..]
Traceback (most recent call last):
  File "/tmp/adt-run.KfY5bG/tree/tests/autopilot/ubuntu_calculator_app/tests/test_simple_page.py", line 93, in test_divide_with_infinity_length_result_number
    self._assert_result("0.33333333")
  File "/tmp/adt-run.KfY5bG/tree/tests/autopilot/ubuntu_calculator_app/tests/test_simple_page.py", line 63, in _assert_result
    self.main_view.get_result, Eventually(Equals(expected_result)))
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 406, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: After 10.0 seconds test failed: '0.33333333' != '0.3'

Ran 33 tests in 295.586s
FAILED (failures=1)

Note that the current adb ssh setup script deals with some things like applying the autopilot click AppArmor hooks and disabling screen dimming, but it does not do the first-time setup (connecting to network, doing the gesture intro) and unlocking the screen. These are still on the TODO list, but I need to find out how to do these properly. Help appreciated!

Click app tests in schroot/containers

But, that’s not the only thing you can do! autopkgtest has all these other runners, so why not try and run them in a schroot or container? To emulate the environment of an Ubuntu Touch session I wrote a --setup-commands script:

adt-run --setup-commands /usr/share/autopkgtest/setup-commands/ubuntu-touch-session \
    ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click --- schroot utopic

This will actually work in the sense of running (and succeeding) the autopilot tests, but it will fail due to a lot of libust[11345/11358]: Error: Error opening shm /lttng-ust-wait... warnings on stderr. I don’t know what these mean, just that I also see them on the phone itself occasionally.

I also wrote another setup-commands script which emulates “read-only apt”, so that you can test the “unpack only” fallback. So you could prepare a container with click and the App framework preinstalled (so that it doesn’t always take ages to install them), starting from a standard adt-build-lxc container:

$ sudo lxc-clone -o adt-utopic -n click
$ sudo lxc-start -n click
  # run "sudo apt-get install click ubuntu-sdk-libs ubuntu-app-launch-tools" there
  # then "sudo powerdown"

# current apparmor profile doesn't allow remounting something read-only
$ echo "lxc.aa_profile = unconfined" | sudo tee -a /var/lib/lxc/click/config

Now that container has enough stuff preinstalled to be reasonably fast to set up, and the remaining test dependencies (mostly autopilot) work fine with the unpack/$*_PATH fallback:

$ adt-run --setup-commands /usr/share/autopkgtest/setup-commands/ubuntu-touch-session \
          --setup-commands /usr/share/autopkgtest/setup-commands/ro-apt \
          ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click \
          --- lxc -es click

This will successfully run all the tests, and provided you have apt-cacher-ng installed, it only takes a few seconds to set up. This might be a nice thing to do on merge proposals, if you don’t have an actual phone at hand, or don’t want to clutter it up.

autopkgtest 3.0.1 will be available in Utopic tomorrow (through autosyncs). If you can’t wait to try it out, download it from my people.c.c page.

Feedback appreciated!

Read more
Dustin Kirkland

It's that simple.
It was about 4pm on Friday afternoon, when I had just about wrapped up everything I absolutely needed to do for the day, and I decided to kick back and have a little fun with the remainder of my work day.

 It's now 4:37pm on Friday, and I'm now done.

Done with what?  The Yo charm, of course!

The Internet has been abuzz this week about how the Yo app received a whopping $1 million in venture funding.  (Forbes notes that this is a pretty surefire indication that there's another internet bubble about to burst...)

It's little more than the first program any kid writes -- hello world!

Subsequently I realized that we don't really have a "hello world" charm.  And so here it is, yo.

$ juju deploy yo

Deploying a webpage that says "Yo" is hardly the point, of course.  Rather, this is a fantastic way to see the absolute simplest form of a Juju charm.  Grab the source, and go explore it yo-self!

$ charm-get yo
$ tree yo
├── config.yaml
├── copyright
├── hooks
│   ├── config-changed
│   ├── install
│   ├── start
│   ├── stop
│   ├── upgrade-charm
│   └── website-relation-joined
├── icon.svg
├── metadata.yaml
└── README.md

1 directory, 11 files



  • The config.yaml lets you set and dynamically change the service itself (the color and size of the font that renders "Yo").
  • The copyright is simply boilerplate GPLv3
  • The icon.svg is just a vector graphics "Yo."
  • The metadata.yaml explains what this charm is and how it can relate to other charms
  • The README.md is a simple getting-started document
  • And the hooks...
    • config-changed is the script that runs when you change the configuration -- basically, it uses sed to inline edit the index.html Yo webpage (see the sketch after this list)
    • install simply installs apache2 and overwrites /var/www/index.html
    • start and stop simply start and stop the apache2 service
    • upgrade-charm is currently a no-op
    • website-relation-joined sets and exports the hostname and port of this system
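
To give a feel for how small these hooks are, here is a rough sketch of what a sed-based config-changed hook could look like (the option names are illustrative, not necessarily what the charm actually uses):

#!/bin/bash
# hypothetical config-changed hook: re-render the "Yo" page from the charm config
set -e
COLOR=$(config-get color)       # config-get is the standard Juju hook tool
SIZE=$(config-get font-size)
sed -i -e "s/color: [^;]*;/color: ${COLOR};/" \
       -e "s/font-size: [^;]*;/font-size: ${SIZE};/" /var/www/index.html
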
The website relation is very important here...  Declaring and defining this relation instantly lets me relate this charm with dozens of other services.  As you can see in the screenshot at the top of this post, I was able to easily relate the varnish website accelerator in front of the Yo charm.

Hopefully this simple little example might help you examine the anatomy of a charm for the first time, and perhaps write your own first charm!

Cheers,

Dustin

Read more
Michael Hall

Two years ago, my wife and I made the decision to home-school our two children.  It was the best decision we could have made: our kids are getting a better education, and with me working from home since joining Canonical I’ve been able to spend more time with them than ever before. We also get to try and do some really fun things, which is what sets the stage for this story.

Both my kids love science, absolutely love it, and it’s one of our favorite subjects to teach.  A couple of weeks ago my wife found an inexpensive USB microscope, which lets you plug it into a computer and take pictures using desktop software.  It’s not a scientific microscope, nor is it particularly powerful or clear, but for the price it was just right to add a new aspect to our elementary science lessons. All we had to do was plug it in and start exploring.

My wife has a relatively new (less than a year old) laptop running Windows 8.  It’s not high-end, but it’s all new hardware, new software, etc.  So when we plugged in our simple USB microscope…… it failed.  As in, it didn’t do anything.  Windows seemed to be trying to figure out what to do with it, over and over and over again, but to no avail.

My laptop, however, is running Ubuntu 14.04, the latest stable and LTS release.  My laptop is a couple of years old, but a classic: a Lenovo X220. It’s great hardware to go with Ubuntu and I’ve had nothing but good experiences with it.  So of course, when I decided to give our new USB microscope a try…… it failed.  The connection was fine, and the log files clearly showed that it was being identified, but nothing was able to see it as a video input device or make use of it.

Now, if that’s where our story ended, it would fall right in line with a Shakespearean tragedy. But while both Windows and Ubuntu failed to “just work” with this microscope, both failures were not equal. Because the Windows drivers were all closed source, my options ended with that failure.

But on Ubuntu, the drivers were open, so all I needed to do was find a fix. It took a while, but I eventually found a 2.5 year old bug report for an identical chipset to my microscope, and somebody proposed a code fix in the comments.  Now, the original reporter never responded to say whether or not the fix worked, and it was clearly never included in the driver code, but it was an opportunity.  Now I’m no kernel hacker, nor driver developer, in fact I probably shouldn’t be trusted to write any amount of C code at all.  But because I had Ubuntu, getting the source code of my current driver, as well as all the tools and dependencies needed to build it, took only a couple of terminal commands.  The patch was too old to cleanly apply to the current code, but it was easy enough to figure out where the changes should go, and after a couple of tries to properly build just the driver (and not the full kernel or every driver in it), I had a new binary kernel module that would load without error.  Then, when I plugged my USB microscope in again, it worked!
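
For the curious, the workflow was very roughly along these lines; the driver directory and module names below are only illustrative, since the exact driver depends on the microscope's chipset:

# grab build tools, headers and the kernel source for the running kernel
sudo apt-get install build-essential linux-headers-$(uname -r)
apt-get source linux-image-$(uname -r)

# hand-apply the proposed patch to the webcam driver, then rebuild only that directory
cd linux-*/drivers/media/usb/gspca
make -C /lib/modules/$(uname -r)/build M=$PWD modules

# swap in the rebuilt module and plug the microscope back in
sudo rmmod gspca_main; sudo insmod ./gspca_main.ko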

Red onion skin at 120x magnification

People use open source for many reasons.  Some people use it because it’s free as in beer; for them it’s on the same level as freeware or shareware, only the cost matters. For others it’s about ethics: they would choose open source even if it cost them money or didn’t work as well, because they feel it’s morally right, and that proprietary software is morally wrong. I use open source because of USB microscopes. Because when they don’t work, open source gives me a chance to change that.

Read more
David Murphy (schwuk)

Ars Technica has a great write up by Lee Hutchinson on our Orange Box demo and training unit.

You can’t help but have your attention grabbed by it!

As the comments are quick to point out – at the expense of the rest of the piece – the hardware isn’t the compelling story here. While you can buy your own, you can almost certainly hand-build an equivalent-or-better setup for less money [1], but Ars recognises this:

Of course, that’s exactly the point: the Orange Box is that taste of heroin that the dealer gives away for free to get you on board. And man, is it attractive. However, as Canonical told me about a dozen times, the company is not making them to sell—it’s making them to use as revenue driving opportunities and to quickly and effectively demo Canonical’s vision of the cloud.

The Orange Box is about showing off MAAS & Juju, and – usually – OpenStack.

To see what Ars think of those, you should read the article.

I definitely echo Lee’s closing statement:

I wish my closet had an Orange Box in it. That thing is hella cool.


  1. Or make one out of wood like my colleague Gavin did! 

Read more
Dustin Kirkland

It's hard for me to believe that I have sat on this draft blog post for almost 6 years.  But I'm stuck on a plane this evening, inspired by Elon Musk and Tesla's (cleverly titled) announcement, "All Our Patents Are Belong To You."  Musk writes:

Yesterday, there was a wall of Tesla patents in the lobby of our Palo Alto headquarters. That is no longer the case. They have been removed, in the spirit of the open source movement, for the advancement of electric vehicle technology.
When I get home, I'm going to take down a plaque that has proudly hung in my own home office for nearly 10 years now.  In 2004, I was named an IBM Master Inventor, recognizing sustained contributions to IBM's patent portfolio.

Musk continues:
When I started out with my first company, Zip2, I thought patents were a good thing and worked hard to obtain them. And maybe they were good long ago, but too often these days they serve merely to stifle progress, entrench the positions of giant corporations and enrich those in the legal profession, rather than the actual inventors. After Zip2, when I realized that receiving a patent really just meant that you bought a lottery ticket to a lawsuit, I avoided them whenever possible.
And I feel the exact same way!  When I was an impressionable newly hired engineer at IBM, I thought patents were wonderful expressions of my own creativity.  IBM rewarded me for the work, and recognized them as important contributions to my young career.  Remember, in 2003, IBM was defending the Linux world against evil SCO.  (Confession: I think I read Groklaw every single day.)

Yeah, I filed somewhere around 75 patents in about 4 years, 47 of which have been granted by the USPTO to date.

I'm actually really, really proud of a couple of them.  I was the lead inventor on a couple of early patents defining the invention you might know today as Swype (Android) or Shapewriter (iPhone) on your mobile devices.  In 2003, I called it QWERsive, as it was basically applying "cursive handwriting" to a "qwerty keyboard."  Along with one of my co-inventors, we actually presented a paper at the 27th UNICODE conference in Berlin in 2005, and IBM sold the patent to Lenovo a year later.  (To my knowledge, thankfully that patent has never been enforced, as I use Swype every single day.)

QWERsive

But that enthusiasm evaporated very quickly between 2005 and 2007, as I reviewed thousands of invention disclosures by my IBM colleagues, and hundreds of software patents by IBM competitors in the industry.

I spent most of 2005 working onsite at Red Hat in Westford, MA, and came to appreciate how much more efficiently innovation happened in a totally open source world, free of invention disclosures, black out periods, gag orders, and software patents.  I met open source activists in the free software community, such as Jon maddog Hall, who explained the wake of destruction behind, and the impending doom ahead, in a world full of software patents.

Finally, in 2008, I joined an amazing little free software company called Canonical, which was far too busy releasing Ubuntu every 6 months on time, and building an amazing open source software ecosystem, to fart around with software patents.  To my delight, our founder, Mark Shuttleworth, continues to share the same enlightened view, as he states in this TechCrunch interview (2012):
“People have become confused,” Shuttleworth lamented, “and think that a patent is incentive to create at all.” No one invents just to get a patent, though — people invent in order to solve problems. According to him, patents should incentivize disclosure. Software is not something you can really keep secret, and as such Shuttleworth’s determination is that “society is not benefited by software patents at all.” Software patents, he said, are a bad deal for society. The remedy is to shorten the duration of patents, and reduce the areas people are allowed to patent. “We’re entering a third world war of patents,” Shuttleworth said emphatically. “You can’t do anything without tripping over a patent!” One cannot possibly check all possible patents for your invention, and the patent arms race is not about creation at all.
And while I'm still really proud of some of my ideas today, I'm ever so ashamed that they're patented.

If I could do what Elon Musk did with Tesla's patent portfolio, you have my word, I absolutely would.  However, while my name is listed as the "inventor" on four dozen patents, all of them are "assigned" to IBM (or Lenovo).  That is to say, they're not mine to give, or open up.

What I can do, is speak up, and formally apologize.  I'm sorry I filed software patents.  A lot of them.  I have no intention on ever doing so again.  The system desperately needs a complete overhaul.  Both the technology and business worlds are healthier, better, more innovative environment without software patents.

I do take some consolation that IBM seems to be "one of the good guys", inasmuch as our modern-day IBM has not been as litigious as others, and hasn't, to my knowledge, used any of the patents for which I'm responsible in an offensive manner.

No longer hanging on my wall.  Tucked away in a box in the attic.
But there are certainly those that do.  Patent trolls.

Another former employer of mine, Gazzang, was acquired earlier this month (June 3rd) by Cloudera -- a super sharp, up-and-coming big data open source company with very deep pockets and tremendous market potential.  Want to guess what happened 3 days later?  A super shady patent infringement lawsuit was filed, of course!
Protegrity Corp v. Gazzang, Inc.
Complaint for Patent Infringement
Civil Action No. 3:14-cv-00825; no judge yet assigned. Filed on June 6, 2014 in the U.S. District Court for the District of Connecticut.
Patents in case: 7,305,707, “Method for intrusion detection in a database system” by Mattsson. Prosecuted by Neuner; George W. Cohen; Steven M. Edwards Angell Palmer & Dodge LLP. Includes 22 claims (2 indep.). Was application 11/510,185. Granted 12/4/2007.
Yuck.  And the reality is that happens every single day, and in places where the stakes are much, much higher.  See: Apple v. Google, for instance.

Musk concludes his post:
Technology leadership is not defined by patents, which history has repeatedly shown to be small protection indeed against a determined competitor, but rather by the ability of a company to attract and motivate the world’s most talented engineers. We believe that applying the open source philosophy to our patents will strengthen rather than diminish Tesla’s position in this regard.
What a brave, bold, ballsy, responsible assertion!

I've never been more excited to see someone back up their own rhetoric against software patents, with such a substantial, palpable, tangible assertion.  Kudos, Elon.

Moreover, I've also never been more interested in buying a Tesla.   Coincidence?

Maybe it'll run an open source operating system and apps, too.  Do that, and I'm sold.

:-Dustin

Read more
bmichaelsen

Free Four

The memories of a man in his old age

are the deeds of a man in his prime

– Free Four, Obscured by Clouds, Pink Floyd

I just donated to:

  • the Wikimedia Foundation
  • the OpenBSD project

and became:

Being involved in a project that is heavily driven by donations, I keep reminding myself of the importance of putting my money where my mouth is.

Some of these donations were triggered by recent events and initiatives in these projects. GNOME’s outreach for women program, for example. Or OpenBSD’s bold initiative in starting LibreSSL, which is doing what needed to be done and vitalizing an overlooked area of open source development. Watching them explain the status quo and how they are attacking it reminds me of LibreOffice — beyond the name. Plus, I don’t want to be compared with a My Little Pony character again.

goals of LibreSSL — they remind me of something

Others are already working examples of the long tail, crowdfunding and the meshed society (Wikipedia), or trailblazing to be one (Krautreporter), beyond the world of software. The latter might also have been influenced by one of the last wishes of a man that unexpectedly died way too early. May he rest in peace.


Read more
jdstrand

Last time I discussed AppArmor, I talked about new features in Ubuntu 13.10 and a bit about ApplicationConfinement for Ubuntu Touch. With the release of Ubuntu 14.04 LTS, several improvements were made:

  • Mediation of signals
  • Mediation of ptrace
  • Various policy updates for 14.04, including new tunables, better support for XDG user directories, and Unity7 abstractions
  • Parser policy compilation performance improvements
  • Google Summer of Code (SUSE sponsored) python rewrite of the userspace tools

Signal and ptrace mediation

Prior to Ubuntu 14.04 LTS, a confined process could send signals to other processes (subject to DAC) and ptrace other processes (subject to DAC and YAMA). AppArmor on 14.04 LTS adds mediation of both signals and ptrace which brings important security improvements for all AppArmor confined applications, such as those in the Ubuntu AppStore and qemu/kvm machines as managed by libvirt and OpenStack.

When developing policy for signal and ptrace rules, it is important to remember that AppArmor does a cross check such that AppArmor verifies that:

  • the process sending the signal/performing the ptrace is allowed to send the signal to/ptrace the target process
  • the target process receiving the signal/being ptraced is allowed to receive the signal from/be ptraced by the sender process

Signal(7) permissions use the ‘signal’ rule with the ‘receive/send’ permissions governing signals. PTrace permissions use the ‘ptrace’ rule with the ‘trace/tracedby’ permissions governing ptrace(2) and the ‘read/readby’ permissions governing certain proc(5) filesystem accesses, kcmp(2), futexes (get_robust_list(2)) and perf trace events.

Consider the following denial:

Jun 6 21:39:09 localhost kernel: [221158.831933] type=1400 audit(1402083549.185:782): apparmor="DENIED" operation="ptrace" profile="foo" pid=29142 comm="cat" requested_mask="read" denied_mask="read" peer="unconfined"

This demonstrates that the ‘cat’ binary running under the ‘foo’ profile was unable to read the contents of a /proc entry (in my test, /proc/11300/environ). To allow this process to read /proc entries for unconfined processes, the following rule can be used:

ptrace (read) peer=unconfined,

If the receiving process was confined, the log entry would say ‘peer=”<profile name>”‘ and you would adjust the ‘peer=unconfined’ in the rule to match that in the log denial. In this case, because unconfined processes implicitly can be readby all other processes, we don’t need to specify the cross check rule. If the target process was confined, the profile for the target process would need a rule like this:

ptrace (readby) peer=foo,

Likewise for signal rules, consider this denial:

Jun 6 21:53:15 localhost kernel: [222005.216619] type=1400 audit(1402084395.937:897): apparmor="DENIED" operation="signal" profile="foo" pid=29069 comm="bash" requested_mask="send" denied_mask="send" signal=term peer="unconfined"

This shows that ‘bash’ running under the ‘foo’ profile tried to send the ‘term’ signal to an unconfined process (in my test, I used ‘kill 11300’) and was blocked. Signal rules use ‘receive’ and ‘send’ to determine access, so we can add a rule like so to allow sending of the signal:

signal (send) set=("term") peer=unconfined,

Like with ptrace, a cross-check is performed with signal rules but implicit rules allow unconfined processes to send and receive signals. If pid 11300 were confined, you would adjust the ‘peer=’ in the rule of the foo profile to match the denial in the log, and then adjust the target profile to have something like:

signal (receive) set=("term") peer=foo,

Signal and ptrace rules are very flexible and the AppArmor base abstraction in Ubuntu 14.04 LTS has several rules to help make profiling and transitioning to the new mediation easier:

# Allow other processes to read our /proc entries, futexes, perf tracing and
# kcmp for now
ptrace (readby),
 
# Allow other processes to trace us by default (they will need
# 'trace' in the first place). Administrators can override
# with:
# deny ptrace (tracedby) ...
ptrace (tracedby),
 
# Allow unconfined processes to send us signals by default
signal (receive) peer=unconfined,
 
# Allow us to signal ourselves
signal peer=@{profile_name},
 
# Checking for PID existence is quite common so add it by default for now
signal (receive, send) set=("exists"),

Note the above uses the new ‘@{profile_name}’ AppArmor variable, which is particularly handy with ptrace and signal rules. See man 5 apparmor.d for more details and examples.
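
Once you have added rules like the above to a profile, you can reload just that profile in place and confirm it is loaded; the profile path here is only an example:

$ sudo apparmor_parser -r /etc/apparmor.d/usr.bin.foo
$ sudo aa-status | grep foo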

14.10

Work still remains and some of the things we’d like to do for 14.10 include:

  • Finishing mediation for non-networking forms of IPC (eg, abstract sockets). This will be done in time for the phone release.
  • Have services integrate with AppArmor and the upcoming trust-store to become trusted helpers (also for phone release)
  • Continue work on networking IPC (for 15.04)
  • Continue to work with the upstream kernel on kdbus
  • Work continues on LXC stacking and we hope to have stacked profiles within the current namespace for 14.10. Full support for stacked profiles, where different host and container policy can apply to the same binary at the same time, should be ready by 15.04
  • Various fixes to the python userspace tools for remaining bugs. These will also be backported to 14.04 LTS

Until next time, enjoy!


Filed under: canonical, security, ubuntu, ubuntu-server

Read more
David Murphy (schwuk)

This is really inspiring to me, on several levels: as an Ubuntu member, as a Canonical employee, and as a school governor.

Not only are they deploying Ubuntu and other open-source software to their students, they are encouraging those students to tinker with their laptops, and – better yet – some of those same students are directly involved in the development, distribution, and support of that software for their peers. All of those students will take incredibly valuable experience with them into their future careers.

Well done.

Read more
David Planella

As part of the Ubuntu App Developer Week, I just ran a live on-air session on how to internationalize your Ubuntu apps. Some of the participants on the live chat asked me if I could share the slides somewhere online.

So here they are for your viewing pleasure :) If you’ve got any questions on i18n or Ubuntu app development in general, feel free to ask in the comments or ping me (dpm) on IRC.

The video

The slides

Enjoy!

The post Internationalizing your apps at the Ubuntu App Developer Week appeared first on David Planella.

Read more
David Planella

A sample of the wider Ubuntu Community team, with Canonicalers and volunteer core app developers

After the recent news of Jono stepping down as the Ubuntu Community Manager to seek new challenges at XPRIZE, a new era in Ubuntu begins. Jono’s leadership, passion and drive to continually push the boundaries have been contagious over the years, and have been the catalyst for growing the unique community of individuals that defines Ubuntu today.

Jono is now joining the ranks of non-Canonical Ubuntu members, and while this will change the angle of participation, I’m certain that it won’t change his energy and dedication one bit. But most importantly, it’s a testament to his work that his former team will continue to thrive and take up the torch in pushing those boundaries.

For us, it will be business as usual in the sense of implementing our roadmap, continuing to grow a strong and open community, being innovative in how we do it, and coordinating the logistics around our plans. So not much will be different in that regard, but obviously some organizational bits will change.

As part of the transition, the Ubuntu Community Team at Canonical in full, that is, Michael Hall, Daniel Holbach, Alan Pope, Nicholas Skaggs and myself, will now be hosting the weekly Ubuntu Q&A, starting today at 18:00 UTC on Ubuntu On Air (click here for the time at your location).

The Ubuntu Community Team Q&A

Openness, both in being a transparent and welcoming community, is one of the core values of Ubuntu, and we believe the channels should be always open for a healthy information flow and to help contributors get involved.

As such, the Ubuntu Community Team Q&A will continue to provide a weekly, 1-hour-long session open for participation to anyone who wants to ask their questions about Ubuntu. In fact, as in former editions, you can ask the Community Team just about anything related to Free Software, Technology, or whatever you come up with. As before, the only questions we won’t answer are those related to technical support, where you’ll be much better served using Ask Ubuntu, the Ubuntu forums or IRC.

Join the Ubuntu Community Team Q&A at 18:00 UTC today and ask your questions >

The Ubuntu Online Summit is coming soon!

Also, following the thread of events and participation, the new Ubuntu Online Summit (UOS) is coming up very soon, and it’s an excellent opportunity to learn about getting involved in Ubuntu, organizing or presenting the plans of the different Ubuntu teams for the next months.

UOS will be held on June 10th – 12th and it will be a combination of the former Ubuntu Developer Summit and the more user-facing events we’ve been organizing in the past. This opens the door to a wider audience that can follow a richer mix of developer and user or contributor content.

If you want to learn about the details, check out Michael’s UOS post on how it’s going to work. If you want to contribute and make a difference in Ubuntu, do register a session too!

Looking forward to seeing you soon!

The post A new era for the Ubuntu community team, or business as usual appeared first on David Planella.

Read more
mark

This is a series of posts on reasons to choose Ubuntu for your public or private cloud work & play.

We run an extensive program to identify issues and features that make a difference to cloud users. One result of that program is that we pioneered dynamic image customisation and wrote cloud-init. I’ll tell the story of cloud-init as an illustration of the focus the Ubuntu team has on making your devops experience fantastic on any given cloud.

 

Ever struggled to find the “right” image to use on your favourite cloud? Ever wondered how you can tell if an image is safe to use, what keyloggers or other nasties might be installed? We set out to solve that problem a few years ago and the resulting code, cloud-init, is one of the more visible pieces Canonical designed and built, and very widely adopted.

Traditionally, people used image snapshots to build a portfolio of useful base images. You’d start with a bare OS, add some software and configuration, then snapshot the filesystem. You could use those snapshots to power up fresh images any time you need more machines “like this one”. And that process works pretty amazingly well. There are hundreds of thousands, perhaps millions, of such image snapshots scattered around the clouds today. It’s fantastic. Images for every possible occasion! It’s a disaster. Images with every possible type of problem.

The core issue is that an image is a giant binary blob that is virtually impossible to audit. Since it’s a snapshot of an image that was running, and to which anything might have been done, you will need to look in every nook and cranny to see if there is a potential problem. Can you afford to verify that every binary is unmodified? That every configuration file and every startup script is safe? No, you can’t. And for that reason, that whole catalogue of potential is a catalogue of potential risk. If you wanted to gather useful data sneakily, all you’d have to do is put up an image that advertises itself as being good for a particular purpose and convince people to run it.

There are other issues, even if you create the images yourself. Each image slowly gets out of date with regard to security updates. When you fire it up, you need to apply all the updates since the image was created, if you want a secure machine. Eventually, you’ll want to re-snapshot for a more up-to-date image. That requires administration overhead and coordination, so most people don’t do it.

That’s why we created cloud-init. When your virtual machine boots, cloud-init is run very early. It looks out for some information you send to the cloud along with the instruction to start a new machine, and it customises your machine at boot time. When you combine cloud-init with the regular fresh Ubuntu images we publish (roughly every two weeks for regular updates, and whenever a security update is published), you have a very clean and elegant way to get fresh images that do whatever you want. You design your image as a script which customises the vanilla, base image. And then you use cloud-init to run that script against a pristine, known-good standard image of Ubuntu. Et voila! You now have purpose-designed images of your own on demand, always built on a fresh, secure, trusted base image.
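
As a minimal, concrete illustration (the script contents, image and flavor names are just examples): you write your customisation as user-data, and cloud-init runs it on the instance's first boot.

$ cat setup.sh
#!/bin/sh
# runs once, on first boot of a pristine Ubuntu image
apt-get update
apt-get -y install nginx
echo "built $(date -u)" > /usr/share/nginx/html/build-info.txt

$ nova boot --image ubuntu-14.04-server --flavor m1.small \
      --user-data ./setup.sh my-web-node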

Auditing your cloud infrastructure is now straightforward, because you have the DNA of that image in your script. This is devops thinking, turning repetitive manual processes (hacking and snapshotting) into code that can be shared and audited and improved. Your infrastructure DNA should live in a version control system that requires signed commits, so you know everything that has been done to get you where you are today. And all of that is enabled by cloud-init. And if you want to go one level deeper, check out Juju, which provides you with off-the-shelf scripts to customise and optimise that base image for hundreds of common workloads.

Read more
Michael Hall

Last year the main Ubuntu download page was changed to include a form for users to make a donation to one or more parts of Ubuntu, including to the community itself. Those donations made for “Community projects” were made available to members of our community who knew of ways to use them that would benefit the Ubuntu project.

Every dollar given out is an investment in Ubuntu and the community that built it. This includes sponsoring community events, sending community representatives to those events with booth supplies and giveaway items, purchasing hardware to improve development and testing, and more.

But these expenses don’t cover the time, energy, and talent that went along with them, without which the money itself would have been wasted.  Those contributions, made by the recipients of these funds, can’t be adequately documented in a financial report, so thank you to everybody who received funding for their significant and sustained contributions to Ubuntu.

As part of our commitment to openness and transparency we said that we would publish a report highlighting both the amount of donations made to this category, and how and where that money was being used. Linked below is the first of those reports.

View the Report

Read more
David Planella

Ubuntu emulator guide

Following the initial announcement, the Ubuntu emulator is going to become a primary Engineering platform for development. Quoting Alexander Sack, when ready, the goal is to

[...] start using the emulator for everything you usually would do on the phone. We really want to make the emulator a class A engineering platform for everyone

While the final emulator is still work in progress, we’re continually seeing the improvements in finishing all the pieces to make it a first-class citizen for development, both for the platform itself and for app developers. However, as it stands today, the emulator is already functional, so I’ve decided to prepare a quickstart guide to highlight the great work the Foundations and Phonedations teams (along with many other contributors) are producing to make it possible.

While you should consider this guide a preview, you can already use it to start getting familiar with the emulator for testing, platform development and writing apps.

Requirements

To install and run the Ubuntu emulator, you will need:

  • Ubuntu 14.04 or later (see installation notes for older versions)
  • 512MB of RAM dedicated to the emulator
  • 4GB of disk space
  • OpenGL-capable desktop drivers (most graphics drivers/cards are)

Installing the emulator

If you are using Ubuntu 14.04, installation is as easy as opening a terminal with Ctrl+Alt+T and running these commands, each followed by Enter:

sudo add-apt-repository ppa:ubuntu-sdk-team/ppa && sudo apt-get update
sudo apt-get install ubuntu-emulator

Alternatively, if you are running an older stable release such as Ubuntu 12.04, you can install the emulator by manually downloading its packages first:

  1. Create a folder in your home directory to hold the downloaded packages (for example, a folder named emulator)
  2. Go to the goget-ubuntu-touch packages page in Launchpad
  3. Scroll down to Trusty Tahr and click on the arrow to the left to expand it
  4. Scroll further to the bottom of the page and click on the ubuntu-emulator package corresponding to your architecture (i386 or amd64) to download it into the folder you created
  5. Now go to the Android packages page in Launchpad
  6. Scroll down to Trusty Tahr and click on the arrow to the left to expand it
  7. Scroll further to the bottom of the page and click on the emulator runtime package to download it into the same folder
  8. Open a terminal with Ctrl+Alt+T
  9. Change to the directory where you downloaded the packages
  10. Install the downloaded packages with dpkg (see the sketch after this list)
  11. Once the installation is successful you can close the terminal and remove the folder and its contents
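The terminal part of the steps above (9 and 10) might look like the following sketch; the folder name is only an example:

cd ~/emulator
sudo dpkg -i *.deb
# If dpkg complains about missing dependencies, let apt pull them in:
sudo apt-get -f install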

Installation notes

  • Downloaded images are cached at ~/.cache/ubuntuimage (the standard XDG_CACHE_HOME location).
  • Instances are stored at ~/.local/share/ubuntu-emulator (the standard XDG_DATA_HOME location).
  • While an image upgrade feature is in the works, for now you can simply create an instance of a newer image over the previous one.

Running the emulator

The ubuntu-emulator tool makes it really easy to manage instances and run the emulator. Typically, you'll open a terminal and run these commands the first time you create an instance (where myinstance is the name you've chosen for it):

sudo ubuntu-emulator create myinstance --arch=i386
ubuntu-emulator run myinstance

You can create any instances you need for different purposes. And once the instance has been created, you’ll be generally using the ubuntu-emulator run myinstance command to start an emulator session based on that instance.

Notice how in the command above the --arch parameter was specified to override the default architecture (armhf). Using the i386 arch will make the emulator run at a (much faster) native speed.

Other parameters you might want to experiment with are --scale=0.7 and --memory=720. In these examples, we're scaling down the UI to 70% of its original size (useful for smaller screens) and giving the emulator up to 720MB of memory (on systems with memory to spare).
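For example, both options can be combined in a single invocation (a sketch, assuming they are passed to the run subcommand as above):

ubuntu-emulator run myinstance --scale=0.7 --memory=720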

There are 3 main elements you’ll be interacting with when running the emulator:

  • The phone UI – this is the visual part of the emulator, where you can interact with the UI in the same way you'd do it with a phone. You can use your mouse to simulate taps and slides. Bonus points if you can recognize the phone model the UI skin is based on ;)
  • The remote session on the terminal – upon starting the emulator, a terminal will also be launched alongside it. Log in to the interactive ADB session with the phablet username (the password is the same). You can also launch other terminal sessions using other communication protocols – see the link at the end of this guide for more details.
  • The ubuntu-emulator tool – with this CLI tool, you can manage the lifetime and runtime of Ubuntu images. Common subcommands of ubuntu-emulator include create (to create new instances), destroy (to destroy existing instances), run (as we’ve already seen, to run instances), snapshot (to create and restore snapshots of a given point in time) and more. Use ubuntu-emulator --help to learn about these commands and ubuntu-emulator command --help to learn more about a particular command and its options.
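Beyond the create and run commands shown above, the other subcommands can be explored from the command line; a quick sketch (exact arguments may vary between versions, so check the --help output):

ubuntu-emulator --help
ubuntu-emulator snapshot --help
# Remove an instance you no longer need (like create, this may require root)
sudo ubuntu-emulator destroy myinstance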

Runtime notes

  • Make sure you’ve got enough space to install the emulator and create new instances, otherwise the operation will fail (or take a long time) without warning.
  • At this time, the emulator takes a while to load for the first time. During that time, you’ll see a black screen inside the phone skin. Just wait a bit until it’s finished loading and the welcome screen appears.
  • By default the latest built image from the devel-proposed channel is used. This can be changed during creation with the --channel and --revision options.
  • If your host has a network connection, the emulator will use that transparently, even though the network indicator might show otherwise.
  • To talk to the emulator, you can use standard adb. The emulator should appear under the list of the adb devices command.
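As noted in the last point above, standard adb commands are enough to check that a running emulator is reachable:

adb devices    # the running emulator should appear in this list
adb shell      # open a shell session on the emulator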

Learn more and contribute

I hope this guide has whetted your appetite to start testing the emulator! You can also contribute to making the emulator a first-class target for Ubuntu development. The easiest way is to install it and give it a go. If something is not working, you can then file a bug.

If you want to fix a bug yourself or contribute code, the best way to get started is to subscribe to the Ubuntu phone mailing list and ask the developers there.

If you want to learn more about the emulator, including how to create instance snapshots and other cool features, head out to the Ubuntu Emulator wiki page.

And next… support for the tablet form factor and SDK integration. Can’t wait for those features to land!

The post A quickstart guide to the Ubuntu emulator appeared first on David Planella.

Read more
Michael Hall

A couple of months ago Jono announced the dates for the Ubuntu Online Summit, June 10th – 12th, and those dates are almost upon us now. The schedule is open, the track leads are on board, and all we need now are sessions. And that's where you come in.

Ubuntu Online Summit is a change for us: we're trying to mix the previous online UDS events with our Open Week, Developer Week and User Days events, to bring people from every part of our community together to celebrate, educate, and improve Ubuntu. So in addition to the usual planning sessions we had at UDS, we're also looking for presentations from our various community teams on the work they do, walk-throughs for new users learning how to use Ubuntu, and instructional sessions to help new distro developers, app developers, and cloud devops get the most out of it as a platform.

What we need from you are sessions. It's open to anybody, on any topic, any way you want to do it. The only requirement is that you can start and run a Google+ OnAir Hangout, since those are what provide the live video streaming and recording for the event. There are two ways you can propose a session: the first is to register a Blueprint in Launchpad, which is good for planning sessions that will result in work items; the second is to propose a session directly in Summit, which is good for any kind of session. Instructions for how to do both are available on the UDS Website.

There will be Track Leads available to help you get your session on the schedule, and to provide some technical support if you have trouble getting your session's hangout set up. When you propose your session (or create your Blueprint), try to pick the most appropriate track for it; that will help it get approved and scheduled faster.

Ubuntu Development

Many of the development-oriented tracks from UDS have been rolled into the Ubuntu Development track. So anything that would previously have been in Client, Core/Foundations or Cloud and Server will be in this one track now. The track leads come from all parts of Ubuntu development, so whatever your session's topic, there will be a lead who is familiar with it.

Track Leads:

  • Łukasz Zemczak
  • Steve Langasek
  • Leann Ogasawara
  • Antonio Rosales
  • Marc Deslauriers

Application Development

Introduced a few cycles back, the Application Development track will continue to have a focus on improving the Ubuntu SDK, tools and documentation we provide for app developers.  We also want to introduce sessions focused on teaching app development using the SDK, the various platform services available, as well as taking a deeper dive into specific parts of the Ubuntu UI Toolkit.

Track Leads:

  • Michael Hall
  • David Planella
  • Alan Pope
  • Zsombor Egri
  • Nekhelesh Ramananthan

Cloud DevOps

This is the counterpart of the Application Development track for those with an interest in the cloud.  This track will have a dual focus: planning improvements to DevOps tools like Juju, and bringing devops practitioners up to speed on how to use them in their own cloud deployments.  Learn how to write charms, create bundles, and manage everything in a variety of public and private clouds.

Track Leads:

  • Jorge Castro
  • Marco Ceppi
  • Patricia Gaughen
  • Jose Antonio Rey

Community

The community track has been a staple of UDS for as long as I can remember, and it's still here in the Ubuntu Online Summit.  However, just like the other tracks, we're looking beyond just planning ways to improve the community structure and processes.  This time we also want to have sessions showing users how they can get involved in the Ubuntu community, what teams are available, and what tools they can use in the process.

Track Leads:

  • Daniel Holbach
  • Jose Antonio Rey
  • Laura Czajkowski
  • Svetlana Belkin
  • Pablo Rubianes

Users

This is a new track and one I’m very excited about. We are all users of Ubuntu, and whether we’ve been using it for a month or a decade, there are still things we can all learn about it. The focus of the Users track is to highlight ways to get the most out of Ubuntu, on your laptop, your phone or your server.  From detailed how-to sessions, to tips and tricks, and more, this track can provide something for everybody, regardless of skill level.

Track Leads:

  • Elizabeth Krumbach Joseph
  • Nicholas Skaggs
  • Valorie Zimmerman

So once again, it’s time to get those sessions in.  Visit this page to learn how, then start thinking of what you want to talk about during those three days.  Help the track leads out by finding more people to propose more sessions, and let’s get that schedule filled out. I look forward to seeing you all at our first ever Ubuntu Online Summit.

Read more
Michael Hall

I've just finished the last day of a week-long sprint for Ubuntu application development. There were many people here: designers, SDK developers, QA folks and, what excited me the most, several of the Core Apps developers from our community!

I haven't been in attendance at many conferences over the past couple of years, and without an in-person UDS I haven't had a chance to meet up and hang out with anybody outside of my own local community. So this was a very nice treat for me personally to spend the week with such awesome and inspiring contributors.

It wasn’t a vacation though, sprints are lots of work, more work than UDS.  All of us were jumping back and forth between high information density discussions on how to implement things, and then diving into some long heads-down work to get as much implemented as we could. It was intense, and now we’re all quite tired, but we all worked together well.

I was particularly pleased to see the community guys jumping right in and thriving in what could have very easily been an overwhelming event. Not only did they all accomplish a lot of work, fix a lot of bugs, and implement some new features, but they also gave invaluable feedback to the developers of the toolkit and tools. They never cease to amaze me with their talent and commitment.

It was a little bitter-sweet though, as this was also the last sprint with Jono at the head of the community team.  As most of you know, Jono is leaving Canonical to join the XPrize foundation.  It is an exciting opportunity to be sure, but his experience and his insights will be sorely missed by the rest of us. More importantly though he is a friend to so many of us, and while we are sad to see him leave, we wish him all the best and can’t wait to hear about the things he will be doing in the future.

Read more
bmichaelsen

Train Stops

And the sons of pullman porters and the sons of engineers
Ride their father’s magic carpets made of steel
Mothers with their babes asleep are rockin’ to the gentle beat
And the rhythm of the rails is all they feel

– The City of New Orleans, Willie Nelson interpreting Steve Goodman

So, LibreOffice does its releases on a train release schedule, and since we recently modified the schedule a bit (by putting out the alpha1 release earlier), I took the opportunity to take a closer look and explain a bit of what we are doing. With this, every six months of LibreOffice development currently looks roughly like this:
week after x.y.0 | development     | release candidates             | finalized releases
                 |                 | fresh        | stable          | fresh     | stable
        0        |                 |              |                 | x.y.0     |
        1        |                 | x.y.1~rc1    | x.(y-1).4~rc1   |           |
        2        |                 |              |                 |           |
        3        |                 | x.y.1~rc2    | x.(y-1).4~rc2   |           |
        4        |                 |              |                 | x.y.1     | x.(y-1).4
        5        |                 |              |                 |           |
        6        |                 | x.y.2~rc1    |                 |           |
        7        |                 |              |                 |           |
        8        |                 | x.y.2~rc2    | x.(y-1).5~rc1   |           |
        9        |                 |              |                 | x.y.2     |
       10        |                 |              | x.(y-1).5~rc2   |           |
       11        |                 | x.y.3~rc1    |                 |           | x.(y-1).5
       12        |                 |              |                 |           |
       13        | x.(y+1)~alpha1  | x.y.3~rc2    |                 |           |
       14        |                 |              |                 | x.y.3     |
       15        |                 |              |                 |           |
       16        |                 |              |                 |           |
       17        |                 |              |                 |           |
       18        | x.(y+1)~beta1   |              |                 |           |
       19        |                 |              |                 |           |
       20        | x.(y+1)~beta2   |              | x.(y-1).6~rc1   |           |
       21        |                 |              |                 |           |
       22        |                 | x.(y+1)~rc1  | x.(y-1).6~rc2   |           |
       23        |                 |              |                 |           | x.(y-1).6
       24        |                 | x.(y+1)~rc2  |                 |           |
       25        |                 | x.(y+1)~rc3  |                 |           |

The last two columns are most visible to most visitors of the LibreOffice website. Those are the versions found on the LibreOffice Fresh and LibreOffice Stable download pages. We are roughly at week 18 after the 4.2.0 release now, and the versions available are 4.2.4 fresh and 4.1.6 stable. A careful reader will note that according to that schedule we should be at 4.2.3 and 4.1.5. That is true, but the 4.2 series still had an extra 4.2.1 intermediate release to adjust the schedule of 4.2 in the direction of the current plan. This is not expected for future releases (also note that there is always some flexibility in the plan to allow for holidays etc.)

If you count all the prereleases, release candidates and releases, you will find that we do 25 of those in 26 weeks. Besides the fact that this is a lot of work for release engineers, one might wonder if anyone can keep up with that, and if so, how? The answer to that depends on how you are using LibreOffice.

self deployment on LibreOffice fresh

If you are a user or a small business installing LibreOffice yourself, you will probably run LibreOffice fresh, and the table above simplifies for you as follows:

week after x.y.0 | development     | release candidates | finalized releases
        0        |                 |                    | x.y.0
        1        |                 | x.y.1~rc1          |
        4        |                 |                    | x.y.1
        6        |                 | x.y.2~rc1          |
        9        |                 |                    | x.y.2
       11        |                 | x.y.3~rc1          |
       14        |                 |                    | x.y.3
       18        | x.(y+1)~beta1   |                    |
       20        | x.(y+1)~beta2   |                    |
       22        |                 | x.(y+1)~rc1        |
       24        |                 | x.(y+1)~rc2        |
       25        |                 | x.(y+1)~rc3        |

The last column shows the releases you are running. If you are a member of the LibreOffice community, it would be very helpful if you also spend some time during this six-month period on three actions:

  • running at least one of the release candidates in the table (available for download here) before the final is released.
  • running at least one of the beta releases in the table. Note that there will be a bug hunting session on the 4.3.0 beta release this week, which will help you get started.
  • running a nightly build once anywhere in the weeks 1-18. Note that if you are getting excited about seeing the latest and greatest builds while they are still steaming, there are tools that can help you with this on Linux and Windows.

If you do each of these three things once in the timeframe of six months and report any issues you find, you are already helping LibreOffice a lot, and you are making sure that the finalized releases of the fresh series not only contain all the latest features, but are also free of severe regressions.

bigger deployments on LibreOffice stable

If you are not installing LibreOffice yourself, but instead have a major deployment administrated centrally, things are a bit different. You might be more conservative and interested in the releases from LibreOffice stable. And you probably have professional support from a certified developer or a company employing certified developers.

week after x.y.0 | development     | release candidates | finalized releases
        1        |                 | x.(y-1).4~rc1      |
        4        |                 |                    | x.(y-1).4
        8        |                 | x.(y-1).5~rc1      |
       11        |                 |                    | x.(y-1).5
       13        | x.(y+1)~alpha1  |                    |
       18        | x.(y+1)~beta1   |                    |
       20        |                 | x.(y-1).6~rc1      |
       23        |                 |                    | x.(y-1).6

If you intend to deploy one series of LibreOffice (e.g. 4.3), there are two things that are highly recommended:

  • make the alpha and beta releases available to interested volunteers in your deployment early. They might find bugs or regressions that are specific to your use of the software.
  • make the release candidates of versions that you intend to deploy available early to your users.

Of these two actions, the first is by far the more important: it identifies issues early in the life cycle and gives both your support provider and the LibreOffice developer community at large time to resolve them. In fact, I would argue that if you have a major deployment, the only excuse for not making prereleases available is that you are making nightly builds available instead.

Ubuntu

So, Ubuntu qualifies as a “bigger deployment” and I have to take care of LibreOffice on it. Also people want to be able to run the latest and greatest LibreOffice releases from the LibreOffice fresh series. Do I follow my recommendations here? Yes, mostly I do:

  • both the LibreOffice fresh and LibreOffice stable series are available from PPAs for Ubuntu and are updated regularly and quickly once an rc2 is available.
  • prereleases are made available as bibisect repositories rather quickly (built on Ubuntu 12.04 LTS). In addition, fully packaged versions of LibreOffice are built in the prereleases PPA, starting as early as beta1.
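As a sketch, enabling and tracking the fresh series from a PPA looks roughly like this (the PPA name is an assumption on my part; check the LibreOffice team's pages on Launchpad for the exact PPAs for fresh, stable and prereleases):

sudo add-apt-repository ppa:libreoffice/ppa   # assumed to be the LibreOffice fresh PPA
sudo apt-get update
sudo apt-get dist-upgrade                     # pulls in the newer LibreOffice packages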

So, you are invited to run or test builds from these PPAs — or download the bibisect repositories — to keep LibreOffice releases coming in the steady and stable fashion they do. Finally, there is a bug hunting session for LibreOffice this week and, as said above, no matter if you are running a huge deployment or installing on your own, you are helping LibreOffice — and yourself, as a user of LibreOffice — a lot by testing the prereleases.


Read more
Prakash Advani

Ubuntu 32-Bit or 64-Bit?

I often get asked this question: 32 or 64-bit? In the past I have told people to stick to 32-bit because of compatibility issues, but things have changed now. Quoting from Phoronix, which recently did a benchmark test comparing 32-bit and 64-bit:

Going back years we have run 32-bit vs. 64-bit Linux benchmarks. While the results seldom change, we keep running them as the question of choosing between a 32-bit and 64-bit Linux distribution image is still a popular question… These tests drive in a surprising amount of traffic and I continue to be flabbergasted by the number of people still asking this question when nearly all modern x86 Intel/AMD hardware fully supports x86_64 and it generally means much better performance. Usually the only caveat in not using a 64-bit Linux image is if running a system with less than 2GB of RAM.

In the past there were issues surrounding the Java and Flash support for 64-bit Linux along with an assortment of other possible problems (e.g. with Wine), but all those major issues are a matter of the past. 64-bit Linux is in great shape and as long as you have a decent amount of RAM you really should be running 64-bit Linux.

If you are still in doubt, read the full article for the benchmark results.
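If you want to check your own machine first, here is a quick sketch using standard tools (the lm flag in /proc/cpuinfo indicates 64-bit "long mode" support):

uname -m                       # architecture of the currently installed system
grep -qw lm /proc/cpuinfo && echo "CPU supports 64-bit" || echo "CPU is 32-bit only"
free -m                        # as quoted above, less than 2GB of RAM is the usual caveat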

Read more