Canonical Voices

Posts tagged with 'ubuntu'

Dustin Kirkland


Awww snap!

That's right!  Snappy Ubuntu images are now on AWS, for your EC2 computing pleasure.

Enjoy this screencast as we start a Snappy Ubuntu instance in AWS, and install the xkcd-webserver package.


And a transcript of the commands follows below.

kirkland@x230:/tmp⟫ cat cloud.cfg
#cloud-config
snappy:
  ssh_enabled: True
kirkland@x230:/tmp⟫ aws ec2 describe-images \
> --region us-east-1 \
> --image-ids ami-5c442634

{
"Images": [
{
"ImageType": "machine",
"Description": "ubuntu-core-devel-1418912739-141-amd64",
"Hypervisor": "xen",
"ImageLocation": "ucore-images/ubuntu-core-devel-1418912739-141-amd64.manifest.xml",
"SriovNetSupport": "simple",
"ImageId": "ami-5c442634",
"RootDeviceType": "instance-store",
"Architecture": "x86_64",
"BlockDeviceMappings": [],
"State": "available",
"VirtualizationType": "hvm",
"Name": "ubuntu-core-devel-1418912739-141-amd64",
"OwnerId": "649108100275",
"Public": false
}
]
}
kirkland@x230:/tmp⟫
kirkland@x230:/tmp⟫ # NOTE: This AMI will almost certainly have changed by the time you're watching this ;-)
kirkland@x230:/tmp⟫ clear
kirkland@x230:/tmp⟫ aws ec2 run-instances \
> --region us-east-1 \
> --image-id ami-5c442634 \
> --key-name id_rsa \
> --instance-type m3.medium \
> --user-data "$(cat cloud.cfg)"
{
"ReservationId": "r-c6811e28",
"Groups": [
{
"GroupName": "default",
"GroupId": "sg-d5d135bc"
}
],
"OwnerId": "357813986684",
"Instances": [
{
"KeyName": "id_rsa",
"PublicDnsName": null,
"ProductCodes": [],
"StateTransitionReason": null,
"LaunchTime": "2014-12-18T17:29:07.000Z",
"Monitoring": {
"State": "disabled"
},
"ClientToken": null,
"StateReason": {
"Message": "pending",
"Code": "pending"
},
"RootDeviceType": "instance-store",
"Architecture": "x86_64",
"PrivateDnsName": null,
"ImageId": "ami-5c442634",
"BlockDeviceMappings": [],
"Placement": {
"GroupName": null,
"AvailabilityZone": "us-east-1e",
"Tenancy": "default"
},
"AmiLaunchIndex": 0,
"VirtualizationType": "hvm",
"NetworkInterfaces": [],
"SecurityGroups": [
{
"GroupName": "default",
"GroupId": "sg-d5d135bc"
}
],
"State": {
"Name": "pending",
"Code": 0
},
"Hypervisor": "xen",
"InstanceId": "i-af43de51",
"InstanceType": "m3.medium",
"EbsOptimized": false
}
]
}
kirkland@x230:/tmp⟫
kirkland@x230:/tmp⟫ aws ec2 describe-instances --region us-east-1 | grep PublicIpAddress
"PublicIpAddress": "54.145.196.209",
kirkland@x230:/tmp⟫ ssh -i ~/.ssh/id_rsa ubuntu@54.145.196.209
ssh: connect to host 54.145.196.209 port 22: Connection refused
255 kirkland@x230:/tmp⟫ ssh -i ~/.ssh/id_rsa ubuntu@54.145.196.209
The authenticity of host '54.145.196.209 (54.145.196.209)' can't be established.
RSA key fingerprint is 91:91:6e:0a:54:a5:07:b9:79:30:5b:61:d4:a8:ce:6f.
No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '54.145.196.209' (RSA) to the list of known hosts.
Welcome to Ubuntu Vivid Vervet (development branch) (GNU/Linux 3.16.0-25-generic x86_64)

* Documentation: https://help.ubuntu.com/

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

Welcome to the Ubuntu Core rolling development release.

* See https://ubuntu.com/snappy

It's a brave new world here in snappy Ubuntu Core! This machine
does not use apt-get or deb packages. Please see 'snappy --help'
for app installation and transactional updates.

To run a command as administrator (user "root"), use "sudo ".
See "man sudo_root" for details.

ubuntu@ip-10-153-149-47:~$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=1923976k,nr_inodes=480994,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=385432k,mode=755)
/dev/xvda1 on / type ext4 (ro,relatime,data=ordered)
/dev/xvda3 on /writable type ext4 (rw,relatime,discard,data=ordered)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,mode=755)
tmpfs on /etc/fstab type tmpfs (rw,nosuid,noexec,relatime,mode=755)
/dev/xvda3 on /etc/systemd/system type ext4 (rw,relatime,discard,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset,clone_children)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
tmpfs on /etc/machine-id type tmpfs (ro,relatime,size=385432k,mode=755)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=22,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/xvda3 on /etc/hosts type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/sudoers.d type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /root type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/click/frameworks type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /usr/share/click/frameworks type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/systemd/snappy type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/systemd/click type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/initramfs-tools type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/writable type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/ssh type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/tmp type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/apparmor type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/cache/apparmor type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/apparmor.d/cache type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/ufw type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/log type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/system-image type ext4 (rw,relatime,discard,data=ordered)
tmpfs on /var/lib/sudo type tmpfs (rw,relatime,mode=700)
/dev/xvda3 on /var/lib/logrotate type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/dhcp type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/dbus type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/cloud type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/apps type ext4 (rw,relatime,discard,data=ordered)
tmpfs on /mnt type tmpfs (rw,relatime)
tmpfs on /tmp type tmpfs (rw,relatime)
/dev/xvda3 on /apps type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /home type ext4 (rw,relatime,discard,data=ordered)
/dev/xvdb on /mnt type ext3 (rw,relatime,data=ordered)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=385432k,mode=700,uid=1000,gid=1000)
ubuntu@ip-10-153-149-47:~$ mount | grep " / "
/dev/xvda1 on / type ext4 (ro,relatime,data=ordered)
ubuntu@ip-10-153-149-47:~$ sudo touch /foo
touch: cannot touch ‘/foo’: Read-only file system
ubuntu@ip-10-153-149-47:~$ sudo apt-get update
Ubuntu Core does not use apt-get, see 'snappy --help'!
ubuntu@ip-10-153-149-47:~$ sudo snappy --help
Usage:snappy [-h] [-v]
{info,versions,search,update-versions,update,rollback,install,uninstall,tags,build,chroot,framework,fake-version,nap}
...

snappy command line interface

optional arguments:
-h, --help show this help message and exit
-v, --version Print this version string and exit

Commands:
{info,versions,search,update-versions,update,rollback,install,uninstall,tags,build,chroot,framework,fake-version,nap}
info
versions
search
update-versions
update
rollback undo last system-image update.
install
uninstall
tags
build
chroot
framework
fake-version ==SUPPRESS==
nap ==SUPPRESS==
ubuntu@ip-10-153-149-47:~$ sudo snappy info
release: ubuntu-core/devel
frameworks:
apps:
ubuntu@ip-10-153-149-47:~$ sudo snappy versions -a
Part Tag Installed Available Fingerprint Active
ubuntu-core edge 141 - 7f068cb4fa876c *
ubuntu@ip-10-153-149-47:~$ sudo snappy search docker
Part Version Description
docker 1.3.2.007 The docker app deployment mechanism
ubuntu@ip-10-153-149-47:~$ sudo snappy install docker
docker 4 MB [=============================================================================================================] OK
Part Tag Installed Available Fingerprint Active
docker edge 1.3.2.007 - b1f2f85e77adab *
ubuntu@ip-10-153-149-47:~$ sudo snappy versions -a
Part Tag Installed Available Fingerprint Active
ubuntu-core edge 141 - 7f068cb4fa876c *
docker edge 1.3.2.007 - b1f2f85e77adab *
ubuntu@ip-10-153-149-47:~$ sudo snappy search webserver
Part Version Description
go-example-webserver 1.0.1 Minimal Golang webserver for snappy
xkcd-webserver 0.3.1 Show random XKCD compic via a build-in webserver
ubuntu@ip-10-153-149-47:~$ sudo snappy install xkcd-webserver
xkcd-webserver 21 kB [=====================================================================================================] OK
Part Tag Installed Available Fingerprint Active
xkcd-webserver edge 0.3.1 - 3a9152b8bff494 *
ubuntu@ip-10-153-149-47:~$ exit
logout
Connection to 54.145.196.209 closed.
kirkland@x230:/tmp⟫ ec2-instances
i-af43de51 ec2-54-145-196-209.compute-1.amazonaws.com
kirkland@x230:/tmp⟫ ec2-terminate-instances i-af43de51
INSTANCE i-af43de51 running shutting-down
kirkland@x230:/tmp⟫

Cheers!
Dustin

Read more
Michael Hall

There’s an exchange in American political debate that is as popular as it is wrong: one side appeals to our country’s democratic ideal, and the other side immediately counters with “The United States is a Republic, not a Democracy”. I’ve noticed a similar misunderstanding happening in open source culture around the phrase “meritocracy” and the negatively charged “oligarchy”. In both cases, though, these are not mutually exclusive terms. In fact, they don’t even describe the same thing.

Authority

One of these terms describes where the authority to lead (or govern) comes from. In US politics, that’s the term “republic”, which means that the authority of the government is given to it by the people (as opposed to divine right, force of arms, or inheritance). For open source, this is where “meritocracy” fits in: it describes the authority to lead and make decisions as coming from the “merit” of those invested with it. Now, merit is hard to define objectively, and in practice it’s the subjective opinion of those who can direct a project’s resources that decides who has “merit” and who doesn’t. But it is still an important distinction from projects where the authority to lead comes from ownership (either by the individual or their employer) of a project.

Enfranchisement

History can easily provide a long list of Republics which were not representative of the people. That’s because even if authority comes from the people, it doesn’t necessarily come from all of the people. The USA can be accurately described as a democracy, in addition to a republic, because participation in government is available to (nearly) all of the people. Open source projects, even if they are in fact a meritocracy, will vary in what percentage of their community are allowed to participate in leading them. As I mentioned above, who has merit is determined subjectively by those who can direct a project’s resources (including human resources), and if a project restricts that to only a select group it is in fact also an oligarchy.

Balance and Diversity

One of the criticisms leveled against meritocracies is that they don’t produce diversity in a project or community. While this is technically true, it’s not a failing of meritocracy, it’s a failing of enfranchisement, which as has been described above is not what the term meritocracy defines. It should be clear by now that meritocracy is a spectrum, ranging from the democratic on one end to the oligarchic on the other, with a wide range of options in between.

The Ubuntu project is, in most areas, a meritocracy. We are not, however, a democracy where the majority opinion rules the whole. Nor are we an oligarchy, where only a special class of contributors have a voice. We like to use the term “do-ocracy” to describe ourselves, because enfranchisement comes from doing, meaning making a contribution. And while it is limited to those who do make contributions, being able to make those contributions in the first place is open to anybody. It is important for us, and part of my job as a Community Manager, to make sure that anybody with a desire to contribute has the information, resources, and access to do so. That is what keeps us from sliding towards the oligarchic end of the spectrum.

 

Read more
Dustin Kirkland


As promised last week, we're now proud to introduce Ubuntu Snappy images on another of our public cloud partners -- Google Compute Engine.
In the video below, you can join us walking through the instructions we have published here.
Snap it up!
:-Dustin

Read more
Daniel Holbach

For some time we have had training materials available for learning how to write Ubuntu apps.  We’ve had a number of folks organising App Dev School events in their LoCo team. That’s brilliant!

What’s new now are training materials for developing scopes!

It’s actually not that hard. If you have a look at the workshop, you can prepare yourself quite easily for giving the session at a local event.

As we are working on an updated developer site right now, take a look at the following pages in the meantime if you’re interested in running such a session yourself:

I would love to get feedback, so please let me know how the materials work out for you!

Read more
Daniel Holbach

I’m very happy that folks took notes during and after the meeting to bring up their ideas, thoughts, concerns and plans. It got a bit unwieldy, so Elfy put up a pad which summarises it and is meant to discuss actions and proposals.

Today we are going to have a meeting to discuss what’s on the “actions” pad. That’s why I thought it’d be handy to put together a bit of a summary of what people generally brought up. They’re not my thoughts, I’m just putting them up for further discussion.

Problem statements

  • Feeling that people innovate *with* Ubuntu, not *in* Ubuntu.
  • Perception of contributor drop in “older” parts of the community.
    • Less activity at UDS/vUDS/UOS events (this was discussed at UOS too; maybe we need a committee which finds a new vision for Ubuntu Community Planning?)
    • Less activity in LoCos (lacking a sense of purpose?)
    • No drop in members/developers.
  • Less activity in Canonical-led projects.
  • We don’t spend marketing money on social media. Build a pavement online.
  • Downloading a CD image is too much of a barrier for many.
  • Our “community infrastructure” did not scale with the amount of users.
  • Some discussion about it being hard becoming a LoCo team. Bureaucracy from the LoCo Council.
  • We don’t have enough time to train newcomers.
  • Language barriers make it hard for some to get involved.
  • Canonical does a bad job announcing their presence at events.

Questions

  • Why are fewer people innovating in Ubuntu? Is Canonical driving too much of Ubuntu?
  • Why aren’t more folks stepping up into leadership positions? Mentoring? Lack of opportunities? More delegation? Do leaders just come in and lead because they’re interested?
  • Lack of planning? Do we re-plan things at UOS events, because some stuff never gets done? Need more follow-through? More assessment?

Proposals

  • community.ubuntu.com: More clearly indicate Canonical-led projects? Detail active projects, with point of contact, etc? Clean up moribund projects.
  • Make Ubuntu events more about “doing things with Ubuntu”?
  • Ubuntu Leadership Mentoring programme.
  • Form more of an Ubuntu ecosystem, allowing people to earn money with Ubuntu.

Join the hangout on ubuntuonair.com on Friday, 12th December 2014, 16 UTC.

Read more
Daniel Holbach

It’s fantastic that we have more discussion about where we want our community to go. We get ideas out of it, people communicate and get a common understanding of issues. Jono’s blog post and the ubuntu-community-team mailing list generated a lot of good stuff already. Last week we had an IRC meeting with the CC and discussed governance and leadership there.

We took quite a bit of notes, and Elfy set up a doc where we note down actions. I would like to suggest we have

Please

  • use Elfy’s actions doc for submitting agenda items,
  • make sure your agenda item is a concrete proposal or something which could be turned into work items,
  • make sure you’re there,
  • add your name to it!

Looking forward to seeing you there! :-)

Read more
Dustin Kirkland



A couple of months ago, I re-introduced an old friend -- Ubuntu JeOS (Just enough OS) -- the smallest (merely 63MB compressed!) functional OS image that we can still call “Ubuntu”.  In fact, we call it Ubuntu Core.

That post was a prelude to something we’ve been actively developing at Canonical for most of 2014 -- Snappy Ubuntu Core!  Snappy Ubuntu combines the best of the ground-breaking image-based Ubuntu remix known as Ubuntu Touch for phones and tablets with the base Ubuntu server operating system trusted by millions of instances in the cloud.

Snappy introduces transactional updates and atomic, image based workflows -- old ideas implemented in databases for decades -- adapted to Ubuntu cloud and server ecosystems for the emerging cloud design patterns known as microservice architectures.

The underlying, base operating system is a very lean Ubuntu Core installation, running on a read-only system partition, much like your iOS, Android, or Ubuntu phone.  One or more “frameworks” can be installed through the snappy command, which is an adaptation of the click packaging system we developed for the Ubuntu Phone.  Perhaps the best sample framework is Docker.  Applications are also packaged and installed using snappy, but apps run within frameworks.  This means that any of the thousands of Docker images available in DockerHub are trivially installable as snap packages, running on the Docker framework in Snappy Ubuntu.

Take Snappy for a Drive


You can try Snappy for yourself in minutes!

You can download Snappy and launch it in a local virtual machine like this:

$ wget http://cdimage.ubuntu.com/ubuntu-core/preview/ubuntu-core-alpha-01.img
$ kvm -m 512 -redir :2222::22 -redir :4443::443 ubuntu-core-alpha-01.img

Then, SSH into it with password 'ubuntu':

$ ssh -p 2222 ubuntu@localhost

At this point, you might want to poke around the system.  Take a look at the mount points, and perhaps try to touch or modify some files.


$ sudo rm /sbin/init
rm: cannot remove ‘/sbin/init’: Permission denied
$ sudo touch /foo
touch: cannot touch ‘/foo’: Permission denied
$ apt-get install docker
apt-get: command not found

Rather, let's have a look at the new snappy package manager:

$ sudo snappy --help



And now, let’s install the Docker framework:

$ sudo snappy install docker

At this point, we can do essentially anything available in the Docker ecosystem!

Now, we’ve created some sample Snappy apps using existing Docker containers.  For one example, let’s now install OwnCloud:

$ sudo snappy install owncloud

This will take a little while to install, but eventually, you can point a browser at your own private OwnCloud image, running within a Docker container, on your brand new Ubuntu Snappy system.
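If you launched the image with the kvm command above, guest port 443 was redirected to host port 4443, so you can sanity-check the service from the host with curl. This is just a sketch; the `|| true` keeps the probe non-fatal while OwnCloud is still starting up:

```shell
# Probe OwnCloud through the port redirect set up by kvm
# (-redir :4443::443). -k accepts the self-signed certificate,
# -I fetches headers only, and "|| true" keeps the check
# non-fatal while the service is still coming up.
curl -k -I https://localhost:4443/ || true
```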

We can also update the entire system with a simple command and a reboot:
$ sudo snappy versions
$ sudo snappy update
$ sudo reboot

And we can rollback to the previous version!
$ sudo snappy rollback
$ sudo reboot

Here's a short screencast of all of the above...


While the downloadable image is available for your local testing today, you will very soon be able to launch Snappy Ubuntu instances in your favorite public (Azure, GCE, AWS) and private clouds (OpenStack).


Enjoy!
Dustin

Read more
jdstrand

Ubuntu Core with Snappy was recently announced, and a key ingredient for snappy is security. Snappy applications are confined by AppArmor, and the confinement story for snappy is an evolution of the security model for Ubuntu Touch. The basic concepts for confined applications and the AppStore model pertain to snappy applications as well. In short, snappy applications are confined using AppArmor by default, and this is achieved through a system that is easy to understand, easy to use, and developer-friendly. Read the snappy security specification for all the nitty gritty details.

A developer doc will be published soon.


Filed under: canonical, security, ubuntu, ubuntu-server

Read more
Nicholas Skaggs

I thought I would add a little festivity to the holiday season, quality style. In case your holidays just are not the same without a little quality in your life, allow me to share how you can get involved.

There are opportunities for every role listed on the QA wiki. Testers and test writers are both needed. Testing and writing manual tests can be learned by anyone; no coding required. That said, if you have skills or interest in technical work, I would encourage you to help out. You will learn by doing and get help from others while you do it.

Now onto the good stuff! What can you do to help ubuntu this cycle from a quality perspective?

Dogfooding
There is an ever present need for brave folks willing to simply run the development version of ubuntu and use it as a daily machine throughout the cycle. It's one of the best ways for us as a community to uncover bugs and issues, in particular things that regress from the previous release. Upgrade to vivid today and see what you can break!

QATracker
This tool is written in drupal7 and runs the iso.qa.ubuntu.com and packages.qa.ubuntu.com sites. These sites are used to record and view the results of all of our manual testing efforts. Currently dkessel is leading the effort on implementing some needed UI changes. The code and more information about the project can be found on launchpad. The tracker is one of our primary tools and needs your help to become friendly for everyone to use.

In addition a charm would be useful to simplify setting up a development environment. The charm can be based upon the existing drupal charm. At the moment this work is ready for someone to jump in.

Unity8
Running unity8 as a full-time desktop is a personal goal I have for this cycle. I hope some others might also want to be early adopters and join me in this goal. For now you can help by testing the unity8 desktop. Have a look at running unity in lxc for an easy way to run unity8 today on your machine. Use it, test it, and offer feedback. I'll be talking more about unity8 as the cycle progresses and opportunities to test new features aimed at the desktop appear.

Core Apps
The core apps project is an excellent way to get involved. These applications have been lovingly developed by community members just like you. Many of the teams are looking for help in writing tests and for someone who can help bring a testing mindset and eye to the work. As of this writing specifically the docviewer, terminal and calculator teams would love your help. The core apps hackdays are happening this week, drop by and introduce yourself to get started!

Manual Tests
Like the sound of writing tests but the idea of writing code turns you off? Manual tests are needed as well! They are written in English and are easy to understand and write. Manual tests include everything you see on the qatracker and are managed as a launchpad project. This means you can pick a bug and "fix it" by submitting a merge request. The bugs involve both fixing existing tests as well as requests for new testcases.

Images
As always there are images that need testing. Testing milestones occur later in the cycle which involve everyone helping to test a specific set of images. In the meantime, daily images are generated that have made it through the automated tests and are ready for manual testing. Booting an image in a live session is a great way to check for regressions on your machine. Doing this early in the cycle can help make sure your hardware and others like it experience a regression free upgrade when the time comes.

Triaging
After subjecting software to testing, bugs are naturally found. These bugs then need to be verified and triaged. The bugsquadders, as they are called, would be happy to help you learn to categorize or triage bugs and do other tasks.

No matter how you choose to get involved, feel free to contact me for help if needed. Most of all, Happy Testing!


Read more
Daniel Holbach

The call for an Ubuntu Foundation has come up again. It has been discussed many times before, ever since an announcement was made many years ago which left a number of people confused about the state of things.

The way I understood the initial announcement was that a trust had been set up, so that if aliens ever kidnapped our fearless leader, or if he decided that beekeeping was more interesting than Ubuntu, we could still go on and bring the best flavour of linux to the world.

Ok, now back to the current discussion. An Ubuntu Foundation seems to have quite an appeal to some. The question to me is: which problems would it solve?

Looking at it from a very theoretical point of view, an Ubuntu foundation could be a place where you separate “commercial” from “public” interests, but how would this separation work? Who would work for which of the entities? Would people working for the Ubuntu foundation have to review Canonical’s paperwork before they can close deals? Would there be a board where decisions have to be pre-approved? Which separation would generally happen?

Right now, Ubuntu’s success is closely tied to Canonical’s success. I consider this a good thing. With every business win of Canonical, Ubuntu gets more exposure in the world. Canonical’s great work in the support team, in the OEM sector or when closing deals with governments benefits Ubuntu to a huge degree. It’s like two sides of a coin right now. Also: Canonical pays the bills for Ubuntu’s operations. Data centers, engineers, designers and others have to be paid.

In theory it all sounds fine: “you get to have a say”, “more transparency”, etc. I don’t think many realise, though, that this will mean that additional people will have to sift through legal and other documents, that more people will be busy writing reports and summarising discussions, that there will be more need for admin, that customers will have to wait longer, and that this will in general cost more time and money.

I believe that bringing in a new layer will create incredible amounts of work, open up endless possibilities for politics, and easily bring things to a standstill.

Will this fix Ubuntu’s problems? I absolutely don’t think so. Could we be more open, more inspiring and more inviting? Sure, but demanding more transparency and more separation is not going to bring that.

Read more
bmichaelsen

To Win in Toulouse

Now the only thing a gambler needs
Is a suitcase and a trunk.
– Animals, The House of the Rising Sun

So, as many others, I have been to the LibreOffice Hackfest in Toulouse which — unlike many of our other Hackfests — was part of a bigger event: Capitole du Libre. As we had our own area and were not 30+ hackers, this also had the advantage that we got to work more quickly. And while I still had some boring administrative work to do, this is a Hackfest where I actually got to do some coding. I looked for some bookmark related bugs in Writer, but the first bugs I looked at were just too well suited to be Easy Hacks: fdo#51741 (“Deleting bookmark is not seen as modification of document”) and fdo#56116 (“Names of bookmarks should allow all characters which are valid in HTML anchor names (missing: ‘:’ and ‘.’)”). Both were made Easy Hacks and both are fixed on master now. I then fixed fdo#85542 (“DOCX import of overlapping bookmarks”), which proved slightly more work than expected, and provided a unittest for it to never come back. I later learned that the second part was entirely nonoptional, as Markus promised he would not have let me leave Toulouse without writing a unittest for committed code. I have to admit that that is a supportable position.

Toulouse Hackfest Room

Toulouse Hackfest Room

Scenes like the above were actually rather rare as we were mostly working over our notebooks. One thing I came up with at the Hackfest, but didn’t finish there, was some clang plugins for finding cascading conditional ops and conditional ops that have assignments as a side effect in their midst. While I found nothing as mindboggling as the tweet that gave inspiration to these plugins in sw (Writer), I found some impressive expressions that certainly wouldn’t be a joy to step through in gdb (or even better: set a breakpoint in) when debugging, and fixed those. We probably could make a few EasyHacks out of what these (or similar) plugins find outside of sw/ (I only looked there for now) — those are reasonably easy to refactor, but you don’t want to do that in the middle of a debugging session. While at it, I also looked at clang’s “value assigned, but never read” hints. Most were harmless, but also trivial to get rid of. On the other hand, some of those pointed to real logic errors that are otherwise hard to see. Like this one, which has been hiding — if git is to be believed — in plain sight ever since OpenOffice.org was originally open sourced in 2000. All in all, this experience is encouraging. Now that our Coverity defect density is just a rounding error above zero, getting more fancy clang plugins might be promising.

Just one week after the Hackfest in Toulouse, there was another event LibreOffice took part in: the Bug Squashing Party in Munich — it’s encouraging to see Jonathan Riddell being a committer to LibreOffice too now. But that is not all: we have more events coming up. The Document Foundation and LibreOffice will have an assembly at 31c3 in Hamburg; you are most welcome to drop by there! And then there will be FOSDEM 2015 in Brussels, where LibreOffice will be present as usual.


Read more
Nicholas Skaggs

Creating multi-arch click packages

Click packages are one of the pieces of new technology that drives the next version of ubuntu on the phone and desktop. In a nutshell, click packages allow application developers to easily package and deliver application updates independently of the distribution release or archive. Without going into the interesting technical merits and demerits of click packages, this means the consumer can get faster application updates. But much of the discussion and usage of click packages until now has revolved around mobile. I wanted to talk about using click packages on the desktop and packaging clicks for multiple architectures.

The manifest file
Click packages follow a specific format: they contain a payload of an application's libraries, code, artwork and resources, along with its needed external dependencies. The description of the package is found in the manifest file, which is what I'd like to talk about. The file must contain a few keys, but one of the recognized optional keys is architecture. This key allows specifying the architectures the package will run on.

If an application contains no compiled code, simply use 'all' as the value for architecture. This accomplishes the goal of running on all supported architectures and many of the applications currently in the ubuntu touch store fall into this category. However, an increasing number of applications do contain compiled code. Here's how to enable support across architectures for projects with compiled code.
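For illustration, the architecture-relevant part of such an app's manifest.json might look like this (the name and version are placeholders, and other required fields are omitted):

```json
{
    "name": "com.example.myapp",
    "version": "0.1",
    "architecture": "all"
}
```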

Fat packages
The click format, along with the ubuntu touch store, fully supports specifying one or more values for specific architecture support inside the application manifest file. Those values follow the same format as dpkg architecture names. In theory, if a project containing compiled code lists the architectures to support, click build should be able to build one package for all of them. However, for now this process requires a little manual intervention. So let's talk about building a fat (or big boned!) package that contains support for multiple architectures inside a single click package.

Those who just want to skip ahead can check out the example package I put together using clock. This same package can be found in the store as multi-arch clock test. Feel free to install the click package on the desktop, the i386 emulator and an armhf device.

Building a click for a different architecture
To make a multi-arch package a click package needs to be built for each desired architecture. Follow this tutorial on developer.ubuntu.com for more information on how to create a click target for each architecture. Once all the targets are setup, use the ubuntu sdk to build a click for each target. The end result is a click file specific to each architecture.

For example in creating the clock package above, I built a click for amd64, i386 and armhf. Three files were generated:

com.ubuntu.clock_3.2.176_amd64.click
com.ubuntu.clock_3.2.176_i386.click
com.ubuntu.clock_3.2.176_armhf.click

Notice the handy naming scheme allows for easy differentiation as to which click belongs to which architecture. Next, extract the compiled code from each click package. This can be accomplished by utilizing dpkg. For example,

dpkg -x com.ubuntu.clock_3.2.176_amd64.click amd64

Do this for each package. The result should be a folder corresponding to each package architecture.

Next copy one version of the package for use as the base of the multi-arch click package. In addition, remove all the compiled code under the lib folder. This folder will be populated with the extracted compiled code from the architecture-specific click packages.

cp -r amd64 multi
rm -rf multi/lib/*

Now there is a folder for each click package, and a new folder named multi that contains the application, minus any compiled code.
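The extraction and preparation steps above can be sketched as a small script (the file names are illustrative, and each stage is guarded so the sketch is safe to run piecemeal):

```shell
#!/bin/sh
set -e
# Extract each architecture-specific click; a click is deb-formatted,
# so dpkg -x works on it directly.
for arch in amd64 i386 armhf; do
    click="com.ubuntu.clock_3.2.176_${arch}.click"
    if [ -f "$click" ]; then
        dpkg -x "$click" "$arch"
    fi
done
# Use any one architecture's tree as the base, minus its compiled code.
if [ -d amd64 ] && [ ! -d multi ]; then
    cp -r amd64 multi
    rm -rf multi/lib/*
fi
```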

Creating the multi-arch click
Inside the extracted click packages is a lib folder. The compiled modules should be arranged inside, potentially inside an architecture subfolder (depending on how the package is built).

Copy all of the compiled modules into a new folder inside the lib folder of the multi directory. The folder name should correspond to the architecture of the compiled code. Here's a list of the architectures for ARM, i386, and amd64 respectively.


arm-linux-gnueabihf
i386-linux-gnu
x86_64-linux-gnu


You can check the naming from an intended device by looking in the application-click.conf file.

grep ARCH /usr/share/upstart/sessions/application-click.conf

To use the clock package as an example again, here's a quick look at the folder structure:

lib/arm-linux-gnueabihf/...
lib/i386-linux-gnu/...
lib/x86_64-linux-gnu/...

The contents of lib/* from each click package I built earlier are under a corresponding folder inside the multi/lib directory. So, for example, the lib folder from com.ubuntu.clock_3.2.176_i386.click became lib/i386-linux-gnu/.
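That copy step can be sketched as a loop (this assumes the per-arch clicks keep their modules directly under lib/, as in my clock example; adjust if yours nest them inside an architecture subfolder):

```shell
# Copy each architecture's compiled modules under the matching
# triplet directory in multi/lib.
for pair in armhf:arm-linux-gnueabihf i386:i386-linux-gnu amd64:x86_64-linux-gnu; do
    src="${pair%%:*}"
    triplet="${pair##*:}"
    if [ -d "$src/lib" ]; then
        mkdir -p "multi/lib/$triplet"
        cp -r "$src"/lib/* "multi/lib/$triplet"/
    fi
done
```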

Presto, magic package time! 
Finally, the manifest.json file needs to be updated to reflect support for the desired architectures. Inside the manifest.json file under the multi directory, edit the architecture key values to list all supported architectures for the new package. For example, to list support for ARM and x86 architectures:

"architecture": ["armhf", "i386", "amd64"],

To build the new package, execute click build multi. The resulting click should build and be named with a _multi.click suffix. This click can be installed on any of the specified architectures and is ready to be uploaded to the store.

Caveats, nibbly bits and bugs
Apart from click not automagically building these packages, there is one other bug as of this writing: the resulting multi-arch click will fail the automated store review and will instead enter manual review. To work around this, request a manual review. Upon approval, the application will enter the store as usual.

Summary
In summary, to create a multi-arch click package, build a click for each supported architecture. Then pull the compiled library code from each click and place it into a single click package. Next, modify the click manifest file to state all of the architectures supported. Finally, rebuild the click package!

I trust this explanation and example provides encouragement to include support for x86 platforms when creating and uploading a click package to the store. Undoubtedly there are other ways to build a multi-arch click; simply ensure all the compiled code for each architecture is included inside the click package. Feel free to experiment!

If you have any questions as usual feel free to contact me. I look forward to seeing more applications in the store from my unity8 desktop!

Read more
Daniel Holbach

Despite being an “old” technology and having its problems, we still use mailing lists… a lot.  Some of the lists were cleaned up by the Community Council a while ago, especially those that were created and then forgotten some time later.

We do have a number of mailing lists though which are still active, but have the problem of not having enough (or enough active) moderators on board. What then happens is this:

List moderation

… which sucks.

It’s not very nice to have lots and lots of good discussion not happening just because you had no time to tend to the moderation queue.

Some mailing lists receive quite a bit of spam, others get a lot of mails from folks who are not subscribed yet, but this really shouldn’t be a problem. If you run a popular mailing list and moderation gets too much of a hassle, please consider adding more moderators – if you ask nicely a bunch of folks will be happy to help out.

So my advice:

  1. If you ever registered a mailing list, please have a look at its moderation queue and see if you need help.
  2. If yes, please add more moderators.
  3. If you don’t use it yet, try listadmin – it’s the best thing since sliced bread, and keeping up with moderation in the future will be no problem at all.
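For those who haven’t tried it: listadmin is driven by a ~/.listadmin.ini file that names your Mailman credentials and the lists to process. As a rough sketch (the address and password here are of course placeholders; check listadmin’s man page for the exact syntax your setup needs):

```
username listmaster@example.com
password hunter2
ubuntu-example@lists.ubuntu.com
```

Running listadmin then walks you through each pending message with one-key accept/reject/discard choices.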

Read more
Daniel Holbach

I’m very happy that the ubuntu-community-team mailing list is seeing lots of discussion right now. It shows how many people deeply care about the direction of Ubuntu’s community and have ideas for how to improve things.

Looking back through the discussion of the last weeks, I can’t help but notice a few issues we are running into – issues all too common on open source project mailing lists. Maybe you all have some ideas on how we could improve the discussion?

  • Bikeshedding
    The term bikeshedding has a negative connotation, but it’s a very natural phenomenon. Rouven, a good friend of mine, recently pointed out that the recent proposal to change the statutes of the association behind our coworking space (which took a long time to put together) received no comments on the internal mailing list, whereas a change of the coffee brand seemed to invite comments from everyone.
    It is quite natural for this to happen. In a bigger proposal, we tend to comment on whatever is tangible. In our community of more technical people, you will often see discussions about which technology to use, rather than answers which try to address all aspects of the proposal.
  • Idea overload
    Being a creative community can sometimes be a bit of a curse. You end up with different proposals plus additional ideas, and nobody, or only a few people, to actually implement them.
  • Huge proposals
    Sometimes you see a mail on a list which lists a huge load of different things. Without somebody who tracks where the discussion is going, summing things up, making lists of work items, etc. it will be very hard to convert a discussion into an actual project.
  • Derailing the conversation
    You’ve all seen this happen: you start the conversation with a specific problem or proposal and end up discussing something entirely different.

All of the above are nothing new, but in a part of our project where discussions tend to be quite general, and where we have contributors from many different parts of the community, some of the above are even more pronounced.

Personally I feel that all of the above are fine problems to have. We are creative and we have ideas on how to improve things – that’s great. In my mind I always treated the ubuntu-community-team mailing list as a place to kick around ideas, to chat and to hang out and see what others are doing.

As I care a lot about our community, I’d still like to figure out how we can avoid the risk of some of the better ideas falling through the cracks. What do you think would help?

Maybe a meeting, perhaps every two weeks, to pick up some of the recent discussion and see together as a group if we can convert some of it into something which actually flies?

Read more
Michael Hall

The Ubuntu Core Apps project has proven that the Ubuntu community is not only capable of building fantastic software, but also capable of meeting the same standards, deadlines and requirements that are expected from projects developed by employees. One of the things that I think made Core Apps so successful was the project management support that they all received from Alan Pope.

Project management is common, even expected, for software developed commercially, but it’s just as often missing from community projects. It’s time to change that. I’m kicking off a new personal[1] project, which I’m calling the Ubuntu Incubator.

The purpose of the Incubator is to help community projects bootstrap themselves, obtain the resources they need to run their project, and put together a solid plan that will set them on a successful, sustainable path.

To that end I’m going to devote one month to a single project at a time. I will meet with the project members regularly (weekly or every-other week), help define a scope for their project, create a spec, define work items and assign them to milestones. I will help them get resources from other parts of the community and Canonical when they need them, promote their work and assist in recruiting contributors. All of the important things that a project needs, other than direct contributions to the final product.

I’m intentionally keeping the scope of my involvement very focused and brief. I don’t want to take over anybody’s project or be a co-founder. I will take on only one project at a time, so that project gets all of my attention during their incubation period. The incubation period itself is very short, just one month, so that I will focus on getting them setup, not on running them.  Once I finish with one project, I will move on to the next[2].

How will I choose which project to incubate? Since it’s my time, it’ll be my choice, but the most important factor will be whether or not a project is ready to be incubated. “Ready” means they are more than just an idea: they are both possible to accomplish and feasible to accomplish with the person or people already involved, the implementation details have been mostly figured out, and they just need help getting the ball rolling. “Ready” also means it’s not an existing project looking for a boost; while we need to support those projects too, that’s not what the Incubator is for.

So, if you have a project that’s ready to go, but you need a little help taking that first step, you can let me know by adding your project’s information to this etherpad doc[3]. I’ll review each one and let you know if I think it’s ready, needs to be defined a little bit more, or not a good candidate. Then each month I’ll pick one and reach out to them to get started.

Now, this part is important: don’t wait for me! I want to speed up community innovation, not slow it down, so even if I add your project to the “Ready” queue, keep on doing what you would do otherwise, because I have no idea when (or if) I will be able to get to yours. Also, if there are any other community leaders with project management experience who have the time and desire to help incubate one of these projects, go ahead and claim it and reach out to that team.

[1] While this complements my regular job, it’s not something I’ve been asked to do by Canonical, and to be honest I have enough Canonical-defined tasks to consume my working hours. This is me with just my community hat on, and I’m inclined to keep it that way.

[2] I’m not going to forget about projects after their month is up, but you get 100% of the time I spend on incubation during your month, after that my time will be devoted to somebody else.

[3] I’m using Etherpad to keep the process as lightweight as possible, if we need something better in the future we’ll adopt it then.

Read more
Dustin Kirkland

Try These 7 Tips in Your Next Blog Post


In a presentation to my colleagues last week, I shared a few tips I've learned over the past 8 years, maintaining a reasonably active and read blog.  I'm delighted to share these with you now!

1. Keep it short and sweet


Too often, we spend hours or days working on a blog post, trying to create an epic tome.  I have dozens of draft posts I'll never finish, as they're just too ambitious, and I should really break them down into shorter, more manageable articles.

Above, you can see Abraham Lincoln's Gettysburg Address, from November 19, 1863.  It's merely 3 paragraphs, 10 sentences, and less than 300 words.  And yet it's one of the most powerful messages ever delivered in American history.  Lincoln wrote it himself on the train to Gettysburg, and delivered it as a speech in less than 2 minutes.

2. Use memorable imagery


Particularly, you need one striking image at the top of your post.  This is what most automatic syndication services or social media platforms will pick up and share, and it will make the first impression on phones and tablets.

3. Pen a catchy, pithy title


More people will see or read your title than the post itself.  It's sort of like the chorus to that song you know, but you don't know the rest of the lyrics.  A good title attracts readers and invites re-shares.

4. Publish midweek


This is probably more applicable for professional, rather than hobbyist, topics, but the data I have on my blog (1.7 million unique page views over 8 years), is that the majority of traffic lands on Tuesday, Wednesday, and Thursday.  While I'm writing this very post on a rainy Saturday morning over a cup of coffee, I've scheduled it to publish at 8:17am (US Central time) on the following Tuesday morning.

5. Share to your social media circles


My posts are generally professional in nature, so I tend to share them on G+, Twitter, and LinkedIn.  Facebook is really more of a family-only thing for me, but you might choose to share your posts there too.  With the lamentable death of the Google Reader a few years ago, it's more important than ever to share links to posts on your social media platforms.

6. Hope for syndication, but never expect it

So this is the one "tip" that's really out of your control.  If you ever wake up one morning to an overflowing inbox, congratulations -- your post just went "viral".  Unfortunately, this either "happens", or it "doesn't".  In fact, it almost always "doesn't" for most of us.

7. Engage with comments only when it makes sense


If you choose to use a blog platform that allows comments (and I do recommend you do), then be a little careful about when and how to engage in the comments.  You can easily find yourself overwhelmed with vitriol and controversy.  You might get a pat on the back or two.  More likely, though, you'll end up under a bridge getting pounded by a troll.  Rather than waste your time fighting a silly battle with someone who'll never admit defeat, start writing your next post.  I ignore trolls entirely.

A Case Study

As a case study, I'll take as an example the most successful post I've written: Fingerprints are Usernames, Not Passwords, with nearly a million unique page views.

  1. The entire post is short and sweet, weighing in at under 500 words and about 20 sentences
  2. One iconic, remarkable image at the top
  3. A succinct, expressive title
  4. Published on Tuesday, October 1, 2013
  5. 1561 +1's on G+, 168 retweets on Twitter
  6. Shared on Reddit and HackerNews (twice)
  7. 434 comments, some not so nice
Cheers!
Dustin


Read more
Daniel Holbach

I Am Who I Am Because Of Who We All Are

I read the “We Are Not Loco” post a few days ago. I could understand that Randall wanted to further liberate his team in terms of creativity and everything else, but to me it feels like the wrong approach.

The post makes a simple promise: do away with bureaucracy, rename the team to use a less ambiguous name, JFDI! and things are going to be a lot better. This sounds compelling. We all like simplicity; in a faster and more complicated world we all would like things to be simpler again.

What I can also agree with is the general sense of empowerment. If you’re member of a team somewhere or want to become part of one: go ahead and do awesome things – your team will appreciate your hard work and your ideas.

So what was it in the post that made me sad? It took me a while to find out what specifically it was. The feeling set in when I realised somebody turned their back on a world-wide community and said “all right, we’re doing our own thing – what we used to do together to us is just old baggage”.

Sure, it’s always easier not having to discuss things in a big team. Especially if you want to agree on something like a name or any other small detail this might take ages. On the other hand: the world-wide LoCo community has achieved a lot of fantastic things together: there are lots of coordinated events around the world, there’s the LoCo team portal, and most importantly, there’s a common understanding of what teams can do and we all draw inspiration from each other’s teams. By making this a global initiative we created numerous avenues where new contributors find like-minded individuals (who all live in different places on the globe, but share the same love for Ubuntu and organising local events and activities). Here we can learn from each other, experiment and find out together what the best practices for local community awesomeness are.

Going away and equating the global LoCo community with bureaucracy to me is desolidarisation – it’s quite the opposite of “I Am Who I Am Because Of Who We All Are”.

Personally I would have preferred a set of targeted discussions which try to fix processes, improve communication channels and inspire a new round of leaders of Ubuntu LoCo teams. Not everything you do in a LoCo team has to be approved by the entire set of other teams; the actual reality in the LoCo world is quite different from that.

If you have ideas to discuss or suggestions, feel free to join our loco-contacts mailing list and bring it up there! It’s your chance to hang out with a lot of fun people from around the globe. :-)

Read more
Dustin Kirkland


I had the great pleasure to deliver a 90 minute talk at the USENIX LISA14 conference, in Seattle, Washington.

During the course of the talk, we managed to:

  • Deploy OpenStack Juno across 6 physical nodes, on an Orange Box on stage
  • Explain all of the major components of OpenStack (Nova, Neutron, Swift, Cinder, Horizon, Keystone, Glance, Ceilometer, Heat, Trove, Sahara)
  • Explore the deployed OpenStack cloud's Horizon interface in depth
  • Configure Neutron networking with internal and external networks, as well as a gateway and a router
  • Set up our security groups to open ICMP and SSH ports
  • Upload an SSH keypair
  • Modify the flavor parameters
  • Update a bunch of quotas
  • Add multiple images to Glance
  • Launch some instances until we max out our hypervisor limits
  • Scale up the Nova Compute nodes from 3 units to 6 units
  • Deploy a real workload (Hadoop + Hive + Kibana + Elastic Search)
  • Then delete the entire environment, and run it all over again from scratch, non-stop
Slides and a full video are below.  Enjoy!




Cheers,
Dustin

Read more
Prakash

My setup:

A laptop with Ubuntu 14.04 LTS, and a Nexus 4.

I also assume you are comfortable with the command prompt; you will need to run some commands from a terminal.

If you are already running Android 5.0, you can skip Step 1 and go directly to Step 2 to root your device. In my case, I didn’t wait for the OTA update, but if you prefer to play it safe, get the Android 5.0 update and then start.

Pre-Preparation.

Make sure your laptop is charged or plugged into the power and your phone is charged too.

Take a full system backup, because these steps wipe Android clean.

And ensure your laptop doesn’t go into suspend mode while you do this.

Now install a few packages:

sudo apt-get install android-tools-adb

sudo apt-get install android-tools-fastboot

sudo add-apt-repository ppa:phablet-team/tools
sudo apt-get update
sudo apt-get install phablet-tools

Step 1: Installing Android 5.0

NOTE: This will wipe the system clean so if you haven’t backed up, go and back up first.

Download the correct file from the Google site:

https://developers.google.com/android/nexus/images

For the Nexus 4 it is called:

occam-lrx21t-factory-51cee750.tgz

Now extract the files; this creates a directory:

occam-lrx21t

Change into this directory.

Now boot into the bootloader. Remember this step, as you will need to do it a few times.

adb reboot bootloader

If it’s not able to find the device, you can boot into the bootloader manually.

For the Nexus 4, hold the volume-down button and press the power button.

Now unlock the device with the following command.

fastboot oem unlock

Now flash the Android 5.0 Image by running this command from the directory above.

./flash-all.sh

After a few minutes, you will see the new Android 5.0 boot logo on the phone. The process will take a few minutes to complete. Grab a coffee or your favourite beverage.

Once this is complete, you are still not done: you still have to root the device.

Also, don’t do much on your Android yet, as the unlock process may wipe your data!
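The whole of Step 1 boils down to three commands, run from inside the extracted image directory. Here it is as a dry-run sketch: leave DRYRUN as echo to just print the commands, or set it to empty to really run them (which requires adb/fastboot, a connected Nexus 4, and WILL WIPE the device):

```shell
DRYRUN=echo
$DRYRUN adb reboot bootloader   # or: hold volume-down + press power
$DRYRUN fastboot oem unlock     # unlocking wipes user data
$DRYRUN ./flash-all.sh          # flashes the full Android 5.0 image
```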

Step 2: Rooting Android.

Download the rooting script from chainfire: http://download.chainfire.eu/297/CF-Root/CF-Auto-Root/CF-Auto-Root-mako-occam-nexus4.zip

When you scroll down, you will see the actual download link.

This will download this file: CF-Auto-Root-mako-occam-nexus4.zip

Extract this archive, then change into the resulting directory.

Now boot into the bootloader with your preferred method. I used volume-down + power.

Now type these commands.

chmod +x root-linux.sh 
./root-linux.sh

Step 3: Installing MultiROM Manager

Find the app in Google Play and install it.

Start the app. It will ask you to install 3 things; go ahead and install them.


It will also boot into recovery mode.

Once it’s done, it will reboot and start Android.

Now start the MultiROM Manager again.

You should see the option to install Ubuntu Touch.


Step 4: Installing Ubuntu Touch

If you want demo files, select the -demo variant.

You can choose the stable or the development version; you can also install both, one by one.

This step took the longest amount of time for me. Go get a nap!

It will ask you to reboot once.


Once this is done, there is no indication that it has completed.

When you reboot, it will give you an option to boot internal (Android) or Ubuntu Touch. Here you can select Ubuntu Touch, boot into it and set it up.

Final housekeeping: boot into the bootloader and lock it again:

fastboot oem lock

Click on Start to reboot your system.

Note: If you upgrade your Android, you will lose the dual boot and will have to start again from Step 2, which may differ depending on your Android version.

Read more
Nicholas Skaggs

Virtual Hugs of appreciation!

Because I was asleep at the wheel (err, keyboard) yesterday I failed to express my appreciation for some folks. It's a day for hugging! And I missed it!

I gave everyone a shoutout on social media, but since planet looks best overrun with thank you posts, I shall blog it as well!

Thank you to:

David Planella for being the rock that has anchored the team.
Leo Arias for being super awesome and making testing what it is today on all the core apps.
Carla Sella for working tirelessly on many many different things in the years I've known her. She never gives up (even when I've tried too!), and has many successes to her name for that reason.
Nekhelesh Ramananthan for always being willing to let clock app be the guinea pig
Elfy, for rocking the manual tests project. Seriously awesome work. Every time you use the tracker, know that elfy has been a part of making that testcase happen.
Jean-Baptiste Lallement and Martin Pitt for making some of my many wishes come true over the years with quality community efforts. Autopkgtest is but one of these.

And many more. Plus some I've forgotten. I can't give hugs to everyone, but I'm willing to try!

To everyone in the ubuntu community, thanks for making ubuntu the wonderful community it is!

Read more