Canonical Voices

Posts tagged with 'ubuntu'

Victor Palau

Not long ago, Chris Wayne published a post about a scope creator tool. Last week I was visiting bq and, together with Victor Gonzalez, we decided that we should have a scope for Canal bq. The folks at bq do an excellent job at creating “how to” and “first steps” videos, and they have started publishing some for the bq Aquaris E4.5 Ubuntu Edition.

Here are a few screenshots of the scope, which is now available to download from the store:

bq1bq2bq3bq4

The impressive thing is that it took us about 5 minutes to get a working version of the scope. Here is what we needed to do:

  • First, we followed Chris’ instructions to install the scope creator tool.
  • Once we had it set up on my laptop, we ran:
    scopecreator create youtube com.ubuntu.developer.victorbq canalbq
    cd canalbq
  • Next, we configured the scope. The configuration is done in a json file called manifest.json. This file describes the content of what you will publish later to the store. You need to care about “title”, “description”, “version” and “maintainer”. The rest are values populated by the tool:
    scopecreator edit config
    {
        "name": "com.ubuntu.developer.victorbq.canalbq",
        "description": "Canal bq",
        "framework": "ubuntu-sdk-14.10",
        "architecture": "armhf",
        "title": "Canal bq",
        "hooks": {
            "canalbq": {
                "scope": "canalbq",
                "apparmor": "scope-security.json"
            }
        },
        "version": "0.3",
        "maintainer": "Victor Gonzalez <anemailfromvictor@bq.com>"
    }
  • The following step was to set up the branding. Easy! Branding is defined in an .ini file. “DisplayName” will be the name listed in the “manage” window once installed, and it will also be the title of your scope if you don’t use a “PageHeader.Logo”. The [Appearance] section describes the colours and logos to use when branding a scope.
    scopecreator edit branding
    [ScopeConfig]
    DisplayName=Canal bq
    Description=Youtube custommized channel
    Author=Canonical Ltd.
    Art=images/logo.png
    Icon=images/logo.png
    SearchHint=Buscar
    LocationDataNeeded=true
    [Appearance]
    PageHeader.Background=color:///#000000
    PageHeader.ForegroundColor=#FFFFFF
    PreviewButtonColor=#FFFFFF
    PageHeader.Logo=./images/logo.png
  • The final part is to define the departments (drop-down menu) for the scope. This is also a json file, and it is unique to the youtube scope template. You can either use “playlists” or “channels” (or both) as departments. The id PLjQOV_HHlukyNGBFaSVGFVWrbj3vjtMjd corresponds to a youtube playlist with url=https://www.youtube.com/playlist?list=PLjQOV_HHlukyNGBFaSVGFVWrbj3vjtMjd (see the short snippet after this list for pulling an id out of such a URL):
    scopecreator edit channels
    {
        "maxResults": "20",
        "playlists": [
            {
                "id": "PLjQOV_HHlukyNGBFaSVGFVWrbj3vjtMjd",
                "reminder": "Aquaris E4,5 Ubuntu Edition"
            },
            {
                "id": "PLjQOV_HHlukzBhuG97XVYsw96F-pd9P2I",
                "reminder": "Tecnópolis"
            },
            {
                "id": "PLC46C98114CA9991F",
                "reminder": "aula bq"
            },
            {
                "id": "PLE7ACC7492AD7D844",
                "reminder": "primeros pasos"
            },
            {
                "id": "PL551D151492F07D63",
                "reminder": "accesorios"
            },
            {
                "id": "PLjQOV_HHlukyIT8Jr3aI1jtoblUTD4mn0",
                "reminder": "3d"
            }
        ]
    }
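As mentioned in the last step, each department here is just a YouTube playlist id. If you only have a playlist URL handy, a tiny Python helper like this (purely illustrative, not part of the scope creator tool) pulls the id out of the “list” query parameter:

from urllib.parse import urlparse, parse_qs

def playlist_id(url):
    # The id is simply the "list" query parameter of the playlist URL.
    return parse_qs(urlparse(url).query)["list"][0]

print(playlist_id("https://www.youtube.com/playlist?list=PLjQOV_HHlukyNGBFaSVGFVWrbj3vjtMjd"))
# prints: PLjQOV_HHlukyNGBFaSVGFVWrbj3vjtMjd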

After this, the only thing left to do is to replace the placeholder icon with the bq logo:
~/canalbq/canalbq/images/logo.png
And build, check and publish the scope:
scopecreator build

This last command generates the click file that you need to upload to the store. If you have a device (for example a Nexus 4 or an emulator), it can also install it so you can test it. If you get any issues getting the scope to run, you might want to check your json files on http://jsonlint.com/. It is a great web tool that will help you make sure your json doc is shipshape!
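If you would rather check the files locally, a few lines of Python using the standard json module will catch the same syntax errors (the channels file name below is an assumption; point it at whatever files the tool created in your scope directory):

import json

for path in ("manifest.json", "channels.json"):
    try:
        with open(path) as f:
            json.load(f)
        print(path, "is valid JSON")
    except (OSError, json.JSONDecodeError) as err:
        print(path, "has a problem:", err)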

It is super simple to create a scope for a youtube channel! So what are you going to create next?


Read more
Prakash

Raspberry Pi 2 is here.

  • A 900MHz quad-core ARM Cortex-A7 CPU
  • 1GB LPDDR2 SDRAM
  • Compatible with Raspberry Pi 1
  • $35

And now runs Snappy Ubuntu Core.

 


Read more
Nicholas Skaggs

Unity 8 Desktop Testing

While much of the excitement around unity8 and the next generation of ubuntu has revolved around mobile, I'd again like to point your attention to the desktop. The unity8 desktop is starting to evolve and gain more "desktopy" features. This includes window management and keyboard shortcuts for unity8, and Mir enhancements such as native library support for rendering and support for X11 applications.

I hosted a session with Stephen Webb at UOS last year where we discussed the status of running unity8 on the desktop. During the session I mentioned my own personal goal of having some brave community members running unity8 as their default desktop this cycle. Now, it's still a bit early to realize that goal, but it is getting much closer! To help get there, I would encourage you to have a look at unity8 on your desktop and start running it. The development teams are ready for feedback and anxious to get it in shape on the desktop.

So how do you get it? Check out the unity8 desktop wiki page, which explains how you can run unity8 even if you are on a stable version of ubuntu like the LTS. Install it locally in an lxc container and you can log in to a unity8 desktop on your current PC. Check it out! After you finish playing, please don't forget to file bugs for anything you might find. The wiki page has you covered there as well. Enjoy unity8!

Read more
Daniel Holbach

Did you always want to write an app for Ubuntu and thought that HTML5 might be a good choice? Well picked!


We now have training materials up on developer.ubuntu.com which will get you started in all things related to Ubuntu devices. The great thing is that you just write this app once and it’ll work on the phone, the desktop and whichever device Ubuntu is going to run next on.

The example used in the materials is an RSS reader written by my friend, Adnane Belmadiaf. If you go through the steps one by one you’ll notice how easy it is to get stuff done. :-)

This is also a good workshop you could give in your LUG or LoCo or elsewhere. Maybe next weekend at Ubuntu Global Jam too? :-)

Read more
Dustin Kirkland

Gratuitous picture of my pets, the day after we rescued them
The PetName libraries (Shell, Python, Golang) can generate infinite combinations of human readable UUIDs


Some Background

In March 2014, when I first started looking after MAAS as a product manager, I raised a minor feature request in Bug #1287224, noting that the random, 5-character hostnames that MAAS generates are not ideal. You can't read them or pronounce them or remember them easily. I'm talking about hostnames like: sldna, xwknd, hwrdz or wkrpb. From that perspective, they're not very friendly. Certainly not very Ubuntu.

We're not alone in that respect. Amazon generates forgettable instance names like i-15a4417c, as do most virtual machine and container systems.


Meanwhile, there is a reasonably well-known concept -- Zooko's Triangle -- which says that names should be:
  • Human-meaningful: The quality of meaningfulness and memorability to the users of the naming system. Domain names and nicknaming are naming systems that are highly memorable
  • Decentralized: The lack of a centralized authority for determining the meaning of a name. Instead, measures such as a Web of trust are used.
  • Secure: The quality that there is one, unique and specific entity to which the name maps. For instance, domain names are unique because there is just one party able to prove that they are the owner of each domain name.
And, of course we know what XKCD has to say on a somewhat similar matter :-)

So I proposed a few different ways of automatically generating those names, modeled mostly after Ubuntu's own beloved code-naming scheme -- Adjective Animal. To get the number of combinations high enough to model any reasonable MAAS user, though, we used Adjective Noun instead of Adjective Animal.

I collected an Adjective list and a Noun list from a blog run by moms, in the interest of having a nice, soft, friendly, non-offensive source of words.

For the most part, the feature served its purpose. We now get memorable, pronounceable names. However, we get a few oddballs in there from time to time. Most are humorous. But some combinations would prove, in fact, to be inappropriate, or perhaps even offensive to some people.

Accepting that, I started thinking about other solutions.

In the meantime, I realized that Docker had recently launched something similar, their NamesGenerator, which pairs an Adjective with a Famous Scientist's Last Name (except they have explicitly blacklisted boring_wozniak, because "Steve Wozniak is not boring", of course!).


Similarly, Github itself now also "suggests" random repo names.



I liked one part of the Docker approach better -- the use of proper names, rather than random nouns.

On the other hand, their approach is hard-coded into the Docker Golang source itself, and not easily usable or portable elsewhere.

Moreover, there are only a few dozen Adjectives (57) and Names (76), yielding only about 4K combinations (4332) -- which is not nearly enough for MAAS's purposes, where we're shooting for 16M+, with minimal collisions (ie, covering a Class A network).

Introducing the PetName Libraries

I decided to scrap the Nouns list, and instead build a Names list. I started with Last Names (like Docker), but instead focused on First Names, and built a list of about 6,000 names from public census data.  I also built a new list of nearly 38,000 Adjectives.

The combination actually works pretty well! While smelly-Susan isn't particularly charming, it's certainly not an ad hominem attack targeted at any particular Susan! That 6,000 x 38,000 gives us well over 228 million unique combinations!

Moreover, I also thought about how I could actually make it infinitely extensible... The simple rules of English allow Adjectives to modify Nouns, while Adverbs can recursively modify other Adverbs or Adjectives.   How convenient!

So I built a word list of Adverbs (13,000) as well, and added support for specifying the "number" of words in a PetName.
  1. If you want 1, you get a random Name 
  2. If you want 2, you get a random Adjective followed by a Name 
  3. If you want 3 or more, you get N-2 Adverbs, an Adjective and a Name 
Oh, and the separator is now optional, and can be any character or string, with a default of a hyphen, "-".
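To make those rules concrete, here is a minimal sketch in Python of the word-selection logic (the tiny word lists are stand-ins; the real library ships the full lists of roughly 13,000 adverbs, 38,000 adjectives and 6,000 names):

import random

ADVERBS = ["boomingly", "mercifully", "listlessly"]   # real list: ~13,000 entries
ADJECTIVES = ["itchy", "flaky", "vibrant"]            # real list: ~38,000 entries
NAMES = ["Marvin", "Megan", "Chandler"]               # real list: ~6,000 entries

def generate(words=2, separator="-"):
    # 1 word: a Name; 2 words: Adjective + Name; N words: (N-2) Adverbs, an Adjective, a Name
    if words < 1:
        raise ValueError("need at least one word")
    parts = [random.choice(ADVERBS) for _ in range(words - 2)]
    if words >= 2:
        parts.append(random.choice(ADJECTIVES))
    parts.append(random.choice(NAMES))
    return separator.join(parts)

print(generate(3, ":"))   # e.g. "listlessly:itchy:Megan"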

In fact:
  • 2 words will generate over 221 million unique combinations, over 2^27 combinations
  • 3 words will generate over 2.8 trillion unique combinations, over 2^41 combinations (more than 32-bit space)
  • 4 words can generate over 2^55 combinations
  • 5 words can generate over 2^68 combinations (more than 64-bit space)
Interestingly, you need 10 words to cover 128-bit space!  So it's

unstoutly-clashingly-assentingly-overimpressibly-nonpermissibly-unfluently-chimerically-frolicly-irrational-wonda

versus

b9643037-4a79-412c-b7fc-80baa7233a31
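For the curious, that 128-bit figure is easy to double-check with the approximate list sizes quoted above (a back-of-the-envelope sketch, not the library's own code):

from math import log2

ADVERBS, ADJECTIVES, NAMES = 13000, 38000, 6000   # approximate list sizes

def bits(words):
    # words >= 3: (words - 2) adverbs, one adjective, one name
    return (words - 2) * log2(ADVERBS) + log2(ADJECTIVES) + log2(NAMES)

print(round(bits(9), 1))    # ~123.4 bits -- 9 words fall short of 128
print(round(bits(10), 1))   # ~137.1 bits -- 10 words cover 128-bit space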

Shell

So once the algorithm was spec'd out, I built and packaged a simple shell utility and text word lists, called petname, which are published at:
The packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:petname/ppa
$ sudo apt-get update

And:
$ sudo apt-get install petname
$ petname
itchy-Marvin
$ petname -w 3
listlessly-easygoing-Radia
$ petname -s ":" -w 5
onwardly:unflinchingly:debonairly:vibrant:Chandler

Python

That's only really useful from the command line, though. In MAAS, we'd want this in a native Python library. So it was really easy to create python-petname, source now published at:
The packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:python-petname/ppa
$ sudo apt-get update

And:
$ sudo apt-get install python-petname
$ python-petname
flaky-Megan
$ python-petname -w 4
mercifully-grimly-fruitful-Salma
$ python-petname -s "" -w 2
filthyLaurel

Using it in your own Python code looks as simple as this:

$ python
>>> import petname
>>> foo = petname.Generate(3, "_")
>>> print(foo)
boomingly_tangible_Mikayla

Golang


In the way that NamesGenerator is useful to Docker, I thought a Golang library might be useful for us in LXD (and perhaps even usable by Docker or others too), so I created:
Of course you can use "go get" to fetch the Golang package:

$ export GOPATH=$HOME/go
$ mkdir -p $GOPATH
$ export PATH=$PATH:$GOPATH/bin
$ go get github.com/dustinkirkland/golang-petname

And also, the packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:golang-petname/ppa
$ sudo apt-get update

And:
$ sudo apt-get install golang-petname
$ golang-petname
quarrelsome-Cullen
$ golang-petname -words=1
Vivian
$ golang-petname -separator="|" -words=10
snobbily|oracularly|contemptuously|discordantly|lachrymosely|afterwards|coquettishly|politely|elaborate|Samir

Using it in your own Golang code looks as simple as this:

package main

import (
    "flag"
    "fmt"
    "math/rand"
    "time"

    "github.com/dustinkirkland/golang-petname"
)

func main() {
    flag.Parse()
    // Seed the RNG so each run produces a different name.
    rand.Seed(time.Now().UnixNano())
    fmt.Println(petname.Generate(2, ""))
}
Gratuitous picture of my pets, 7 years later.
Cheers,
happily-hacking-Dustin

Read more
beuno

After a few weeks of being coffee-deprived, I decided to disassemble my espresso machine and see if I could figure out why it leaked water while on, and didn't have enough pressure to produce drinkable coffee.
I live a bit on the edge of where other people do, so my water supply is from my own pump, 40 meters into the ground. It's as hard as water gets. That was my main suspicion. I read a bit about it on the interwebz and learned about descaling, which I'd never heard about. I tried some of the home-made potions but nothing seemed to work.
Long story short, I'm enjoying a perfect espresso as I write this.

I wanted to share a bit with the internet people about what was hard to solve, and couldn't find any instructions on. All I really did was disassemble the whole thing completely, part by part, clean them, and make sure to put it back together tightening everything that seemed to need pressure.
I don't have the time and energy to put together a step-by-step walk-through, so here are the 2 tips I can give you:

1) Remove ALL the screws. That'll get you 95% of the way there. You'll need a Phillips head, a Torx head, a flat head and some small-ish pliers.
2) The knob that releases the steam looks unremovable and blocks you from getting the top lid off. It doesn't screw off, you just need to pull upwards with some strength and care. It comes off cleanly and will go back on easily. Here's a picture to prove it:

DeLongi eco310.r

Hope this helps somebody!

Read more
Daniel Holbach

In a recent conversation we thought it’d be a good idea to share tips and tricks, suggestions and ideas with users of Ubuntu devices. Because it’d help to have it available immediately on the phone, an app could be a good idea.

I had a quick look at it and after some discussion with Rouven in my office space, it looked like hyde could fit the bill nicely. To edit the content, just write a bit of Markdown, generate the HTML (nice and readable templates – great!) and done.

Unfortunately I’m not a CSS or HTML wizard, so if you could help out making it more Ubuntu-y, that’d be great! Also: if you’re interested in adding content, that’d be great.

I pushed the code for it up on Launchpad; the first bugs are already open. Let’s make it look pretty and let’s share our knowledge with new Ubuntu device users. :-)

Oh, and let’s see that we translate the content as well! :-)

Read more
jdstrand

Most of this has been discussed on mailing lists, blog entries, etc, while developing Ubuntu Touch, but I wanted to write up something that ties together these conversations for Snappy. This will provide background for the conversations surrounding hardware access for snaps that will be happening soon on the snappy-devel mailing list.

Background

Ubuntu Touch has several goals that all apply to Snappy:

  • we want system-image upgrades
  • we want to replace the distro archive model with an app store model for Snappy systems
  • we want developers to be able to get their apps to users quickly
  • we want a dependable application lifecycle
  • we want the system to be easy to understand and to develop on
  • we want the system to be secure
  • we want an app trust model where users are in control and express that control in tasteful, easy to understand ways

Snappy adds a few things to the above (that pertain to this conversation):

  • we want the system to be bulletproof (transactional updates with rollbacks)
  • we want the system to be easy to use for system builders
  • we want the system to be easy to use and understand for admins

Let’s look at what all these mean more closely.

system-image upgrades

  • we want system-image upgrades
  • we want the system to be bulletproof (transactional updates with rollbacks)

We want system-image upgrades so updates are fast, reliable and so people (users, admins, snappy developers, system builders, etc) always know what they have and can depend on it being there. In addition, if an upgrade goes bad, we want a mechanism to be able to rollback the system to a known good state. In order to achieve this, apps need to work within the system and live in their own area and not modify the system in unpredictable ways. The Snappy FHS is designed for this and the security policy enforces that apps follow it. This protects us from malware, sure, but at least as importantly, it protects us from programming errors and well-intentioned clever people who might accidentally break the Snappy promise.

app store

  • we want to replace the distro archive model with an app store model
  • we want developers to be able to get their apps to users quickly

Ubuntu is a fantastic distribution and we have a wonderfully rich archive of software that is refreshed on a cadence. However, the traditional distro model has a number of drawbacks, and arguably the most important one is that software developers have an extremely high barrier to overcome to get their software into users’ hands on their own time-frame. The app store model greatly helps developers and users desiring new software because it gives developers the freedom and ability to get their software out there quickly and easily, which is why Ubuntu Touch is doing this now.

In order to enable developers in the Ubuntu app store, we’ve developed a system where a developer can upload software and have it available to users in seconds with no human review, intervention or snags. We also want users to be able to trust what’s in Ubuntu’s store, so we’ve created store policies that understand the Ubuntu snappy system such that apps do not require any manual review so long as the developer follows the rules. However, the Ubuntu Core system itself is completely flexible– people can install apps that are tightly confined, loosely confined, unconfined, whatever (more on this, below). In this manner, people can develop snaps for their own needs and distribute them however they want.

It is the Ubuntu store policy that dictates what is in the store. The existing store policy is in place to improve the situation and is based on our experiences with the traditional distro model and our attempts to build app-store-like experiences on top of it (eg, MyApps).

application lifecycle

  • dependable application lifecycle

This has not been discussed as much with Snappy for Ubuntu Core, but Touch needs to have a good application lifecycle model such that apps cannot run unconstrained and unpredictably in the background. In other words, we want to avoid problems with battery drain and slow systems on Touch. I think we’ve done a good job so far on Touch, and this story is continuing to evolve.

(I mention application lifecycle in this conversation for completeness and because application lifecycle and security work together via the app’s application id)

security

  • we want the system to be secure
  • we want an app trust model where users are in control and express that control in tasteful, easy to understand ways

Everyone wants a system that they trust and that is secure, and security is one of the core tenets of Snappy systems. For Ubuntu Touch, we’ve created a system that is secure, that is easy to use and understand by users, and that still honors relevant, meaningful Linux traditions. For Snappy, we’ll be adding several additional security features (eg, seccomp, controlled abstract socket communication, firewalling, etc).

Our security story and app store policies give us something that is between Apple and Google. We have a strong security story that has a number of similarities to Apple, but a lightweight store policy akin to Google Play. In addition to that, our trust model is that apps not needing manual review are untrusted by the OS and have limited access to the system. On Touch we use tasteful, contextual prompting so the user may trust the apps to do things beyond what the OS allows on its own (simple example, app needs access to location, user is prompted at the time of use if the app can access it, user answers and the decision is remembered next time).

Snappy for Ubuntu Core is different not only because the UI supports a CLI, but also because we’ve defined a Snappy for Ubuntu Core user that is able to run the ‘snappy’ command as someone who is an admin, a system builder, a developer and/or someone otherwise knowledgeable enough to make a more informed trust decision. (This will come up again later, below)

easy to use

  • we want the system to be easy to understand and to develop on
  • we want the system to be easy to use for system builders
  • we want the system to be easy to use and understand for admins

We want a system that is easy to use and understand. It is key that developers are able to develop on it, system builders able to get their work done and admins can install and use the apps from the store.

For Ubuntu Touch, we’ve made a system that is easy to understand and to develop on with a simple declarative permissions model. We’ll refine that for Snappy and make it easy to develop on too. Remember, the security policy is there not just so we can be ‘super secure’ but because it is what gives us the assurances needed for system upgrades, a safe app store and an altogether bulletproof system.

As mentioned, the system we have designed is super flexible. Specifically, the underlying system supports:

  1. apps working wholly within the security policy (aka, ‘common’ security policy groups and templates)
  2. apps declaring specific exceptions to the security policy
  3. apps declaring to use restricted security policy
  4. apps declaring to run (effectively) unconfined
  5. apps shipping hand-crafted policy (that can be strict or lenient)

(Keep in mind the Ubuntu App Store policy will auto-accept apps falling under ‘1’ and trigger manual review for the others)

The above all works today (though it isn’t always friendly– we’re working on that) and the developer is in control. As such, Snappy developers have a plethora of options and can create snaps with security policy for their needs. When the developer wants to ship the app and make it available to all Snappy users via the Ubuntu App Store, then the developer may choose to work within the system to have automated reviews or choose not to and manage the process via manual reviews/commercial relationship with Canonical.

Moving forward

The above works really well for Ubuntu Touch, but today there is too much friction with regard to hardware access. We will make this experience better without compromising on any of our goals. How do we put this all together, today, so people can get stuff done with snappy without sacrificing our goals, making it harder on ourselves in the future or otherwise opening Pandora’s box? We don’t want to relax our security policy, because then we can’t make the bulletproof assurances we are striving for and it would be hard to tighten the security again later. We could also add some temporary security policy that adds only certain accesses (eg, serial devices) but, while useful, this is too inflexible. We also don’t want to have apps declare the accesses themselves to automatically add the necessary security policy, because this (potentially) privileged access is then hidden from the Snappy for Ubuntu Core user.

The answer is simple when we remember that the Snappy for Ubuntu Core user (ie, the one who is able to run the snappy command) is knowledgeable enough to make the trust decision for giving an app access to hardware. In other words, let the admin/developer/system builder be in control.

immediate term

The first thing we are going to do is unblock people and adjust snappy to give the snappy core user the ability to add specific device access to snap-specific security policy. In essence you’ll install a snap, then run a command to give the snap access to a particular device, then you’re done. This simple feature will unblock developers and snappy users immediately while still supporting our trust-model and goals fully. Plus it will be worth implementing since we will likely always want to support this for maximum flexibility and portability (since people can use traditional Linux APIs).

The user experience for this will be discussed and refined on the mailing list in the coming days.

short term

After that, we’ll build on this and explore ways to make the developer and user experience better through integration with the OEM part and ways of interacting with the underlying system so that the user doesn’t have to necessarily know the device name to add, but can instead be given smart choices (this can have tie-ins to the web interface for snappy too). We’ll want to be thinking about hotpluggable devices as well.

Since this all builds on the concept of the immediate term solution, it also supports our trust-model and goals fully and is relatively easy to implement.

future

Once we have the above in place, we should have a reasonable experience for snaps needing traditional device access. This will give us time to evaluate how people are accessing hardware and see if we can make things even better by using frameworks and/or a hardware abstraction layer. In this manner, snaps can program to an easy to use API and the system can mediate access to the underlying hardware via that API.



Read more
Daniel Holbach

What do Kinshasa, Omsk, Paris, Mexico City, Eugene, Denver, Tempe, Catonsville, Fairfax, Dania Beach, San Francisco and various places on the internet have in common?

Right, they’re all participating in the Ubuntu Global Jam on the weekend of 6-8 February! See the full list of teams that are part of the event here. (Please add yours if you haven’t already.)

What’s great about the event is that there are just two basic aims:

  1. do something with Ubuntu
  2. get together and have fun!

What I also like a lot is that there’s always something new to do. Here are just 3 quick examples of that:

App Development Schools

We have put quite a bit of work into putting training materials together; now you can take them out to your team and start writing Ubuntu apps easily.

Snappy

As one tech news article said, “Robots embrace Ubuntu as it invades the internet of things”. Ubuntu’s newest foray, making it possible to bring a stable and secure OS to small devices where you can focus on apps and functionality, is attracting a number of folks on the mailing lists (snappy-devel, snappy-app-devel) and elsewhere. Check out the mailing lists and the snappy site to find out more and have a play with it.

Unity8 on Desktop

Convergence is happening, and what’s working great on the phone is making its way onto the desktop. You can help make this happen by installing and testing it. Your feedback will be much appreciated.


 

Read more
Ben Howard

One of the perennial problems in the Cloud is knowing what is the most current image and where to find it. Some Clouds provide a nice GUI console, an API, or some combination. But what has been missing is a "dashboard" showing Ubuntu across multiple Clouds.


Screenshot: https://cloud-images.ubuntu.com/locator
In that light, I am pleased to announce that we have a new beta Cloud Image Finder. This page shows where official Ubuntu images are available. As with all betas, we have some kinks to work out, like gathering up links for our Cloud Partners (so clicking an Image ID launches an image). I envision that in the future this locator page will be the default landing page for our Cloud Image Page.



The need for this page became painfully apparent yesterday as I was working through the fallout of the Ghost Vulnerability (aka CVE-2015-0235). The Cloud Image team had spent a good amount of time pushing our images to AWS, Azure, GCE, Joyent and then notifying our partners like Brightbox, DreamCompute, CloudSigma and VMware of new builds. I realized that we needed a single place for our users to just look and see where the builds are available. And so I hacked up the EC2 Locator page to display other clouds.

Please note: this new page only shows stable releases. We push a lot of images and did not want to confuse things by showing betas, alphas, dailies or the development builds. Rather, this page will only show images that have been put through the complete QA process and are ready for production workloads.

This new locator page is backed by Simple Streams, which is our machine-formatted data service. Simple Streams provides a way of locating images in a uniform way across clouds. Essentially, our new Locator Page is just a viewer of the Simple Streams data.
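As a quick illustration, the same data the page renders can be read directly with a few lines of Python (the index URL below is my assumption about where the released-image stream is published; adjust it if the layout differs):

import json
from urllib.request import urlopen

# Assumed location of the Simple Streams index for released cloud images.
INDEX = "https://cloud-images.ubuntu.com/releases/streams/v1/index.json"

with urlopen(INDEX) as resp:
    index = json.load(resp)

# Each entry in the index points at a further JSON document describing the images themselves.
for name, entry in index.get("index", {}).items():
    print(name, "->", entry.get("path"))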

Hopefully our users will find this new page useful. Feedback is always welcome. Please feel free to drop me a line (utlemming @ ubuntu dot com). 

Read more
Ben Howard

A few years ago when our fine friends on the kernel team introduced the idea of the "hardware enablement" (HWE) kernel, those of us in the Cloud world looked at it as a curiosity. We thought that, by and large, the HWE kernel would not be needed or wanted for virtual Cloud instances.

And we were wrong.

So wrong in fact, that the HWE kernel has found its way into the Vagrant Cloud Images, VMware's vCHS, and Google's Compute engine as the default kernel for the Certified Images. The main reason for these requests is that virtual hardware moves at a fairly quick pace. Unlike traditional hardware, Virtual Hardware can be fixed and patched at the speed that software can be deployed.

The feedback in regards to Azure has been the same: users and Microsoft have asked for the HWE kernel consistently. Microsoft has validated that the HWE kernel (3.16) running Ubuntu 14.04 on Windows Azure passes their validation testing. In our testing, we have validated that the 3.16 kernel works quite well in Azure.

For Azure users, using the 3.16 HWE kernel brings SMB 2.1 copy file support and updates LIS drivers.

Therefore, starting with the latest Windows Azure image [1], all the Ubuntu 14.04 images will track the latest hardware enablement kernel. That means that all the goodness in Ubuntu 14.10's kernel will be the default for 14.04 users launching our official images on Windows Azure.

If you want to install the HWE (lts-utopic) kernel on your existing instance(s), simply run:

  • sudo apt-get update
  • sudo apt-get install linux-image-virtual-lts-utopic linux-lts-utopic-cloud-tools-common walinuxagent
  • sudo reboot


[1] b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20150123-en-us-30GB

Read more
Nicholas Skaggs

It's time for a testing jam!

Ubuntu Global Jam, Vivid edition is a few short weeks away. It's time to make your event happen. I can help! Here's my officially unofficial guide to global jam success.

Steps:

  1. Get your jam pack! Get the request in right away so it gets to you on time. 
  2. Pick a cool location to jam
  3. Tell everyone! (be sure to mention free swag, who can resist!?)
But wait, what are you going to do while jamming? I've got that covered too! Hold a testing jam! All you need to know can be found on the ubuntu global jam wiki. The wiki even has more information for you as a jam host in case you have questions or just like details.

Ohh and just in case you don't like testing (seems crazy, I know), there are other jam ideas available to you. The important thing is you get together with other ubuntu aficionados and celebrate ubuntu! 

P.S. Don't forget to share pictures afterwards. No one will know you had the coolest jam in the world unless you tell them :-)

P.P.S. If I'm invited, bring cupcakes! Yum!

Read more
Prakash

WhatsApp on Ubuntu ?

Yes, you can use WhatsApp on your Ubuntu Desktop!

It currently works with Chrome browser only (no Chromium support yet).

In Chrome, go to: https://web.whatsapp.com

In WhatsApp on your phone, go to Menu (top left) – WhatsApp Web and add your web client.

That’s all, now all your WhatsApp messages will also show up on your Ubuntu Desktop.

Have fun!

 

Read more
Dustin Kirkland


With the recent introduction of Snappy Ubuntu, there are now several different ways to extend and update (apt-get vs. snappy) multiple flavors of Ubuntu (Core, Desktop, and Server).

We've put together this matrix with a few examples of where we think Traditional Ubuntu (apt-get) and Transactional Ubuntu (snappy) might make sense in your environment.  Note that this is, of course, not a comprehensive list.

Traditional apt-get:
  • Ubuntu Core: Minimal Docker and LXC images
  • Ubuntu Desktop: Desktop, Laptop, Personal Workstations
  • Ubuntu Server: Baremetal, MAAS, OpenStack, General Purpose Cloud Images

Transactional snappy:
  • Ubuntu Core: Minimal IoT Devices and Micro-Services Architecture Cloud Images
  • Ubuntu Desktop: Touch, Phones, Tablets
  • Ubuntu Server: Comfy, Human Developer Interaction (over SSH) in an atomically updated environment

I've presupposed a few of the questions you might ask, while you're digesting this new landscape...

Q: I'm looking for the smallest possible Ubuntu image that still supports apt-get...
A: You want our Traditional Ubuntu Core. This is often useful in building Docker and LXC containers.

Q: I'm building the next wearable IoT device/drone/robot, and perhaps deploying a fleet of atomically updated micro-services to the cloud...
A: You want Snappy Ubuntu Core.

Q: I want to install the best damn Linux on my laptop, desktop, or personal workstation, with industry-best security practices, 30K+ freely available open source packages, and extensive support for hardware devices and proprietary add-ons...
A: You want the same Ubuntu Desktop that we've been shipping for 10+ years, on time, every time ;-)

Q: I want that same converged, tasteful Ubuntu experience on my personal smart devices, like my phones and tablets...
A: You want Ubuntu Touch, which is a very graphical human interface focused expression of Snappy Ubuntu.

Q: I'm deploying Linux onto bare metal servers at scale in the data center, perhaps building IaaS clouds using OpenStack or PaaS cloud using CloudFoundry? And I'm launching general purpose Linux server instances in public clouds (like AWS, Azure, or GCE) and private clouds...
A: You want the traditional apt-get Ubuntu Server.

Q: I'm developing and debugging applications, services, or frameworks for Snappy Ubuntu devices or cloud instances?
A: You want Comfy Ubuntu Server, which is a command line human interface extension of Snappy Ubuntu, with a number of conveniences and amenities (ssh, byobu, manpages, editors, etc.) that won't be typically included in the minimal Snappy Ubuntu Core build. [*Note that the Comfy images will be available very soon]

Cheers,
:-Dustin

Read more
pitti

ROS what?

Robot Operating System (ROS) is a set of libraries, services, protocols, conventions, and tools to write robot software. It’s about seven years old now, it is free software, and it has a growing community, bringing Linux into the interesting field of robotics. They primarily target/support running on Ubuntu (the current Indigo ROS release runs on 14.04 LTS on x86), but they also have some other experimental platforms like Ubuntu ARM and OS X.

ROS, meet Snappy

It appears that their use cases match Ubuntu Snappy’s vision really well: ROS apps usually target single-function devices which require absolutely robust deployments and upgrades, and while they of course require a solid operating system core they mostly implement their own build system and libraries, so they don’t make too many assumptions about the underlying OS layer.

So I went ahead and created a snapp package for the Turtle ROS tutorial, which automates all the setup and building. As this is a relatively complex and big project, it helped to uncover quite a number of bugs, the most important of which have now been fixed. So while the building of the snap still has quite a number of workarounds, installing and running the snap is now reasonably clean.

Enough talk, how can I get it?

If you are interested in ROS, you can look at bzr branch lp:~snappy-dev/snappy-hub/ros-tutorials. This contains documentation and a script build.sh which builds the snapp package in a clean Ubuntu Vivid environment. I recommend a schroot for this so that you can simply do e. g.

  $ schroot -c vivid ./build.sh

This will produce a /tmp/ros/ros-tutorial_0.2_<arch>.snap package. You can download a built amd64 snapp if you don’t want to build it yourself.

Installing and running

Then you can install this on your Snappy QEMU image or other installation and run the tutorial (again, see README.md for details):

  yourhost$ ssh -o UserKnownHostsFile=/dev/null -p 8022 -R 6010:/tmp/.X11-unix/X0 ubuntu@localhost
  snappy$ scp <yourhostuser>@10.0.2.2:/tmp/ros/ros-tutorial_0.2_amd64.snap .
  snappy$ sudo snappy install ros-tutorial_0.2_amd64.snap

You need to adjust <yourhostuser> accordingly; if you didn’t build yourself but downloaded the image, you might also need to adjust the host path where you put the .snap.

Finally, run it:

  snappy$ ros-tutorial.rossnap roscore &
  snappy$ DISPLAY=localhost:10.0 ros-tutorial.rossnap rosrun turtlesim turtlesim_node &
  snappy$ ros-tutorial.rossnap rosrun turtlesim turtle_teleop_key

You might prefer ssh’ing in three times and running the commands in separate shells. Only turtlesim_node needs $DISPLAY (and is quite an exception — a usual robotics app of course wouldn’t!). Also, note that this requires ssh from at least Ubuntu 14.10 – if you are on 14.04 LTS, see README.md.

Enjoy!

Read more
Victor Palau

Just a quick note to tell you that I have published a new scope called uBrick that brings you the awesomeness of Lego, as a catalogue powered by brickset.com, directly to your Ubuntu phone home screen.

I wrote the scope in Go because I find it easier to work with for a quick scope (it took about 8 hours, with interruptions over 2 days, to write this scope). The scope is now available in the store; just search for uBrick.

Here are some pics:

lego1lego2lego3 lego4

Also I have to congratulate the folks at Brickset for a very nice API, even if it is using SOAP :)


Read more
Dustin Kirkland


Forget about The Year of the Linux Desktop...This is The Year of the Linux Countertop!

I'm talking about Linux on every form of Internet-connected embedded devices.  The Internet-of-Things is already upon us.  Sensors, smart watches, TVs, thermostats, security cameras, drones, printers, routers, switches, robots -- you name it.  

And with that backdrop, we are thrilled to introduce Snappy Ubuntu for Devices.  Ubuntu is now a possibility, on almost any device, anywhere.  Now that's exciting!

This is the same Snappy Ubuntu, with its atomic, transactional updates that we launched on each major public cloud last month -- extended and updated for 64-bit Intel, AMD and ARM devices.


Now, if you want a detailed, developer's look at building a Snappy Ubuntu image and running it on a BeagleBone, you're in luck!  I shot this little instructional video (using Cheese, GTK-RecordMyDesktop, and OpenShot).  Enjoy!


A transcript of the video follows...


  1. What is Snappy Ubuntu?
    • A few weeks ago, we introduced a new flavor of Ubuntu that we call “Snappy” -- an atomically, transactionally updated Operating System -- and showed how to launch, update, rollback, and install apps in cloud instances of Snappy Ubuntu in Amazon EC2, Microsoft Azure, and Google Compute Engine public clouds.
    • And now we’re showing how that same Snappy Ubuntu experience is the perfect operating system for today’s Cambrian Explosion of smart devices that some people are calling “the Internet of Things”!
    • Snappy Ubuntu Core bundles only the essentials of a modern, appstore-powered Linux OS stack, and hence leaves room, both in size and in flexibility, to build, maintain and monetize your very own device solution without having to care about the overhead of inventing and maintaining your own OS and tools from scratch. Snappy Ubuntu Core comes right in time for you to put your very own stake into the still unconquered worlds of things.
    • We think you’ll love Snappy on your smart devices for many of the same reasons that there are already millions of Ubuntu machine instances in hundreds of public and private clouds, as well as the millions of your own Ubuntu desktops, tablets, and phones!
  2. Unboxing the BeagleBone
    • Our target hardware for this Snappy Ubuntu demo is the BeagleBone Black -- an inexpensive, open platform for hardware and software developers.
    • I paid $55 for the board, and $8 for a USB to TTL Serial Cable
    • The board is about the size of a credit card, has a 1GHz ARM Cortex A8 processor, 512MB RAM, and on board ethernet.
    • While Snappy Ubuntu will run on most any armhf or amd64 hardware (including the Intel NUC), the BeagleBone is perhaps the most developer friendly solution.
  3. The easiest way to get your Snappy Ubuntu running on your Beaglebone
    • The world of Devices has so many opportunities that it won’t be possible to give everyone the perfect vertical stack centrally. Hence Canonical is trying to enable all of you and provide you with the elements that get you started doing your innovation as quickly as possible. Since there will be many devices that won’t need a screen and input devices, we have developed “webdm”. webdm gives you the ability to manage your snappy device and consume apps without any development effort.
    • To install, you simply download our prebuilt WEB .img and dd it to your SD card.
    • After that, all you have to do is connect your BeagleBone to a DHCP-enabled local network and power it on.
    • After 1-2 minutes, go to http://webdm.local:8080 and you can get on with installing apps from the snappy appstore without any further effort.
    • Of course, we are still in beta and will continue to give you more features and a greater experience over time; we will not only make the UI better, but also work on various customization options that allow you to deliver your own app-store-powered product without investing your development resources in something that has already been solved.
  4. Downloading Snappy and writing to an sdcard
    • Now we’re going to build a Snappy Ubuntu image to run on our device.
    • Soon, we’ll publish a library of Snappy Ubuntu images for many popular devices, but for this demo, we’re going to roll our own using the tool, ubuntu-device-flash.
    • ls -halF mysnappy.img
    • sudo dd if=mysnappy.img of=/dev/mmcblk0 bs=1M oflag=dsync
  5. Hooking up the BeagleBone
    • Insert the microsd card
    • Network cable
    • USB debug
    • Power/USB
  6. Booting Snappy and command line experience
    • Okay, so we’re ready for our first boot of Snappy!
    • Let’s attach to the USB/serial console using screen
    • Now, I’ll attach the power, and if you watch very carefully, you might get to see a few boot messages.
    • snappy help
    • ifconfig
    • ssh ubuntu@10.0.0.105
  7. WebDM experience
    • snappy info
    • Shows we have the webdm framework installed
    • point browser to http://10.0.0.105:8080
    • Configuration
    • Store
  8. Conclusion
    • Hey how cool is that!  Snappy Ubuntu running on devices :-)
    • I’ve spent plenty of time and money geeking out over my Nest and Dropcam and Netatmo and WeMo lightswitches, playing with their APIs and hooking them up to If-This-Then-That.
    • But I’m really excited about a world where those types of devices are as accessible to me as my Ubuntu servers and desktops!
    • And from what I’ve shown you here, with THIS, I think we can safely say that we’ve blown right past the year of the Linux desktop.
    • This is the year of the Linux countertop!

Cheers,
Dustin

Read more
Michael Hall

For a long time now Canonical has provided Ubuntu LoCo Teams with material to use in the promotion of Ubuntu. This has come in the form of CDs and DVDs for Ubuntu releases, as well as conference packs for booths and shows.

We’ve also been sent several packages, when requested by an Ubuntu Member, to LoCo Teams for their own events, such as release parties or global jams.

Ubuntu Mauritius Team 14.10 Global Jam

This cycle we are extending this offer to any LoCo team that is hosting an in-person Global Jam event. It doesn’t matter how many people are going, or what you’re planning on doing for your jam. The Jam Packs will include DVDs, stickers, pens and other giveaways for your attendees, as well as an Ubuntu t-shirt for the organizers (or as a giveaway, if you choose).

Since there are only a few weeks before Global Jam weekend, and these will be shipped from London, please take your country’s customs process into consideration before ordering. Countries in North America and Europe shouldn’t have a problem, but if you’ve experienced long customs delays in the past please consider waiting and making your request for the next Global Jam.

To get an Ubuntu Global Jam Pack for your event, all you need to do is the following:

  • Register your Global Jam event on the LoCo Team Portal
    • Your event must be in-person, and have a venue associated with it
  • Fill out the community donation request form
    • Include a link to your LoCo Team Portal event in your request
  • Promote your event, before and after
    • Blog about it, post pictures, and share your excitement on social media
      • Use the #ubuntu hashtag when available

You can find all kinds of resources, activities and advice for running your Global Jam event on the Ubuntu Wiki, where we’ve collected the cumulative knowledge from all across the community over many years. And you can get live help and advice any time on the #ubuntu-locoteams IRC channel on Freenode.

Read more
Daniel Holbach

Building a great community

Xubuntu
In last night’s Community Council meeting, we met up with the Xubuntu team. They have done a great job inviting new members into their part of the community. Just read this:

<pleia2> elfy notes all contributors in his announcements
<dholbach> that's really really nice
<pleia2> we do blog posts, emails directly to all the testing members and to -devel list
<dholbach> wow
<pleia2> this cycle we're giving out stickers to some of our top testers
<elfy> if we get that sorted
<pleia2> share on social media too

This is just fantastic. I’m very happy with what the Xubuntu folks are doing and I’m glad to be part of such an open and welcoming community as well.

If you think that’s great too and want to get involved, have a look at their “Get involved” page. They particularly need testers for the new release.

Xubuntu team: keep up the great work! :-)

Read more
bmichaelsen

No-no-no, light speed is too slow!
Yes, we’ll have to go right to… ludicrous speed!

— Dark Helmet, Spaceballs

So, I recently brought up the topic of Writer's nodes in the LibreOffice ESC call. More specifically: the SwNodeIndex class, which is, if one broadly simplifies, an iterator over the container holding all the paragraphs of a text document. Before my modifications, the SwNodes container class kept all these SwNodeIndices in a homegrown intrusive doubly linked list, to be able to ensure these stay valid e.g. if a SwNode gets deleted/removed. Still — as usual with performance topics — wild guesses aren't helpful, and measurements should trump intuition. I used valgrind for that, and measured the number of instructions needed for loading the ODF spec. Since I did the same years and years ago on the old OpenOffice.org performance project, I just checked if we regressed against that. It's comforting that we did not at all — we were much faster, but that measurement has to be taken with a few pounds of salt, as a lot of other things differ between these two measurements (e.g. we now have a completely new build system, compiler versions etc.). But it's good that we are moving in the right direction.

implementation        SwNodes        SwNodeIndex      total instructions   performance                linedelta
DEV300_m45            71,727,655     73,784,052       9,823,158,471        ?                          ?
master@fc93c17a       84,553,232     60,987,760       6,170,762,825        0%                         0
std::list             18,461,317     103,461,317      14,502,230,571       -5,725% (-235% of total)   +12/-70
std::vector           18,986,848     3,707,286,032    9,811,541,380        -2,502%                    +22/-70
std::unordered_map    18,984,984     82,843,000       7,083,620,244        -627% (-15% of total)      +16/-70
std::vector rbegin    18,986,848     143,851,229      6,214,602,532        -30% (-7% of total)        +23/-70
sw::Ring<>            23,447,256     inlined          6,154,660,709        11% (2.6% of total)        +108/-229

With that comforting knowledge, I started to play around with the code. The first thing I did was to replace the handcrafted intrusive list with a std::list pointing to the SwNodeIndex instances as a member in the SwNodes class. This is expected to slow things down, as now two allocs are needed: one for the SwNodeIndex class and one for the node entry in the std::list. To be honest, though, I didn't expect this to slow down the code handling the nodes by a factor of ~57 for the loading of the example document. The whole document loading time (not just the node handling) slows down by a factor of ~2.4. So ok, this establishes for certain that this part of the code is highly performance sensitive.
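For readers who have not met the pattern before, here is a toy sketch of the "intrusive" idea (in Python purely for illustration -- the real code is C++): the index objects themselves carry the prev/next links and register with their container, so unregistering one never needs a search and no separate list node has to be allocated or freed:

class NodeIndex:
    """An 'intrusive' list element: it carries its own prev/next links."""
    def __init__(self, owner):
        self.prev = self.next = None
        owner.register(self)

class Nodes:
    """Container keeping every live NodeIndex in a doubly linked list."""
    def __init__(self):
        self.head = None

    def register(self, idx):
        idx.next = self.head
        if self.head is not None:
            self.head.prev = idx
        self.head = idx

    def unregister(self, idx):
        # O(1): no search, and no extra list node to free.
        if idx.prev is not None:
            idx.prev.next = idx.next
        else:
            self.head = idx.next
        if idx.next is not None:
            idx.next.prev = idx.prev
        idx.prev = idx.next = None

nodes = Nodes()
a, b = NodeIndex(nodes), NodeIndex(nodes)
nodes.unregister(a)

The std::list variant discussed next pays exactly the extra per-entry allocation that this layout avoids.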

The next thing I tried, to get a feel for how the performance reacts, was using a std::vector in the SwNodes class. When reserving some memory early, this should severely reduce the number of allocs needed. And indeed this was quicker than the std::list, even with a naive approach just doing a push_back() for insertion and a std::find()/std::erase() for removal. However, the node indices are often temporarily created and quickly destroyed again. Thus adding new indices at the end and searching from the start certainly is not ideal: this is also slower than the intrusive list that was on master, by a factor of ~25 for the code doing the node handling.

Searching for a SwNodeIndex from the end of the vector, where we likely just inserted it, and then swapping it with the last entry makes the std::vector almost competitive with the original implementation: but still 30% slower. (The total loading time would only have increased by 0.7% using the vector like this.)

For completeness, I also had a look at a std::unordered_map. It did a bit better than I expected, but still would have slowed down loading by 15% for the example experiment.

Having ruled out that standard containers would do much good here without lots of tweaking, I tried the sw::Ring<> class that I recently rewrote based on Boost.Intrusive as an inline header class. This was 11% quicker than the old implementation, resulting in 2.6% quicker loading for the whole document. Not exactly a heroic achievement, but also not too bad for just some 200 lines touched. So this is now on master.

Why does this linked list outperform the old linked list? Inlining. Especially the non-inlined constructors and the destructor calling a trivial non-inlined member function. And on top of that, the constructors and the function called by the destructor called two non-inlined friend functions from a different compilation unit, making it extra hard for a compiler to optimize that. Now, link time optimization (LTO) could maybe do something about that someday. However, with LTO being in different states on different platforms and with developers possibly building without LTO for build time performance for some time, requiring the compiler/linker to be extra clever might be a mixed blessing: the developers might run into "the map is not the territory" problems.

my personal take-aways:

  • The SwNodeIndex has quite a relevant impact on performance. If you touch it, handle with care (and with valgrind).
  • The current code has decent performance; further improvements likely need deeper structural work (see e.g. Kendy's bplustree stuff).
  • Intrusive linked lists might be cumbersome, but for some scenarios, they are really fast.
  • Inlining can really help (doh).
  • LTO might help someday (or not).
  • friend declarations for non-inline functions across compilation units can be a code smell for possible performance optimization.

Please excuse the extensive writing for a meager 2.6% performance improvement — the intention is to keep somebody (including me) from redoing some or all of the work above just to come to the same conclusion.


Note: Here is how this was measured:

  • gcc 4.8.3
  • boost 1.55.0
  • test document: ODF spec
  • valgrind --tool=callgrind "--toggle-collect=*LoadOwnFormat*" --callgrind-out-file=somefilename.cg ./instdir/program/soffice.bin
  • ./autogen.sh --disable-gnome-vfs --disable-odk --disable-postgresql-sdbc --disable-report-builder --disable-scripting-beanshell --enable-gio --enable-symbols --with-external-tar=... --with-junit=... --with-hamcrest=... --with-system-libs --without-doxygen --without-help --without-myspell-dicts --without-system-libmwaw --without-system-mdds --without-system-orcus --without-system-sane --without-system-vigra --without-system-libodfgen --without-system-libcmis --disable-firebird-sdbc --without-system-libebook --without-system-libetonyek --without-system-libfreehand --without-system-libabw --disable-gnome-vfs --without-system-glm --without-system-glew --without-system-librevenge --without-system-libcdr --without-system-libmspub --without-system-libvisio --without-system-libwpd --without-system-libwps --without-system-libwpg --without-system-libgltf --without-system-libpagemaker --without-system-coinmp --with-jdk-home=...


Read more