Canonical Voices

Posts tagged with 'ubuntu'

Daniel Holbach

What does being an Ubuntu member mean to you? Why did you do it back then?

I became an Ubuntu member about 10 years ago. It was part of the process of becoming a member of the MOTU team: before you could apply for upload rights, you had to be an Ubuntu member.

That wasn’t all of it though. For me it wasn’t the @ubuntu.com mail address or “fulfilling the requirements for upload rights”. As I had helped out and contributed for months already, I felt part of the tribe, and luckily many encouraged me to take the next step and apply for membership. I had grown to like the people I worked with and had learned a lot from them. It was a bit daunting, but being recognised for my contributions was a great experience. Afterwards, I would say, I did my fair share of encouraging others to apply as well. :-)

Which brings me to the two calls to action I wanted to get out there.

1) Encourage members of your team who haven’t applied for Ubuntu membership!

There are so many people doing fantastic work on AskUbuntu, the Forums, in Flavour teams, the Docs team, the QA world and all over the place when it comes to phones, desktops, IoT bits, servers, the cloud and more. Many, many of them should really be Ubuntu members, but they haven’t heard of it, don’t know how to apply, or are concerned they haven’t “done enough”.

If you have people like that in a project you are working in, please do encourage them. In an open source project we should aim to do a good job of recognising the great work of others.

2) Join the Ubuntu Membership Boards!

If you are an Ubuntu member, seriously consider joining the Ubuntu Membership Boards. The call for nominations is still open and it’s a great thing to be involved with.

When I joined the Community Council, the CC was still in charge of approving Ubuntu members, and I enjoyed the meetings (even if they were quite looooooooooooooooooooong), where we got to talk to many contributors from all parts of the globe and from all parts of the Ubuntu landscape. Welcoming many of them to the Ubuntu members team was just beautiful.

Nominate yourself and be quick about it! :-)

Read more
Nicholas Skaggs


Whoosh, Spring is in the air, Winter is over (at least for us Northern Hemisphere folks). With that, it's time to polish the final beta image for vivid.

How can I help? 
To help test, visit the iso tracker milestone page for final beta. The goal is to verify the images in preparation for the release. Find those bugs! The information at the top of the page will guide you if you need help reporting a bug or understanding how to test.

Isotracker? 
There's a first time for everything! Check out the handy links on top of the isotracker page detailing how to perform an image test, as well as a little about how the qatracker itself works. If you still aren't sure or get stuck, feel free to contact the qa community or me for help.

What if I'm late?
The testing runs through this Thursday, March 26th, when the images for final beta will be released. If you miss the deadline we still love getting results! Test against the daily image milestone instead.

Thanks and happy testing everyone!

Read more
Michael Hall

Way back at the dawn of the open source era, Richard Stallman wrote the Four Freedoms which defined what it meant for software to be free. These are:

  • Freedom 0: The freedom to run the program for any purpose.
  • Freedom 1: The freedom to study how the program works, and change it to make it do what you wish.
  • Freedom 2: The freedom to redistribute copies so you can help your neighbor.
  • Freedom 3: The freedom to improve the program, and release your improvements (and modified versions in general) to the public, so that the whole community benefits.

For nearly three decades now they have been the foundation for our movement, the motivation for many of us, and the guiding principle for the decisions we make about what software to use.

But outside of our little corner of humanity, these freedoms are not seen as particularly important. In fact, the vast majority of people are not only happy to use software that violates them, but will often prefer to do so. I don’t even feel the need to provide supporting evidence for this claim, as I’m sure all of you have been on one side or the other of a losing argument about why using open source software is important.

The problem, it seems, is that people who don’t plan on exercising any of these freedoms, from lack of interest or lack of ability, don’t place the same value on them as those of us who do. That’s why software developers are more likely than non-developers to prefer open source: they might actually use those freedoms at some point.

But the people who don’t see a personal value in free software are missing a larger, more important freedom. One implied by the first four, though not specifically stated. A fifth freedom if you will, which I define as:

  • Freedom 4: The freedom to have the program improved by a person or persons of your choosing, and make that improvement available back to you and to the public.

Because even though the vast majority of proprietary software users will never be interested in studying or changing the source of the software they use, they will likely all, at some point in time, ask someone else if they can fix it. Who among us hasn’t had a friend or relative ask us to fix their Windows computer? And the true answer is that, without having the four freedoms (and the implied fifth), only Microsoft can truly “fix” their OS; the rest of us can only try to undo the damage that’s been done.

So the next time you’re trying to convince someone of the importance of free and open software, and they chime in with the fact that they don’t want to change it, try pointing out that by using proprietary code they’re limiting their options for getting it fixed when it inevitably breaks.

Read more
bmichaelsen

When logic and proportion have fallen sloppy dead
And the white knight is talking backwards
And the red queen’s off with her head
Remember what the dormouse said
Feed your head, feed your head

— Jefferson Airplane, White Rabbit

So, this was intended as a quick and smooth addendum to the “50 ways to fill your vector” post, bringing callgrind into the game and assuring everyone that its instruction counts are a good proxy for walltime performance of your code. This started out mostly as expected, when measuring the instruction counts in two scenarios:

implementation/cflags   -O2 not inlined   -O3 inlined
A1                      2610061438        2510061428
A2                      2610000025        2510000015
A3                      2610000025        2510000015
B1                      3150000009        2440000009
B2                      3150000009        2440000009
B3                      3150000009        2440000009
C1                      3150000009        2440000009
C3                      3300000009        2440000009

The good news here is that this mostly faithfully reproduces some general observations on the timings from the last post on this topic, although the differences are more pronounced in callgrind than in reality:

  • The A implementations are faster than the B and C implementations on -O2 without inlining
  • The A implementations are slower (by a smaller amount) than the B and C implementations on -O3 with inlining

The last post also suggested the expectation that all implementations could — and with a good compiler: should — have the same code and same speed when everything is inline. Apart from the A implementations still differing from the B and C ones, callgrind's instruction counts suggest this to actually be the case. Letting gcc compile to assembler and comparing the output, one finds:

  • Inline A1-3 compile to the same output on -Os, -O2, -O3 each. There is no difference between -O2 and -O3 for these.
  • Inline B1-3 compile to the same output on -Os, -O2, -O3 each, but they differ between optimization levels.
  • Inline C3 output differs from the others and between optimization levels.
  • Without inlinable constructors, the picture is the same, except that A3 and B3 now differ slightly from their kin as expected.

So indeed most of the implementations generate the same assembler code. However, this is quite a bit at odds with the significant differences in performance measured in the last post, e.g. B1/B2/B3 on -O2 created widely different walltimes. So time to test the assumption that running one implementation for a minute produces reasonably stable results, by doing 10 one-minute runs for each implementation and seeing what the standard deviation is. The following is found for walltimes (no inline constructors):

implementation/cflags   -Os      -O2      -O3      -O3 -march=
A1                      80.6 s   78.9 s   78.9 s   79.0 s
A2                      78.7 s   78.1 s   78.0 s   79.2 s
A3                      80.7 s   78.9 s   78.9 s   78.9 s
B1                      84.8 s   80.8 s   78.0 s   78.0 s
B2                      84.8 s   86.0 s   78.0 s   78.1 s
B3                      84.8 s   82.3 s   79.7 s   79.7 s
C1                      84.4 s   85.4 s   78.0 s   78.0 s
C3                      86.6 s   85.7 s   78.0 s   78.9 s

no inline measurements

And with inlining:

implementation/cflags   -Os      -O2      -O3      -O3 -march=
A1                      76.4 s   74.5 s   74.7 s   73.8 s
A2                      75.4 s   73.7 s   73.8 s   74.5 s
A3                      76.3 s   74.6 s   75.5 s   73.7 s
B1                      80.6 s   77.1 s   72.7 s   73.7 s
B2                      81.4 s   78.9 s   72.0 s   72.0 s
B3                      80.6 s   78.9 s   72.8 s   73.7 s
C1                      81.4 s   78.9 s   72.0 s   72.0 s
C3                      79.7 s   80.5 s   72.9 s   77.8 s

inline measurements

The standard deviation for all the above values is less than 0.2 seconds. That is … interesting: For example, on -O2 without inlining, B1 and B2 generate the same assembler output, but execute with a very significant difference on hardware (5.2 s difference, or more than 25 standard deviations). So how have logic and proportion fallen sloppy dead here? If the same code is executed — admittedly from two different locations in the binary — how can that create such a significant difference in walltime performance, while not being visible at all in callgrind? A wild guess, which I have not confirmed yet, is cache locality: when not inlining constructors, those might be in CPU cache for one copy of the code in the binary, but not for the other. By the way, that might also hint at why the -march= flag (which creates bigger code) seems so ineffective, and it might explain why performance is rather consistent when using inline constructors. If so, the impact of this is certainly interesting. It also suggests that allowing inlining of hotspots, like recently done with the low-level sw::Ring class, produces much more performance improvement on real hardware than the meager results measured with callgrind. And it reinforces the warning made in that post about not falling into the trap of mistaking the map for the territory: callgrind is not a “map in the scale of a mile to the mile”.

Addendum: As said in the previous post, I am still interested in such measurements on other hardware or compilers. All measurements above were done with gcc 4.8.3 on an Intel i5-4200U@1.6GHz.
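
If you want to reproduce the 10-runs-and-standard-deviation methodology above, here is a minimal sketch of a runner (not from the original post; it assumes the benchmark binary is called ./main, as built by the Makefile in the previous post):

#!/usr/bin/env python3
# Minimal sketch: run a benchmark binary repeatedly and report the
# mean and standard deviation of its wallclock times.
import statistics
import subprocess
import time

RUNS = 10
BINARY = "./main"  # assumption: the benchmark binary from the previous post

def time_once():
    start = time.monotonic()
    subprocess.run([BINARY], check=True, stdout=subprocess.DEVNULL)
    return time.monotonic() - start

samples = [time_once() for _ in range(RUNS)]
print("mean: %.1f s  stddev: %.2f s"
      % (statistics.mean(samples), statistics.stdev(samples)))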


Read more
Victor Palau

I recently blogged about making a scope in 5 minutes using youtube. I have also seen a fair amount of new scopes being created using RSS. By far, my favourite way to use scopecreator is Twitter.

If you want to check a few examples, I have previously published twitter-based scopes like breaking news, la liga and a few others. Today, I give you Formula One:

[screenshots of the Formula One scope]

The interesting thing about twitter is that many brands post minute-by-minute updates, which makes them a really good source for scopes.

To create a Formula One scope, I started by going to twitter and creating a list under my scope account (you can use your personal account). The list contains several relevant “official” Formula One accounts. Using Twitter, I can then update the sources by adding and removing accounts from the list without the user needing to download an update for the scope.

Again, it took me about 5 min to get a working version of the scope. Here is what I needed to do:

  • First, I followed Chris’ instructions to install the scope creator tool.
  • Once I had it set up on my laptop, I ran:
    scopecreator create twitter vtuson f1
    cd f1
  • Next, I configured the scope. The configuration is done in a json file called manifest.json. This file describes the content of what you will publish later to the store. You need to care about: “title”, “description”, “version” and “maintainer”. The rest are values populated by the tool:
    scopecreator edit config
    {
    "description": "Formula One scope",
    "framework": "ubuntu-sdk-14.10",
    "architecture": "armhf",
    "hooks": {
    "f1": {
    "scope": "f1",
    "apparmor": "scope-security.json"
    }
    },
    "icon": "icon",
    "maintainer": "Your Name <yourname@packagedomain>",
    "name": "f1.vtuson",
    "title": "Formula One",
    "version": "0.2"
    }
  • The following step was to set up the branding: Easy! Branding is defined in an .ini file. “Display name” will be the name listed on the “manage” window once installed, and will also be the title of your scope if you don’t use a “PageHeader.Logo”. The [Appearance] section describes the colours and logos to use when branding a scope.
    scopecreator edit branding
    [ScopeConfig]
    ScopeRunner=./f1.vtuson_f1 --runtime %R --scope %S
    DisplayName=Formula One
    Description=This is an Ubuntu search plugin that enables information from Yelp $
    Author=Canonical Ltd.
    Art=
    Icon=images/icon.png
    SearchHint=Search
    [Appearance]
    PageHeader.Background=color:///#D51318
    PageHeader.ForegroundColor=#FFFFFF
    PreviewButtonColor=#D51318
  • The final part is to define the departments (drop down menu) for the scope. This is also a json file and it is unique to the twitter scope template. You can either use “list” or “account” (or both) as departments. The id is the twitter handle for the list or the account. For lists you will need to specify in the configuration section which account holds the list. As I defined a single entry, the formula one scope will have no drop down menu.
    scopecreator edit channels
    {
    "departments": [
    {
    "name": "Formula One",
    "type": "list",
    "id": "f1"
    }
    ],
    "configuration": {
    "list-account": "canonical_scope",
    "openontwitter": "See in Twitter",
    "openlink": "Open",
    "retweet": "Retweet",
    "favorite": "Favourite",
    "reply": "Reply"
    }
    }

After this, the only thing left to do is replace the placeholder icon, with a relevant logo:
~/f1/f1/images/logo.png
And build, check and publish the scope:
scopecreator build

This last command generates the click file that you need to upload to the store. If you have a device (for example a Nexus4 or an emulator), it can also install it so you can test it. If you get any issues getting the scope to run, you might want to check your json files on http://jsonlint.com/. It is a great web tool that will help you make sure your json doc is shipshape!
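
If you prefer a local check, a few lines of Python do the same job as jsonlint (a sketch; the file names are assumptions based on the files edited above, so adjust them to your scope):

#!/usr/bin/env python3
# Validate the scope's JSON files locally instead of pasting them into jsonlint.
import json
import sys

for path in ("manifest.json", "channels.json"):  # assumed file names
    try:
        with open(path) as f:
            json.load(f)
        print(path, "OK")
    except (IOError, ValueError) as error:
        print(path, "FAILED:", error)
        sys.exit(1)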

It is super simple to create a scope for a twitter list! So what are you going to create next?


Read more
Daniel Holbach

In the past weeks Nick, David, a few others and I worked on an app / a website that easily collects information to give users of an Ubuntu device a head start. All our collective experience and knowledge, easily added and translated.

We achieved quite a bit. We’re now very close to getting a first version of it online (both as an app in the store and as a website). We can quite reliably integrate translations and add new content.

We still have a few TODO items and it would be great if you could help out: write a bit of documentation, translate content, fix some HTML/CSS bits or help out with testability. Any help at all will be appreciated.

Tasks:

  • Add content. Just check out our branch and propose a merge. Read the HACKING doc beforehand.
  • Translate. The content is likely going to change a bit in the next days still, but every edit or translation will be appreciated.
  • Hack! We have a number of things we still want to improve. Read the HACKING doc beforehand. Here’s a list of things:
    • Styling/theming/navigation:
      • Bug 1416385: Fix bullet points in the phone theme
      • Bug 1428671: Remove traces of developer.ubuntu.com
      • Bug 1428669: Clean up required CSS/JS
      • Bug 1425025: Automatically load translated pages according to the user language.
    • Testing
    • and there’s more.

Ping me on IRC, or balloons or dpm if you want to get involved. We look forward to working with you and we’ll post more updates soon.

Read more
bmichaelsen

Around the world, Around the world
— Daft Punk, Around the world

So, you have still heard that unfounded myth that it is hard to get involved with and to start contributing to LibreOffice? Still? Even though we have our Easy Hacks and the LibreOffice developers are a friendly bunch that will help you get started on mailing lists and on IRC? If those alone do not convince you, it might be because it is admittedly much easier to get started if you meet people face to face — like on one of our upcoming Events! Especially our Hackfests are a good way to get started. The next one will be at the University de Las Palmas de Gran Canaria, where we had been guests last year already. We presented some introduction talks to the students of the university and then went on to hack on LibreOffice, from fixing bugs to implementing new stuff. Here is how that looked last year:

LibreOffice Hackfest Gran Canaria 2014

One thing we learned from previous Hackfests was that it is great if newcomers have a way to start working on code right away. While that is rather easy, as the 5 minute video on our wiki shows, building might still take some time on some notebooks. So what if you spontaneously show up at the event without a pre-built LibreOffice? Well, for that we now have — thanks to Christian Lohmaier of the Document Foundation staff — remote virtual machines prepared for Hackfests, which allow you to get started right away with everything prepared — on rather beefy hardware even, that is.

If you are a student at ULPGC or live in Las Palmas or on the Canary Islands, we invite you to join us to learn how to get started. For students, this is also a very good opportunity to get involved and prepare for a Google Summer of Code on LibreOffice. Furthermore, if you are even a casual contributor to LibreOffice code already and want to help share and deepen knowledge on how to work on LibreOffice code, you should get in contact with the Document Foundation — while the event is already very soon now, there still might be travel reimbursement available. You will find all the details on the wiki page for the Hackfest in Las Palmas de Gran Canaria 2015.

LibreOffice Evening Hacking in Las Palmas 2014

On the other hand, if two weeks is too short a notice for you, but the rest of this sounds really tempting, the next Hackfest is already planned: it will take place in Cambridge in the United Kingdom in May. We will be there with a Hackfest for the first time and invite you to join us from anywhere in Europe if you either are a LibreOffice code contributor or are interested in learning more on how to become one. Again, there is a wiki page with the details on the LibreOffice Hackfest in Cambridge 2015, and travel reimbursements are available. Contact us!

How I imagine Cambridge in May — Photo by Andrew Dunn CC-BY-SA 2.0 via Wikimedia


Read more
Michael Hall

A couple of weeks ago I had the opportunity to attend the thirteenth Southern California Linux Expo, more commonly known as SCaLE 13x. It was my first time back in five years, since I attended 9x, and my first time as a speaker. I had a blast at SCaLE, and a wonderful time with UbuCon. If you couldn’t make it this year, it should definitely be on your list of shows to attend in 2016.

UbuCon

Thanks to the efforts of Richard Gaskin, we had a room all day Friday to hold an UbuCon. For those of you who haven’t attended an UbuCon before, it’s basically a series of presentations by members of the Ubuntu community on how to use it, contribute to it, or become involved in the community around it. SCaLE was one of the pioneering host conferences for these, and this year they provided a double-sized room for us to use, which we still filled to capacity.

I was given the chance to give not one but two talks during UbuCon, one on community and one on the Ubuntu phone. We also had presentations from my former manager and good friend Jono Bacon, current coworkers Jorge Castro and Marco Ceppi, and inspirational community members Philip Ballew and Richard Gaskin.

I’d like to thank Richard for putting this all together, and for taking such good care of those of us speaking (he made sure we always had mints and water). UbuCon was a huge success because of the amount of time and work he put into it. Thanks also to Canonical for providing us, on rather short notice, a box full of Ubuntu t-shirts to give away. And of course thanks to the SCaLE staff and organizers for providing us the room and all of the A/V equipment in it to use.

The room was recorded all day, so each of these sessions can be watched now on youtube. My own talks are at 4:00:00 and 5:00:00.

Ubuntu Booth

In addition to UbuCon, we also had an Ubuntu booth in the SCaLE expo hall, which was registered and operated by members of the Ubuntu California LoCo team. These guys were amazing: they ran the booth all day over all three days, managed the whole setup and teardown, and did an excellent job talking to everybody who came by and explaining everything from Ubuntu’s cloud offerings to desktops, and even showing off Ubuntu phones.

Our booth wouldn’t have happened without the efforts of Luis Caballero, Matt Mootz, Jose Antonio Rey, Nathan Haines, Ian Santopietro, George Mulak, and Daniel Gimpelevich, so thank you all so much! We also had great support from Carl Richell at System76, who let us borrow 3 of their incredible laptops running Ubuntu to show off our desktop; Canonical, who loaned us 2 Nexus 4 phones running Ubuntu as well as one of the Orange Box cloud demonstration boxes; and Michael Newsham from TierraTek, who sent us a fanless PC and NAS, which we used to display a constantly-repeating video (from Canonical’s marketing team) showing the Ubuntu phone’s Scopes on a television monitor provided to us by Eäär Oden at Video Resources. Oh, and of course Stuart Langridge, who gave up his personal, first-edition Bq Ubuntu phone for the entire weekend so we could show it off at the booth.

Like Ubuntu itself, this booth was not the product of just one organization’s work, but the combination of efforts and resources from many different, but connected, individuals and groups. We are what we are, because of who we all are. So thank you all for being a part of making this booth amazing.

Read more
bmichaelsen

“The problem is all inside your head” she said to me
“The answer is easy if you take it logically”
— Paul Simon, 50 ways to leave your lover

So recently I tweaked around with these newfangled C++11 initializer lists and created an EasyHack to use them to initialize property sequences in a readable way. This caused a short exchange on the LibreOffice mailing list, which I assumed had its part in motivating Stephan's interesting post “On filling a vector”. For all the points being made (also in the quick follow-up on IRC), I wondered how much the theoretical “can use a move constructor” discussion etc. really meant when the C++ is translated to e.g. GENERIC, then GIMPLE, then amd64 assembler, then to the internal RISC instructions of the CPU — with multiple levels of caching in addition.

So I quickly wrote the following (thanks so much for C++11 having the nice std::chrono now).

data.hxx:

#include <vector>
struct Data {
    Data();
    Data(int a);
    int m_a;
};
void DoSomething(std::vector<Data>&);

data.cxx:

#include "data.hxx"
// noop in different compilation unit to prevent optimizing out what we want to measure
void DoSomething(std::vector<Data>&) {}
Data::Data() : m_a(4711) {}
Data::Data(int a) : m_a(a+4711) {}

main.cxx:

#include "data.hxx"
#include <iostream>
#include <vector>
#include <chrono>
#include <functional>

void A1(long count) {
    while(--count) {
        std::vector<Data> vec { Data(), Data(), Data() };
        DoSomething(vec);
    }
}

void A2(long count) {
    while(--count) {
        std::vector<Data> vec { {}, {}, {} };
        DoSomething(vec);
    }
}

void A3(long count) {
    while(--count) {
        std::vector<Data> vec { 0, 0, 0 };
        DoSomething(vec);
    }
}

void B1(long count) {
    while(--count) {
        std::vector<Data> vec;
        vec.reserve(3);
        vec.push_back(Data());
        vec.push_back(Data());
        vec.push_back(Data());
        DoSomething(vec);
    }
}

void B2(long count) {
    while(--count) {
        std::vector<Data> vec;
        vec.reserve(3);
        vec.push_back({});
        vec.push_back({});
        vec.push_back({});
        DoSomething(vec);
    }
}

void B3(long count) {
    while(--count) {
        std::vector<Data> vec;
        vec.reserve(3);
        vec.push_back(0);
        vec.push_back(0);
        vec.push_back(0);
        DoSomething(vec);
    }
}

void C1(long count) {
    while(--count) {
        std::vector<Data> vec;
        vec.reserve(3);
        vec.emplace_back(Data());
        vec.emplace_back(Data());
        vec.emplace_back(Data());
        DoSomething(vec);
    }
}

void C3(long count) {
    while(--count) {
        std::vector<Data> vec;
        vec.reserve(3);
        vec.emplace_back(0);
        vec.emplace_back(0);
        vec.emplace_back(0);
        DoSomething(vec);
    }
}

double benchmark(const char* name, std::function<void (long)> testfunc, const long count) {
    const auto start = std::chrono::system_clock::now();
    testfunc(count);
    const auto end = std::chrono::system_clock::now();
    const std::chrono::duration<double> delta = end-start;
    std::cout << count << " " << name << " iterations took " << delta.count() << " seconds." << std::endl;
    return delta.count();
}

int main(int, char**) {
    long count = 10000000;
    while(benchmark("A1", &A1, count) < 60l)
        count <<= 1;
    std::cout << "Going with " << count << " iterations." << std::endl;
    benchmark("A1", &A1, count);
    benchmark("A2", &A2, count);
    benchmark("A3", &A3, count);
    benchmark("B1", &B1, count);
    benchmark("B2", &B2, count);
    benchmark("B3", &B3, count);
    benchmark("C1", &C1, count);
    benchmark("C3", &C3, count);
    return 0;
}

Makefile:

CFLAGS?=-O2
main: main.o data.o
    g++ -o $@ $^

%.o: %.cxx data.hxx
    g++ $(CFLAGS) -std=c++11 -o $@ -c $<

Note the object here is small and trivial to copy, as one would expect from objects passed around as values (expensive-to-copy objects can mostly be passed around with a std::shared_ptr). So what did this measure? Here are the results:

Time for 1280000000 iterations on an Intel i5-4200U@1.6GHz (-march=core-avx2) compiled with gcc 4.8.3 without inline constructors:

implementation / CFLAGS   -Os       -O2       -O3      -O3 -march=…
A1                        89.1 s    79.0 s    78.9 s   78.9 s
A2                        89.1 s    78.1 s    78.0 s   80.5 s
A3                        90.0 s    78.9 s    78.8 s   79.3 s
B1                        103.6 s   97.8 s    79.0 s   78.0 s
B2                        99.4 s    95.6 s    78.5 s   78.0 s
B3                        107.4 s   90.9 s    79.7 s   79.9 s
C1                        99.4 s    94.4 s    78.0 s   77.9 s
C3                        98.9 s    100.7 s   78.1 s   81.7 s

creating a three element vector without inlined constructors
And, for comparison, the following are the results if one allows the constructors to be inlined.
Time for 1280000000 iterations on an Intel i5-4200U@1.6GHz (-march=core-avx2) compiled with gcc 4.8.3 with inline constructors:

implementation / CFLAGS   -Os      -O2      -O3      -O3 -march=…
A1                        85.6 s   74.7 s   74.6 s   74.6 s
A2                        85.3 s   74.6 s   73.7 s   74.5 s
A3                        91.6 s   73.8 s   74.4 s   74.5 s
B1                        93.4 s   90.2 s   72.8 s   72.0 s
B2                        93.7 s   88.3 s   72.0 s   73.7 s
B3                        97.6 s   88.3 s   72.8 s   72.0 s
C1                        93.4 s   88.3 s   72.0 s   73.7 s
C3                        96.2 s   88.3 s   71.9 s   73.7 s

creating a three element vector with inlined constructors
Some observations on these measurements:

  • -march=... is at best neutral: The measured times do not change much in general, they improve performance only slightly in five out of 16 cases, and the two cases with the most significant change in performance (over 3%) are actually hurting the performance. So for the rest of this post, -march=... will be ignored. Sorry gentooers. ;)
  • There is no silver bullet with regard to the different implementations: A1, A2 and A3 are the faster implementations when not inlining constructors and using -Os or -O2 (the quickest A* is ~10% faster than the quickest B*/C*). However when inlining constructors and using -O3, the same implementations are the slowest (by 2.4%).
  • Most common release builds are still done with -O2 these days. For those, using initializer lists (A1/A2/A3) seems to have a significant edge over the alternatives, whether constructors are inlined or not. This is in contrast to the conclusions made from “constructor counting”, which assumed these to be slow because of the additional calls needed.
  • Looking at the quickest implementation in each build scenario, and those within 1.5% of it, A1 and A2 share the title by being in that group five times each.
  • With constructors inlined, everything in the loop except DoSomething() could be inlined. It seems to me that the compiler could — at least in theory — figure out that it is asked the same thing in all cases. Namely, reserve space for three ints on the heap, fill them each with 4711 and make the ::std::vector<int> data structure on the stack reflect that, then hand that to the DoSomething() function that you know nothing about. If the compiler figured that out, all implementations would take the same time. This happens neither on -O2 (where implementations differ by ~18% from quickest to slowest) nor on -O3 (~3.6%).

One common mantra in applications development is “trust the compiler to optimize”. The above observations show a few cracks in the foundations of that, especially if you take into account that this is all on the same version of the same compiler running on the same platform and hardware with the same STL implementation. For huge objects with expensive constructors, the constructor counting approach might still be valid. Then again, those are rarely statically initialized as a bigger bunch into a vector. For the more common scenario of smaller objects with cheap constructors, my tentative conclusion so far would be to go with A1/A2/A3 — not so much because they are quickest in the most common build scenarios on my platform, but rather because their readability is a value of its own while the performance picture is muddy at best.

And hey, if you want to run the tests above on other platforms or compilers, I would be interested in results!

Note: I did these runs for each scenario only once, thus no standard deviation is given. In general, they seemed to be rather stable, but these being wallclock measurements, one or the other might be an outlier. Caveat emptor.


Read more
Daniel Holbach

I already blogged a bit about the help app I have been working on lately. Now that we have reached a new milestone, I wanted to go into a bit more detail.

What’s the idea behind it?

In a conversation in the Community team we noticed that there’s a lot of knowledge we gathered in the course of having used Ubuntu on a phone for a long time and that it might make sense to share tips and tricks, FAQ, suggestions and lots more with new device users in a simple way.

The idea was to share things like “here’s how to use edge swipes to do X” (maybe an animated GIF?) and “if you want to do Y, install the Z app from the store” in an organised and clever fashion. Obviously we would want this to be easily editable (Markdown) and have easy translations (Launchpad), work well on the phone (Ubuntu HTML5 UI toolkit) and work well on the web (Ubuntu Design Web guidelines) too.

What’s the state of things now?

There’s not much content yet and it doesn’t look perfect, but we have all the infrastructure set up. You can now start contributing! :-)

[screenshot of web edition] [screenshot of phone app edition]

What’s still left to be done?

  • We need HTML/CSS gurus who can help beautifying the themes.
  • We need people to share their tips and tricks and favourite bits of their Ubuntu devices experience.
  • We need hackers who can help in a few places.
  • We need translators.

What do you need to do? For translations: you can do it in Launchpad easily. For everything else:

$ bzr branch lp:ubuntu-devices-help
$ cd ubuntu-devices-help
$ less HACKING

We’ve come a long way in the last week and with the ease of Markdown text and easy Launchpad translations, we should quickly be in a state where we can offer this in the Ubuntu software store and publish the content on the web as well.

If you want to write some content, translate, beautify or fix a few bugs, your help is going to be appreciated. Just ping myself, Nick Skaggs or David Planella on #ubuntu-app-devel.

Read more
Ben Howard

Back when we announced that the Ubuntu 14.04 LTS Cloud Images on Azure were using the Hardware Enablement Kernel (HWE), the immediate feedback was "what about 12.04?"


Well, the next Ubuntu 12.04 Cloud Images on Microsoft Azure will start using the HWE kernel. We have been working with Microsoft to validate using the 3.13 kernel on 12.04 and are pleased with the results and the stability. We spent a lot of time thinking about and testing this change, and in consultation with the Ubuntu Kernel, Foundations and Cloud Image teams, feel this change will give the best experience on Microsoft Azure.

By default, the HWE kernel is used on official images for Ubuntu 12.04 on VMware Air, Google Compute Engine, and now Microsoft Azure. 

Any 12.04 Image published to Azure with a serial later than 20140225 will default to the new HWE kernel. 

Users who want to upgrade their existing instance can simply run:
  • sudo apt-get update
  • sudo apt-get install linux-image-hwe-generic linux-cloud-tools-generic-lts-trusty
  • reboot

Read more
Victor Palau

Not long back Chris Wayne published a post about a scope creator tool. Last week, I was visiting bq and we decided with Victor Gonzalez that we should have a scope for Canal bq. The folks at bq do an excellent job of creating “how to” and “first steps” videos, and they have started publishing some for the bq Aquaris E4.5 Ubuntu Edition.

Here are a few screenshots of the scope that is now available to download from the store:

[screenshots of the Canal bq scope]

The impressive thing is that it took us about 5 min to get a working version of the scope. Here is what we needed to do:

  • First, we followed Chris’ instructions to install the scope creator tool.
  • Once we had it set up on my laptop, we ran:
    scopecreator create youtube com.ubuntu.developer.victorbq canalbq
    cd canalbq
  • Next, we configured the scope. The configuration is done in a json file called manifest.json. This file describes the content of what you will publish later to the store. You need to care about: “title”, “description”, “version” and “maintainer”. The rest are values populated by the tool:
    scopecreator edit config
    {
    "name": "com.ubuntu.developer.victorbq.canalbq",
    "description": "Canal bq",
    "framework": "ubuntu-sdk-14.10",
    "architecture": "armhf",
    "title": "Canal bq",
    "hooks": {
    "canalbq": {
    "scope": "canalbq",
    "apparmor": "scope-security.json"
    }
    },
    "version": "0.3",
    "maintainer": "Victor Gonzalez <anemailfromvictor@bq.com>"
    }
  • The following step was to set up the branding: Easy! Branding is defined in an .ini file. “Display name” will be the name listed on the “manage” window once installed, and will also be the title of your scope if you don’t use a “PageHeader.Logo”. The [Appearance] section describes the colours and logos to use when branding a scope.
    scopecreator edit branding
    [ScopeConfig]
    DisplayName=Canal bq
    Description=Youtube custommized channel
    Author=Canonical Ltd.
    Art=images/logo.png
    Icon=images/logo.png
    SearchHint=Buscar
    LocationDataNeeded=true
    [Appearance]
    PageHeader.Background=color:///#000000
    PageHeader.ForegroundColor=#FFFFFF
    PreviewButtonColor=#FFFFFF
    PageHeader.Logo=./images/logo.png
  • The final part is to define the departments (drop down menu) for the scope. This is also a json file and it is unique to the youtube scope template. You can either use “playlists” or “channels” (or both) as departments. The id PLjQOV_HHlukyNGBFaSVGFVWrbj3vjtMjd corresponds to a playlist from youtube, with url= https://www.youtube.com/playlist?list=PLjQOV_HHlukyNGBFaSVGFVWrbj3vjtMjd
    scopecreator edit channels
    {
    "maxResults": "20",
    "playlists": [
    {
    "id": "PLjQOV_HHlukyNGBFaSVGFVWrbj3vjtMjd",
    "reminder": "Aquaris E4,5 Ubuntu Edition"
    },
    {
    "id": "PLjQOV_HHlukzBhuG97XVYsw96F-pd9P2I",
    "reminder": "Tecnópolis"
    },
    {
    "id": "PLC46C98114CA9991F",
    "reminder": "aula bq"
    },
    {
    "id": "PLE7ACC7492AD7D844",
    "reminder": "primeros pasos"
    },
    {
    "id": "PL551D151492F07D63",
    "reminder": "accesorios"
    },
    {
    "id": "PLjQOV_HHlukyIT8Jr3aI1jtoblUTD4mn0",
    "reminder": "3d"
    }
    ]
    }

After this, the only thing left to do is replace the placeholder icon, with the bq logo:
~/canalbq/canalbq/images/logo.png
And build, check and publish the scope:
scopecreator build

This last command generates the click file that you need to upload to the store. If you have a device (for example a Nexus4 or an emulator), it can also install it so you can test it. If you get any issues getting the scope to run, you might want to check your json files on http://jsonlint.com/. It is a great web tool that will help you make sure your json doc is shipshape!

It is super simple to create a scope for a youtube channel! So what are you going to create next?


Read more
Prakash

Raspberry Pi 2 is here.

  • A 900MHz quad-core ARM Cortex-A7 CPU
  • 1GB LPDDR2 SDRAM
  • Compatible with Raspberry Pi 1
  • $35

And now runs Snappy Ubuntu Core.


Read more
Nicholas Skaggs

Unity 8 Desktop Testing

While much of the excitement around unity8 and the next generation of ubuntu has revolved around mobile, again I'd like to point your attention to the desktop. The unity8 desktop is starting to evolve and gain more "desktopy" features. This includes things like window management and keyboard shortcuts for unity8, and Mir enhancements like native library support for rendering and support for X11 applications.

I hosted a session with Stephen Webb at UOS last year where we discussed the status of running unity8 on the desktop. During the session I mentioned my own personal goal of having some brave community members running unity8 as their default desktop this cycle. Now, it's still a bit early to realize that goal, but it is getting much closer! To help get there, I would encourage you to have a look at unity8 on your desktop and start running it. The development teams are ready for feedback and anxious to get it in shape on the desktop.

So how do you get it? Check out the unity8 desktop wiki page which explains how you can run unity8, even if you are on a stable version of ubuntu like the LTS. Install it locally in an lxc container and you can log in to a unity8 desktop on your current pc. Check it out! After you finish playing, please don't forget to file bugs for anything you might find. The wiki page has you covered there as well. Enjoy unity8!

Read more
Daniel Holbach

Did you always want to write an app for Ubuntu and thought that HTML5 might be a good choice? Well picked!


We now have training materials up on developer.ubuntu.com which will get you started in all things related to Ubuntu devices. The great thing is that you write this app just once and it’ll work on the phone, the desktop and whichever device Ubuntu is going to run on next.

The example used in the materials is an RSS reader written by my friend, Adnane Belmadiaf. If you go through the steps one by one you’ll notice how easy it is to get stuff done. :-)

This is also a good workshop you could give in your LUG or LoCo or elsewhere. Maybe next weekend at Ubuntu Global Jam too? :-)

Read more
Dustin Kirkland

Gratuitous picture of my pets, the day after we rescued them
The PetName libraries (Shell, Python, Golang) can generate infinite combinations of human readable UUIDs


Some Background

In March 2014, when I first started looking after MAAS as a product manager, I raised a minor feature request in Bug #1287224, noting that the random, 5-character hostnames that MAAS generates are not ideal. You can't read them or pronounce them or remember them easily. I'm talking about hostnames like: sldna, xwknd, hwrdz or wkrpb. From that perspective, they're not very friendly. Certainly not very Ubuntu.

We're not alone, in that respect. Amazon generates forgettable instance names like i-15a4417c, as do most virtual machine and container systems.


Meanwhile, there is a reasonably well-known concept -- Zooko's Triangle -- which says that names should be:
  • Human-meaningful: The quality of meaningfulness and memorability to the users of the naming system. Domain names and nicknaming are naming systems that are highly memorable
  • Decentralized: The lack of a centralized authority for determining the meaning of a name. Instead, measures such as a Web of trust are used.
  • Secure: The quality that there is one, unique and specific entity to which the name maps. For instance, domain names are unique because there is just one party able to prove that they are the owner of each domain name.
And, of course we know what XKCD has to say on a somewhat similar matter :-)

So I proposed a few different ways of automatically generating those names, modeled mostly after Ubuntu's own beloved code naming scheme -- Adjective Animal. To get the number of combinations high enough to model any reasonable MAAS user, though, we used Adjective Noun instead of Adjective Animal.

I collected an Adjective list and a Noun list from a blog run by moms, in the interest of having a nice, soft, friendly, non-offensive source of words.

For the most part, the feature served its purpose. We now get memorable, pronounceable names. However, we get a few oddballs in there from time to time. Most are humorous. But some combinations would prove, in fact, to be inappropriate, or perhaps even offensive to some people.

Accepting that, I started thinking about other solutions.

In the meantime, I realized that Docker had recently launched something similar, their NamesGenerator, which pairs an Adjective with a Famous Scientist's Last Name (except they have explicitly blacklisted boring_wozniak, because "Steve Wozniak is not boring", of course!).


Similarly, Github itself now also "suggests" random repo names.



I liked one part of the Docker approach better -- the use of proper names, rather than random nouns.

On the other hand, their approach is hard-coded into the Docker Golang source itself, and not easily usable or portable elsewhere.

Moreover, there's only a few dozen Adjectives (57) and Names (76), yielding only about 4K combinations (4332) -- which is not nearly enough for MAAS's purposes, where we're shooting for 16M+, with minimal collisions (ie, covering a Class A network).

Introducing the PetName Libraries

I decided to scrap the Nouns list, and instead build a Names list. I started with Last Names (like Docker), but instead focused on First Names, and built a list of about 6,000 names from public census data.  I also built a new list of nearly 38,000 Adjectives.

The combination actually works pretty well! While smelly-Susan isn't particularly charming, it's certainly not an ad hominem attack targeted at any particular Susan! That 6,000 x 38,000 gives us well over 228 million unique combinations!

Moreover, I also thought about how I could actually make it infinitely extensible... The simple rules of English allow Adjectives to modify Nouns, while Adverbs can recursively modify other Adverbs or Adjectives.   How convenient!

So I built a word list of Adverbs (13,000) as well, and added support for specifying the "number" of words in a PetName.
  1. If you want 1, you get a random Name 
  2. If you want 2, you get a random Adjective followed by a Name 
  3. If you want 3 or more, you get N-2 Adverbs, an Adjective and a Name 
Oh, and the separator is now optional, and can be any character or string, with a default of a hyphen, "-".
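
To make the composition rule concrete, here is a minimal sketch of it (not the actual library code; the tiny word lists are placeholders for the real lists of thousands of words):

#!/usr/bin/env python3
# Sketch of the PetName composition rule: N-2 adverbs, an adjective, a name.
import random

ADVERBS = ["happily", "boomingly", "listlessly"]  # placeholder list
ADJECTIVES = ["itchy", "flaky", "tangible"]       # placeholder list
NAMES = ["Marvin", "Megan", "Mikayla"]            # placeholder list

def generate(words=2, separator="-"):
    parts = []
    if words > 2:
        parts += [random.choice(ADVERBS) for _ in range(words - 2)]
    if words > 1:
        parts.append(random.choice(ADJECTIVES))
    parts.append(random.choice(NAMES))
    return separator.join(parts)

print(generate(3, "_"))  # e.g. boomingly_tangible_Mikayla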

In fact:
  • 2 words will generate over 221 million unique combinations, over 2^27 combinations
  • 3 words will generate over 2.8 trillion unique combinations, over 2^41 combinations (more than 32-bit space)
  • 4 words can generate over 2^55 combinations
  • 5 words can generate over 2^68 combinations (more than 64-bit space)
Interestingly, you need 10 words to cover 128-bit space!  So it's

unstoutly-clashingly-assentingly-overimpressibly-nonpermissibly-unfluently-chimerically-frolicly-irrational-wonda

versus

b9643037-4a79-412c-b7fc-80baa7233a31
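
If you want to sanity-check those powers of two yourself, a quick back-of-the-envelope calculation (a sketch, using the approximate list sizes stated above: 13,000 adverbs, 38,000 adjectives, 6,000 names) looks like this:

#!/usr/bin/env python3
# Back-of-the-envelope check of the combination counts claimed above.
import math

ADVERBS, ADJECTIVES, NAMES = 13000, 38000, 6000

def combinations(words):
    if words == 1:
        return NAMES
    # N-2 adverbs, one adjective, one name
    return ADVERBS ** (words - 2) * ADJECTIVES * NAMES

for words in (2, 3, 4, 5, 10):
    print("%d words: more than 2^%d combinations"
          % (words, int(math.log2(combinations(words)))))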

Shell

So once the algorithm was spec'd out, I built and packaged a simple shell utility and text word lists, called petname, which are published at:
The packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:petname/ppa
$ sudo apt-get update

And:
$ sudo apt-get install petname
$ petname
itchy-Marvin
$ petname -w 3
listlessly-easygoing-Radia
$ petname -s ":" -w 5
onwardly:unflinchingly:debonairly:vibrant:Chandler

Python

That's only really useful from the command line, though. In MAAS, we'd want this in a native Python library. So it was really easy to create python-petname, source now published at:
The packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:python-petname/ppa
$ sudo apt-get update

And:
$ sudo apt-get install python-petname
$ python-petname
flaky-Megan
$ python-petname -w 4
mercifully-grimly-fruitful-Salma
$ python-petname -s "" -w 2
filthyLaurel

Using it in your own Python code looks as simple as this:

$ python
>>> import petname
>>> foo = petname.Generate(3, "_")
>>> print(foo)
boomingly_tangible_Mikayla

Golang


In the way that NamesGenerator is useful to Docker, I thought a Golang library might be useful for us in LXD (and perhaps even usable by Docker or others too), so I created:
Of course you can use "go get" to fetch the Golang package:

$ export GOPATH=$HOME/go
$ mkdir -p $GOPATH
$ export PATH=$PATH:$GOPATH/bin
$ go get github.com/dustinkirkland/golang-petname

And also, the packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:golang-petname/ppa
$ sudo apt-get update

And:
$ sudo apt-get install golang-petname
$ golang-petname
quarrelsome-Cullen
$ golang-petname -words=1
Vivian
$ golang-petname -separator="|" -words=10
snobbily|oracularly|contemptuously|discordantly|lachrymosely|afterwards|coquettishly|politely|elaborate|Samir

Using it in your own Golang code looks as simple as this:

package main

import (
	"flag"
	"fmt"
	"math/rand"
	"time"

	"github.com/dustinkirkland/golang-petname"
)

func main() {
	flag.Parse()
	// Seed the RNG so repeated runs produce different names.
	rand.Seed(time.Now().UnixNano())
	fmt.Println(petname.Generate(2, ""))
}
Gratuitous picture of my pets, 7 years later.
Cheers,
happily-hacking-Dustin

Read more
beuno

After a few weeks of being coffee-deprived, I decided to disassemble my espresso machine and see if I could figure out why it leaked water while on, and didn't have enough pressure to produce drinkable coffee.
I live a bit on the edge of where other people do, so my water supply is from my own pump, 40 meters into the ground. It's as hard as water gets. That was my main suspicion. I read a bit about it on the interwebz and learned about descaling, which I'd never heard about. I tried some of the home-made potions but nothing seemed to work.
Long story short, I'm enjoying a perfect espresso as I write this.

I wanted to share with the internet people a bit about what was hard to solve and what I couldn't find any instructions on. All I really did was disassemble the whole thing completely, part by part, clean everything, and make sure to put it back together tightening everything that seemed to need pressure.
I don't have the time and energy to put together a step-by-step walk-through, so here are the 2 tips I can give you:

1) Remove ALL the screws. That'll get you 95% of the way there. You'll need a Phillips head, a Torx head, a flat head and some small-ish pliers.
2) The knob that releases the steam looks unremovable and blocks you from getting the top lid off. It doesn't screw off; you just need to pull upwards with some strength and care. It comes off cleanly and will go back on easily. Here's a picture to prove it:

DeLongi eco310.r

Hope this helps somebody!

Read more
Daniel Holbach

In a recent conversation we thought it’d be a good idea to share tips and tricks, suggestions and ideas with users of Ubuntu devices. Because it’d help to have it available immediately on the phone, an app could be a good idea.

I had a quick look at it and after some discussion with Rouven in my office space, it looked like hyde could fit the bill nicely. To edit the content, just write a bit of Markdown, generate the HTML (nice and readable templates – great!) and done.

Unfortunately I’m not a CSS or HTML wizard, so if you could help out making it more Ubuntu-y, that’d be great! Also: if you’re interested in adding content, please pitch in.

I pushed the code for it up on Launchpad, and the first bugs are already open. Let’s make it look pretty and let’s share our knowledge with new Ubuntu devices users. :-)

Oh, and let’s see that we translate the content as well! :-)

Read more
jdstrand

Most of this has been discussed on mailing lists, blog entries, etc, while developing Ubuntu Touch, but I wanted to write up something that ties together these conversations for Snappy. This will provide background for the conversations surrounding hardware access for snaps that will be happening soon on the snappy-devel mailing list.

Background

Ubuntu Touch has several goals that all apply to Snappy:

  • we want system-image upgrades
  • we want to replace the distro archive model with an app store model for Snappy systems
  • we want developers to be able to get their apps to users quickly
  • we want a dependable application lifecycle
  • we want the system to be easy to understand and to develop on
  • we want the system to be secure
  • we want an app trust model where users are in control and express that control in tasteful, easy to understand ways

Snappy adds a few things to the above (that pertain to this conversation):

  • we want the system to be bulletproof (transactional updates with rollbacks)
  • we want the system to be easy to use for system builders
  • we want the system to be easy to use and understand for admins

Let’s look at what all these mean more closely.

system-image upgrades

  • we want system-image upgrades
  • we want the system to be bulletproof (transactional updates with rollbacks)

We want system-image upgrades so updates are fast, reliable and so people (users, admins, snappy developers, system builders, etc) always know what they have and can depend on it being there. In addition, if an upgrade goes bad, we want a mechanism to be able to rollback the system to a known good state. In order to achieve this, apps need to work within the system and live in their own area and not modify the system in unpredictable ways. The Snappy FHS is designed for this and the security policy enforces that apps follow it. This protects us from malware, sure, but at least as importantly, it protects us from programming errors and well-intentioned clever people who might accidentally break the Snappy promise.

app store

  • we want to replace the distro archive model with an app store model
  • we want developers to be able to get their apps to users quickly

Ubuntu is a fantastic distribution and we have a wonderfully rich archive of software that is refreshed on a cadence. However, the traditional distro model has a number of drawbacks and arguably the most important one is that software developers have an extremely high barrier to overcome to get their software into users’ hands on their own time-frame. The app store model greatly helps developers and users desiring new software because it gives developers the freedom and ability to get their software out there quickly and easily, which is why Ubuntu Touch is doing this now.

In order to enable developers in the Ubuntu app store, we’ve developed a system where a developer can upload software and have it available to users in seconds with no human review, intervention or snags. We also want users to be able to trust what’s in Ubuntu’s store, so we’ve created store policies that understand the Ubuntu snappy system such that apps do not require any manual review so long as the developer follows the rules. However, the Ubuntu Core system itself is completely flexible: people can install apps that are tightly confined, loosely confined, unconfined, whatever (more on this, below). In this manner, people can develop snaps for their own needs and distribute them however they want.

It is the Ubuntu store policy that dictates what is in the store. The existing store policy is in place to improve the situation; it is based on our experiences with the traditional distro model and attempts to build app store-like experiences on top of it (eg, MyApps).

application lifecycle

  • dependable application lifecycle

This has not been discussed as much with Snappy for Ubuntu Core, but Touch needs to have a good application lifecycle model such that apps cannot run unconstrained and unpredictably in the background. In other words, we want to avoid problems with battery drain and slow systems on Touch. I think we’ve done a good job so far on Touch, and this story is continuing to evolve.

(I mention application lifecycle in this conversation for completeness and because application lifecycle and security work together via the app’s application id)

security

  • we want the system to be secure
  • we want an app trust model where users are in control and express that control in tasteful, easy to understand ways

Everyone wants a system that they trust and that is secure, and security is one of the core tenets of Snappy systems. For Ubuntu Touch, we’ve created a system that is secure, that is easy to use and understand by users, and that still honors relevant, meaningful Linux traditions. For Snappy, we’ll be adding several additional security features (eg, seccomp, controlled abstract socket communication, firewalling, etc).

Our security story and app store policies give us something that is between Apple and Google. We have a strong security story that has a number of similarities to Apple, but a lightweight store policy akin to Google Play. In addition to that, our trust model is that apps not needing manual review are untrusted by the OS and have limited access to the system. On Touch we use tasteful, contextual prompting so the user may trust the apps to do things beyond what the OS allows on its own (simple example: an app needs access to location; the user is prompted at the time of use whether the app can access it; the user answers and the decision is remembered next time).

Snappy for Ubuntu Core is different not only because the UI supports a CLI, but also because we’ve defined a Snappy for Ubuntu Core user that is able to run the ‘snappy’ command as someone who is an admin, a system builder, a developer and/or someone otherwise knowledgeable enough to make a more informed trust decision. (This will come up again later, below)

easy to use

  • we want the system to be easy to understand and to develop on
  • we want the system to be easy to use for system builders
  • we want the system to be easy to use and understand for admins

We want a system that is easy to use and understand. It is key that developers are able to develop on it, system builders able to get their work done and admins can install and use the apps from the store.

For Ubuntu Touch, we’ve made a system that is easy to understand and to develop on with a simple declarative permissions model. We’ll refine that for Snappy and make it easy to develop on too. Remember, the security policy is there not just so we can be ‘super secure’ but because it is what gives us the assurances needed for system upgrades, a safe app store and an altogether bulletproof system.

As mentioned, the system we have designed is super flexible. Specifically, the underlying system supports:

  1. apps working wholly within the security policy (aka, ‘common’ security policy groups and templates)
  2. apps declaring specific exceptions to the security policy
  3. apps declaring to use restricted security policy
  4. apps declaring to run (effectively) unconfined
  5. apps shipping hand-crafted policy (that can be strict or lenient)

(Keep in mind the Ubuntu App Store policy will auto-accept apps falling under ‘1’ and trigger manual review for the others)

The above all works today (though it isn’t always friendly; we’re working on that) and the developer is in control. As such, Snappy developers have a plethora of options and can create snaps with security policy for their needs. When the developer wants to ship the app and make it available to all Snappy users via the Ubuntu App Store, then the developer may choose to work within the system to have automated reviews or choose not to and manage the process via manual reviews/commercial relationship with Canonical.

Moving forward

The above works really well for Ubuntu Touch, but today there is too much friction with regard to hardware access. We will make this experience better without compromising on any of our goals. How do we put this all together, today, so people can get stuff done with snappy without sacrificing on our goals, making it harder on ourselves in the future or otherwise opening Pandora’s box? We don’t want to relax our security policy, because then we can’t make the bulletproof assurances we are striving for and it would be hard to tighten the security. We could also add some temporary security policy that adds only certain accesses (eg, serial devices) but, while useful, this is too inflexible. We also don’t want to have apps declare the accesses themselves to automatically add the necessary security policy, because this (potentially) privileged access is then hidden from the Snappy for Ubuntu Core user.

The answer is simple when we remember that the Snappy for Ubuntu Core user (ie, the one who is able to run the snappy command) is knowledgeable enough to make the trust decision for giving an app access to hardware. In other words, let the admin/developer/system builder be in control.

immediate term

The first thing we are going to do is unblock people and adjust snappy to give the snappy core user the ability to add specific device access to snap-specific security policy. In essence you’ll install a snap, then run a command to give the snap access to a particular device, then you’re done. This simple feature will unblock developers and snappy users immediately while still supporting our trust-model and goals fully. Plus it will be worth implementing since we will likely always want to support this for maximum flexibility and portability (since people can use traditional Linux APIs).

The user experience for this will be discussed and refined on the mailing list in the coming days.

short term

After that, we’ll build on this and explore ways to make the developer and user experience better through integration with the OEM part and ways of interacting with the underlying system so that the user doesn’t have to necessarily know the device name to add, but can instead be given smart choices (this can have tie-ins to the web interface for snappy too). We’ll want to be thinking about hotpluggable devices as well.

Since this all builds on the concept of the immediate term solution, it also supports our trust-model and goals fully and is relatively easy to implement.

future

Once we have the above in place, we should have a reasonable experience for snaps needing traditional device access. This will give us time to evaluate how people are accessing hardware and see if we can make things even better by using frameworks and/or a hardware abstraction layer. In this manner, snaps can program to an easy to use API and the system can mediate access to the underlying hardware via that API.


Filed under: canonical, security, ubuntu, ubuntu-server, uncategorized

Read more
Daniel Holbach

What do Kinshasa, Omsk, Paris, Mexico City, Eugene, Denver, Tempe, Catonsville, Fairfax, Dania Beach, San Francisco and various places on the internet have in common?

Right, they’re all participating in the Ubuntu Global Jam on the weekend of 6-8 February! See the full list of teams that are part of the event here. (Please add yours if you haven’t already.)

What’s great about the event is that there are just two basic aims:

  1. do something with Ubuntu
  2. get together and have fun!

What I also like a lot is that there’s always something new to do. Here are just 3 quick examples of that:

App Development Schools

We have put quite a bit of work into putting training materials together; now you can take them out to your team and start writing Ubuntu apps easily.

Snappy

As one tech news article said, “Robots embrace Ubuntu as it invades the internet of things“. Ubuntu’s newest foray, making it possible to bring a stable and secure OS to small devices where you can focus on apps and functionality, is attracting a number of folks on the mailing lists (snappy-devel, snappy-app-devel) and elsewhere. Check out the mailing lists and the snappy site to find out more and have a play with it.

Unity8 on Desktop

Convergence is happening and what’s working great on the phone is making its way onto the desktop. You can help make this happen by installing and testing it. Your feedback will be much appreciated.


Read more