Canonical Voices

Robin Winslow

Despite some reservations, it looks like HTTP/2 is definitely the future of the Internet.

Speed improvements

HTTP/2 may not be the perfect standard, but it will bring with it many long-awaited speed improvements to internet communication:

  • Sending many different resources in the first response
  • Multiplexing requests to prevent blocking
  • Header compression
  • Keeping connections alive
  • Bi-directional communication

Changes in long-held performance practices

I read a very informative post today (via Web Operations Weekly) which laid out all the ways this will change some deeply embedded performance principles for front-end developers, namely concatenating CSS, JavaScript and image files, and serving assets from multiple domains.

Each of these practices is a hack that makes website setups more complex and more opaque, with the goal of speeding up front-end performance by working around limitations in HTTP. Fortunately, these somewhat ugly practices are no longer necessary with HTTP/2.

Importantly, Matt Wilcox points out that in an HTTP/2 world, these practices might actually slow down your website, for the following reasons:

  • If you serve concatenated CSS, JavaScript or image files, it's likely you're sending more content than you strictly need to for each page
  • Serving assets from different domains prevents HTTP/2 from reusing existing connections, forcing it to open extra ones

But not yet…

This is all very exciting, but note that we can’t and shouldn’t start changing our practices yet. Even server-side support for HTTP/2 is still patchy, with nginx only promising full support by the end of 2015 (with Microsoft’s IIS, surprisingly, putting other servers to shame).

But of course the main limiting factor will, as usual, be browsers:

  • Firefox leads the way, with support since version 36
  • Chrome has support for spdy4 (the precursor to HTTP/2), but it isn’t enabled by default yet
  • Internet Explorer 11 supports HTTP/2 only in Windows 10 beta

As usual, the main limiting factor will be waiting for the market share of older versions of Internet Explorer to drop off. Braver organisations may want to be progressive by deliberately slowing down the experience for people on older browsers in order to speed it up for those on more up-to-date ones, and hence push adoption of good technology.

If you want to get really clever, you could serve a different website structure based on the user agent string, but this would really be a pain to implement and I doubt many people would want to do this.

Even with the most progressive strategy, I doubt anyone will be brave enough to drop decent HTTP/1 performance until at least 2016: that is when nginx support should land, by which time Windows 10 (and therefore IE 11) will have had some time to gain traction, and Internet Explorer's market share in general will have continued to drop in favour of Chrome and Firefox.

TL;DR: We front-end developers should be ready to change our ways, but we don’t need to worry about it just yet.

Originally posted on robinwinslow.co.uk.

Read more
Victor Palau

I recently blogged about making a scope in 5 minutes using YouTube. I have also seen a fair number of new scopes being created using RSS. By far, my favourite way to use scopecreator is Twitter.

If you want to check a few examples, I have previously published Twitter-based scopes like breaking news, la liga and a few others. Today, I give you Formula One:

(Screenshots of the Formula One scope.)

The interesting thing about Twitter is that many brands post minute-by-minute updates, which makes it a really good source for scopes.

To create the Formula One scope, I started by going to Twitter and creating a list under my scope account (you can use your personal account). The list contains several relevant "official" Formula One accounts. Using Twitter, I can then update the sources by adding and removing accounts from the list without the user needing to download an update for the scope.

Again, it took me about 5 minutes to get a working version of the scope. Here is what I needed to do:

  • First, I followed Chris' instructions to install the scope creator tool.
  • Once I had it set up on my laptop, I ran:
    scopecreator create twitter vtuson f1
    cd f1
  • Next, I configured the scope. The configuration is done in a JSON file called manifest.json. This file describes the content of what you will publish later to the store. You need to care about "title", "description", "version" and "maintainer". The rest are values populated by the tool:
    scopecreator edit config
    {
      "description": "Formula One scope",
      "framework": "ubuntu-sdk-14.10",
      "architecture": "armhf",
      "hooks": {
        "f1": {
          "scope": "f1",
          "apparmor": "scope-security.json"
        }
      },
      "icon": "icon",
      "maintainer": "Your Name <yourname@packagedomain>",
      "name": "f1.vtuson",
      "title": "Formula One",
      "version": "0.2"
    }
  • The following step was to set up the branding. Easy! Branding is defined in an .ini file. "DisplayName" will be the name listed in the "manage" window once installed, and will also be the title of your scope if you don't use a "PageHeader.Logo". The [Appearance] section describes the colours and logos to use when branding a scope:
    scopecreator edit branding
    [ScopeConfig]
    ScopeRunner=./f1.vtuson_f1 --runtime %R --scope %S
    DisplayName=Formula One
    Description=This is an Ubuntu search plugin that enables information from Yelp $
    Author=Canonical Ltd.
    Art=
    Icon=images/icon.png
    SearchHint=Search
    [Appearance]
    PageHeader.Background=color:///#D51318
    PageHeader.ForegroundColor=#FFFFFF
    PreviewButtonColor=#D51318
  • The final part is to define the departments (drop-down menu) for the scope. This is also a JSON file, and it is unique to the Twitter scope template. You can either use "list" or "account" (or both) as departments. The id is the Twitter handle for the list or the account. For lists, you will need to specify in the configuration section which account holds the list. As I defined a single entry, the Formula One scope will have no drop-down menu.
    scopecreator edit channels
    {
      "departments": [
        {
          "name": "Formula One",
          "type": "list",
          "id": "f1"
        }
      ],
      "configuration": {
        "list-account": "canonical_scope",
        "openontwitter": "See in Twitter",
        "openlink": "Open",
        "retweet": "Retweet",
        "favorite": "Favourite",
        "reply": "Reply"
      }
    }

After this, the only thing left to do is replace the placeholder icon with a relevant logo:
~/f1/f1/images/logo.png
And build, check and publish the scope:
scopecreator build

This last command generates the click file that you need to upload to the store. If you have a device (for example a Nexus 4 or an emulator), it can also install it so you can test it. If you have any issues getting the scope to run, you might want to check your JSON files on http://jsonlint.com/. It is a great web tool that will help you make sure your JSON doc is ship-shape!

It is super simple to create a scope for a Twitter list! So what are you going to create next?


Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20150310 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Vivid Development Kernel

We have rebased our Vivid kernel to the first upstream stable
release and uploaded, ie. 3.19.0-8.8. Please test and let us know your
results once it's available.
This is also a reminder that kernel freeze for Vivid is ~4wks away on Thurs
Apr 9. If you have any patches which need to land for 15.04's release,
please make sure to submit those sooner rather than later.
—–
Important upcoming dates:
Thurs Mar 26 – Final Beta (~2 weeks away)
Thurs Apr 09 – Kernel Freeze (~4 weeks away)
Thurs Apr 23 – 15.04 Release (~7 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates

Status for the main kernels, until today:

  • Lucid – None (no update)
  • Precise – Verification and Testing
  • Trusty – Verification and Testing
  • Utopic – Verification and Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    Current cycle: 27-Feb through 21-Mar

    27-Feb Last day for kernel commits for this cycle
    01-Mar – 07-Mar Kernel prep week.
    08-Mar – 21-Mar Bug verification; Regression testing; Release


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more
Kyle Nitzsche

AptBrowser QML/C++ App

I've made a QML/C++ app called aptBrowser as an exercise in:

  • QML declarative GUI that drives
  • C++ backend threads

That is, the GUI provides buttons (five) that kick off C++ threads that do the backend work and provide the results back to QML.

So the GUI is always responsive (non-blocking).
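
The post does not include the code itself, but the pattern it describes (QML invoking a C++ backend, the slow work running on a worker thread, results signalled back to the GUI) could look roughly like the sketch below. The class and method names (Backend, AptWorker, lookup, query) are hypothetical placeholders rather than aptbrowser's actual API:

    // Hypothetical sketch of the QML-drives-C++-threads pattern; not aptbrowser's code.
    #include <QObject>
    #include <QThread>
    #include <QString>
    #include <QStringList>

    class AptWorker : public QObject {
        Q_OBJECT
    public slots:
        void query(const QString &package) {       // runs on the worker thread
            QStringList results;
            // ... slow apt-cache lookups for `package` would happen here ...
            emit finished(results);
        }
    signals:
        void finished(const QStringList &results);
    };

    class Backend : public QObject {
        Q_OBJECT
    public:
        explicit Backend(QObject *parent = nullptr) : QObject(parent) {
            m_worker.moveToThread(&m_thread);
            // Cross-thread connections are queued: query() runs off the GUI
            // thread, and resultsReady() is delivered back to the GUI thread.
            connect(this, &Backend::queryRequested, &m_worker, &AptWorker::query);
            connect(&m_worker, &AptWorker::finished, this, &Backend::resultsReady);
            m_thread.start();
        }
        ~Backend() { m_thread.quit(); m_thread.wait(); }
        // QML (e.g. a button's onClicked) calls this; it returns immediately.
        Q_INVOKABLE void lookup(const QString &package) { emit queryRequested(package); }
    signals:
        void queryRequested(const QString &package);
        void resultsReady(const QStringList &results);
    private:
        QThread m_thread;
        AptWorker m_worker;
    };

With something along these lines registered as a QML context property, a button handler can call lookup() and bind to resultsReady() without ever blocking the UI.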

What aptbrowser does

The user enters a Debian package name (and is told if it is not valid) and taps one of five buttons that do the following:
  • Show the packages this package depends on ("Depends")
  • Show the packages this package recommends ("Recommends")
  • Show the packages that depend on this package ("Parent Depends")
  • Show the packages that recommend this package ("Parent Recommends")
  • Show the apt-cache policy for this package ("Policy")
The data for all but the last ("Policy") are returned as flickable lists of buttons. When you click any one, it becomes the current package and the GUI and displayed data adjust appropriately.

When you click any of the buttons, the orange indicator square to its left turns purple and starts spinning, and when the C++ backend returns data, its indicator turns orange again and stops spinning.

Note that the Parent Depends and Parent Recommends actions can take a long time. This has nothing to do with this app. This is simply how long it takes to first get a package's parents and then, for each, find its type of relationship (depends or recommends) to our package of interest. Querying the apt cache is time consuming.

Where is aptbrowser

Store

Because the app queries the apt cache, it must run unconfined at the moment, and therefore it cannot go into the store.

The click

This is an armhf click package for the ubuntu-sdk-14.10 framework (compiled against Vivid).

    The source 


    • bzr branch lp:aptbrowser

    Screenshots

     



    Read more
    Hardik Dalwadi

    Hello world!

    Welcome to Canonical Voices. This is your first post. Edit or delete it, then start blogging!

    Read more
    Jouni Helminen

    Ubuntu community devs Andrew Hayzen and Victor Thompson chat with lead designer Jouni Helminen. Andrew and Victor have been working on open source projects for a couple of years and have done a great job on the Music application that is now rolling out on phone, tablet and desktop. In this chat they share their thoughts on open source, QML, app development, and tips on how to get started contributing and developing apps.

    If you want to start writing apps for Ubuntu, it’s easy. Check out http://developer.ubuntu.com, get involved on Google+ Ubuntu App Dev – https://plus.google.com/communities/1… – or contact alan.pope@canonical.com – you are in good hands!

    Check out the video interview here :)

    Read more
    Daniel Holbach

    In the past weeks Nick, David, a few others and I have been working on an app and a website which collect information to give users of an Ubuntu device a head start: all our collective experience and knowledge, easily added and translated.

    We achieved quite a bit. We’re now very close to getting a first version of it online (both as an app in the store and as a website). We can quite reliably integrate translations and add new content.

    We still have a few TODO items and it would be great if you could help out, whether by writing a bit of documentation, translating content, fixing some HTML/CSS bits or helping with testability. Any help at all will be appreciated.

    Tasks:

    • Add content. Just check out our branch and propose a merge. Read the HACKING doc beforehand.
    • Translate. The content is likely going to change a bit in the next days still, but every edit or translation will be appreciated.
    • Hack! We have a number of things we still want to improve. Read the HACKING doc beforehand. Here’s a list of things:
      • Styling/theming/navigation:
        • Bug 1416385: Fix bullet points in the phone theme
        • Bug 1428671: Remove traces of developer.ubuntu.com
        • Bug 1428669: Clean up required CSS/JS
        • Bug 1425025: Automatically load translated pages according to the user language.
      • Testing
      • and there’s more.

    Ping me on IRC, or balloons or dpm if you want to get involved. We look forward to working with you and we’ll post more updates soon.

    Read more
    facundo


    A few assorted, loose things.

    On the projects front, Nico and I have been putting quite a lot of work into fades. The version 2 we released last week is really neat... if you use virtualenvs, do take a look at it.

    Another project I have been working on is CDPedia... the internationalisation part is in pretty decent shape, and that also led me to refresh the main page it shows you when you open it, so I kicked off a new build of the Spanish version, and a Portuguese one will follow (each image takes about a week to build!).

    A little while ago I uploaded the Django Tutorial in Spanish to the Python Argentina tutorials page (thanks Matías Bordese for the material!). This tutorial used to live on a domain that has since expired, and we thought it would be good to have everything in one place, which makes it easier for people to find.

    Finally, I started organising my second open Python course. This time I want to hold it around the Palermo area or nearby (last time it was in the microcentro). I have not booked a venue yet, let alone set dates, but the format will be similar to the previous one. Regarding the venue, if anyone knows a good place to rent classrooms, let me know :)

    Read more
    bmichaelsen

    Around the world, Around the world
    — Daft Punk, Around the world

    So, you still hear that unfounded myth that it is hard to get involved with and to start contributing to LibreOffice? Still? Even though we have our Easy Hacks and the LibreOffice developers are a friendly bunch that will help you get started on mailing lists and on IRC? If those alone do not convince you, it might be because it is admittedly much easier to get started if you meet people face to face, for example at one of our upcoming Events! Especially our Hackfests are a good way to get started. The next one will be at the University de Las Palmas de Gran Canaria, where we already were guests last year. We presented some introduction talks to the students of the university and then went on to hack on LibreOffice, from fixing bugs to implementing new stuff. Here is how that looked last year:

    LibreOffice Hackfest Gran Canaria 2014

    One thing we learned from previous Hackfests was that it is great if newcomers have a way to start working on code right away. While that is rather easy, as the 5 minute video on our wiki shows, building might still take some time on some notebooks. So what if you spontaneously show up at the event without a pre-built LibreOffice? Well, for that, thanks to Christian Lohmaier of the Document Foundation staff, we now have remote virtual machines prepared for Hackfests that allow you to get started right away with everything prepared (on rather beefy hardware, even).

    If you are a student at ULPGC or live in Las Palmas or on the Canary Islands, we invite you to join us to learn how to get started. For students, this is also a very good opportunity to get involved and prepare for a Google Summer of Code on LibreOffice. Furthermore, if you are even a casual contributor to LibreOffice code already and want to help share and deepen knowledge on how to work on LibreOffice code, you should get in contact with the Document Foundation: while the event is already very soon now, there might still be travel reimbursement available. You will find all the details on the wiki page for the Hackfest in Las Palmas de Gran Canaria 2015.

    LibreOffice Evening Hacking in Las Palmas 2014

    On the other hand, if two weeks is too short notice for you, but the rest of this sounds really tempting, the next Hackfest is already planned: it will take place in Cambridge in the United Kingdom in May. We will be there with a Hackfest for the first time and invite you to join us from anywhere in Europe, whether you are already a LibreOffice code contributor or are interested in learning more about how to become one. Again, there is a wiki page with the details on the LibreOffice Hackfest in Cambridge 2015, and travel reimbursement is available. Contact us!

    How I imagine Cambridge in May — Photo by Andrew Dunn CC-BY-SA 2.0 via Wikimedia


    Read more
    Joseph Salisbury

    Meeting Minutes

    IRC Log of the meeting.

    Meeting minutes.

    Agenda

    20150303 Meeting Agenda


    Release Metrics and Incoming Bugs

    Release metrics and incoming bug data can be reviewed at the following link:

    • http://people.canonical.com/~kernel/reports/kt-meeting.txt


    Status: Vivid Development Kernel

    We have officially uploaded our v3.19 kernel for Vivid to the archive,
    ie. 3.19.0-7.7. Please test and let us know your results.
    This is also an early reminder that kernel freeze for Vivid is on Thurs
    Apr 9. If you have any patches which need to land for 15.04's release,
    please make sure to submit those sooner rather than later.
    —–
    Important upcoming dates:
    Thurs Mar 26 – Final Beta (~3 weeks away)
    Thurs Apr 09 – Kernel Freeze (~5 weeks away)
    Thurs Apr 23 – 15.04 Release (~7 weeks away)


    Status: CVE’s

    The current CVE status can be reviewed at the following link:

    http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


    Status: Stable, Security, and Bugfix Kernel Updates – Utopic/Trusty/Precise/Lucid

    Status for the main kernels, until today:

    • Lucid – Kernel Prep
    • Precise – Kernel Prep
    • Trusty – Kernel Prep
    • Utopic – Kernel Prep

      Current opened tracking bugs details:

    • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

      For SRUs, SRU report is a good source of information:

    • http://kernel.ubuntu.com/sru/sru-report.html

      Schedule:

      cycle: 27-Feb through 21-Mar
      ====================================================================
      27-Feb Last day for kernel commits for this cycle
      01-Mar – 07-Mar Kernel prep week.
      08-Mar – 21-Mar Bug verification; Regression testing; Release


    Open Discussion or Questions? Raise your hand to be recognized

    No open discussion.

    Read more
    Michael Hall

    A couple of weeks ago I had the opportunity to attend the thirteenth Southern California Linux Expo, more commonly known as SCaLE 13x. It was my first time back in five years, since I attended 9x, and my first time as a speaker. I had a blast at SCaLE, and a wonderful time with UbuCon. If you couldn't make it this year, it should definitely be on your list of shows to attend in 2016.

    UbuCon

    Thanks to the efforts of Richard Gaskin, we had a room all day Friday to hold an UbuCon. For those of you who haven’t attended an UbuCon before, it’s basically a series of presentations by members of the Ubuntu community on how to use it, contribute to it, or become involved in the community around it. SCaLE was one of the pioneering host conferences for these, and this year they provided a double-sized room for us to use, which we still filled to capacity.

    I was given the chance to give not one but two talks during UbuCon, one on community and one on the Ubuntu phone. We also had presentations from my former manager and good friend Jono Bacon, current coworkers Jorge Castro and Marco Ceppi, and inspirational community members Philip Ballew and Richard Gaskin.

    I’d like thank Richard for putting this all together, and for taking such good care of those of us speaking (he made sure we always had mints and water). UbuCon was a huge success because of the amount of time and work he put into it. Thanks also to Canonical for providing us, on rather short notice, a box full of Ubuntu t-shirts to give away. And of course thanks to the SCaLE staff and organizers for providing us the room and all of the A/V equipment in it to use.

    The room was recorded all day, so each of these sessions can now be watched on YouTube. My own talks are at 4:00:00 and 5:00:00.

    Ubuntu Booth

    In addition to UbuCon, we also had an Ubuntu booth in the SCaLE expo hall, which was registered and operated by members of the Ubuntu California LoCo team. These guys were amazing: they ran the booth all day over all three days, managed the whole setup and tear down, and did an excellent job talking to everybody who came by and explaining everything from Ubuntu's cloud offerings to desktops, and even showing off Ubuntu phones.

    Our booth wouldn't have happened without the efforts of Luis Caballero, Matt Mootz, Jose Antonio Rey, Nathan Haines, Ian Santopietro, George Mulak, and Daniel Gimpelevich, so thank you all so much! We also had great support from Carl Richell at System76, who let us borrow 3 of their incredible laptops running Ubuntu to show off our desktop; from Canonical, who loaned us 2 Nexus 4 phones running Ubuntu as well as one of the Orange Box cloud demonstration boxes; and from Michael Newsham at TierraTek, who sent us a fanless PC and NAS, which we used to display a constantly-repeating video (from Canonical's marketing team) showing the Ubuntu phone's Scopes on a television monitor provided to us by Eäär Oden at Video Resources. Oh, and of course Stuart Langridge, who gave up his personal, first-edition Bq Ubuntu phone for the entire weekend so we could show it off at the booth.

    Like Ubuntu itself, this booth was not the product of just one organization's work, but the combination of efforts and resources from many different, but connected, individuals and groups. We are what we are, because of who we all are. So thank you all for being a part of making this booth amazing.

    Read more
    bmichaelsen

    “The problem is all inside your head” she said to me
    “The answer is easy if you take it logically”
    — Paul Simon, 50 ways to leave your lover

    So recently I tweaked around with these newfangled C++11 initializer lists and created an EasyHack to use them to initialize property sequences in a readable way. This caused a short exchange on the LibreOffice mailing list, which I assume had its part in motivating Stephan's interesting post "On filling a vector". For all the points being made (also in the quick follow-up on IRC), I wondered how much the theoretical "can use a move constructor" discussion really means once the C++ is translated to e.g. GENERIC, then GIMPLE, then amd64 assembler, then to the internal RISC instructions of the CPU, with multiple levels of caching on top.

    So I quickly wrote the following (thanks so much to C++11 for now having the nice std::chrono).

    data.hxx:

    #include <vector>
    struct Data {
        Data();
        Data(int a);
        int m_a;
    };
    void DoSomething(std::vector<Data>&);

    data.cxx:

    #include "data.hxx"
    // noop in different compilation unit to prevent optimizing out what we want to measure
    void DoSomething(std::vector<Data>&) {};
    Data::Data() : m_a(4711) {};
    Data::Data(int a) : m_a(a+4711) {};

    main.cxx:

    #include "data.hxx"
    #include <iostream>
    #include <vector>
    #include <chrono>
    #include <functional>

    void A1(long count) {
        while(--count) {
            std::vector<Data> vec { Data(), Data(), Data() };
            DoSomething(vec);
        }
    }

    void A2(long count) {
        while(--count) {
            std::vector<Data> vec { {}, {}, {} };
            DoSomething(vec);
        }
    }

    void A3(long count) {
        while(--count) {
            std::vector<Data> vec { 0, 0, 0 };
            DoSomething(vec);
        }
    }

    void B1(long count) {
        while(--count) {
            std::vector<Data> vec;
            vec.reserve(3);
            vec.push_back(Data());
            vec.push_back(Data());
            vec.push_back(Data());
            DoSomething(vec);
        }
    }

    void B2(long count) {
        while(--count) {
            std::vector<Data> vec;
            vec.reserve(3);
            vec.push_back({});
            vec.push_back({});
            vec.push_back({});
            DoSomething(vec);
        }
    }

    void B3(long count) {
        while(--count) {
            std::vector<Data> vec;
            vec.reserve(3);
            vec.push_back(0);
            vec.push_back(0);
            vec.push_back(0);
            DoSomething(vec);
        }
    }

    void C1(long count) {
        while(--count) {
            std::vector<Data> vec;
            vec.reserve(3);
            vec.emplace_back(Data());
            vec.emplace_back(Data());
            vec.emplace_back(Data());
            DoSomething(vec);
        }
    }

    void C3(long count) {
        while(--count) {
            std::vector<Data> vec;
            vec.reserve(3);
            vec.emplace_back(0);
            vec.emplace_back(0);
            vec.emplace_back(0);
            DoSomething(vec);
        }
    }

    double benchmark(const char* name, std::function<void (long)> testfunc, const long count) {
        const auto start = std::chrono::system_clock::now();
        testfunc(count);
        const auto end = std::chrono::system_clock::now();
        const std::chrono::duration<double> delta = end-start;
        std::cout << count << " " << name << " iterations took " << delta.count() << " seconds." << std::endl;
        return delta.count();
    }

    int main(int, char**) {
        long count = 10000000;
        while(benchmark("A1", &A1, count) < 60l)
            count <<= 1;
        std::cout << "Going with " << count << " iterations." << std::endl;
        benchmark("A1", &A1, count);
        benchmark("A2", &A2, count);
        benchmark("A3", &A3, count);
        benchmark("B1", &B1, count);
        benchmark("B2", &B2, count);
        benchmark("B3", &B3, count);
        benchmark("C1", &C1, count);
        benchmark("C3", &C3, count);
        return 0;
    }

    Makefile:

    CFLAGS?=-O2
    main: main.o data.o
        g++ -o $@ $^

    %.o: %.cxx data.hxx
        g++ $(CFLAGS) -std=c++11 -o $@ -c $<

    Note the object here is small and trivial to copy as one would expect from objects passed around as values (as expensive to copy objects mostly can be passed around with a std::shared_ptr). So what did this measure? Here are the results:

    Time for 1280000000 iterations on an Intel i5-4200U@1.6GHz (-march=core-avx2) compiled with gcc 4.8.3 without inline constructors:

    implementation / CFLAGS    -Os        -O2        -O3        -O3 -march=…
    A1                         89.1 s     79.0 s     78.9 s     78.9 s
    A2                         89.1 s     78.1 s     78.0 s     80.5 s
    A3                         90.0 s     78.9 s     78.8 s     79.3 s
    B1                         103.6 s    97.8 s     79.0 s     78.0 s
    B2                         99.4 s     95.6 s     78.5 s     78.0 s
    B3                         107.4 s    90.9 s     79.7 s     79.9 s
    C1                         99.4 s     94.4 s     78.0 s     77.9 s
    C3                         98.9 s     100.7 s    78.1 s     81.7 s

    creating a three element vector without inlined constructors
    And, for comparison, the following are the results if one allows the constructors to be inlined.
    Time for 1280000000 iterations on an Intel i5-4200U@1.6GHz (-march=core-avx2) compiled with gcc 4.8.3 with inline constructors:

    implementation / CFLAGS    -Os        -O2        -O3        -O3 -march=…
    A1                         85.6 s     74.7 s     74.6 s     74.6 s
    A2                         85.3 s     74.6 s     73.7 s     74.5 s
    A3                         91.6 s     73.8 s     74.4 s     74.5 s
    B1                         93.4 s     90.2 s     72.8 s     72.0 s
    B2                         93.7 s     88.3 s     72.0 s     73.7 s
    B3                         97.6 s     88.3 s     72.8 s     72.0 s
    C1                         93.4 s     88.3 s     72.0 s     73.7 s
    C3                         96.2 s     88.3 s     71.9 s     73.7 s

    creating a three element vector with inlined constructors
    Some observations on these measurements:

    • -march=... is at best neutral: the measured times do not change much in general, they only slightly improve performance in five out of 16 cases, and the two cases with the most significant change in performance (over 3%) actually hurt performance. So for the rest of this post, -march=... will be ignored. Sorry gentooers. ;)
    • There is no silver bullet with regard to the different implementations: A1, A2 and A3 are the faster implementations when not inlining constructors and using -Os or -O2 (the quickest A* is ~10% faster than the quickest B*/C*). However, when inlining constructors and using -O3, the same implementations are the slowest (by 2.4%).
    • Most common release builds are still done with -O2 these days. For those, using initializer lists (A1/A2/A3) seems to have a significant edge over the alternatives, whether constructors are inlined or not. This is in contrast to the conclusions made from "constructor counting", which assumed these to be slow because of the additional calls needed.
    • The numbers printed in bold are either the quickest implementation in a build scenario or one that is within 1.5% of the quickest implementation. A1 and A2 share the title here by being in that group five times each.
    • With constructors inlined, everything in the loop except DoSomething() could be inlined. It seems to me that the compiler could, at least in theory, figure out that it is asked the same thing in all cases: namely, reserve space for three ints on the heap, fill them each with 4711 and make the ::std::vector<int> data structure on the stack reflect that, then hand that to the DoSomething() function that you know nothing about. If the compiler figured that out, it would take the same time for all implementations. This does not happen on -O2 (where the results differ by ~18% from quickest to slowest) nor on -O3 (~3.6%).

    One common mantra in application development is "trust the compiler to optimize". The above observations show a few cracks in the foundations of that, especially if you take into account that this is all on the same version of the same compiler running on the same platform and hardware with the same STL implementation. For huge objects with expensive constructors, the constructor counting approach might still be valid. Then again, those are rarely statically initialized as a bigger bunch into a vector. For the more common scenario of smaller objects with cheap constructors, my tentative conclusion so far would be to go with A1/A2/A3: not so much because they are quickest in the most common build scenarios on my platform, but rather because their readability is a value of its own, while the performance picture is muddy at best.

    And hey, if you want to run the tests above on other platforms or compilers, I would be interested in results!

    Note: I did these runs for each scenario only once, thus no standard deviation is given. In general, the numbers seemed to be rather stable, but as these are wallclock measurements, one or the other might be an outlier. Caveat emptor.


    Read more
    Daniel Holbach

    I already blogged a bit about the help app I have been working on lately. I wanted to go into a bit more detail now that we have reached a new milestone.

    What’s the idea behind it?

    In a conversation in the Community team we noticed that there is a lot of knowledge we have gathered in the course of using Ubuntu on a phone for a long time, and that it might make sense to share tips and tricks, FAQs, suggestions and lots more with new device users in a simple way.

    The idea was to share things like “here’s how to use edge swipes to do X” (maybe an animated GIF?) and “if you want to do Y, install the Z app from the store” in an organised and clever fashion. Obviously we would want this to be easily editable (Markdown) and have easy translations (Launchpad), work well on the phone (Ubuntu HTML5 UI toolkit) and work well on the web (Ubuntu Design Web guidelines) too.

    What’s the state of things now?

    There’s not much content yet and it doesn’t look perfect, but we have all the infrastructure set up. You can now start contributing! :-)

    (Screenshots of the web edition and the phone app edition.)

    What’s still left to be done?

    • We need HTML/CSS gurus who can help beautifying the themes.
    • We need people to share their tips and tricks and favourite bits of their Ubuntu devices experience.
    • We need hackers who can help in a few places.
    • We need translators.

    What do you need to do? For translations, you can do it easily in Launchpad. For everything else:

    $ bzr branch lp:ubuntu-devices-help
    $ cd ubuntu-devices-help
    $ less HACKING

    We've come a long way in the last week, and with the ease of Markdown text and easy Launchpad translations, we should quickly be in a state where we can offer this in the Ubuntu software store and publish the content on the web as well.

    If you want to write some content, translate, beautify or fix a few bugs, your help is going to be appreciated. Just ping myself, Nick Skaggs or David Planella on #ubuntu-app-devel.

    Read more
    Ben Howard

    Back when we announced that the Ubuntu 14.04 LTS Cloud Images on Azure were using the Hardware Enablement Kernel (HWE), the immediate feedback was "what about 12.04?"


    Well, the next Ubuntu 12.04 Cloud Images on Microsoft Azure will start using the HWE kernel. We have been working with Microsoft to validate using the 3.13 kernel on 12.04 and are pleased with the results and the stability. We spent a lot of time thinking about and testing this change, and in consultation with the Ubuntu Kernel, Foundations and Cloud Image teams, we feel this change will give the best experience on Microsoft Azure.

    By default, the HWE kernel is used on official images for Ubuntu 12.04 on VMware Air, Google Compute Engine, and now Microsoft Azure. 

    Any 12.04 Image published to Azure with a serial later than 20140225 will default to the new HWE kernel. 

    Users who want to upgrade their existing instance can simply run:
    • sudo apt-get update
    • sudo apt-get install linux-image-hwe-generic linux-cloud-tools-generic-lts-trusty
    • reboot

    Read more
    facundo

    Goodbye, ACA


    A few months ago, right when I was travelling in Washington for last year's final work sprint, Moni had trouble with the car.

    One day, when she went to pick Felu up from kindergarten, the car would not start. But the important thing is not the problem the car had; this story is about something else.

    Moni called the Automóvil Club Argentino (which I have had for more than ten years) so they would come and help her, and at first they did not want to provide the service. They told her she was not the account holder (which is true, it is in my name), and that was that. After Moni insisted, they said they would do it just this once. In the end they came out, the battery was replaced, etc., etc., happy ending.

    But what happens if Moni has a problem with the car again?

    I always believed the ACA covered the car, regardless of who was driving it. Apparently not. According to what they told her at the time, which I later confirmed with the call centre and then in person at a branch, for her to be covered one of two situations would have to apply.

    The first is that she holds a blue card (cédula azul) for the car. Moni has its green card (she is as much an owner as I am), so we are not going to get the blue one, and it makes no sense that they provide the service if the card is blue but not if it is green.

    The second is to take out a family extension of the service. I checked the prices for this, and it is almost like paying for a second ACA plan. These days the ACA fee is rather high, and it goes up a little every month (every single month, which bothers me quite a bit); doubling that cost makes no sense.

    Quite annoyed with this whole situation, I weighed for a long time the idea of cancelling my Automóvil Club Argentino membership. It is hard for me, because I like a lot of things about the ACA: its federal character, its part in the growth of so many small towns across the country, and so on... but the truth is that everything that happened really got on my nerves.

    My dad had the idea that I should put all of this in a letter to the club's Board of Directors, to see what they would say. I put together a document and submitted it at the end of November. I got a reply in mid January from one Juan Jorge Agüero ("Administrative Head of Member Initiatives and Observations"), in a letter written entirely in capital letters that basically dodged the issue.

    Why do I say it dodged the issue? Because he answered with a pile of generalities, with things like (converted to lowercase out of respect for you) "your observation has been forwarded to the roadside assistance area so that the appropriate corrective measures can be taken..."; of course, there is no appropriate corrective measure, so it is of no use to me at all.

    In short, I made the decision to leave the ACA.

    I will keep my insurance with La Caja, which has always responded on time and properly. It is not that I will save much money either, because the insurance alone (with the car's value updated) is only a little less than the insurance plus the ACA membership fee combined. But the big difference is that the roadside assistance they provide ("AuxiCaja") covers me regardless of who is driving the car.

    Read more
    Joseph Salisbury

    Meeting Minutes

    IRC Log of the meeting.

    Meeting minutes.

    Agenda

    20150224 Meeting Agenda


    Release Metrics and Incoming Bugs

    Release metrics and incoming bug data can be reviewed at the following link:

    • http://people.canonical.com/~kernel/reports/kt-meeting.txt


    Status: Vivid Development Kernel

    We are preparing to shove our 3.19 based kernel following beta freeze.
    When it lands, please do test and let us know your results.
    —–
    Important upcoming dates:
    Thurs Feb 26 – Beta 1 (~2 days away)
    Thurs Mar 26 – Final Beta (~4 weeks away)
    Thurs Apr 09 – Kernel Freeze (~6 weeks away)


    Status: CVE’s

    The current CVE status can be reviewed at the following link:

    http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


    Status: Stable, Security, and Bugfix Kernel Updates – Utopic/Trusty/Precise/Lucid

    Status for the main kernels, until today:

    • Lucid – Testing
    • Precise – Testing
    • Trusty – Testing
    • Utopic – Testing

      Current opened tracking bugs details:

    • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

      For SRUs, SRU report is a good source of information:

    • http://kernel.ubuntu.com/sru/sru-report.html

      Schedule:

      The current cycle has ended. Waiting for the next cycle to start on Mar 08.

      cycle: 06-Feb through 28-Feb
      ====================================================================
      06-Feb Last day for kernel commits for this cycle
      08-Feb – 14-Feb Kernel prep week.
      15-Feb – 28-Feb Bug verification; Regression testing; Release


    Open Discussion or Questions? Raise your hand to be recognized

    No open discussions.

    Read more
    David Callé

    Announcing the Ubuntu Porting guide 2.0

    In the last few weeks, Ubuntu has reached a major milestone with the first flash sales of the BQ Aquaris - Ubuntu Edition. This is only the beginning of seeing Ubuntu on a wider selection of phones and tablets, and thanks to an incredibly enthusiastic porting community, more devices have been part of that show. Some of these skilled porters have even set up their own image server to provide updates over the air!

    To ease the porting process, the Porting Guide has been updated to reflect the current procedure of enabling new devices. From setting up your dev environment, to configuring the kernel and debugging AppArmor, it covers the main points of making a fully working port. Currently focusing on AOSP ports, it will be extended in due time to detail CyanogenMod-specific processes.

    If you are interested in porting, please make sure you provide feedback on any issues and roadblocks that could arise, either on Launchpad or on the ubuntu-phone mailing-list.

    Thank you and good work, fellow devices adventurers!

    Read more
    David Planella

    This is a guest post from Jordi Allue, Senior SW Architect at Tuso technologies
     
    In September 2014, Ubuntu invited Tuso Technologies to be one of the first Ubuntu Phone OS developers with a version of Fiabee’s Cloud-Mobile Collaboration, Synchronisation and Sharing App for the new Ubuntu Phone OS. We jumped at the opportunity because it was in line with Tuso Technologies’ cross-platform compatibility strategy and found it to be an interesting challenge. The process was far simpler and faster than we originally expected, and the results exceeded our expectations.

    Fiabee is a carrier-grade, enterprise caliber, cloud-mobile Collaboration, Synchronization and Sharing Software-as-a-Service (SaaS). With Fiabee, Telecom Operators, Internet Service Providers and other Managed Service Providers generate new revenues and reduce churn by taking market share away from large OTT App (Over The Top Application) providers.

    Ubuntu Phone Apps are created natively or in HTML5 within a WebApp. Fiabee’s existing HTML5 app, which includes CSS3 and Javascript, was the ideal match for Ubuntu Phone.

    We started the process by installing Ubuntu's Software Development Kit (SDK) and making ourselves familiar with it. The installation was straightforward, with a simple "apt-get install ubuntu-sdk" instruction. Although we had no prior experience with Ubuntu's Integrated Development Environment (IDE), based on Qt Creator, the tutorials available on Ubuntu's website helped us create our first HTML5 trial app. Next we tested the app directly on an Ubuntu Phone; getting familiar with Ubuntu's Operating System (OS) and IDE was the most challenging of the tasks. That said, it only took us a few days to prepare the infrastructure to develop our own app. For those who don't have an Ubuntu Phone, Ubuntu's SDK provides a mobile device emulator.

    From there on, it was easy to adapt Fiabee’s web application to Ubuntu’s environment. With the help of the SDK instructions and manuals, we integrated the Web app into the development project, ran it on the device and created the deployment package. After configuration, we adapted the visual appearance and operation to Ubuntu’s standards.

    We were amazed to find that the functions of Fiabee’s App which are often difficult to adapt were almost automatic with Ubuntu Phone, such as accessing the phone’s file system, interacting with third party Apps or opening documents downloaded from the cloud. This demonstrates that Ubuntu OS is truly “open”. During the last steps of creating the package and uploading it to the Ubuntu App Directory, we had a minor problem with the definition of the App’s security policies but that was quickly resolved with the help of Ubuntu’s App Directory Tech Support team.

    From our perspective, bringing Fiabee to Ubuntu Phones was a piece of cake. It was a quicker and easier experience than we've had with other platforms, despite it being new and notably different from the others with its App interaction menu. We were able to carry over all the functionalities of the Fiabee App without losing any, as was the case with other platforms, and without having to invest re-development time. It was as simple as tweaking Fiabee's existing web app. With Fiabee's App for Ubuntu Phones, we continue to deliver a good user experience to Fiabee users with a further extended range of mobile devices with which to access our service.

    About Tuso technologies SL. Founded in late 2008, with offices in Barcelona, Spain, and Palo Alto, USA, Tuso Technologies develops carrier-grade, enterprise-caliber, cloud-mobile Value-Added-Services (VAS), including Fiabee, Locategy & Open API, selected by leading Mobile Network Operators, ISV and corporations such as Telefonica (Movistar), France Telecom (Orange) , Ono (Vodafone), R Cable, Panda Security, Applus+ among others.

    Read more
    Colin Ian King

    Over the past year or more I have been focused on identifying power-consuming processes on various mobile devices. One of many strategies for reducing power is to remove unnecessary file system activity, such as extraneous logging, repeated file writes and unnecessary file re-reads, and to reduce metadata updates.

    Fnotifystat is a utility I wrote to help identify such file system activity. My desire was to make the tool as small as possible for small embedded devices and to keep it relatively flexible without needing perf, just in case the target device did not have perf support built into the kernel by default.

    By default, fnotifystat will dump out, every second, any file system open/close/read/write operations across all mounted file systems; however, one can specify the delay in seconds and the number of times to dump out statistics. fnotifystat uses the fanotify(7) interface to get file activity across the system, hence it needs to be run with the CAP_SYS_ADMIN capability.
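
    To give a flavour of the interface fnotifystat builds on, here is a bare-bones fanotify(7) watcher; this is a minimal sketch along the lines of the man-page usage, not fnotifystat's own code:

    #include <sys/fanotify.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        // Like fnotifystat, this needs CAP_SYS_ADMIN, so run it via sudo.
        int fan = fanotify_init(FAN_CLASS_NOTIF, O_RDONLY);
        if (fan < 0) { std::perror("fanotify_init"); return 1; }

        // Ask for open/access/modify/close events on the whole mount containing "/".
        if (fanotify_mark(fan, FAN_MARK_ADD | FAN_MARK_MOUNT,
                          FAN_OPEN | FAN_ACCESS | FAN_MODIFY | FAN_CLOSE,
                          AT_FDCWD, "/") < 0) {
            std::perror("fanotify_mark");
            return 1;
        }

        struct fanotify_event_metadata events[64];
        ssize_t len;
        while ((len = read(fan, events, sizeof(events))) > 0) {
            struct fanotify_event_metadata *ev = events;
            while (FAN_EVENT_OK(ev, len)) {
                // Each event reports the acting pid, an event mask and an open
                // file descriptor for the file involved (which we must close).
                std::printf("pid=%d mask=0x%llx\n", ev->pid,
                            (unsigned long long)ev->mask);
                if (ev->fd >= 0)
                    close(ev->fd);
                ev = FAN_EVENT_NEXT(ev, len);
            }
        }
        return 0;
    }

    fnotifystat layers per-process accounting, filtering and the periodic statistics output on top of a raw event stream like this.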

    An open(2), read(2)/write(2) and close(2) sequence by a process can produce multiple events, so fnotifystat has a -m option to merge events and hence reduce the amount of output.  A verbose -v option will output all file events if one desires to see the full system activity.

    If one desires to just monitor a specific collection of processes, one can specify a list of the process ID(s) or process names using the -p option, for example:

    sudo fnotifystat -p firefox,thunderbird

    fnotifystat catches events on all mounted file systems, but one can restrict that by specifying just the path(s) one is interested in using the -i (include) option, for example:

    sudo fnotifystat -i /proc

    ..and one can exclude paths using the -x option.

    More information and examples can be found on the fnotifystat project page, and the manual contains further details and some examples too.

    Fnotifystat 0.01.10 is available in Ubuntu Vivid Vervet 15.04 and can also be installed for older releases from my power management tools PPA.

    Read more
    Prakash

    Facebook said today that it’s giving away a tool it built to spot errors in Android application code.

    Facebook has gradually improved its main app for Android, as well as other apps for the mobile operating system, including Messenger, Facebook Groups, Facebook Pages Manager, and most recently Facebook at Work.

    Read More: http://venturebeat.com/2015/02/18/facebook-unleashes-stetho-its-android-debugging-tool-under-open-source-license

    Read more