Canonical Voices

pitti

We are back from our wonderful winter holiday! It took us to Lapland, in northern Finland and northern Norway. The full photo album is also available to view.

On Monday, March 16, we start our journey to Lapland, from Munich airport via Helsinki to Ivalo. The flight already gives you a good impression of Finland: apart from the southern coastal areas the country is very sparsely populated, and the entire north is little more than a patchwork of forest, lakes, and rivers. Except for the very largest rivers, almost everything is frozen over, and you can clearly make out the tracks of the trucks and snowmobiles crossing the lakes, which can be used as roads throughout the winter.

From the airport in Ivalo, with its tiny terminal (built with plenty of wood, of course!), it is a half-hour drive to Inari, a small village on the southwestern corner of the huge Lake Inari. This is the homeland of the Sami, but all year round it also draws many tourists from Finland, Europe, and even Asia who want to experience the northern lights, skiing, or the endless summer nights.

On Tuesday we learn how people get around the winter landscape here. We visit a husky farm. As soon as we step out of the car we are greeted by the excited barking of about a hundred animals that can hardly wait to be allowed to run. Harnessing the sleds takes a while, though, so until then we keep the huskies busy with plenty of petting and playing. They are very curious and affectionate animals; even I, famously not much of a dog person, get along with them splendidly. And then we are finally off! Five dogs pull each sled: Annett lies in it, taking photos and cheering the huskies on, while I stand behind, steering and, most importantly, braking. The huskies are enormously strong and know only a single speed: fast. Everything else is handled with the foot brake, a metal bar with two prongs that dig into the snow. We whizz through the sunny winter landscape like this for about an hour, then help with the unharnessing and thank the animals with a few more strokes.

Before dinner, Joachim, our tour guide, explains the essential basics of the northern lights, so we at least roughly know how they form, how they are forecast, and which shapes they can take. The anticipation is huge, because conditions are currently almost perfect: high solar wind activity, the Earth sitting in a favourable region of the solar magnetic field, and a weather forecast promising a clear sky for the night.

In the evening the great spectacle awaits us: right at ten the clouds that had rolled in a few hours earlier clear away again and reveal a gigantic starry sky. For three hours we admire auroras in bands, streaks, coronas, and every conceivable shape. Some persist for minutes, others move very quickly, and we can barely keep up with the marvelling and photographing.

There is hardly any light pollution up here, so in between we get wonderful views through the binoculars of Jupiter and the Galilean moons, the Andromeda galaxy, the Pleiades and Hyades, and various satellites.

On Wednesday we sleep in and then visit the local reindeer farm. We learn quite a bit about the life in the wild, the husbandry, and the uses of these semi-wild animals, including feeding them by hand and a ride around the block on a reindeer sled. The ride is much calmer than with the huskies; reindeer are more the steady, enduring draught animals. The whole farm is run by a Sami family that has been settled here for several generations. Over tea and pastries in a cosy heated “kota” (hut) we learn a lot about the history and present-day culture of the Sami peoples.

In the afternoon we take a lovely walk along the river, and now and then across it. As central Europeans we feel a bit uneasy doing so, but everyone does it here, and 80 cm of ice with 60 cm of snow on top can bear far heavier loads.

Another talk was actually planned after dinner, but shortly after dessert, around nine, the aurora alarm goes off again :-) Tonight the lights take different shapes and behave differently, drawing across the whole sky in long bands from north to south. We keep watching until one in the morning.

On Thursday, for a change, we go motorized: we take a snowmobile tour! For the locals these are the transport of choice during the roughly seven months of snow. The scooter trails criss-cross the Lapland forests and the lakes, are clearly marked with poles, and there are even stop signs and signposts. And these things really go! Annett takes the first leg from Inari across the lake to the “holy mountain”, a large rocky island in the middle of Lake Inari. I then drive through the forest to the “wooden church”, which has stood there since 1647 and is a popular destination. There we hold a hearty Finnish-style picnic: a wood fire, sooty kettles for tea, thick sausages, and toast. Annett then drives us back to Inari. The machines are tremendous fun and dead simple to operate (continuously variable automatic transmission; apart from the throttle there is nothing to do).

In the evening we see northern lights again, this time very long-lived forms. They leave us plenty of time to experiment with exposure times, flash levels, and locations, so we end up with quite decent souvenir photos of everyone with the aurora, and Venus thrown in as a bonus. A talk by Joachim on the history of aurora research rounds off the evening.

On Friday the next celestial event is already lined up: a partial solar eclipse that reaches about 91% coverage here at 12:13. We gather in front of the hotel with eclipse glasses and an H-alpha telescope (in which you can see flares on the Sun) and also shoot a photo series. It is very cold and my hands freeze, but it was easily worth it.

In the afternoon we visit the Sami museum just around the corner. It is nicely done: a large hall in which each wall is dedicated to one season, showing the animal and plant life in every month. We also learn a lot about the history and way of life of the Sami.

On Saturday we hike another loop across Lake Inari before our bus for the second part of the trip leaves at 13:00. We soon cross the Norwegian border and find ourselves in a completely different landscape: suddenly there are mountains, the forest becomes smaller and sparser and consists almost entirely of birches, and for the first time in a while we see liquid water again, in the fjords. We make a small detour to Kirkenes, the second northernmost town in Norway (after Hammerfest), which lives mainly from iron ore mining and shipping. The well-known Hurtigruten voyage also starts here.

In the early evening we arrive in Svanvik, at “Svanhovd”, a former farm that is now a nature conservation and education centre as well as a hotel. After a more than ample dinner buffet we are again treated to a gigantic aurora show, with chasing lights and rapidly changing bright green bands.

Sunday is for hiking. The weather is still fine and sunny, though cold; our destination is a lookout tower about 8 km away. Unfortunately the last kilometre is barely passable through the deep snow, so we walk a bit further along the road instead and have a small picnic at a forestry work site. In the evening Joachim uses his self-built armillary sphere to explain and demonstrate a few lessons in celestial mechanics. The thing is simply ingenious: from the viewpoint of an Earth-bound observer you can model and understand all sorts of phenomena, such as summer and winter day lengths, solar and lunar eclipses, polar day and night, planetary motions, the long-term shift of the ecliptic, and so on. At night it is unfortunately cloudy, so for once we go to bed early.

On Monday our last excursion is on the programme: the snow hotel in Kirkenes! It is rebuilt every September from large balloons and snow cannons, and then every room and the bar receive ice and snow sculptures, made by Chinese artists who are flown in especially for the job. In the afternoon we do a bit of solar spectroscopy and then enjoy a few rounds of sauna, including rolling in the snow.

We spend our last Tuesday quite leisurely with a hike, a sauna, and of course more northern lights in the evening.

Read more
rvr

To celebrate the 10th anniversary of Arduino, and Arduino Day, today I am proud to present Visualino. What is it? It's a visual programming environment for Arduino, a project that I began last year and have been actively developing over the last few months, with the help of my friends at Arduino Gran Canaria.

Arduino is a microcontroller board that lets you connect sensors and other electronic components. It has a companion program called the Arduino IDE, which makes it really easy to program the microcontroller. The language is based on C/C++, but the functions are quite easy to learn. That ease of use is part of the revolution. Making LEDs blink and moving robots with Arduino is easy and fun. But it can be easier! Kids and adults who don't know programming often struggle with the strictness of C/C++ syntax: commas and brackets must be correctly placed, or the program won't run. How to make it even more intuitive? Visual programming to the rescue!

Scratch is a popular visual programming environment for kids, developed at MIT. Instead of a keyboard and code, kids use the mouse to snap blocks together like a puzzle and create games. There is also an extension called Scratch for Arduino that allows the board to be controlled from Scratch. However, the program runs inside Scratch, so the Arduino board must always be connected to the PC.

So, what does Visualino do? It's a Scratch-like program: it lets you build programs for Arduino like a puzzle. But it programs the Arduino board directly, so the PC connection is no longer needed for the program to run. It also generates the code in real time, so the user knows what's happening. The environment is very similar to the Arduino IDE, with the same main options: Verify, Build, Save, Load and Monitor. Visualino can be seen at work in this screencast:

Visualino is based on Google Blockly and bq's bitbloqs. It is open source, multiplatform and multilanguage. It just requires Arduino 1.6, which it uses as the engine to program the Arduino boards. You can download the beta version right now for Ubuntu, Mac and Windows. The code is available at github.com/vrruiz/visualino. Right now it works out of the box. It still needs some documentation, and translations into Catalan, Italian and Portuguese will be welcome.

  • Screenshot from 2015-03-25 15:27:30
  • Screenshot from 2015-03-25 15:28:04

Visualino was presented this week to a group of educators at an Arduino workshop, and next month we'll have a three-hour session to teach them how to use it. So I hope it will soon be used at schools here at home.

So go download it and use it. Feedback is welcome. And stay tuned, as there are some niceties coming very soon :)

Read more
Daniel Holbach

What does being an Ubuntu member mean to you? Why did you do it back then?

I became an Ubuntu member about 10 years ago. It was part of the process of becoming a member of the MOTU team: before you could apply for upload rights, you had to be an Ubuntu member.

That wasn’t all of it though. For me it wasn’t the @ubuntu.com mail address or “fulfilling the requirements for upload rights”. As I had helped out and contributed for months already, I felt part of the tribe, and luckily many encouraged me to take the next step and apply for membership. I had grown to like the people I worked with and learned a lot from. It was a bit daunting, but being recognised for my contributions was a great experience. Afterwards I would say I did my fair share of encouraging others to apply as well. :-)

Which brings me to the two calls to action I wanted to get out there.

1) Encourage members of your team who haven’t applied for Ubuntu membership!

There are so many people doing fantastic work on AskUbuntu, the Forums, in flavour teams, the Docs team, the QA world and all over the place when it comes to phones, desktops, IoT bits, servers, the cloud and more. Many, many of them should really be Ubuntu members, but they haven’t heard of it, don’t know how, or are concerned that they haven’t “done enough”.

If you have people like that in a project you are working on, please do encourage them. In an open source project we should aim to do a good job of recognising the great work of others.

2) Join the Ubuntu Membership Boards!

If you are an Ubuntu member, seriously consider joining the Ubuntu Membership Boards. The call for nominations is still open and it’s a great thing to be involved with.

When I joined the Community Council, the CC was still in charge of approving Ubuntu members, and I enjoyed the meetings (even if they were quite looooooooooooooooooooong), where we got to talk to many contributors from all parts of the globe and all parts of the Ubuntu landscape. Welcoming many of them to the Ubuntu members team was just beautiful.

Nominate yourself and be quick about it! :-)

Read more
Nicholas Skaggs


Whoosh, Spring is in the air and Winter is over (at least for us Northern Hemisphere folks). With that, it's time to polish the final beta image for vivid.

How can I help? 
To help test, visit the iso tracker milestone page for final beta. The goal is to verify the images in preparation for the release. Find those bugs! The information at the top of the page will help you if you need help reporting a bug or understanding how to test.

Isotracker? 
There's a first time for everything! Check out the handy links on top of the isotracker page detailing how to perform an image test, as well as a little about how the qatracker itself works. If you still aren't sure or get stuck, feel free to contact the qa community or myself for help.

What if I'm late?
The testing runs through this Thursday, March 26th, when the images for final beta will be released. If you miss the deadline we still love getting results! Test against the daily image milestone instead.

Thanks and happy testing everyone!

Read more
Michael Hall

Way back at the dawn of the open source era, Richard Stallman wrote the Four Freedoms which defined what it meant for software to be free. These are:

  • Freedom 0: The freedom to run the program for any purpose.
  • Freedom 1: The freedom to study how the program works, and change it to make it do what you wish.
  • Freedom 2: The freedom to redistribute copies so you can help your neighbor.
  • Freedom 3: The freedom to improve the program, and release your improvements (and modified versions in general) to the public, so that the whole community benefits.

For nearly three decades now they have been the foundation for our movement, the motivation for many of us, and the guiding principle for the decisions we make about what software to use.

But outside of our little corner of humanity, these freedoms are not seen as particularly important. In fact, the vast majority of people are not only happy to use software that violates them, but will often prefer to do so. I don’t even feel the need to provide supporting evidence for this claim, as I’m sure all of you have been on one side or the other of a losing argument about why using open source software is important.

The problem, it seems, is that people who don’t plan on exercising any of these freedoms, whether from lack of interest or lack of ability, don’t place the same value on them as those of us who do. That’s why software developers are more likely than non-developers to prefer open source: they might actually use those freedoms at some point.

But the people who don’t see a personal value in free software are missing a larger, more important freedom. One implied by the first four, though not specifically stated. A fifth freedom if you will, which I define as:

  • Freedom 4: The freedom to have the program improved by a person or persons of your choosing, and make that improvement available back to you and to the public.

Because even though the vast majority of proprietary software users will never be interested in studying or changing the source of the software they use, they will likely all, at some point in time, ask someone else if they can fix it. Who among us hasn’t had a friend or relative ask us to fix their Windows computer? And the true answer is that, without having the four freedoms (and implied fifth), only Microsoft can truly “fix” their OS, the rest of us can only try and undo the damage that’s been done.

So the next time you’re trying to convince someone of the importance of free and open software, and they chime in with the fact that they don’t want to change it, try pointing out that by using proprietary code they’re limiting their options for getting it fixed when it inevitably breaks.

Read more
Antonio Rosales

Agenda

  • Review ACTION points from previous meeting
  • smoser follow up on #link http://reqorts.qa.ubuntu.com/reports/rls-mgr/rls-v-tracking-bug-tasks.html#ubuntu-server not working
  • arosales update QA Team rep to matsubara
  • V Development
  • Server & Cloud Bugs (caribou)
  • Weekly Updates & Questions for the QA Team (?)
  • Weekly Updates & Questions for the Kernel Team (smb, sforshee, arges)
  • Ubuntu Server Team Events
  • Open Discussion
  • Announce next meeting date, time and chair

Minutes

Summary

This week's meeting focused on identifying critical bugs as Vivid nears Final Beta Freeze (March 26). There was also some good discussion on rounding out Blueprints, given that the Vivid end-of-cycle is nearing. matsubara [QA] brought up an interesting smoke test Jenkins failure [https://bugs.launchpad.net/ubuntu/+bug/1427821] that smb [kernel] helped identify as a duplicate of https://bugs.launchpad.net/ubuntu/+source/debian-installer/+bug/1429849.

Open Compute Summit and Open Power Summit were also brought up as upcoming Ubuntu Server related events.

Info

  • User Interface Freeze this week
  • Final Beta Freeze on March 26

MEETING ACTIONS

* No specific actions identified this week.

AGREE ON NEXT MEETING DATE AND TIME

Next meeting will be on Tuesday, March 17th at 16:00 UTC in #ubuntu-meeting.

Logs @ https://wiki.ubuntu.com/MeetingLogs/Server/20150310

Read more
Robie Basak

Review ACTION points from previous meeting

The discussion about “Review ACTION points from previous meeting” started at 16:00.

  • dannf will look at bug 1427406 “data corruption on arm64” soon
  • Bug 1432715 “tomcat7 ftbfs in vivid (test failures)” is waiting on a fix in Debian
  • hallyn has updated the QA Team section assignee to matsubara
  • matsubara did file a bug for libpam-systemd’s dependency problem, but this is no longer relevant, has been marked Invalid, and he will follow up on further test failures in the QA topic later in the meeting.

Vivid Development

The discussion about “Vivid Development” started at 16:06.

  • No discussion was required.

Weekly Updates & Questions for the QA Team (matsubara)

The discussion about “Weekly Updates & Questions for the QA Team (matsubara)” started at 16:09.

Weekly Updates & Questions for the Kernel Team (smb, sforshee, arges)

The discussion about “Weekly Updates & Questions for the Kernel Team (smb, sforshee, arges)” started at 16:12.

  • smb reported that:
    • The fix for the Utopic+ nested issues on a Precise host should be on its way into Precise. Besides that, other reports are still ongoing.
    • There were recent updates to the nested kvm softlockups bug (bug 1413540) and a container netdevice cleanup bug (bug 1403152), which we still need to evaluate.
    • The recently reported KSM issue (bug 1435363) needs feedback on the latest kernel.

Ubuntu Server Team Events

The discussion about “Ubuntu Server Team Events” started at 16:14.

  • No events to report.

Open Discussion

The discussion about “Open Discussion” started at 16:14.

  • Nothing was raised.

Announce next meeting date and time

The discussion about “Announce next meeting date and time” started at 16:15.

The next meeting will be at Tue Mar 31 16:00:00 UTC 2015. matsubara will chair.

Meeting Actions

None

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20150324 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Vivid Development Kernel

Our Vivid kernel has been rebased to v3.19.2 and uploaded, i.e.
3.19.0-10.10. We are approaching kernel freeze for Vivid,
~2 weeks away on Thurs Apr 9. If you have any patches which need to
land for 15.04′s release, please make sure to submit those sooner rather than later.
—–
Important upcoming dates:
Thurs Mar 26 – Final Beta (~2 days away)
Thurs Apr 09 – Kernel Freeze (~2 weeks away)
Thurs Apr 23 – 15.04 Release (~4 weeks away)


Status: CVEs

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates

Status for the main kernels, until today:

  • Lucid – None (no update)
  • Precise – Prep
  • Trusty – Prep
  • Utopic – Prep

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    Current cycle: 20-Mar through 11-Apr

    20-Mar Last day for kernel commits for this cycle
    22-Mar – 28-Mar Kernel prep week.
    29-Mar – 11-Apr Bug verification; Regression testing; Release


Review of sforshee upload rights

apw: sforshee, hi
sforshee: apw: hello
apw: could you introduce yourself, perhaps tell us a little about
sforshee: I’ve been a member of the kernel team for 4 years now
sforshee: working on various and sundry things, including some packaging
sforshee:
sforshee: I’m looking to get PPU rights to the linux-* packages to ease
apw: i believe we have most of your sponsors here today, so if anyone
apw: otherwise i think the majority of the approvers have works with you
kamal: I stand by my sponsorship statment: Seth is diligent and
ogasawara: I have not specifically sponsored a package of seth’s but
kamal: +1 from me
henrix: +1 from me too!
arges: From an SRU perspective, sforshee has generally shown attention
bjf: +1
cking: +1
apw: sforshee, welcome to the team
sforshee: thanks all!
cking: \o/
henrix: \o/
apw: thank you …
kamal: \o/
ogasawara: congrats sforshee!
arges: good job
apw: jsalisbury, all yours …
jsalisbury: apw, Thanks, and congrats to sforshee


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more
Zoltán Balogh

If you are new to QtCreator, the first thing that freaks you out will be the concept of Kits. Yes, it does look complicated and big, and you might want to ask: why do I need this?

Right, let’s take a few steps back and look at the bigger picture.

Most programmers start their hobby or career with basic (not that one) PC programming. You have the compiler and libraries on your machine, you start hacking around with your code, and once you think it will pass at least the syntax check you go to the terminal, compile the code, and are happy when you see the binary executable. If it runs without segfaults you start to gain confidence, and you are the happiest kid on Earth once the program does what you coded it for.

That is a fairly simple and common scenario, yet it has all the components that actually make up an SDK. And I guess you know that in SDK, the K stands for Kit.

Let’s continue with this line of thinking. You want to show the program to your friends. Nothing strange about that; even coders are social beings. If your program uses dynamically linked libraries from the system, then your friends need a bit of luck to have the very same system libraries on their machine as you have on yours. Not to mention that you compiled your program for one specific processor architecture, and nothing guarantees that your friends have the same architecture as you do.

So, we are safe and good as long as our program stays on our computer, but trouble with libraries, binary compatibility and processor architecture will pop up as soon as we want to move our program around without recompiling it. And mind you, we are still talking about PC-to-PC porting. Let’s raise the bar.

How does it go when you want to write an application for a mobile device? Most likely your computer is an x86-based PC, and these days most mobile devices have some sort of ARM processor. So, here we go: our native local compiler, which made us so happy just a few paragraphs back, is now obsolete, and we need a compiler that can produce an ARM binary for the specific device. It could be armv6, armv7 or whatever exotic ARM processor your target device is built with. Good, we now have a compiler, but our code still uses a bunch of libraries. In the Ubuntu world, and especially with the ultimate convergence on our roadmap, this part of the story is a bit easier and will get even better soon. But still, if your PC is running the LTS Ubuntu release (14.04 right now), you cannot necessarily expect the same libraries and header files to be present on your machine as on a target device that is on 15.04 or even newer.

I guess at this point many would say, in a disappointed tone, that after learning that your good old compiler is obsolete, all your locally installed development libraries and header files are useless too. Think of Borat saying “nice”.

Okay, so we are left without a compiler, libraries and header files. But they have to come from somewhere, right?

And that is where the Kits come into the picture. The official definition of the QtCreator Kits sure sounds a bit academic and dry, so let’s skip it. In short, Kits are the set of values that define one environment, such as a device, compiler, Qt version, debugger command, and some metadata.

I love bicycling, so I use cycling analogies whenever possible. Imagine that you are in the mood for a downhill ride in the forest. You take your mountain bike, knee and elbow pads, lots of water, some snacks, clothes that take dirt well, a massive helmet and your camera. If you just cycle to your office, you take your city bike and a lighter helmet, and you put on regular street wear. Different target, different set of equipment. How cool would it be to just snap your fingers, say out loud “ride to the city”, and have all the equipment appear in front of you?

 

That is exactly what happens when you have Kits set up in your QtCreator and you are building your applications for, and running them on, different targets.

QtCreator is an IDE, and developers who choose to work with IDEs expect a certain level of comfort. For example, we do not want to resolder and rewire our environment just because we want to build our project for a different target. We want to flip a switch and expect that the new binaries are made with a different compiler against a different set of libraries and headers. That is what QtCreator’s target selector is for. You simply change from the LTS Desktop Kit to the 15.04-based armhf target, and you have a whole different compiler toolchain and API set at your service.

At this point Kits look pretty and easy. You might ask: what is the catch, then? Why don’t IDEs and SDKs come with such cool and well-integrated Kits? Well, there is a price for every cool feature. At the moment each ready-for-action Kit is about 1.7GB. So Kits are big, and the SDK does not know which Kits you want to use. That means that if we installed every Kit you might use, the SDK would easily be 8-10GB.

Why are Kits so big, and can they be made smaller?

That is a fair question I get very often. First of all, in the case of the Ubuntu SDK the Kits are fully functional chroots. That means that besides the compiler toolchain we have all the bells and whistles one needs when entering a chroot. Just enter the click chroot and issue the dpkg -l command to see that yes, we do have a full-blown Ubuntu under the hood. In our SDK model the toolchain and the native developer tools live in the click chroots, and these chroots are bootstrapped just like any other chroot. That means each library, development package and API is installed as if it were installed on a desktop Ubuntu. And that means pulling in a good bunch of dependencies you might never need. Yes, we are working on making the Kits smaller, and we are considering supporting static Kits alongside the present dynamically bootstrapped ones.
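If you are curious, you can peek at these chroots from the command line. A minimal sketch, assuming the SDK has already created a 15.04 armhf builder chroot (the framework and architecture names below are assumptions; adjust them to whatever your SDK actually set up):

    # list all chroots known to schroot; the SDK's click builder chroots show up here
    schroot --list --all-chroots

    # run dpkg -l inside one click builder chroot (names are assumed, not guaranteed)
    click chroot -a armhf -f ubuntu-sdk-15.04 run dpkg -l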

Alright, so far we have covered what Kits are and what they contain. The most important question is: do you need to care about all of this? Do you need to configure and set up these Kits yourself? Luckily, the answer to both questions is no.

In the Ubuntu SDK these Kits are created on the first start of the SDK and set up automatically when a new emulator is deployed or a new device is plugged in. Of course, you can visit the builder chroots under the Ubuntu and Build & Run sections of the dialog that opens from the Tools->Options… menu. But most application developers can be productive without knowing anything about them. Of course, understanding what they are is good, and if you are into development tools and SDKs, then it is fun to look behind the curtains a bit.

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20150317 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Vivid Development Kernel

Our Vivid kernel remains based on v3.19.1, and we uploaded a
kernel last week. We are approaching kernel freeze for Vivid,
on Thurs Apr 9. If you have any patches which need to land for
15.04′s release, please make sure to submit those sooner rather than later.
—–
Important upcoming dates:
Thurs Mar 26 – Final Beta (~1 week away)
Thurs Apr 09 – Kernel Freeze (~3 weeks away)
Thurs Apr 23 – 15.04 Release (~6 weeks away)


Status: CVEs

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates

Status for the main kernels, until today:

  • Lucid – None (no update)
  • Precise – Verification and Testing
  • Trusty – Verification and Testing
  • Utopic – Verification and Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    Current cycle: 27-Feb through 21-Mar

    27-Feb Last day for kernel commits for this cycle
    01-Mar – 07-Mar Kernel prep week.
    08-Mar – 21-Mar Bug verification; Regression testing; Release


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more
Zsombor Egri

Creating a theme for your application

The theming engine is one of the least documented features of the Ubuntu UI Toolkit. While we are preparing to create the third-generation theming engine, which will support sub-theming and runtime palette color customization, more and more app developers are asking how to create their own theme for the application itself. There have also been questions about how to create a shared theme so other applications can use it. But let's start with application theming.

Application themes are application specific and should be located in the application's installation folder. They can derive from a pre-defined system theme (Ambiance or SuruDark) or be standalone themes that don't reuse any system-defined theme. However, the latter is not recommended: you would have to implement the style of every component, which requires a lot of work, and it uses a few APIs which are not stable/documented.

Assuming the theme is located in a separate folder called MyTheme, the next step is to create a file called “parent_theme” containing the URI of the theme your application theme derives from. Your parent_theme would look like this:

// parent_theme
Ubuntu.Components.Themes.SuruDark

Now, let’s change some palette values. The way to do that is to create a Palette.qml file, and override some values you want.

// Palette.qml

import QtQuick 2.4
import Ubuntu.Components 1.2
import Ubuntu.Components.Themes.SuruDark 1.1 as SuruDark

SuruDark.Palette {
    normal.background: "#A21E1C"
    selected.backgroundText: "lightblue"
}

If you want to change some component styles, you have to look into the parent theme and find the style component you want to change. It may be that the parent theme doesn't define that style component, in which case you must follow its parent theme and search for the component there. This is the case if you want to change the Button's style: the SuruDark theme doesn't define the style component, so you must take the one from its parent, Ambiance. The redefined ButtonStyle would look like this:

// ButtonStyle.qml

import QtQuick 2.4
import Ubuntu.Components 1.2

// Note: you must import the Ambiance theme!
import Ubuntu.Components.Themes.Ambiance 1.1 as Base

Base.ButtonStyle {
    // Let’s override the default color
    defaultColor: UbuntuColors.green
}

For now, only a few style components are exported from the two supported system themes; if you see one you'd like to override that isn't, just file a bug. Also, only a handful of style APIs have been made stable, so overriding the non-documented styles may be dangerous, as their API may change. The stable style APIs are listed in the Ubuntu.Components.Styles module; their implementations, and the unstable APIs, live in the Ambiance and SuruDark themes.

And finally you can load the theme in the application as follows:

// main.qml

import QtQuick 2.4
import Ubuntu.Components 1.2

MainView {
    // Your code comes here

    // Set your theme
    Component.onCompleted: Theme.name = "MyTheme"

}

That’s it. Enjoy your colors!

P.S. A sample code is available here.

Read more
Prakash

Code Name: MT8173

Features: 

  • Quad Core
  • 4K Video Support
  • 64 Bit
  • Support for up to 20-megapixel cameras

Read More: http://www.pcworld.com/article/2890656/mediatek-claims-new-64bit-chip-will-power-the-fastest-android-tablets-on-the-market.html

Read more
bmichaelsen

When logic and proportion have fallen sloppy dead
And the white knight is talking backwards
And the red queen’s off with her head
Remember what the dormouse said
Feed your head, feed your head

– Jefferson Airplane, White Rabbit

So, this was intended as a quick and smooth addendum to the “50 ways to fill your vector” post, bringing callgrind into the game and assuring everyone that its instruction counts are a good proxy for the walltime performance of your code. This started out mostly as expected, when measuring the instruction counts in two scenarios:

implementation/cflags -O2 not inlined -O3 inlined
A1 2610061438 2510061428
A2 2610000025 2510000015
A3 2610000025 2510000015
B1 3150000009 2440000009
B2 3150000009 2440000009
B3 3150000009 2440000009
C1 3150000009 2440000009
C3 3300000009 2440000009

The good news here is that this mostly faithfully reproduces some general observations on the timings from the last post on this topic, although the differences are more pronounced in callgrind than in reality:

  • The A implementations are faster than the B and C implementations on -O2 without inlining
  • The A implementations are slower (by a smaller amount) than the B and C implementations on -O3 with inlining

The last post also suggested the expectation that all implementations could — and with a good compiler: should — have the same code and the same speed when everything is inlined. Apart from the A implementations still differing from the B and C ones, callgrind's instruction counts suggest this to actually be the case. Letting gcc compile to assembler and comparing the output, one finds:

  • Inline A1-3 compile to the same output on -Os, -O2, -O3 each. There is no difference between -O2 and -O3 for these.
  • Inline B1-3 compile to the same output on -Os, -O2, -O3 each, but they differ between optimization levels.
  • Inline C3 output differs from the others and between optimization levels.
  • Without inlinable constructors, the picture is the same, except that A3 and B3 now differ slightly from their kin as expected.

So indeed most of the implementations generate the same assembler code. However, this is quite a bit at odds with the significant differences in performance measured in the last post, e.g. B1/B2/B3 on -O2 created widely different walltimes. So, time to test the assumption that running one implementation for a minute produces a reasonably stable statistic, by doing ten one-minute runs for each implementation and checking the standard deviation. The following is found for walltimes (no inline constructors):

implementation/cflags -Os -O2 -O3 -O3 -march=
A1 80.6 s 78.9 s 78.9 s 79.0 s
A2 78.7 s 78.1 s 78.0 s 79.2 s
A3 80.7 s 78.9 s 78.9 s 78.9 s
B1 84.8 s 80.8 s 78.0 s 78.0 s
B2 84.8 s 86.0 s 78.0 s 78.1 s
B3 84.8 s 82.3 s 79.7 s 79.7 s
C1 84.4 s 85.4 s 78.0 s 78.0 s
C3 86.6 s 85.7 s 78.0 s 78.9 s
no inline measurements

And with inlining:

implementation/cflags -Os -O2 -O3 -O3 -march=
A1 76.4 s 74.5 s 74.7 s 73.8 s
A2 75.4 s 73.7 s 73.8 s 74.5 s
A3 76.3 s 74.6 s 75.5 s 73.7 s
B1 80.6 s 77.1 s 72.7 s 73.7 s
B2 81.4 s 78.9 s 72.0 s 72.0 s
B3 80.6 s 78.9 s 72.8 s 73.7 s
C1 81.4 s 78.9 s 72.0 s 72.0 s
C3 79.7 s 80.5 s 72.9 s 77.8 s
inline measurements

The standard deviation for all the above values is less than 0.2 seconds. That is … interesting: for example, on -O2 without inlining, B1 and B2 generate the same assembler output, but execute with a very significant difference on hardware (a 5.2 s difference, or more than 25 standard deviations). So how have logic and proportion fallen sloppy dead here? If the same code is executed — admittedly from two different locations in the binary — how can that create such a significant difference in walltime performance while not being visible at all in callgrind? A wild guess, which I have not confirmed yet, is cache locality: when not inlining constructors, those might be in the CPU cache for one copy of the code in the binary, but not for the other. And by the way, it might also hint at the reason the -march= flag (which creates bigger code) seems so ineffective. And it might explain why performance is rather consistent when using inline constructors. If so, the impact of this is certainly interesting. It also suggests that allowing inlining of hotspots, like recently done with the low-level sw::Ring class, produces much more performance improvement on real hardware than the meager results measured with callgrind. And it reinforces the warning made in that post about not falling into the trap of mistaking the map for the territory: callgrind is not a “map in the scale of a mile to the mile”.

Addendum: As said in the previous post, I am still interested in such measurements on other hardware or compilers. All measurements above were done with gcc 4.8.3 on an Intel i5-4200U@1.6GHz.
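If you want to reproduce the callgrind side of these measurements on your own machine, a minimal sketch (the source and binary names are placeholders, not the actual benchmark files from the posts):

    # build one implementation at the optimization level under test
    g++ -O2 fillvector.cxx -o fillvector

    # count instructions with callgrind instead of timing the wall clock
    valgrind --tool=callgrind ./fillvector

    # summarize the instruction counts recorded in callgrind.out.<pid>
    callgrind_annotate callgrind.out.*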


Read more
Prakash

Indian food, with its hodgepodge of ingredients and intoxicating aromas, is coveted around the world. The labor-intensive cuisine and its mix of spices is more often than not a revelation for those who sit down to eat it for the first time. Heavy doses of cardamom, cayenne, tamarind and other flavors can overwhelm an unfamiliar palate. Together, they help form the pillars of what tastes so good to so many people.

Read More: http://www.washingtonpost.com/blogs/wonkblog/wp/2015/03/03/a-scientific-explanation-of-what-makes-indian-food-so-delicious/

Read more
facundo


They say that metaclasses make your head explode. They also say that if you're not absolutely sure what metaclasses are, then you don't need them.

And there you go, happily coding through life, jumping and singing in the meadow, until suddenly you get into a dark forest and find the most feared enemy: you realize that some magic needs to be done.


The necessity

Why might you need metaclasses? Let's look at a specific case: my particular (real life) experience.

It happened that at work I have a script that verifies the remote scopes service for the Ubuntu Phone, checking that all is nice and crispy.

The test itself is simple, and I won't put it here because it's not the point, but it's isolated in a method named _check, which receives the scope name and returns True if all is fine.

So, the first script version did (removed comments and docstrings, for brevity):

    class SuperTestCase(unittest.TestCase):

        def test_all_scopes(self):
            for scope in self._all_scopes:
                resp = self._check(scope)
                self.assertTrue(resp)

The problem with this approach is that all the checks are inside the same test. If one check fails, the rest is not executed (because the test is interrupted there, and fails).

Here I found something very interesting: the (new in Python 3.4) subTest call:

    class SuperTestCase(unittest.TestCase):

        def test_all_scopes(self):
            for scope in self._all_scopes:
                with self.subTest(scope=scope):
                    resp = self._check(scope)
                    self.assertTrue(resp)

Now, each "sub test" internally is executed independently of the other. So, they all are executed (all checks are done) no matter if one or more fail.

Awesome, right? Well, no.

Why not? Because even if internally everything is handled as independent subtests, from the outside point of view it is still one single test.

This has several consequences. One of them is that the all-inside test takes too long, and you can't tell what is going on (note that each of these checks hits the network!), as the test runner only shows progress per test (not per subtest).

The other inconvenience is that there is no way to tell the script to run only one of those subtests... I can tell it to execute only the all-inside test, but that would mean executing all the subtests, which, again, takes a lot of time.

So, what did I really need? Something that allows me to express the assertion in one test, but that in reality becomes several methods. I needed something that, from a single method, generates many, so the class actually ends up with several of them. That is, writing code for a class that Python would see differently. That is, metaclasses.


Metaclasses, but easy

Luckily, for a couple of years now (or more), Python has provided a simpler way to achieve the same thing that could be done with metaclasses: class decorators.

Class decorators, very similar to method decorators, receive the class that is defined below them, and their return value is taken by Python as the real definition of the class. If you're not familiar with the concept, you can read a little about decorators here, and a deeper article about decorators and metaclasses here, but it's not mandatory.

So, I wrote the following class decorator (explained below):

    def test_multiplier(klass):
        """Multiply those multipliable tests."""
        for meth_name in (x for x in dir(klass) if x.startswith("test_")):
            meth = getattr(klass, meth_name)
            argspec = inspect.getfullargspec(meth)

            # only get those methods that are to be multiplied
            # (argspec.defaults is None when the method has no default values)
            if len(argspec.args) == 2 and argspec.defaults and len(argspec.defaults) == 1:
                param_name = argspec.args[1]
                mult_values = argspec.defaults[0]

                # "move" the usefult method to something not automatically executable
                delattr(klass, meth_name)
                new_meth_name = "_multiplied_" + meth_name
                assert not hasattr(klass, new_meth_name)
                setattr(klass, new_meth_name, meth)
                new_meth = getattr(klass, new_meth_name)

                # for each of the given values, create a new method which will call the
                # original method with only one value at a time
                for multv in mult_values:
                    def f(self, multv=multv):
                        return new_meth(self, **{param_name: multv})

                    meth_mult_name = meth_name + "_" + multv.replace(" ", "_")[:30]
                    assert not hasattr(klass, meth_mult_name)
                    setattr(klass, meth_mult_name, f)

        return klass

The basics are: it receives a class and returns a slightly modified class ;). For each of the methods whose name starts with "test_", I take those that have two args (not only 'self') and whose second argument has a default value.

So, it would actually get the method defined in the following structure and leave the rest alone:

    @test_multiplier
    class SuperTestCase(unittest.TestCase):

        def test_all_scopes(self, scope=_all_scopes):
            resp = self.checker.hit_search(scope, '')
            self.assertTrue(resp)

For that kind of method, the decorator moves it to something not named "test_*" (so we can call it, but it won't be called by the automatic test infrastructure), and then creates, for each value in "_all_scopes", a method (with a particular name which doesn't really matter, but which needs to be unique and is nicer if informative to the user) that calls the original method, passing "scope" with the particular value.

So, for example, let's say that _all_scopes is ['foo', 'bar']. Then the decorator will rename test_all_scopes to _multiplied_test_all_scopes, and create two new methods like this:

    def test_all_scopes_foo(self, multv='foo'):
        return self._multiplied_test_all_scopes(scope=multv)

    def test_all_scopes_bar(self, multv='bar'):
        return self._multiplied_test_all_scopes(scope=multv)

The final effect is that the test infrastructure (internally and externally) finds those two methods (not the original one), and calls them. Each one individually, informing progress individually, the user being able to execute them individually, etc.
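A nice side effect is that the standard unittest command line works directly on the generated methods. For example, assuming the test case lives in a module named super_test (a placeholder name, not the real script), you can run a single generated check on its own:

    # run just one of the generated methods, verbosely
    python3 -m unittest -v super_test.SuperTestCase.test_all_scopes_foo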

So, in the end: all gain, no loss, and a fun little piece of Python code :)

Read more
Robin Winslow

Despite some reservations, it looks like HTTP/2 is very definitely the future of the Internet.

Speed improvements

HTTP/2 may not be the perfect standard, but it will bring with it many long-awaited speed improvements to internet communication:

  • Sending of many different resources in the first response
  • Multiplexing requests to prevent blocking
  • Header compression
  • Keeping connections alive
  • Bi-directional communication

Changes in long-held performance practices

I read a very informative post today (via Web Operations Weekly) which laid out all the ways this will change some deeply embedded performance principles for front-end developers, practices such as concatenating assets and serving them from multiple domains.

Each of these practices is a hack that makes website setups more complex and more opaque, with the goal of speeding up front-end performance by working around limitations in HTTP. Fortunately, these somewhat ugly practices are no longer necessary with HTTP/2.

Importantly, Matt Wilcox points out that in an HTTP/2 world, these practices might actually slow down your website, for the following reasons:

  • If you serve concatenated CSS, Javascript or image files, it’s likely you’re sending more content than you strictly need to for each page
  • Serving assets from different domains prevents HTTP/2 from reusing existing connections, forcing it to open extra ones

But not yet…

This is all very exciting, but note that we can’t and shouldn’t start changing our practices yet. Even server-side support for HTTP/2 is still patchy, with nginx only promising full support by the end of 2015 (with Microsoft’s IIS, surprisingly, putting other servers to shame).
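If you are curious whether a given server already negotiates HTTP/2, curl offers a quick check, provided your build of curl was compiled with HTTP/2 support (the URL is just a placeholder):

    # request HTTP/2 and inspect the protocol reported in the response headers
    curl -I --http2 https://example.com/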

But of course the main limiting factor will, as usual, be browsers:

  • Firefox leads the way, with support since version 36
  • Chrome has support for spdy4 (the precursor to HTTP/2), but it isn’t enabled by default yet
  • Internet Explorer 11 supports HTTP/2 only in Windows 10 beta

As usual, the main limiting factor will be waiting for the market share of older versions of Internet Explorer to drop off. Braver organisations may want to be progressive by deliberately slowing down the experience for people on older browsers, to speed it up for the more up-to-date ones and hence push adoption of good technology.

If you want to get really clever, you could serve a different website structure based on the user agent string, but this would really be a pain to implement and I doubt many people would want to do this.

Even with the most progressive strategy, I doubt anyone will be brave enough to drop decent HTTP/1 performance until at least 2016, as this is when nginx support should land; Windows 10 and therefore IE 11 will have had some time to gain traction and of course Internet Explorer market share in general will have continued to drop in favour of Chrome and Firefox.

TL;DR: We front-end developers should be ready to change our ways, but we don’t need to worry about it just yet.

Originally posted on robinwinslow.co.uk.

Read more
Victor Palau

I recently blogged about making a scope in 5 minutes using YouTube. I have also seen a fair amount of new scopes being created using RSS. By far, my favourite way to use scopecreator is Twitter.

If you want to check a few examples, I have previously published Twitter-based scopes like breaking news, la liga and a few others. Today, I give you Formula One:


The interesting thing about Twitter is that many brands post minute-by-minute updates, which makes it a really good source for scopes.

To create a Formula One scope, I started by going to Twitter and creating a list under my scope account (you can use your personal account). The list contains several relevant “official” Formula One accounts. Using Twitter, I can then update the sources by adding and removing accounts from the list, without the user needing to download an update to the scope.

Again, it took me about 5 min to get a working version of the scope. Here is what I needed to do:

  • First, I followed Chris’ instructions to install the scope creator tool.
  • Once I had it set up on my laptop, I ran:
    scopecreator create twitter vtuson f1
    cd f1
  • Next, I configured the scope. The configuration is done in a json file called manifest.json. This file describes what you will later publish to the store. You need to care about “title”, “description”, “version” and “maintainer”; the rest are values populated by the tool:
    scopecreator edit config
    {
    "description": "Formula One scope",
    "framework": "ubuntu-sdk-14.10",
    "architecture": "armhf",
    "hooks": {
    "f1": {
    "scope": "f1",
    "apparmor": "scope-security.json"
    }
    },
    "icon": "icon",
    "maintainer": "Your Name <yourname@packagedomain>",
    "name": "f1.vtuson",
    "title": "Formula One",
    "version": "0.2"
    }
  • The next step was to set up the branding. Easy! Branding is defined in an .ini file. “DisplayName” will be the name listed in the “manage” window once installed, and will also be the title of your scope if you don’t use a “PageHeader.Logo”. The [Appearance] section describes the colours and logos to use when branding a scope:
    scopecreator edit branding
    [ScopeConfig]
    ScopeRunner=./f1.vtuson_f1 --runtime %R --scope %S
    DisplayName=Formula One
    Description=This is an Ubuntu search plugin that enables information from Yelp $
    Author=Canonical Ltd.
    Art=
    Icon=images/icon.png
    SearchHint=Search
    [Appearance]
    PageHeader.Background=color:///#D51318
    PageHeader.ForegroundColor=#FFFFFF
    PreviewButtonColor=#D51318
  • The final part is to define the departments (drop-down menu) for the scope. This is also a json file, and it is unique to the twitter scope template. You can use either “list” or “account” (or both) as departments. The id is the Twitter handle of the list or the account. For lists you also need to specify, in the configuration section, which account holds the list. As I defined a single entry, the Formula One scope will have no drop-down menu.
    scopecreator edit channels
    {
    "departments": [
    {
    "name":"Formula One",
    "type":"list",
    "id":"f1"
    }
    ],
    "configuration": {
    "list-account":"canonical_scope",
    "openontwitter":"See in Twitter",
    "openlink":"Open",
    "retweet":"Retweet",
    "favorite": "Favourite",
    "reply":"Reply"
    }
    }

After this, the only thing left to do is replace the placeholder icon with a relevant logo:
~/f1/f1/images/logo.png
And build, check and publish the scope:
scopecreator build

This last command generates the click file that you need to upload to the store. If you have a device (for example a Nexus 4) or an emulator, it can also install the scope so you can test it. If you have any issues getting the scope to run, you might want to check your json files on http://jsonlint.com/. It is a great web tool that will help you make sure your json doc is shipshape!
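You can also do that check offline: Python ships a json.tool module that validates and pretty-prints a file from the command line (the file name below assumes you are inside the scope's folder):

    # prints the formatted JSON on success, or a parse error with the line number
    python3 -m json.tool manifest.json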

It is super simple to create a scope from a Twitter list! So what are you going to create next?


Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20150310 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Vivid Development Kernel

We have rebased our Vivid kernel to the first upstream stable
release and uploaded it, i.e. 3.19.0-8.8. Please test and let us
know your results once it’s available.
This is also a reminder that kernel freeze for Vivid is ~4wks away, on
Thurs Apr 9. If you have any patches which need to land for 15.04,
please make sure to submit those sooner rather than later.
—–
Important upcoming dates:
Thurs Mar 26 – Final Beta (~2 weeks away)
Thurs Apr 09 – Kernel Freeze (~4 weeks away)
Thurs Apr 23 – 15.04 Release (~7 weeks away)


Status: CVEs

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates

Status for the main kernels, until today:

  • Lucid – None (no update)
  • Precise – Verification and Testing
  • Trusty – Verification and Testing
  • Utopic – Verification and Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    Current cycle: 27-Feb through 21-Mar

    27-Feb Last day for kernel commits for this cycle
    01-Mar – 07-Mar Kernel prep week.
    08-Mar – 21-Mar Bug verification; Regression testing; Release


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more
Kyle Nitzsche

AptBrowser QML/C++ App

I've made a QML/C++ app called aptBrowser as an exercise in:

  • QML declarative GUI that drives
  • C++ backend threads

That is, the GUI provides five buttons that kick off C++ threads that do the backend work and provide the results back to QML.

So the GUI is always responsive (non-blocking).

What aptbrowser does

The user enters a Debian package name (and is told if it is not valid) and taps one of five buttons that do the following:
  • Show the packages this package depends on ("Depends")
  • Show the packages this package recommends ("Recommends")
  • Show the packages that depend on this package ("Parent Depends")
  • Show the packages that recommend this package ("Parent Recommends")
  • Show the apt-cache policy for this package ("Policy")
The data for all but the last ("Policy") are returned as flickable lists of buttons. When you click any one, it becomes the current package, and the GUI and displayed data adjust accordingly.

When you click any of the buttons, the orange indicator square to its left turns purple and starts spinning; when the C++ backend returns data, the indicator turns orange again and stops spinning.

Note that the Parent Depends and Parent Recommends actions can take a long time. This has nothing to do with this app. This is simply how long it takes to first get a package's parents and then, for each, find its type of relationship (depends or recommends) to our package of interest. Querying the apt cache is time consuming.
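Incidentally, the five buttons map onto queries you can also try by hand with apt-cache. This is my sketch of the correspondence, not code from the app, and bash is just an example package:

    apt-cache depends bash     # what the package depends on and recommends
    apt-cache rdepends bash    # the packages that depend on (or recommend) it
    apt-cache policy bash      # the policy output behind the "Policy" button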

Where is aptbrowser

Store

Because the app queries the apt cache, it must run unconfined at the moment, and therefore it cannot go into the store.

The click

This is an armhf click pkg for the ubuntu-sdk-14.10 framework (compiled against vivid).

The source

  • bzr branch lp:aptbrowser

Screenshots

Read more
Hardik Dalwadi

Hello world!

Welcome to Canonical Voices. This is your first post. Edit or delete it, then start blogging!

Read more