Canonical Voices

Michael Hall

There’s a saying in American political debate that is as popular as it is wrong: one side appeals to our country’s democratic ideals, and the other side immediately counters with “The United States is a Republic, not a Democracy”. I’ve noticed a similar misunderstanding happening in open source culture around the phrase “meritocracy” and the negatively charged “oligarchy”. In both cases, though, these are not mutually exclusive terms. In fact, they don’t even describe the same thing.

Authority

One of these terms describes where the authority to lead (or govern) comes from. In US politics, that’s the term “republic”, which means that the authority of the government is given to it by the people (as opposed to divine right, force of arms, or inheritance). For open source, this is where “meritocracy” fits in: it describes the authority to lead and make decisions as coming from the “merit” of those invested with it. Now, merit is hard to define objectively, and in practice it’s the subjective opinion of those who can direct a project’s resources that decides who has “merit” and who doesn’t. But it is still an important distinction from projects where the authority to lead comes from ownership of a project (either by an individual or their employer).

Enfranchisement

History can easily provide a long list of republics which were not representative of the people. That’s because even if authority comes from the people, it doesn’t necessarily come from all of the people. The USA can be accurately described as a democracy, in addition to a republic, because participation in government is available to (nearly) all of the people. Open source projects, even if they are in fact meritocracies, will vary in what percentage of their community is allowed to participate in leading them. As I mentioned above, who has merit is determined subjectively by those who can direct a project’s resources (including human resources), and if a project restricts that to only a select group it is in fact also an oligarchy.

Balance and Diversity

One of the criticisms leveled against meritocracies is that they don’t produce diversity in a project or community. While this is technically true, it’s not a failing of meritocracy; it’s a failing of enfranchisement, which, as described above, is not what the term meritocracy defines. It should be clear by now that meritocracy is a spectrum, ranging from the democratic on one end to the oligarchic on the other, with a wide range of options in between.

The Ubuntu project is, in most areas, a meritocracy. We are not, however, a democracy where the majority opinion rules the whole. Nor are we an oligarchy, where only a special class of contributors has a voice. We like to use the term “do-ocracy” to describe ourselves, because enfranchisement comes from doing, that is, making a contribution. And while enfranchisement is limited to those who do make contributions, being able to make those contributions in the first place is open to anybody. It is important for us, and part of my job as a Community Manager, to make sure that anybody with a desire to contribute has the information, resources, and access to do so. That is what keeps us from sliding towards the oligarchic end of the spectrum.

 

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20141216 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Vivid Development Kernel

The master-next branch of our Vivid kernel remains rebased to the
final v3.18 upstream kernel. We have pushed uploads to our team’s PPA
for preliminary testing. We are still debating whether to upload to the
archive after Alpha 1 releases this week. However, we may opt to wait
until everyone returns from holiday after the new year.
—–
Important upcoming dates:
Thurs Dec 18 – Vivid Alpha 1 (~2 days away)
Fri Jan 9 – 14.04.2 Kernel Freeze (~3 weeks away)
Thurs Jan 22 – Vivid Alpha 2 (~5 weeks away)
Thurs Feb 5 – 14.04.2 Point Release (~7 weeks away)


Status: CVEs

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Utopic/Trusty/Precise/Lucid

Status for the main kernels, as of today:

  • Lucid – Prep
  • Precise – Prep
  • Trusty – Prep
  • Utopic – Prep

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 12-Dec through 10-Jan
    ====================================================================
    12-Dec Last day for kernel commits for this cycle
    14-Dec – 20-Dec Kernel prep week.
    21-Dec – 10-Jan Bug verification; Regression testing; Release


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more
Dustin Kirkland


As promised last week, we're now proud to introduce Ubuntu Snappy images on another of our public cloud partners -- Google Compute Engine.
In the video below, you can join us walking through the instructions we have published here.
Snap it up!
:-Dustin

Read more
Daniel Holbach

For some time we have had training materials available for learning how to write Ubuntu apps.  We’ve had a number of folks organising App Dev School events in their LoCo team. That’s brilliant!

What’s new now are training materials for developing scopes!

It’s actually not that hard. If you have a look at the workshop, you can prepare yourself quite easily for giving the session at a local event.

As we are working on an updated developer site right now, for the time being take a look at the following pages if you’re interested in running such a session yourself:

I would love to get feedback, so please let me know how the materials work out for you!

Read more
Daniel Holbach

I’m very happy that folks took notes during and after the meeting to bring up their ideas, thoughts, concerns and plans. It got a bit unwieldy, so Elfy put up a pad which summarises it and is meant to discuss actions and proposals.

Today we are going to have a meeting to discuss what’s on the “actions” pad. That’s why I thought it’d be handy to put together a bit of a summary of what people generally brought up. They’re not my thoughts; I’m just putting them up for further discussion.

Problem statements

  • Feeling that people innovate *with* Ubuntu, not *in* Ubuntu.
  • Perception of contributor drop in “older” parts of the community.
    • Less activity at UDS/vUDS/UOS events (this was discussed at UOS too; maybe we need a committee which finds a new vision for Ubuntu Community Planning?)
    • Less activity in LoCos (lacking a sense of purpose?)
    • No drop in members/developers.
  • Less activity in Canonical-led projects.
  • We don’t spend marketing money on social media. Build a pavement online.
  • Downloading a CD image is too much of a barrier for many.
  • Our “community infrastructure” did not scale with the number of users.
  • Some discussion about it being hard to become a LoCo team. Bureaucracy from the LoCo Council.
  • We don’t have enough time to train newcomers.
  • Language barriers make it hard for some to get involved.
  • Canonical does a bad job announcing their presence at events.

Questions

  • Why are fewer people innovating in Ubuntu? Is Canonical driving too much of Ubuntu?
  • Why aren’t more folks stepping up into leadership positions? Mentoring? Lack of opportunities? More delegation? Do leaders just come in and lead because they’re interested?
  • Lack of planning? Do we re-plan things at UOS events, because some stuff never gets done? Need more follow-through? More assessment?

Proposals

  • community.ubuntu.com: More clearly indicate Canonical-led projects? Detail active projects, with point of contact, etc? Clean up moribund projects.
  • Make Ubuntu events more about “doing things with Ubuntu”?
  • Ubuntu Leadership Mentoring programme.
  • Form more of an Ubuntu ecosystem, allowing people to earn money with Ubuntu.

Join the hangout on ubuntuonair.com on Friday, 12th December 2014, 16 UTC.

Read more
UbuntuTouch

[Original] An introduction to Ubuntu Scopes and the development workflow

In this video we introduce Scopes on the Ubuntu platform and explain how to develop them.


Video: http://v.youku.com/v_show/id_XODQ3MDY5NTQ0.html

Source code from the video: bzr branch lp:~liu-xiao-guo/debiantrial/openmap

Author: UbuntuTouch, published 2014-12-12 15:13:38. Original link

Read more
Daniel Holbach

It’s fantastic that we have more discussion about where we want our community to go. We get ideas out of it, people communicate and reach a common understanding of issues. Jono’s blog post and the ubuntu-community-team mailing list have generated a lot of good stuff already. Last week we had an IRC meeting with the CC and discussed governance and leadership there.

We took quite a few notes, and Elfy set up a doc where we note down actions. I would like to suggest we have a meeting to work through them.

Please

  • use Elfy’s actions doc for submitting agenda items,
  • make sure your agenda item is a concrete proposal or something which could be turned into work items,
  • make sure you’re there,
  • add your name to it!

Looking forward to seeing you there! :-)

Read more
Sergio Schvezov

Ubuntu Core

Ubuntu Core is what we’ve been working on for a while now, and it has been an interesting ride. It was developed completely in the open; there was just no real promotion about it until we were ready.

As you may have noticed, we use ubuntu-device-flash to create this core image, and for development we used it across the board with the core subcommand. We did learn a couple of things from the phone and decided to just provide a static image that we could make sure would work for everyone giving it a try (aka more QA). In essence you can still upgrade, and if something is not to your liking, just roll back; it’s that neat. So in summary, ubuntu-device-flash today is just a step in the release process to get to the final image.

Yesterday I played around with creating a snap for camlistore and it was a breeze. To get it, just snap install camlistore; all the command line tools are in there, provided by the binaries stanza in package.yaml. The camlistored daemon is declared in the services list, where I just needed to provide a start command, which in the background creates a systemd unit.
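
To make this concrete, here is a rough sketch of what such a package.yaml might look like; the names and paths are illustrative assumptions, not the actual camlistore package:

# Hypothetical package.yaml for a camlistore-like snap; every value here
# is a placeholder for illustration.
cat > package.yaml <<'EOF'
name: camlistore
version: 0.8
vendor: Example Maintainer <maintainer@example.com>
binaries:
 - name: bin/camput
services:
 - name: camlistored
   start: bin/camlistored
EOF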

The beauty here is that I don’t really need to know much of the underlying technology, and that is awesome for just quickly creating a snap.

What is missing here, though, is an easy way to configure the package that was just installed. For now, it is a matter of looking at the file system layout and going to /var/lib/apps/<app-name>/<version>/, which here would be /var/lib/apps/camlistore/0.8; within it we’d have .config/camlistore/server-config.json, and in most cases you’d want to set up your authentication in there.

And here’s the mandatory screenshot of this running on my kvm instance:

Read more
Dustin Kirkland



A couple of months ago, I re-introduced an old friend -- Ubuntu JeOS (Just enough OS) -- the smallest, (merely 63MB compressed!) functional OS image that we can still call “Ubuntu”.  In fact, we call it Ubuntu Core.

That post was a prelude to something we’ve been actively developing at Canonical for most of 2014 -- Snappy Ubuntu Core!  Snappy Ubuntu combines the best of the ground-breaking image-based Ubuntu remix known as Ubuntu Touch for phones and tablets with the base Ubuntu server operating system trusted by millions of instances in the cloud.

Snappy introduces transactional updates and atomic, image based workflows -- old ideas implemented in databases for decades -- adapted to Ubuntu cloud and server ecosystems for the emerging cloud design patterns known as microservice architectures.

The underlying, base operating system is a very lean Ubuntu Core installation, running on a read-only system partition, much like your iOS, Android, or Ubuntu phone.  One or more “frameworks” can be installed through the snappy command, which is an adaptation of the click packaging system we developed for the Ubuntu Phone.  Perhaps the best sample framework is Docker.  Applications are also packaged and installed using snappy, but apps run within frameworks.  This means that any of the thousands of Docker images available in DockerHub are trivially installable as snap packages, running on the Docker framework in Snappy Ubuntu.

Take Snappy for a Drive


You can try Snappy for yourself in minutes!

You can download Snappy and launch it in a local virtual machine like this:

$ wget http://cdimage.ubuntu.com/ubuntu-core/preview/ubuntu-core-alpha-01.img
$ kvm -m 512 -redir :2222::22 -redir :4443::443 ubuntu-core-alpha-01.img

Then, SSH into it with password 'ubuntu':

$ ssh -p 2222 ubuntu@localhost

At this point, you might want to poke around the system.  Take a look at the mount points, and perhaps try to touch or modify some files.
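
For instance, you can check how the system partition is mounted (this particular command is my own addition to the walkthrough, plain shell rather than anything snappy-specific):

$ mount | grep ' / '

The root filesystem should show up as read-only, which explains the failures below.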


$ sudo rm /sbin/init
rm: cannot remove ‘/sbin/init’: Permission denied
$ sudo touch /foo
touch: cannot touch ‘/foo’: Permission denied
$ apt-get install docker
apt-get: command not found

Rather, let's have a look at the new snappy package manager:

$ sudo snappy --help



And now, let’s install the Docker framework:

$ sudo snappy install docker

At this point, we can do essentially anything available in the Docker ecosystem!

Now, we’ve created some sample Snappy apps using existing Docker containers.  For one example, let’s now install OwnCloud:

$ sudo snappy install owncloud

This will take a little while to install, but eventually, you can point a browser at your own private OwnCloud image, running within a Docker container, on your brand new Ubuntu Snappy system.
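
Given the port redirections in the kvm command above (host port 4443 forwarded to guest port 443), the OwnCloud instance should be reachable from the host at a URL along these lines; this is an inference from the -redir flags, assuming the snap serves HTTPS on the standard port:

$ xdg-open https://localhost:4443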

We can also update the entire system with a simple command and a reboot:
$ sudo snappy versions
$ sudo snappy update
$ sudo reboot

And we can rollback to the previous version!
$ sudo snappy rollback
$ sudo reboot

Here's a short screencast of all of the above...


While the downloadable image is available for your local testing today, you will very soon be able to launch Snappy Ubuntu instances in your favorite public (Azure, GCE, AWS) and private clouds (OpenStack).


Enjoy!
Dustin

Read more
jdstrand

Ubuntu Core with Snappy was recently announced, and a key ingredient of snappy is security. Snappy applications are confined by AppArmor, and the confinement story for snappy is an evolution of the security model for Ubuntu Touch. The basic concepts for confined applications and the AppStore model pertain to snappy applications as well. In short, snappy applications are confined using AppArmor by default, and this is achieved through a system that is easy to understand, easy to use, and developer-friendly. Read the snappy security specification for all the nitty gritty details.

A developer doc will be published soon.



Read more
facundo

Logging levels


When I first started with the concept of logging, having levels seemed like overkill to me. With time and experience I realised they are indispensable. :)

The Python standard library has a logging module that comes with several predefined levels. Here they are, with a small note on how I use each one, plus a real-life example (taken from my Encuentro program or from fades).

- CRITICAL: I don’t think I’ve ever used it :)

- ERROR: problems of all kinds; things that shouldn’t happen, and that are trouble when they do; often the program doesn’t continue, or continues only partially or in a limited way, after a line of this kind is logged. In this example I log that the list of backends couldn’t be downloaded during an update (in this case the user is also notified through a small window, and the program goes on, although the update didn’t complete):

    try:
        _, backends_file = yield utils.download(BACKENDS_URL)
    except Exception, e:
        logger.error("Problem when downloading backends: %s", e)
        tell_user("Hubo un PROBLEMA al bajar la lista de backends:", e)
        return

- WARNING: to indicate that something happened that in general shouldn’t; these are usually not bad things, just anomalous ones, and they don’t represent a problematic situation. In the following example I record that I’m ignoring the 'quiet' option the user passed (because they also passed the 'verbose' option, which takes precedence):

    if verbose and quiet:
        l.warning("Overriding 'quiet' option ('verbose' also requested)")

- INFO: general information about the program’s operation, things that are essential to know and that we always want recorded; in general this doesn’t involve a large number of lines, but it lets you follow the program’s execution flow from a high level. Programs shipped to users or running on servers are normally configured to actually write to disk from this level up. The following two lines show the first things Encuentro logs at startup: which Python version it is running on, and which version of itself it is:

    log.info("Running Python %s on %r", sys.version_info, sys.platform)
    log.info("Encuentro version: %r", version)

- DEBUG: all the information needed to analyse the program’s execution in detail. It can involve large amounts of data, and may even become a problem in terms of disk usage or performance, but programs are generally not run at this level, only during development or when trying to analyse a specific problem. It’s not unusual, for example, to ask the user to run the program with a special parameter that configures the logs at this level and to try to reproduce the problem they had, so a forensic analysis of the situation can be done afterwards. In the following example I record that fades had to install pip by hand in the virtualenv:

    logger.debug("Installing PIP manually in the virtualenv")

In very complex systems I have needed a level below DEBUG, to log all the information that might be useful for analysing the program’s behaviour but that would normally be an excess of data (which complicates everything from reading the records to simply managing the files). So, for this purpose, we used a TRACE level, which was almost never turned on.

The snag is that the logging module doesn’t have a TRACE level, but we created it by hand:

    TRACE = 5
    logging.addLevelName(TRACE, 'TRACE')

Note that 5: DEBUG is 10, so TRACE ends up “below” it. Of course, for everything to work we had to use a custom Logger (registered with logging.setLoggerClass, so that getLogger hands out instances of it):

    class Logger(logging.Logger):
        """Logger that support our custom levels."""

        def trace(self, msg, *args, **kwargs):
            """log at TRACE level"""
            if self.isEnabledFor(TRACE):
                self._log(TRACE, msg, args, **kwargs)

For more information about Python’s logging infrastructure, and general advice on what, how, and when to log, you can see my talk on the subject (these are the slides, and at some point the video of this same talk, which I gave at the PyCon in Rafaela, will be published here).

Read more
Nicholas Skaggs

I thought I would add a little festivity to the holiday season, quality style. In case your holidays just are not the same without a little quality in your life, allow me to share how you can get involved.

There are opportunities for every role listed on the QA wiki. Testers and test writers are both needed. Testing and writing manual tests can be learned by anyone; no coding required. That said, if you have skills or an interest in technical work, I would encourage you to help out. You will learn by doing and get help from others while you do it.

Now onto the good stuff! What can you do to help ubuntu this cycle from a quality perspective?

Dogfooding
There is an ever present need for brave folks willing to simply run the development version of ubuntu and use it as a daily machine throughout the cycle. It's one of the best ways for us as a community to uncover bugs and issues, in particular things that regress from the previous release. Upgrade to vivid today and see what you can break!

QATracker
This tool is written in Drupal 7 and runs the iso.qa.ubuntu.com and packages.qa.ubuntu.com sites. These sites are used to record and view the results of all of our manual testing efforts. Currently dkessel is leading the effort on implementing some needed UI changes. The code and more information about the project can be found on launchpad. The tracker is one of our primary tools and needs your help to become friendly for everyone to use.

In addition, a charm would be useful to simplify setting up a development environment. The charm can be based upon the existing drupal charm. At the moment this work is ready for someone to jump in.

Unity8
Running unity8 as a full-time desktop is a personal goal I have for this cycle. I hope some others might also want to be early adopters and join me in this goal. For now you can help by testing the unity8 desktop. Have a look at running unity in lxc for an easy way to run unity8 today on your machine. Use it, test it, and offer feedback. I'll be talking more about unity8 as the cycle progresses and opportunities to test new features aimed at the desktop appear.

Core Apps
The core apps project is an excellent way to get involved. These applications have been lovingly developed by community members just like you. Many of the teams are looking for help in writing tests and for someone who can help bring a testing mindset and eye to the work. As of this writing, the docviewer, terminal and calculator teams would specifically love your help. The core apps hackdays are happening this week, so drop by and introduce yourself to get started!

Manual Tests
Like the sound of writing tests, but the idea of writing code turns you off? Manual tests are needed as well! They are written in English and are easy to understand and write. Manual tests include everything you see on the qatracker and are managed as a launchpad project. This means you can pick a bug and “fix it” by submitting a merge request. The bugs include both fixes to existing tests and requests for new testcases.

Images
As always there are images that need testing. Testing milestones occur later in the cycle which involve everyone helping to test a specific set of images. In the meantime, daily images are generated that have made it through the automated tests and are ready for manual testing. Booting an image in a live session is a great way to check for regressions on your machine. Doing this early in the cycle can help make sure your hardware and others like it experience a regression free upgrade when the time comes.

Triaging
After subjecting software to testing, bugs are naturally found. These bugs then need to be verified and triaged. The bugsquadders, as they are called, would be happy to help you learn to categorize or triage bugs and do other tasks.

No matter how you choose to get involved, feel free to contact me for help if needed. Most of all, Happy Testing!


Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20141209 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Vivid Development Kernel

The master-next branch of our Vivid kernel has been rebased to the
final v3.18 upstream kernel. We have pushed uploads to our team’s PPA
for preliminary testing. We’ll likely upload to the official archive
soon.
—–
Important upcoming dates:
Thurs Dec 18 – Vivid Alpha 1 (~1 week away)
Fri Jan 9 – 14.04.2 Kernel Freeze (~4 weeks away)
Thurs Jan 22 – Vivid Alpha 2 (~6 weeks away)
Thurs Feb 5 – 14.04.2 Point Release (~8 weeks away)


Status: CVEs

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Utopic/Trusty/Precise/Lucid

Status for the main kernels, as of today:

  • Lucid – Testing
  • Precise – Testing
  • Trusty – Testing
  • Utopic – Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 21-Nov through 13-Dec
    ====================================================================
    21-Nov Last day for kernel commits for this cycle
    23-Nov – 29-Nov Kernel prep week.
    30-Nov – 13-Dec Bug verification; Regression testing; Release


Open Discussion or Questions? Raise your hand to be recognized

No Open Discussions.

Read more
Daniel Holbach

The call for an Ubuntu Foundation has come up again. It has been discussed many times before, ever since an announcement was made many years ago which left a number of people confused about the state of things.

The way I understood the initial announcement was that a trust had been set up, so that if aliens ever kidnapped our fearless leader, or if he decided that beekeeping was more interesting than Ubuntu, we could still go on and bring the best flavour of linux to the world.

Ok, now back to the current discussion. An Ubuntu Foundation seems to have quite an appeal to some. The question to me is: which problems would it solve?

Looking at it from a very theoretical point of view, an Ubuntu foundation could be a place where you separate “commercial” from “public” interests, but how would this separation work? Who would work for which of the entities? Would people working for the Ubuntu foundation have to review Canonical’s paperwork before they can close deals? Would there be a board where decisions have to be pre-approved? Which separation would generally happen?

Right now, Ubuntu’s success is closely tied to Canonical’s success. I consider this a good thing. With every business win of Canonical, Ubuntu gets more exposure in the world. Canonical’s great work in the support team, in the OEM sector or when closing deals with governments benefits Ubuntu to a huge degree. It’s like two sides of a coin right now. Also: Canonical pays the bills for Ubuntu’s operations. Data centers, engineers, designers and others have to be paid.

In theory it all sounds fine: “you get to have a say”, “more transparency”, etc. I don’t think many realise, though, that this will mean that additional people will have to sift through legal and other documents, that more people will be busy writing reports and summarising discussions, that there will be more need for admin, that customers will have to wait longer, and that this will in general cost more time and money.

I believe that bringing in a new layer will create incredible amounts of work, open up endless possibilities for politics, and easily bring things to a standstill.

Will this fix Ubuntu’s problems? I absolutely don’t think so. Could we be more open, more inspiring and more inviting? Sure, but demanding more transparency and more separation is not going to bring that.

Read more
Alan Pope

Ubuntu Core Apps
As we come to the end of 2014, looking forward to new devices running Ubuntu in our immediate future, it’s time for one last set of Hack Days of the year.

Next week, from Monday 8th December till Friday 12th we’re going to be having another set of Core Apps Hack Days. We’ve had a few of these this year which have been a great way to focus attention on specific applications and their dependent components in the platform. They’re also a nice gateway for getting new people into the Core Apps project and Ubuntu development in general.

The Core Apps are community maintained Free Software applications which were created for Ubuntu devices, but also work on the Ubuntu desktop. We welcome new developers, testers, autopilot writers, artists and translators to get involved in these exciting projects.

The schedule

As with previous hack days we’re going to focus on specific apps each day, running from 9:00 UTC until 21:00 UTC. In summary, our schedule looks like this:-

  • Monday 8th – Calculator, Terminal & Clock
  • Tuesday 9th – File Manager & Calendar
  • Wednesday 10th – Music & Document Viewer – QA Day Workshop: writing tests for the core apps (18:00 UTC)
  • Thursday 11th – Shorts & Reminders
  • Friday 12th – Weather & Dekko (email)

A QA treat

Creating core apps involves close coordination between developers and designers to provide the right set of features, high usability and appealing visuals. All these would be nothing without a suite of automated tests that are run to ensure the features are rock-solid and that no regressions are introduced with new development.

All core apps include Autopilot and QML tests that we are constantly expanding to increase test coverage. Writing tests for core apps is a nice way to get started contributing. All you’ll need is some Python knowledge for Autopilot tests or QML for QML tests. Our quality man, Nicholas Skaggs, will be running a live video workshop on Wednesday Dec 10th, at 18:00 UTC, as an on-ramp to learn how to create tests.

Join the fest

The Hack Days will be happening live in the #ubuntu-app-devel IRC channel on Freenode.

The QA Workshop will also be happening live on Ubuntu On Air. You can watch the video and ask your questions on the same IRC channel.

We’ll blog more details about the apps each day next week with links to specific bugs, tasks and goals, so stay tuned!

As always we greatly appreciate all contributions to the Core Apps project during the Hack Days, but welcome community efforts all year round, so if this week doesn’t work for you, feel free to drop by #ubuntu-app-devel on Freenode any time and speak to me, popey.

Read more
bmichaelsen

To Win in Toulouse

Now the only thing a gambler needs
Is a suitcase and a trunk.
– Animals, The House of the Rising Sun

So, like many others, I have been to the LibreOffice Hackfest in Toulouse which — unlike many of our other Hackfests — was part of a bigger event: Capitole du Libre. As we had our own area and were not 30+ hackers, this also had the advantage that we got to work more quickly. And while I still had some boring administrative work to do, this is a Hackfest where I actually got to do some coding. I looked for some bookmark-related bugs in Writer, but the first bugs I looked at were just too well suited to be Easy Hacks: fdo#51741 (“Deleting bookmark is not seen as modification of document”) and fdo#56116 (“Names of bookmarks should allow all characters which are valid in HTML anchor names (missing: ‘:’ and ‘.’)”). Both were made Easy Hacks, and both are fixed on master now. I then fixed fdo#85542 (“DOCX import of overlapping bookmarks”), which proved slightly more work than expected, and provided a unit test for it so it never comes back. I later learned that the second part was entirely non-optional, as Markus promised he would not have let me leave Toulouse without writing a unit test for committed code. I have to admit that that is a supportable position.

Toulouse Hackfest Room

Scenes like the above were actually rather rare, as we were mostly working over our notebooks. One thing I came up with at the Hackfest, but didn’t finish there, was a set of clang plugins for finding cascading conditional operators, and conditional operators that have assignments as a side effect in their midst. While I found nothing in sw (Writer) as mindboggling as the tweet that inspired these plugins, I found some impressive expressions that certainly wouldn’t be a joy to step through in gdb (or even better: set a breakpoint in) when debugging, and fixed those. We could probably make a few Easy Hacks out of what these (or similar) plugins find outside of sw/ (I only looked there for now) — those are reasonably easy to refactor, but you don’t want to do that in the middle of a debugging session. While at it, I also looked at clang’s “value assigned, but never read” hints. Most were harmless, but also trivial to get rid of. On the other hand, some of them pointed to real logic errors that are otherwise hard to see. Like this one, which has been hiding — if git is to be believed — in plain sight ever since OpenOffice.org was originally open sourced in 2000. All in all, this experience is encouraging: now that our Coverity defect density is just a rounding error above zero, getting more fancy clang plugins might be promising.

Just one week after the Hackfest in Toulouse, there was another event LibreOffice took part in: the Bug Squashing Party in Munich — it’s encouraging to see Jonathan Riddell being a committer to LibreOffice too now. But that is not all; we have more events coming up: The Document Foundation and LibreOffice will have an assembly at 31c3 in Hamburg, and you are most welcome to drop by there! And then there will be FOSDEM 2015 in Brussels, where LibreOffice will be present as usual.


Read more
Nicholas Skaggs

Creating multi-arch click packages

Click packages are one of the pieces of new technology driving the next version of ubuntu on the phone and desktop. In a nutshell, click packages allow application developers to easily package and deliver application updates independent of the distribution release or archive. Without going into the interesting technical merits and de-merits of click packages, this means the consumer gets faster application updates. But much of the discussion and usage of click packages until now has revolved around mobile. I wanted to talk about using click packages on the desktop and packaging clicks for multiple architectures.

The manifest file
Click packages follow a specific format. Click packages contain a payload of an application's libraries, code, artwork and resources, along with its needed external dependencies. The description of the package is found in the manifest file, which is what I'd like to talk about. The file must contain a few keys, but one of the recognized optional keys is architecture. This key allows specifying architectures the package will run on.

If an application contains no compiled code, simply use 'all' as the value for architecture. This accomplishes the goal of running on all supported architectures and many of the applications currently in the ubuntu touch store fall into this category. However, an increasing number of applications do contain compiled code. Here's how to enable support across architectures for projects with compiled code.
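
Before moving on: for the no-compiled-code case, the manifest needs just one line, shown here in the same style as the architecture example later in this post:

"architecture": "all",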

Fat packages
The click format, along with the ubuntu touch store, fully supports specifying one or more values for specific architecture support inside the application manifest file. Those values follow the same format as dpkg architecture names. In theory, if a project containing compiled code lists the architectures to support, click build should be able to build one package for all of them. However, for now this process requires a little manual intervention. So let’s talk about building a fat (or big boned!) package that contains support for multiple architectures inside a single click package.

Those who just want to skip ahead can check out the example package I put together using clock. This same package can be found in the store as multi-arch clock test. Feel free to install the click package on the desktop, the i386 emulator and an armhf device.

Building a click for a different architecture
To make a multi-arch package, a click package needs to be built for each desired architecture. Follow this tutorial on developer.ubuntu.com for more information on how to create a click target for each architecture. Once all the targets are set up, use the ubuntu sdk to build a click for each target. The end result is a click file specific to each architecture.

For example in creating the clock package above, I built a click for amd64, i386 and armhf. Three files were generated:

com.ubuntu.clock_3.2.176_amd64.click
com.ubuntu.clock_3.2.176_i386.click
com.ubuntu.clock_3.2.176_armhf.click

Notice the handy naming scheme allows for easy differentiation as to which click belongs to which architecture. Next, extract the compiled code from each click package. This can be accomplished by utilizing dpkg. For example,

dpkg -x com.ubuntu.clock_3.2.176_amd64.click amd64

Do this for each package. The result should be a folder corresponding to each package architecture.
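
With the three clock packages above, that amounts to the following; the loop is just a convenience, and the individual dpkg -x calls work just as well:

for arch in amd64 i386 armhf; do
    dpkg -x com.ubuntu.clock_3.2.176_${arch}.click ${arch}
done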

Next, copy one version of the package for use as the base of the multi-arch click package. In addition, remove all the compiled code under its lib folder. This folder will be repopulated with the compiled code extracted from the architecture-specific click packages.

cp -r amd64 multi
rm -rf multi/lib/*

Now there is a folder for each click package, and a new folder named multi that contains the application, minus any compiled code.

Creating the multi-arch click
Inside the extracted click packages is a lib folder. The compiled modules should be arranged inside, potentially inside an architecture subfolder (depending on how the package is built).

Copy all of the compiled modules into a new folder inside the lib folder of the multi directory. The folder name should correspond to the architecture of the compiled code. Here’s a list of the folder names for ARM, i386, and amd64 respectively.


arm-linux-gnueabihf
i386-linux-gnu
x86_64-linux-gnu


You can check the naming from an intended device by looking in the application-click.conf file.

grep ARCH /usr/share/upstart/sessions/application-click.conf

To use the clock package as an example again, here's a quick look at the folder structure:

lib/arm-linux-gnueabihf/...
lib/i386-linux-gnu/...
lib/x86_64-linux-gnu/...

The contents of lib/* from each click package I built earlier are under a corresponding folder inside the multi/lib directory. So, for example, the lib folder from com.ubuntu.clock_3.2.176_i386.click became lib/i386-linux-gnu/.
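
Spelled out as commands, the rearrangement looks roughly like this. It assumes each extracted package keeps its modules directly under lib/; as noted above, some builds add an architecture subfolder, in which case copy from there instead:

mkdir -p multi/lib/x86_64-linux-gnu multi/lib/i386-linux-gnu multi/lib/arm-linux-gnueabihf
cp -r amd64/lib/* multi/lib/x86_64-linux-gnu/
cp -r i386/lib/* multi/lib/i386-linux-gnu/
cp -r armhf/lib/* multi/lib/arm-linux-gnueabihf/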

Presto, magic package time! 
Finally, the manifest.json file needs to be updated to reflect support for the desired architectures. Inside the manifest.json file under the multi directory, edit the architecture key values to list all supported architectures for the new package. For example, to list support for ARM and x86 architectures,

"architecture": ["armhf", "i386", "amd64"],

To build the new package, execute click build multi. The resulting click should build and be named with a _multi.click suffix. This click can be installed on any of the specified architectures and is ready to be uploaded to the store.
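
Putting that last step together, with the output name following the naming scheme described earlier:

click build multi
# should produce com.ubuntu.clock_3.2.176_multi.click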

Caveats, nibbly bits and bugs
So apart from click not automagically building these packages, there is one other bug as of this writing: the resulting multi-arch click will fail the automated store review and instead enter manual review. To work around this, request a manual review. Upon approval, the application will enter the store as usual.

Summary
In summary: to create a multi-arch click package, build a click for each supported architecture. Then pull the compiled library code from each click and place it into a single click package. Next, modify the click manifest file to state all of the architectures supported. Finally, rebuild the click package!

I trust this explanation and example provide encouragement to include support for x86 platforms when creating and uploading a click package to the store. Undoubtedly there are other ways to build a multi-arch click; simply ensure all the compiled code for each architecture is included inside the click package. Feel free to experiment!

If you have any questions as usual feel free to contact me. I look forward to seeing more applications in the store from my unity8 desktop!

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20141202 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Vivid Development Kernel

The master-next branch of our Vivid kernel has been rebased to the
v3.18-rc7 upstream kernel. We have pushed uploads to our team’s PPA
for preliminary testing. We have still withheld uploading to
the archive while we iron out some final issues.
—–
Important upcoming dates:
Thurs Dec 18 – Vivid Alpha 1 (~2 weeks away)
Thurs Jan 22 – Vivid Alpha 2 (~7 weeks away)
Thurs Feb 5 – 14.04.2 Point Release (~9 weeks away)


Status: CVEs

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Utopic/Trusty/Precise/Lucid

Status for the main kernels, as of today (02-Dec):

  • Lucid – Verification & Testing
  • Precise – Verification & Testing
  • Trusty – Verification & Testing
  • Utopic – Verification & Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 21-Nov through 13-Dec
    ====================================================================
    21-Nov Last day for kernel commits for this cycle
    23-Nov – 29-Nov Kernel prep week.
    30-Nov – 13-Dec Bug verification; Regression testing; Release
    Note: Utopic and LTS-Utopic kernels are being respun due to a verification
    failure


Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

Read more
David Planella


The 5 weeks after the Ubuntu Scopes Showdown announcement are coming to an end; it’s time to start putting the pencils down and submitting your scopes for the judges to do their reviews.

While you’ve still got two days to fix some bugs and do some final polish, the 3rd of December is the last day for submissions to be accepted for the Showdown. Remember that to qualify, you’ll need to:

  • Register your scope for the contest
  • Submit your scope to the Ubuntu Software Store

Registering your scope

To register your scope for the judges’ review, you’ll simply need a couple of minutes to fill in the registration form. It might be worth filling it in advance, even if you are planning to upload your app at the last minute.

You can submit the form now and still upload new revisions of your app until the 3rd of December.

Register your scope for participation

Submitting your scope

Submitting your scope to the store should also be quick and easy. The upload workflow is exactly the same as for apps, and with automated reviews it takes just a few minutes from upload to your scope being available for everyone on the Ubuntu Software Store.

To ensure your scope is discoverable and looks good, you might want to check out the scope upload tips ›

And when you’re ready to start the upload, you can follow the 5-step process to get it published ›

Need help?

If you need help with any of the above, feel free to reach out in any of the channels below:

Looking forward to seeing your scopes in the store!

Read more
Daniel Holbach

Despite being an “old” technology with its share of problems, we still use mailing lists… a lot.  Some of the lists were cleaned up by the Community Council some time ago, especially ones that were created and then forgotten.

We do have a number of mailing lists, though, which are still active but don’t have enough (or enough active) moderators on board. What then happens is this:

List moderation

… which sucks.

It’s not very nice if you have lots and lots of good discussion not happening just because you had no time to tend to the moderation queue.

Some mailing lists receive quite a bit of spam, others get a lot of mails from folks who are not subscribed yet, but this really shouldn’t be a problem. If you run a popular mailing list and moderation gets too much of a hassle, please consider adding more moderators – if you ask nicely a bunch of folks will be happy to help out.

So my advice:

  1. If you ever registered a mailing list, please have a look at its moderation queue and see if you need help.
  2. If yes, please add more moderators.
  3. If you don’t do it yet, use listadmin – it’s the best thing since sliced bread, and keeping up with moderation in the future will be no problem at all (a rough sketch of a setup follows below).
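
As a rough illustration of the kind of setup listadmin expects (the address and password are placeholders; the listadmin man page has the exact directives), a config could look something like this:

# ~/.listadmin.ini -- placeholder values; see listadmin(1) for the full syntax
username you@example.com
password sekrit
ubuntu-somelist@lists.ubuntu.com

After that, running listadmin steps you through the pending moderation requests for each list.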

Read more