Canonical Voices

Robin Winslow

Nowadays free software is everywhere – from browsers to encryption software to operating systems.

Even so, it is still relatively rare for the code behind websites and services to be opened up.

Stepping into the open

Three years ago we started to move our website projects to GitHub, and we took that opportunity to start making them public. We started with the www.ubuntu.com codebase, and over the next couple of years almost all of our team's other sites followed suit.

canonical-websites org

At this point practically all the web team’s sites are open source, and you can find the code for each site in our canonical-websites organisation.

  • www.ubuntu.com
  • developer.ubuntu.com
  • www.canonical.com
  • partners.ubuntu.com
  • design.ubuntu.com
  • maas.io
  • tour.ubuntu.com
  • snapcraft.io
  • build.snapcraft.io
  • cn.ubuntu.com
  • jp.ubuntu.com
  • conjure-up.io
  • docs.ubuntu.com
  • tutorials.ubuntu.com
  • cloud-init.io
  • assets.ubuntu.com
  • manager.assets.ubuntu.com
  • vanillaframework.io

We’ve tried to make it as easy as possible to get them up and running, with accurate and simple README files. Each of our projects can be run in much the same way, and should work the same across Linux and macOS systems. I’ll elaborate more on how we manage this in a future post.

README example

We also have many supporting projects – Django modules, snap packages, Docker images etc. – which are all openly available in our canonical-webteam organisation.

Reaping the benefits

Opening up our sites in this way means that anyone can help out by making suggestions in issues or directly submitting fixes as pull requests. Both are hugely valuable to our team.

Another significant benefit of opening up our code is that it’s actually much easier to manage:

  • It’s trivial to connect third-party services, such as Travis, Waffle or Percy;
  • Similarly, our own systems – such as our Jenkins server – don’t need special permissions to access the code;
  • And we don’t need to worry about carefully managing user permissions for read access inside the organisation.

All of these tasks were previously surprisingly time-consuming.

Designing in the open

Shortly after we opened up the www.ubuntu.com codebase, the design team also started designing in the open, as Anthony Dillon recently explained.

Read more
Michael Hall

After a little over six years, I am embarking on a new adventure. Today is my last day at Canonical; it’s bittersweet saying goodbye precisely because it has been such a joy and an honor to work here with so many amazing, talented and friendly people. But I am leaving by choice, and for an opportunity that makes me as excited as leaving makes me sad.

Goodbye Canonical

I’ve worked at Canonical longer than I’ve worked at any company, and I can honestly say I’ve grown more here, both personally and professionally, than I have anywhere else. It launched my career as a Community Manager, learning from the very best in the industry how to grow, nurture, and excite a world full of people who share the same ideals. I owe so many thanks (and beers) to Jono Bacon, David Planella, Daniel Holbach, Jorge Castro, Nicholas Skaggs, Alan Pope, Kyle Nitzsche and now also Martin Wimpress. I also couldn’t have done any of this without the passion and contributions of everybody in the Ubuntu community who came together around what we were doing.

As everybody knows by now, Canonical has been undergoing significant changes in order to set itself down the road to where it needs to be as a company. And while these changes aren’t the reason for my leaving, they did force me to think about where I wanted to go with my future, and what changes were needed to get me there. Canonical is still doing important work; I’m confident it’s going to continue making a huge impact on the technology and open source worlds, and I wish it nothing but success. But ultimately I decided that where I wanted to be lay along a different path.

Of course I have to talk about the Ubuntu community here. As big an impact as Canonical has had on my life, it’s only a portion of the impact that the community has had. From the first time I attended a Florida LoCo Team event, I was hooked. I had participated in open source projects before, but that was when I truly understood what the open source community was about. Everybody I met, online or in person, went out of their way to make me feel welcome, valuable, and appreciated. In fact, it was the community that led me to work for Canonical in the first place, and it was the community work I did that played a big role in me being qualified for the job. I want to give a special shout out to Daniel Holbach and Jorge Castro, who built me up from a random contributor to a project owner, and to Elizabeth Joseph and Laura Faulty, who encouraged me to take on leadership roles in the community. I’ve made so many close and lasting friendships by being a part of this amazing group of people, and that’s something I will value forever. I was a community member for years before I joined Canonical, and I’m not going anywhere now. Expect to see me around on IRC, mailing lists and other community projects for a long time to come.

Hello Endless

Next week I will be joining the team at Endless as their Community Manager. Endless is an order of magnitude smaller than Canonical, and they have a young community that is still getting off the ground. So even though I’ll have the same role I had before, there will be new and exciting challenges involved. But the passion is there, both in the company and the community, to really explode into something big and impactful. In the coming months I will be working to set up the tools, processes and communication that will be needed to help that community grow and flourish. After meeting with many of the current Endless employees, I know that my job will be made easier by their existing commitment to both their own community and their upstream communities.

What really drew me to Endless was the company’s mission. It’s not just about making a great open source project that is shared with the world; they have a specific focus on social good and improving the lives of people whom current technology isn’t supporting. As one employee succinctly put it to me: the whole world, empowered. Those who know me well will understand why this resonates with me. For years I’ve been involved in open source projects aimed at early childhood education and at supporting those in poverty or in places without the infrastructure that most modern technology requires. And while Ubuntu covers much of this, it wasn’t the primary focus. Being able to work full time on a project so closely aligned with my personal mission was an opportunity I couldn’t pass up.

Broader horizons

Over the past several months I’ve been expanding the number of communities I’m involved in. This is going to increase significantly in my new role at Endless, where I will be working more frequently with upstream and side-stream projects on areas of mutual benefit and interest. I’ve already started to work more with KDE, and I look forward to becoming active in GNOME and other open source desktops soon.

I will also continue to grow my independent project, Phoenicia, which has a similar mission to Endless but a different technology and audience. Now that Phoenicia is no longer competing in the XPRIZE competition, we are released from some restrictions we had to operate under, freeing us to investigate new areas of innovation and collaboration. If you’re interested in game development, or in making an impact on the lives of children around the world, come and see what we’re doing.

If anybody wants to reach out to me to chat, you can still reach me at mhall119@ubuntu.com and soon at mhall119@endlessm.com, tweet me at @mhall119, connect on LinkedIn, chat on Telegram or circle me on Google+. And if we’re ever at a conference together give me a shout, I’d love to grab a drink and catch up.

Read more
Colin Ian King

What is new in FWTS 17.05.00?

Version 17.05.00 of the Firmware Test Suite was released this week as part of the regular end-of-month release cadence. So what is new in this release?

  • Alex Hung has been busy bringing the SMBIOS tests in sync with the SMBIOS 3.1.1 standard
  • IBM provided some OPAL (OpenPower Abstraction Layer) firmware tests:
    • Reserved memory DT validation tests
    • Power management DT validation tests
  • The first fwts snap was created
  • Over 40 bugs were fixed

As ever, we are grateful for all the community contributions to FWTS. The full release details are available from the fwts-devel mailing list.

I expect the upcoming ACPICA release to be integrated into the 17.06.00 FWTS release next month.

Read more
admin

I’m happy to announce that MAAS 2.2.0 (final) has now been released, and it introduces quite a few exciting features:

  • MAAS Pods – Ability to dynamically create a machine on demand. This is reflected in MAAS’ support for Intel Rack Scale Design.
  • Hardware Testing
  • DHCP Relay Support
  • Unmanaged Subnets
  • Switch discovery and deployment on Facebook’s Wedge 40 & 100
  • MAAS Client Library
  • Intel Rack Scale Design support
  • Various improvements and minor features

For more information, please read the release notes, which are available here.

Availability
MAAS 2.2.0 is currently available in the following MAAS team PPA.
ppa:maas/next
Please note that MAAS 2.2 will replace the MAAS 2.1 series, which will go out of support. We are holding MAAS 2.2 in the above PPA for a week to give users enough notice that it will replace the 2.1 series. In the following weeks, MAAS 2.2 will be backported into Ubuntu Xenial.

Read more
facundo


An idea that had been going around in my head since the beginning of last year finally came together, after taking its share of months to materialise: I will be giving an Introduction to Python seminar together with a company, with the goal of lowering the cost of the course for attendees (the company covers part of it) and thereby being able to make it longer and open to more people.

The company I am running this Seminar with is Onapsis, which is quite close to the Python Argentina community: it has long been a sponsor of events, provides the famous "pybuses" to get to the PyCons, has hosted a meetup, etc.

The Seminar is open to the general public and will be 16 hours in total, over four Saturday mornings in July, in CABA (Buenos Aires).

The cost is very affordable, $600, since Onapsis covers part of it; the idea is to keep it cheap so that as many people as possible can attend. Even so, places are limited (the office has a capacity limit), so the sooner you reserve a spot, the better.

At the end of the Seminar I will hand out a certificate of attendance and the complete course materials in electronic form.

To make a reservation, send me an email; I will confirm availability and give you the details needed to make the payment (which can be by deposit, bank transfer, credit card, debit card, etc.).

All the details of the course are here.

Read more
facundo

fades 6 is out


The latest version of fades is out: the tool that automatically manages virtualenvs in the situations you typically hit when writing scripts and small programs, and that even helps administer large projects.

This is one of the releases where we packed in the most changes! These are just some of the highlights from the changelog:

- It installs not only from PyPI but also from remote repositories (GitHub, Bitbucket, Launchpad, etc.) and from local directories

    fades -d git+https://github.com/yandex/gixy.git@v0.1.3

    fades -d file://$PATH_TO_PROJECT

- We made a video showing fades' most relevant features

- It selects the best of the stored virtualenvs when more than one matches

- We added a --clean-unused-venvs option to delete all the virtualenvs that have not been used in the last N days

    fades --clean-unused-venvs=30

- We added a --pip-options flag to pass whatever parameters are needed to the underlying pip execution

    fades -d requests --pip-options="--no-cache-dir"

The full list of changes is in the formal release notes, this is the complete documentation, and here is how to install and enjoy it.
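If you have never used fades before, the everyday workflow looks roughly like this. A minimal sketch, assuming the "# fades" import marker and the shebang usage described in the project documentation (check the docs for the exact marker syntax your version supports):

    #!/usr/bin/env fades
    # fades creates (or reuses) a virtualenv containing the marked
    # dependencies and runs the script inside it.
    import requests  # fades

    print(requests.get("https://pypi.org").status_code)

Making the script executable and running it directly is then enough; fades resolves and caches the virtualenv behind the scenes.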

Read more
Colin Ian King

The Firmware Test Suite (FWTS) has an easy-to-use, text-based front-end that is primarily used by the FWTS Live-CD image, but it can also be used in the Ubuntu terminal.

To install and run the front-end use:

 sudo apt-get install fwts-frontend
 sudo fwts-frontend-text

...and one should see a menu of options:


In this demonstration, the "All Batch Tests" option has been selected:


Tests will be run one by one and a progress bar shows the progress of each test. Some tests run very quickly, others can take several minutes depending on the hardware configuration (such as number of processors).

Once the tests are all complete, the following dialogue box is displayed:


The test run has saved several files into the directory /fwts/15052017/1748/, and selecting "Yes" lets one view the results log in a scroll-box:


Exiting this, the FWTS frontend dialog is displayed:


Press enter to exit (note that the Poweroff option is just for the fwts Live-CD image version of fwts-frontend).

The tool dumps various logs, for example, the above run generated:

 ls -alt /fwts/15052017/1748/  
total 1388
drwxr-xr-x 5 root root 4096 May 15 18:09 ..
drwxr-xr-x 2 root root 4096 May 15 17:49 .
-rw-r--r-- 1 root root 358666 May 15 17:49 acpidump.log
-rw-r--r-- 1 root root 3808 May 15 17:49 cpuinfo.log
-rw-r--r-- 1 root root 22238 May 15 17:49 lspci.log
-rw-r--r-- 1 root root 19136 May 15 17:49 dmidecode.log
-rw-r--r-- 1 root root 79323 May 15 17:49 dmesg.log
-rw-r--r-- 1 root root 311 May 15 17:49 README.txt
-rw-r--r-- 1 root root 631370 May 15 17:49 results.html
-rw-r--r-- 1 root root 281371 May 15 17:49 results.log

acpidump.log is a dump of the ACPI tables in a format compatible with the ACPICA acpidump tool. The results.log file is a copy of the results generated by FWTS, and results.html is an HTML-formatted version of the log.

Read more
facundo

March against the "2x1"


Yesterday I went to the march against the ruling by the shoddy Supreme Court we have these days, in which they intend to apply an old law that is no longer in force in order to reduce the sentences of criminals convicted of crimes against humanity.

It was overwhelming. And it was necessary. As I read somewhere, there is a time for virtual social networks, but there is also a time to show up and be counted in person.

A panoramic view of the square

Another panoramic view of the square

No genocidaires walking free. We don't want those murderous beasts to enjoy a single day of freedom; let them die in an ordinary prison.

The HIJOS column entering the square

"This time we are not going to say 'thank you for joining us', because all of us here are here because we repudiate the Court's decision. We are here celebrating, because the people, united, will never be defeated," said Taty Almeida at one point, and continued: "The human rights organizations say never again to impunity, never again to torturers, rapists, appropriators of children. Never again privileges for criminals guilty of crimes against humanity. Never again state terrorism. Never again genocidaires walking free. Never again silence. We do not want to live alongside the bloodiest murderers in our history."

The main speakers

Estela Carlotto also spoke: "The dictatorship is not a thing of the distant past. Let the judicial establishment hear us, because we will not give up our national and international demand in defense of the rights we have won. Raise your headscarves! For the 30,000 disappeared!"

Headscarves

We were half a million. Plus thirty thousand who were not there, and yet were.

Read more
Colin Ian King

Simple job scripting in stress-ng 0.08.00

The latest release of stress-ng, 0.08.00, contains a new job scripting feature. Jobs allow one to bundle up a set of stress options into a script rather than cramming them all onto the command line. One can now also run multiple invocations of a stressor with the latest version of stress-ng, and combined with job scripts this gives us a powerful way of running more complex stress tests.

The job script commands are essentially the stress-ng long options without the leading '--' characters, one option per line.

For example:

 $ stress-ng --verbose --tz --timeout 60s --cpu 1 --matrix 1 --icache 1

would become:

 $ cat example.job
verbose
tz
timeout 60
cpu 1
matrix 1
icache 1

One can also add comments using the # character prefix. By default the stressors will be run in parallel, but one can use the "run sequential" command in the job script to run them sequentially.

The following script runs the mmap stressor multiple times using more memory on each run:

 $ cat mmap.job  
run sequential # one job at a time
timeout 2m # run for 2 minutes
verbose # verbose output
#
# run 4 invocations and increase memory each time
#
mmap 1
mmap-bytes 25%
mmap 1
mmap-bytes 50%
mmap 1
mmap-bytes 75%
mmap 1
mmap-bytes 100%

Some of the stress-ng stressors have various "methods" that modify the way the stressor behaves. The following example shows how job scripts can be used to exercise a system using different stressor methods:

 $ cat /usr/share/stress-ng/example-jobs/matrix-methods.job   
#
# hot-cpu class stressors:
# various options have been commented out; one can remove the
# preceding comment to enable these options if required.
#
# run the following tests in parallel or sequentially
#
run sequential
# run parallel
#
# verbose
# show all debug, warnings and normal information output.
#
verbose
#
# run each of the tests for 60 seconds
# stop stress test after N seconds. One can also specify the units
# of time in seconds, minutes, hours, days or years with the
# suffix s, m, h, d or y.
#
timeout 1m
# tz
# collect temperatures from the available thermal zones on the
# machine (Linux only). Some devices may have one or more thermal
# zones, whereas others may have none.
tz
#
# matrix stressor with examples of all the methods allowed
#
# start N workers that perform various matrix operations on
# floating point values. By default, this will exercise all the matrix
# stress methods one by one. One can specify a specific matrix
# stress method with the --matrix-method option.
#
#
# Method Description
# all iterate over all the below matrix stress methods
# add add two N × N matrices
# copy copy one N × N matrix to another
# div divide an N × N matrix by a scalar
# hadamard Hadamard product of two N × N matrices
# frobenius Frobenius product of two N × N matrices
# mean arithmetic mean of two N × N matrices
# mult multiply an N × N matrix by a scalar
# prod product of two N × N matrices
# sub subtract one N × N matrix from another N × N matrix
# trans transpose an N × N matrix
#
matrix 0
matrix-method all
matrix 0
matrix-method add
matrix 0
matrix-method copy
matrix 0
matrix-method div
matrix 0
matrix-method frobenius
matrix 0
matrix-method hadamard
matrix 0
matrix-method mean
matrix 0
matrix-method mult
matrix 0
matrix-method prod
matrix 0
matrix-method sub
matrix 0
matrix-method trans

Various example job scripts can be found in /usr/share/stress-ng/example-jobs; one can use these as a base for writing more complex job files. The example jobs have all the options commented (using the text from the stress-ng manual) to make it easier to see how each stressor can be run.

Version 0.08.00 landed in Ubuntu 17.10 Artful Aardvark; it is available as a snap, and I've got backports in ppa:colin-king/white for older releases of Ubuntu.

Read more
Colin Watson

Well, it’s been a while!  Since we last posted a general update, the Launchpad team has become part of Canonical’s Online Services department, so some of our efforts have gone into other projects.  There’s still plenty happening with Launchpad, though, and here’s a changelog-style summary of what we’ve been up to.

Answers

  • Lock down question title and description edits from random users
  • Prevent answer contacts from editing question titles and descriptions
  • Prevent answer contacts from editing FAQs

Blueprints

  • Optimise SpecificationSet.getStatusCountsForProductSeries, fixing Product:+series timeouts
  • Add sprint deletion support (#2888)
  • Restrict blueprint count on front page to public blueprints

Build farm

  • Add fallback if nominated architecture-independent architecture is unavailable for building (#1530217)
  • Try to load the nbd module when starting launchpad-buildd (#1531171)
  • Default LANG/LC_ALL to C.UTF-8 during binary package builds (#1552791)
  • Convert buildd-manager to use a connection pool rather than trying to download everything at once (#1584744)
  • Always decode build logtail as UTF-8 rather than guessing (#1585324)
  • Move non-virtualised builders to the bottom of /builders; Ubuntu is now mostly built on virtualised builders
  • Pass DEB_BUILD_OPTIONS=noautodbgsym during binary package builds if we have not been told to build debug symbols (#1623256)

Bugs

  • Use standard milestone ordering for bug task milestone choices (#1512213)
  • Make bug activity records visible to anonymous API requests where appropriate (#991079)
  • Use a monospace font for “Add comment” boxes for bugs, to match how the comments will be displayed (#1366932)
  • Fix BugTaskSet.createManyTasks to map Incomplete to its storage values (#1576857)
  • Add basic GitHub bug linking (#848666)
  • Prevent rendering of private team names in bugs feed (#1592186)
  • Update CVE database XML namespace to match current file on cve.mitre.org
  • Fix Bugzilla bug watches to support new versions that permit multiple aliases
  • Sort bug tasks related to distribution series by series version rather than series name (#1681899)

Code

  • Remove always-empty portlet from Person:+branches (#1511559)
  • Fix OOPS when editing a Git repository with over a thousand refs (#1511838)
  • Add Git links to DistributionSourcePackage:+branches and DistributionSourcePackage:+all-branches (#1511573)
  • Handle prerequisites in Git-based merge proposals (#1489839)
  • Fix OOPS when trying to register a Git merge with a target path but no target repository
  • Show an “Updating repository…” indication when there are pending writes
  • Launchpad’s Git hosting backend is now self-hosted
  • Fix setDefaultRepository(ForOwner) to cope with replacing an existing default (#1524316)
  • Add “Configure Code” link to Product:+git
  • Fix Git diff generation crash on non-ASCII conflicts (#1531051)
  • Fix stray link to +editsshkeys on Product:+configure-code when SSH keys were already registered (#1534159)
  • Add support for Git recipes (#1453022)
  • Fix OOPS when adding a comment to a Git-based merge proposal without using AJAX (#1536363)
  • Fix shallow git clones over HTTPS (#1547141)
  • Add new “Code” portlet on Product:+index to make it easier to find source code (#531323)
  • Add missing table around widget row on Product:+configure-code, so that errors are highlighted properly (#1552878)
  • Sort GitRepositorySet.getRepositories API results to make batching reliable (#1578205)
  • Show recent commits on GitRef:+index
  • Show associated merge proposals in Git commit listings
  • Show unmerged and conversation-relevant Git commits in merge proposal views (#1550118)
  • Implement AJAX revision diffs for Git
  • Fix scanning branches with ghost revisions in their ancestry (#1587948)
  • Fix decoding of Git diffs involving non-UTF-8 text that decodes to unpaired surrogates when treated as UTF-8 (#1589411)
  • Fix linkification of references to Git repositories (#1467975)
  • Fix +edit-status for Git merge proposals (#1538355)
  • Include username in git+ssh URLs (#1600055)
  • Allow linking bugs to Git-based merge proposals (#1492926)
  • Make Person.getMergeProposals have a constant query count on the webservice (#1619772)
  • Link to the default git repository on Product:+index (#1576494)
  • Add Git-to-Git code imports (#1469459)
  • Improve preloading of {Branch,GitRepository}.{landing_candidates,landing_targets}, fixing various timeouts
  • Export GitRepository.getRefByPath (#1654537)
  • Add GitRepository.rescan method, useful in cases when a scan crashed

Infrastructure

  • Launchpad’s SSH endpoints (bazaar.launchpad.net, git.launchpad.net, upload.ubuntu.com, and ppa.launchpad.net) now support newer key exchange and MAC algorithms, allowing compatibility with OpenSSH >= 7.0 (#1445619)
  • Make cross-referencing code more efficient for large numbers of IDs (#1520281)
  • Canonicalise path encoding before checking a librarian TimeLimitedToken (#677270)
  • Fix Librarian to generate non-cachable 500s on missing storage files (#1529428)
  • Document the standard DELETE method in the apidoc (#753334)
  • Add a PLACEHOLDER account type for use by SSO-only accounts
  • Add support to +login for acquiring discharge macaroons from SSO via an OpenID exchange (#1572605)
  • Allow managing SSH keys in SSO
  • Re-raise unexpected HTTP errors when talking to the GPG key server
  • Ensure that the production dump is usable before destroying staging
  • Log SQL statements as Unicode to avoid confusing page rendering when the visible_render_time flag is on (#1617336)
  • Fix the librarian to fsync new files and their parent directories
  • Handle running Launchpad from a Git working tree
  • Handle running Launchpad on Ubuntu 16.04 (upgrade currently in progress)
  • Fix delete_unwanted_swift_files to not crash on segments (#1642411)
  • Update database schema for PostgreSQL 9.5 and 9.6
  • Check fingerprints of keys received from the keyserver rather than trusting it implicitly

Registry

  • Make public SSH key records visible to anonymous API requests (#1014996)
  • Don’t show unpublished packages or package names from private PPAs in search results from the package picker (#42298, #1574807)
  • Make Person.time_zone always be non-None, allowing us to easily show the edit widget even for users who have never set their time zone (#1568806)
  • Let latest questions, specifications and products be efficiently calculated
  • Let project drivers edit series and productreleases, as series drivers can; project drivers should have series driver power over all series
  • Fix misleading messages when joining a delegated team
  • Allow team privacy changes when referenced by CodeReviewVote.reviewer or BugNotificationRecipient.person
  • Don’t limit Person:+related-projects to a single batch

Snappy

  • Add webhook support for snaps (#1535826)
  • Allow deleting snaps even if they have builds
  • Provide snap builds with a proxy so that they can access external network resources
  • Add support for automatically uploading snap builds to the store (#1572605)
  • Update latest snap builds table via AJAX
  • Add option to trigger snap builds when top-level branch changes (#1593359)
  • Add processor selection in new snap form
  • Add option to automatically release snap builds to store channels after upload (#1597819)
  • Allow manually uploading a completed snap build to the store
  • Upload *.manifest files from builders as well as *.snap (#1608432)
  • Send an email notification for general snap store upload failures (#1632299)
  • Allow building snaps from an external Git repository
  • Move upload to FAILED if its build was deleted (e.g. because of a deleted snap) (#1655334)
  • Consider snap/snapcraft.yaml and .snapcraft.yaml as well as snapcraft.yaml for new snaps (#1659085)
  • Add support for building snaps with classic confinement (#1650946)
  • Fix builds_for_snap to avoid iterating over an unsliced DecoratedResultSet (#1671134)
  • Add channel track support when uploading snap builds to the store (contributed by Matias Bordese; #1677644)

Soyuz (package management)

  • Remove some more uses of the confusing .dsc component; add the publishing component to SourcePackage:+index in compensation
  • Add include_meta option to SPPH.sourceFileUrls, paralleling BPPH.binaryFileUrls
  • Kill debdiff after ten minutes or 1GiB of output by default, and make sure we clean up after it properly (#314436)
  • Fix handling of << and >> dep-waits
  • Allow PPA admins to set external_dependencies on individual binary package builds (#671190)
  • Fix NascentUpload.do_reject to not send an erroneous Accepted email (#1530220)
  • Include DEP-11 metadata in Release file if it is present
  • Consistently generate Release entries for uncompressed versions of files, even if they don’t exist on the filesystem; don’t create uncompressed Packages/Sources files on the filesystem
  • Handle Build-Depends-Arch and Build-Conflicts-Arch from SPR.user_defined_fields in Sources generation and SP:+index (#1489044)
  • Make index compression types configurable per-series, and add xz support (#1517510)
  • Use SHA-512 digests for GPG signing where possible (#1556666)
  • Re-sign PPAs with SHA-512
  • Publish by-hash index files (#1430011)
  • Show SHA-256 checksums rather than MD5 on DistributionSourcePackageRelease:+files (#1562632)
  • Add a per-series switch allowing packages in supported components to build-depend on packages in unsupported components, used for Ubuntu 16.04 and later
  • Expand archive signing to kernel modules (contributed by Andy Whitcroft; #1577736)
  • Uniquely index PackageDiff(from_source, to_source) (part of #1475358)
  • Handle original tarball signatures in source packages (#1587667)
  • Add signed checksums for published UEFI/kmod files (contributed by Andy Whitcroft; #1285919)
  • Add support for named authentication tokens for private PPAs
  • Show explicit add-apt-repository command on Archive:+index (#1547343)
  • Use a per-archive OOPS timeline in archivepublisher scripts
  • Link to package versions on DSP:+index using fmt:url rather than just a relative link to the version, to avoid problems with epochs (#1629058)
  • Fix RepositoryIndexFile to gzip without timestamps
  • Fix Archive.getPublishedBinaries API call to have a constant query count (#1635126)
  • Include the package name in package copy job OOPS reports and emails (#1618133)
  • Remove headers from Contents files (#1638219)
  • Notify the Changed-By address for PPA uploads if the .changes contains “Launchpad-Notify-Changed-By: yes” (#1633608)
  • Accept .debs containing control.tar.xz (#1640280)
  • Add Archive.markSuiteDirty API call to allow requesting that a given archive/suite be published
  • Don’t allow cron-control to interrupt publish-ftpmaster part-way through (#1647478)
  • Optimise non-SQL time in PublishingSet.requestDeletion (#1682096)
  • Store uploaded .buildinfo files (#1657704)

Translations

  • Allow TranslationImportQueue to import entries from file objects rather than having to read arbitrarily-large files into memory (#674575)

Miscellaneous

  • Use gender-neutral pronouns where appropriate
  • Self-host the Ubuntu webfonts (#1521472)
  • Make the beta and privacy banners float over the rest of the page when scrolling
  • Upgrade to pytz 2016.4 (#1589111)
  • Publish Launchpad’s code revision in an X-Launchpad-Revision header
  • Truncate large picker search results rather than refusing to display anything (#893796)
  • Sync up the lists footer with the main webapp footer a bit (#1679093)

Read more
facundo


Those of us who put together presentations showing small programs or little chunks of code have always faced one problem: how do you display that code properly colored?

By "properly colored" I don't mean daubed like a teenager heading out to a dance, or decorated with little flowers, suns and/or war planes, but the coloring that is typical in the programming world, where editors give different colors to the words that make up the code depending on what kind of word they are: one color for variables, another for strings, another for function names, another for...

I won't go into detail about what that coloring is (in English we call it "syntax highlighting"), but here's an example:

Example of highlighted code

Anyway, back to getting highlighted code into LibreOffice. I discussed it quite a bit with several people at the time; the best option seemed to be to capture an image of the code and insert that, but it's rubbish because it doesn't survive the slightest resizing, and if you then have to change anything in that text, it's impossible.

While searching I also found Coooder, a LibreOffice extension that did exactly this. The verb "did" in the previous sentence is in the past tense because it only works with LibreOffice 3.3 through 3.6 (I currently have 5.1).

Finally I found a way to do it! It's not the most direct route, but the result is what I was looking for: highlighted text inside LibreOffice. Great!

The steps fall into two major parts:

  • generate a document in RTF format
  • insert that RTF doc into the presentation

How to generate the RTF doc:

  • Open the code with gvim
  • Type :TOhtml, which opens another window with the HTML corresponding to our highlighted text.
  • Type :saveas /tmp/cod.html, which saves that HTML to the path specified there
  • Close any open LibreOffice (otherwise the next step fails :/).
  • From a terminal, run unoconv -f rtf /tmp/cod.html, which leaves us a file at /tmp/cod.rtf containing our code, in RTF format.
  • Open LibreOffice Impress
  • Go to Menu, Insert, File; a couple of clicks on "next" and the text is in.
  • Select the text we just inserted and change its font to a monospaced one.

Voilà!
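As an aside, and not the route described above: if you have Python's pygments library at hand, it can generate the RTF directly, skipping the gvim and unoconv steps. A minimal sketch, assuming pygments is installed and the source file is Python:

    # Produce /tmp/cod.rtf with syntax highlighting via pygments.
    from pygments import highlight
    from pygments.lexers import PythonLexer
    from pygments.formatters import RtfFormatter

    with open("code.py") as f:
        source = f.read()

    with open("/tmp/cod.rtf", "w") as out:
        out.write(highlight(source, PythonLexer(), RtfFormatter()))

The resulting /tmp/cod.rtf can then be pulled into Impress with the same Menu, Insert, File step.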

Read more
Anthony Dillon

Over the past year, a change has emerged in the design team here at Canonical: we’ve started designing our websites and apps in public GitHub repositories, and therefore sharing the entire design process with the world.

One of the main things we wanted to improve was the design sign-off process, while giving developers better visibility of which design was the final one among numerous iterations and inconsistently labelled files and folders.

Here is the process we developed and have been using on multiple projects.

The process

Design work items are initiated by creating a GitHub issue on the design repository relating to the project. Each project consists of two repositories: one for the code base and another for designs. The work item issue contains a short descriptive title followed by a detailed description of the problem or feature.

Once the designer has created one or more designs to present, they upload them to the issue with a description. Each image is titled with a version number, making it easy to reference in subsequent comments.

Whenever the designer updates the GitHub issue everyone who is watching the project receives an email update. It is important for anyone interested or with a stake in the project to watch the design repositories that are relevant to them.

The designer can continue to iterate on the task safe in the knowledge that everyone can see the designs in their own time and provide feedback if needed. The feedback that comes in at this stage is welcomed, as early feedback is usually better than late.

As iterations of the design are created, the designer simply adds them to the existing issue with a comment of the changes they made and any feedback from any review meetings.

Table with actions design from MAAS project

When the design is finalised a pull request is created and linked to the GitHub issue, by adding “Fixes #111” (where #111 is the number of the original issue) to the pull request description. The pull request contains the final design in a folder structure that makes sense for the project.
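For illustration, a final-design pull request description might read as follows (the title and folder here are made up; #111 is the issue number from the example above):

    Final design for the machine listing table

    Fixes #111

    Adds the signed-off design to the code section of the design repository.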

Just like with code, the pull request is then approved by another designer or the person with the final say. This may seem like an extra step, but it allows another person to look through the issue and make sure the design completes the design goal. On smaller teams, this pull request can be approved by a stakeholder or developer.

Once the pull request is approved it can be merged. This will close and archive the issue and add the final design to the code section of the design repository.

That’s it!

Benefits

If all designers and developers of a project subscribe to the design repository, they will be included in the iterative design process with plenty of email reminders. This increases the visibility of designs in progress to stakeholders, developers and other designers, allowing for wider feedback at all stages of the design process.

Another benefit of this process is having a full history of decisions made and the evolution of a design all contained within a single page.

If your project is open source, this process automatically makes your designs available to your community or anyone interested in the product. This means that anyone who wants to contribute to the project has access to the same information and assets as the team members.

The code section of the design repository becomes the home for all signed off designs. If a developer is ever unsure as to what something should look like, they can reference the relevant folder in the design repository and be confident that it is the latest design.

Canonical is largely a company of remote workers, and conversations are sometimes not documented, which means only some people are aware of the decisions made and the discussions around them. This design process has helped with that issue, as designs and discussions are all in a single place, with clearly laid out emails for every change sent to anyone who may be interested.

Conclusion

This process has helped our team improve velocity and transparency. Is this something you’ve considered or have done in your own projects? Let us know in the comments; we’d love to hear of any ways we can improve the process.

Read more
Alan Griffiths

Fairer than Death

The changes at Canonical have had an effect both on the priorities for the Mir project and on the resources available for future development. We have been meeting to make new plans. In short:

Mir is alive: there are Canonical IoT projects that use it. Work will continue on Mir to support these, and on cleaning up and upstreaming the distro patches Ubuntu carries to support Mir.

Canonical are no longer working on a desktop environment or phone shell. However, we will maintain the existing support Mir has for compositing and window management. (We’re happy to receive PRs in support of similar efforts.)

Read more
Colin Ian King

Tracking CoverityScan issues on Linux-next

Over the past six months I've been running static analysis on linux-next with CoverityScan on a regular basis (to find new issues and fix some of them), as well as keeping a record of the defect count.


Since the beginning of September, over 2,000 defects have been eliminated by a host of upstream developers, and the steady downward trend in outstanding issues is good to see. A proportion of the outstanding defects are false positives, or issues where the code is being overly defensive, for example bounds checking for conditions that can never happen. Considering there are millions of lines of code, the defect rate is about average for such a large project.

I plan to keep the static analysis running long term and I'll try and post stats every 6 months or so to see how things are progressing.

Read more
Alan Griffiths

unity8-team

In previous posts I’ve alluded to the ecosystem of projects developed to support Unity8. While I have come across most of them during my time with Canonical, I wouldn’t be confident of producing a complete list.

But some of the Unity8 developers (mostly Pete Woods) have been working to make it easy to identify these projects. They have been copied onto github:

https://github.com/unity8-team

This is simply a snapshot for easy reference, not the start of another fork.

Read more
facundo


Exaile has been my go-to music player for a long while now. It has everything I want, and the things I don't want aren't intrusive and don't bother me (I don't have to fight the program in order to use it, let's say).

And it's written in Python :). That's an advantage when debugging a problem (and if I remember correctly I've even sent a patch or two for some bug...).

Exaile

With Ubuntu's comings and goings on the desktop, at some point I had trouble using the official or latest released version, and back then I solved it by running it straight from the project source. When I decided to do that I tried master directly, and it worked for me, so I stayed there.

It's a bit risky (stability-wise) because you're running the very latest from the developers, but so far we're (almost) fine; bear in mind that I don't update it all the time, only when I'm after some specific fix that has landed.

The other day I saw they had fixed something that bothered me (just a detail, related to dragging songs in the playlist), so I did a git pull to update to the latest. Some things improved (notably the one I was after, great), but a few minutes later I realized that my keyboard hotkey for pausing and restarting the music no longer worked.

I'm very used to pressing ctrl-shift-space to make the music stop, and the same keystroke to make it resume, and suddenly it didn't work anymore :(.

I started investigating and realized that Exaile no longer shipped the gnomemmkeys plugin, which is what allowed it to "receive the multimedia keys you press" (in heavy quotes, because that's not the most accurate description of what happens, but it conveys the idea).

Searching (in the project itself) for when that disappeared, I found a commit referencing mpris2, which turns out to be a D-Bus interface for controlling audio/video players.

Caution, geek

Learning about this technology, I found there was a command-line mpris client, so I installed it (sudo apt-get install mpris-remote) and configured the system so that ctrl-shift-space runs mpris-remote pause.

Note: the command above sends the "pause" signal, which pauses and "unpauses"; careful not to confuse it with "play", which starts the next song (it does not resume from where it was).

Note 2: after I had implemented this, I was told on Exaile's IRC channel that I could simply run exaile --play-pause from the command line. I kept my original implementation anyway, because it's faster (it only sends a signal; it doesn't start up a whole music player just to send it).
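For the curious: the same toggle can be sent from a few lines of Python over D-Bus. This is just a sketch, not what I wired to the hotkey, and it assumes Exaile exposes the standard MPRIS2 bus name org.mpris.MediaPlayer2.exaile:

    # Toggle play/pause on Exaile through the MPRIS2 D-Bus interface.
    import dbus

    bus = dbus.SessionBus()
    # Assumption: the player owns the well-known MPRIS2 name below.
    player = bus.get_object("org.mpris.MediaPlayer2.exaile",
                            "/org/mpris/MediaPlayer2")
    iface = dbus.Interface(player, "org.mpris.MediaPlayer2.Player")
    iface.PlayPause()  # pauses if playing, resumes if paused

Note that MPRIS2's PlayPause behaves like the toggle described above, while its Pause method only pauses.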

Read more
Alan Griffiths

Why Mir

Mir provides a framework for integration between three parts of the graphics stack.

These parts are:

  1. The drivers that control the hardware
  2. The desktop environment or shell
  3. Applications with a GUI

Mir currently works with mesa-kms graphics, mesa-x11 graphics or android HWC graphics (work has been done on vulkan graphics and is well beyond proof-of-concept but hasn’t been released).

Switching the driver support doesn’t impact the shell or applications. (Servers will run unchanged on mesa, on X11 and android.) Mir provides “abstractions” so that, for example, user input or display configuration changes look the same to servers and client applications regardless of the drivers being used.

Mir supports writing a display server by providing sensible defaults for (e.g.) positioning tooltips without imposing a desktop style. It has always carried example programs demonstrating how to do “fullscreen” (kiosk style), traditional “floating windows” and “tiling” window management to ensure we don’t “bake in” too many policies.

Because the work has been funded by Canonical features that were important to Ubuntu Phone and Unity8 desktop have progressed faster and are more complete than others.

When Mir was started we needed a mechanism for client-server communication (and Wayland wasn’t in the state it is today). We did something that worked well enough (libmirclient) and, because it’s just a small, intentionally isolated part of the whole, we knew we could change it later. We never imagined what a “big deal” that decision would become.


[added]

Seeing the initial reactions I can tell I made a farce of explaining this. I’ll try again:

For the author of a shell, what Mir provides is subtly but significantly different from a set of libraries you can use to build on: it provides a default shell that can be customized.

Read more

This is the third in a series of blog posts on creating an asynchronous D-Bus service in Python. For the initial entry, go here. For the previous entry, go here.

Last time we transformed our base synchronous D-Bus service to include asynchronous calls in a rather naive way. In this post, we’ll refactor those asynchronous calls to include D-Bus signals; codewise, we’ll pick up right where we left off after part 2: https://github.com/larryprice/python-dbus-blog-series/tree/part2. Of course, all of today’s code can be found in the same project with the part3 tag: https://github.com/larryprice/python-dbus-blog-series/tree/part3.

Sending Signals

We can fire signals from within our D-Bus service to notify clients of tasks finishing, progress updates, or data availability. Clients subscribe to these signals and act accordingly. Let’s start by changing the signature of the slow_result method of RandomData to be a signal:

random_data.py

# ...
@dbus.service.signal("com.larry_price.test.RandomData", signature='ss')
def slow_result(self, thread_id, result):
    pass

We’ve replaced the method decorator with a signal, and we’ve swapped out the guts of this method for a pass: the body itself does nothing, and the decorator emits the signal whenever the method is called. We now need a way to invoke this signal, which we can do from the SlowThread class we were using before. When creating a SlowThread in the slow method, we can pass in this signal as a callback. At the same time, we can remove the threads list we previously used to keep track of existing SlowThread objects.

random_data.py

class RandomData(dbus.service.Object):
    def __init__(self, bus_name):
        super().__init__(bus_name, "/com/larry_price/test/RandomData")

        random.seed()

    @dbus.service.method("com.larry_price.test.RandomData",
                         in_signature='i', out_signature='s')
    def slow(self, bits=8):
        thread = SlowThread(bits, self.slow_result)
        return thread.thread_id

    # ...

Now we can make some updates to SlowThread. The first thing we should do is add a new parameter, callback, and store it on the object. Because slow_result no longer checks the done property, we can remove that property and the finished event. Instead of calling set on the event, we can now simply call the stored callback with the current thread_id and result. This leaves a couple of unused variables, so I’ve also gone ahead and refactored the work method on SlowThread to be a little cleaner.

# ...

class SlowThread(object):
    def __init__(self, bits, callback):
        self._callback = callback
        self.result = ''

        self.thread = threading.Thread(target=self.work, args=(bits,))
        self.thread.start()
        self.thread_id = str(self.thread.ident)

    def work(self, bits):
        num = ''

        while True:
            num += str(random.randint(0, 1))
            bits -= 1
            time.sleep(1)

            if bits <= 0:
                break

        self._callback(self.thread_id, str(int(num, 2)))

And that’s it for the service side. Any callers will need to subscribe to our slow_result signal, call our slow method, and wait for the result to come in.

Receiving Signals

We need to make some major changes to our client program in order to receive signals. We’ll need to introduce a main loop, which we’ll spin up in a separate thread, for communicating on the bus. The way I like to do this is with a context manager, so we can guarantee that the loop will be exited when the program exits. We’ll move the logic we previously used in client to get the RandomData object into a private member method called _setup_object, which we’ll call on context entry after creating the loop. On context exit, we’ll simply call quit on the loop.

client

# Encapsulate calling the RandomData object on the session bus with a main loop
import sys  # for sys.exit on failure
import dbus, dbus.exceptions, dbus.mainloop.glib
import threading
from gi.repository import GLib

class RandomDataClient(object):
    def __enter__(self):
        self._setup_dbus_loop()
        self._setup_object()

        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self._loop.quit()
        return True

    def _setup_dbus_loop(self):
        dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
        self._loop = GLib.MainLoop()

        self._thread = threading.Thread(target=self._loop.run)
        self._thread.start()

    def _setup_object(self):
        try:
            self._bus = dbus.SessionBus()
            self._random_data = self._bus.get_object("com.larry-price.test",
                                                     "/com/larry_price/test/RandomData")
        except dbus.exceptions.DBusException as e:
            print("Failed to initialize D-Bus object: '%s'" % str(e))
            sys.exit(2)
We can add methods on RandomDataClient to encapsulate quick and slow. quick is easy - we’ll just return self._random_data.quick(bits). slow, on the other hand, will take a bit of effort. We’ll need to subscribe to the slow_result signal, giving a callback for when the signal is received. Since we want to wait for the result here, we’ll create a threading.Event object and wait for it to be set, which we’ll do in our handler. The handler, which we’ll call _finished, will validate that it has received the right result based on the current thread_id, and then set the result on the RandomDataClient object. After all this, we’ll remove the signal listener from our bus connection and return the final result.

client

class RandomDataClient(object):
    # ...

    def quick(self, bits):
        return self._random_data.quick(bits)

    def _finished(self, thread_id, result):
        # Only accept the result for the request we are waiting on
        if thread_id == self._thread_id:
            self._result = result
            self._done.set()

    def slow(self, bits):
        self._done = threading.Event()
        self._thread_id = None
        self._result = None

        signal = self._bus.add_signal_receiver(path="/com/larry_price/test/RandomData",
                                               handler_function=self._finished,
                                               dbus_interface="com.larry_price.test.RandomData",
                                               signal_name='slow_result')
        self._thread_id = self._random_data.slow(bits)
        self._done.wait()
        signal.remove()

        return self._result

Now we’re ready to actually call these methods. We’ll wrap our old calling code with the RandomDataClient context manager, and we’ll directly call the methods as we did before on the client:

client

# ...

# Call the appropriate method with the given number of bits
with RandomDataClient() as client:
    if args.slow:
        print("Your random number is: %s" % client.slow(int(args.bits)))
    else:
        print("Your random number is: %s" % client.quick(int(args.bits)))

This should have feature-parity with our part 2 code, but now we don’t have to deal with an infinite loop waiting for the service to return.

Next time

We have a working asynchronous D-Bus service using signals. Next time I’d like to dive into forwarding command output from a D-Bus service to a client.

As a reminder, the end result of our code in this post is MIT Licensed and can be found on Github: https://github.com/larryprice/python-dbus-blog-series/tree/part3.

Read more
Alan Griffiths

A new hope

Disclaimer: With the changes in progress at Canonical I am not currently in a position to make any commitment about the future of Mir.

It is no secret that I think there’s value to the Mir project and I’d like it to be a valued contribution to the free software landscape.

I’ve written elsewhere about my efforts to make it easy to use Mir for making desktop, phone and “Internet of Things” shells; I won’t repeat that here beyond saying “have a look”.

It is important to me that Mir is GPL. That makes it a contribution to a “commons” that I care about.

The dream of convergence dies hard. Canonical may have abandoned it, but I hope it survives. A lot of the issues have been tackled and knowledge gained.

I read that UBPorts will be using Mir “for the time being”. They sensibly don’t want to maintain Mir and are planning a migration to an (unidentified) Wayland compositor.

However, we can also see from G+ that Mark Shuttleworth is planning to keep “investing in Mir” for the Internet of Things.

This opens up an interesting possibility: there’s no obvious technical reason that Mir could not support clients using libwayland directly. It would take some research to confirm this but I can’t foresee anything technical blocking such an approach.

There could be some benefits to Canonical from this: the current design of Mir client-server interaction makes sense in a traditional Debian (or RPM) repository-based world, but less so for Snap (or Flatpak).

In a traditional environment, where the libraries are a shared resource, updates simply need to maintain ABI compatibility to work with existing clients. That makes it possible to keep the Mir server and client libraries “in step” while making incompatible changes to the communications protocol.

However, with snaps, the client and server snaps package the libraries they use with the applications. That presents issues for keeping them in step. These issues are soluble, but they create an additional burden for Mir, server, and client developers. Using a protocol-based solution would ease this burden.

For the wider community native support for Wayland clients in Mir would make the task of toolkit maintainers and others simpler.

If Canonical could be persuaded to add this feature to Mir and/or maintain it in the project, would anyone care?

Is anyone else willing to help with such a feature?

Read more