Canonical Voices

facundo


A couple of weeks ago the paperwork finally cleared the Inspección General de Justicia with nothing left to review or amend... the Asociación Civil Python Argentina is now legally formed!

The bylaws, fully sealed and stamped

We are now working flat out to get the CUIT (tax ID) from AFIP, which will allow us to open a bank account. That way, the folks organizing PyCon will be able to give sponsors the green light to put in money.

Beyond helping with the organization of PyCon and other events, there are four things we want to push during the first couple of years:

  • Travel grants: because we believe there is a lot of value in people meeting each other face to face, we will try to help people travel to events organized around the country
  • Interpreting at events: if prominent speakers who do not speak Spanish come to visit, do whatever we can so that most attendees can understand them
  • Course discounts: we are weighing up a couple of formats
  • The PyAr website and other infrastructure: we need to take a step up in seriousness when it comes to maintaining the various services the group provides

For all of that (and for operating costs) we are basically going to need money :) The Asociación will be funded in two main ways...

One is membership dues. The idea is that members, who would benefit directly and indirectly from the Asociación Civil, chip in a small amount per month to help make things happen.

The other mechanism is direct contributions from companies (from which we expect a somewhat larger contribution, possibly annual).

We will fill you in on the mechanisms, amounts and so on. Stay tuned!

Read more
Alan Griffiths

Mir release 0.26.3

Mir 0.26.3 for all!

By itself Mir 0.26.3 isn’t a very interesting release, just a few minor bugfixes: [https://launchpad.net/mir/0.26/0.26.3]

The significant thing with Mir 0.26.3 is that we are making this version available across the latest releases of Ubuntu as well as 17.10 (Artful Aardvark). That is: Ubuntu 17.04 (Zesty Zapus), Ubuntu 16.10 (Yakkety Yak) and, last but not least, Ubuntu 16.04LTS (Xenial Xerus).

This is important to those developing Mir-based snaps. Having Mir 0.26 in the 16.04LTS archive removes the need to build Mir-based snaps using the “stable-phone-overlay” PPA.

Read more
Stéphane Graber

LXD logo

Introduction

As you may know, LXD uses unprivileged containers by default.
The difference between an unprivileged container and a privileged one is whether the root user in the container is the “real” root user (uid 0 at the kernel level).

The way unprivileged containers are created is by taking a set of normal UIDs and GIDs from the host, usually at least 65536 of each (to be POSIX compliant), and mapping those into the container.

The most common example and what most LXD users will end up with by default is a map of 65536 UIDs and GIDs, with a host base id of 100000. This means that root in the container (uid 0) will be mapped to the host uid 100000 and uid 65535 in the container will be mapped to uid 165535 on the host. UID/GID 65536 and higher in the container aren’t mapped and will return an error if you attempt to use them.

From a security point of view, that means that anything which is not owned by the users and groups mapped into the container will be inaccessible. Any such resource will show up as being owned by uid/gid “-1” (rendered as 65534 or nobody/nogroup in userspace). It also means that should there be a way to escape the container, even root in the container would find itself with no more privileges on the host than a nobody user.
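
A quick way to see the mapping in action (a sketch; it assumes the default map above, a dir-backed container hypothetically named “c1”, and host-side access to its rootfs) is to create a file as root inside the container and then inspect it from the host:

stgraber@castiana:~$ lxc exec c1 -- touch /root/demo
stgraber@castiana:~$ sudo ls -ln /var/lib/lxd/containers/c1/rootfs/root/demo
-rw-r--r-- 1 100000 100000 0 Jun 14 21:00 /var/lib/lxd/containers/c1/rootfs/root/demo

Container uid 0 shows up as host uid 100000, exactly the base offset described above.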

LXD does offer a number of options related to unprivileged configuration:

  • Increasing the size of the default uid/gid map
  • Setting up per-container maps
  • Punching holes into the map to expose host users and groups

Increasing the size of the default map

As mentioned above, in most cases, LXD will have a default map that’s made of 65536 uids/gids.

In most cases you won’t have to change that. There are however a few cases where you may have to:

  • You need access to uid/gid higher than 65535.
    This is most common when using network authentication inside of your containers.
  • You want to use per-container maps.
    In which case you’ll need 65536 available uid/gid per container.
  • You want to punch some holes in your container’s map and need access to host uids/gids.

The default map is usually controlled by the “shadow” set of utilities and files. On systems where that’s the case, the “/etc/subuid” and “/etc/subgid” files are used to configure those maps.

On systems that do not have a recent enough version of the “shadow” package, LXD will assume that it doesn’t have to share uid/gid ranges with anything else and will therefore assume control of a billion uids and gids, starting at the host uid/gid 100000.

But the common case is a system with a recent version of shadow.
An example of what the configuration may look like is:

stgraber@castiana:~$ cat /etc/subuid
lxd:100000:65536
root:100000:65536

stgraber@castiana:~$ cat /etc/subgid
lxd:100000:65536
root:100000:65536

The maps for “lxd” and “root” should always be kept in sync. LXD itself is restricted by the “root” allocation. The “lxd” entry is used to track what needs to be removed if LXD is uninstalled.

Now if you want to increase the size of the map available to LXD, simply edit both of the files and bump the last value from 65536 to whatever size you need. I tend to bump it to a billion just so I don’t ever have to think about it again:

stgraber@castiana:~$ cat /etc/subuid
lxd:100000:1000000000
root:100000:1000000000

stgraber@castiana:~$ cat /etc/subgid
lxd:100000:1000000000
root:100000:1000000000

After altering those files, you need to restart LXD to have it detect the new map:

root@vorash:~# systemctl restart lxd
root@vorash:~# cat /var/log/lxd/lxd.log
lvl=info msg="LXD 2.14 is starting in normal mode" path=/var/lib/lxd t=2017-06-14T21:21:13+0000
lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." t=2017-06-14T21:21:13+0000
lvl=info msg="Kernel uid/gid map:" t=2017-06-14T21:21:13+0000
lvl=info msg=" - u 0 0 4294967295" t=2017-06-14T21:21:13+0000
lvl=info msg=" - g 0 0 4294967295" t=2017-06-14T21:21:13+0000
lvl=info msg="Configured LXD uid/gid map:" t=2017-06-14T21:21:13+0000
lvl=info msg=" - u 0 1000000 1000000000" t=2017-06-14T21:21:13+0000
lvl=info msg=" - g 0 1000000 1000000000" t=2017-06-14T21:21:13+0000
lvl=info msg="Connecting to a remote simplestreams server" t=2017-06-14T21:21:13+0000
lvl=info msg="Expiring log files" t=2017-06-14T21:21:13+0000
lvl=info msg="Done expiring log files" t=2017-06-14T21:21:13+0000
lvl=info msg="Starting /dev/lxd handler" t=2017-06-14T21:21:13+0000
lvl=info msg="LXD is socket activated" t=2017-06-14T21:21:13+0000
lvl=info msg="REST API daemon:" t=2017-06-14T21:21:13+0000
lvl=info msg=" - binding Unix socket" socket=/var/lib/lxd/unix.socket t=2017-06-14T21:21:13+0000
lvl=info msg=" - binding TCP socket" socket=[::]:8443 t=2017-06-14T21:21:13+0000
lvl=info msg="Pruning expired images" t=2017-06-14T21:21:13+0000
lvl=info msg="Updating images" t=2017-06-14T21:21:13+0000
lvl=info msg="Done pruning expired images" t=2017-06-14T21:21:13+0000
lvl=info msg="Done updating images" t=2017-06-14T21:21:13+0000
root@vorash:~#

As you can see, the configured map is logged at LXD startup and can be used to confirm that the reconfiguration worked as expected.

You’ll then need to restart your containers to have them start using your newly expanded map.
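
A restart can be as simple as “lxc restart <name>” per container; to cycle every container in one go, something like this works (a sketch; “--format csv” needs a reasonably recent LXD client):

stgraber@castiana:~$ for c in $(lxc list --format csv -c n); do lxc restart "$c"; done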

Per container maps

Provided that you have a sufficient number of uids and gids allocated to LXD, you can configure your containers to use their own, non-overlapping allocation of uids and gids.

This can be useful for two reasons:

  1. You are running software which alters kernel resource ulimits.
    Those user-specific limits are tied to a kernel uid and will cross container boundaries, leading to hard-to-debug issues where one container can perform an action but all others are then unable to do the same.
  2. You want to know that should there be a way for someone in one of your containers to somehow get access to the host that they still won’t be able to access or interact with any of the other containers.

The main downsides to using this feature are:

  • It’s somewhat wasteful, using up 65536 uids and gids per container.
    That being said, you’d still be able to run over 60000 isolated containers before running out of system uids and gids.
  • It’s effectively impossible to share storage between two isolated containers as everything written by one will be seen as -1 by the other. There is ongoing work around virtual filesystems in the kernel that will eventually let us get rid of that limitation.

To have a container use its own distinct map, simply run:

stgraber@castiana:~$ lxc config set test security.idmap.isolated true
stgraber@castiana:~$ lxc restart test
stgraber@castiana:~$ lxc config get test volatile.last_state.idmap
[{"Isuid":true,"Isgid":false,"Hostid":165536,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":165536,"Nsid":0,"Maprange":65536}]

The restart step is needed to have LXD remap the entire filesystem of the container to its new map.
Note that this step will take a varying amount of time depending on the number of files in the container and the speed of your storage.

As can be seen above, after restart, the container is shown to have its own map of 65536 uids/gids.

If you want LXD to allocate more than the default 65536 uids/gids to an isolated container, you can bump the size of the allocation with:

stgraber@castiana:~$ lxc config set test security.idmap.size 200000
stgraber@castiana:~$ lxc restart test
stgraber@castiana:~$ lxc config get test volatile.last_state.idmap
[{"Isuid":true,"Isgid":false,"Hostid":165536,"Nsid":0,"Maprange":200000},{"Isuid":false,"Isgid":true,"Hostid":165536,"Nsid":0,"Maprange":200000}]

If you’re trying to allocate more uids/gids than are left in LXD’s allocation, LXD will let you know:

stgraber@castiana:~$ lxc config set test security.idmap.size 2000000000
error: Not enough uid/gid available for the container.

Direct user/group mapping

The fact that all uids/gids in an unprivileged container are mapped to a normally unused range on the host means that sharing of data between host and container is effectively impossible.

Now, what if you want to share your user’s home directory with a container?

The obvious answer to that is to define a new “disk” entry in LXD which passes your home directory to the container:

stgraber@castiana:~$ lxc config device add test home disk source=/home/stgraber path=/home/ubuntu
Device home added to test

So that was pretty easy, but did it work?

stgraber@castiana:~$ lxc exec test -- bash
root@test:~# ls -lh /home/
total 529K
drwx--x--x 45 nobody nogroup 84 Jun 14 20:06 ubuntu

No. The mount is clearly there, but it’s completely inaccessible to the container.
To fix that, we need to take a few extra steps:

  • Allow LXD’s use of our user uid and gid
  • Restart LXD to have it load the new map
  • Set a custom map for our container
  • Restart the container to have the new map apply

stgraber@castiana:~$ printf "lxd:$(id -u):1\nroot:$(id -u):1\n" | sudo tee -a /etc/subuid
lxd:201105:1
root:201105:1

stgraber@castiana:~$ printf "lxd:$(id -g):1\nroot:$(id -g):1\n" | sudo tee -a /etc/subgid
lxd:200512:1
root:200512:1

stgraber@castiana:~$ sudo systemctl restart lxd

stgraber@castiana:~$ printf "uid $(id -u) 1000\ngid $(id -g) 1000" | lxc config set test raw.idmap -

stgraber@castiana:~$ lxc restart test

At which point, things should be working in the container:

stgraber@castiana:~$ lxc exec test -- su ubuntu -l
ubuntu@test:~$ ls -lh
total 119K
drwxr-xr-x 5  ubuntu ubuntu 8 Feb 18 2016 data
drwxr-x--- 4  ubuntu ubuntu 6 Jun 13 17:05 Desktop
drwxr-xr-x 3  ubuntu ubuntu 28 Jun 13 20:09 Downloads
drwx------ 84 ubuntu ubuntu 84 Sep 14 2016 Maildir
drwxr-xr-x 4  ubuntu ubuntu 4 May 20 15:38 snap
ubuntu@test:~$ 
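
As a final check, you can read back the container’s effective map; with the raw.idmap entries applied, the volatile.last_state.idmap value should now also include two Maprange:1 entries covering your own uid and gid:

stgraber@castiana:~$ lxc config get test volatile.last_state.idmap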

Conclusion

User namespaces, the kernel feature that makes those uid/gid mappings possible, is a very powerful tool which finally made containers on Linux safe by design. It is however not the easiest thing to wrap your head around, and all of that uid/gid map math can quickly become a major issue.

In LXD we’ve tried to expose just enough of those underlying features to be useful to our users while doing the actual mapping math internally. This makes things like the direct user/group mapping above significantly easier than it otherwise would be.

Going forward, we’re very interested in some of the work around uid/gid remapping at the filesystem level, this would let us decouple the on-disk user/group map from that used for processes, making it possible to share data between differently mapped containers and alter the various maps without needing to also remap the entire filesystem.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Discussion forum: https://discuss.linuxcontainers.org
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
K.Tsakalozos

Introduction

In this post we discuss our efforts to set up a federation of on-prem Kubernetes clusters using CoreDNS. The Kubernetes version used is 1.6.2. We use Juju to deliver clusters on AWS, yet the clusters should be considered on-prem since they do not integrate with any of the cloud’s features. The steps described here are repeatable on any pool of resources you may have available (cloud or bare metal). Note that this is still a work in progress and should not be used in a production environment.

What is Federation

Cluster Federation (Fig. from http://blog.kubernetes.io/2016/07/cross-cluster-services.html)

Cluster federation allows you to “connect” your Kubernetes clusters and manage certain functionality from a central point. The need for central management arises when you have more than one cluster, possibly spread across different locations. You may need a globally distributed infrastructure for performance, availability or even legal reasons. You might want to have isolated clusters satisfying the needs of different teams. Whatever the case may be, federation can assist in some of the ops tasks. You can find more on the benefits of federation in the official documentation.

Why Cluster Federation with Juju

Juju makes delivery of Kubernetes clusters dead simple! Juju builds an abstraction over the pool of resources you have and allows us to deliver Kubernetes to all the main public and private clouds (AWS, Azure, GCE, Openstack, Oracle etc) and bare metal physical clusters. Juju also enables you to experiment with small scale deployments stood up on your local machine using lxc containers.

Having the option to deploy a cluster with the ease Juju offers, federation comes as a natural next step. Deployed clusters should be considered on-prem clusters as they are exactly the same as if they were deployed on bare metal. This is simply because, even if you deploy on a cloud, Juju will provision machines (virtual machines in this case) and deploy Kubernetes as if it were a local on-prem deployment. You see, Juju is not magic after all!

There is an excellent blog post showing how you can place Juju clusters under a federation hosting cluster set up on GCE. Here we focus on a federation that has no dependencies on a specific cloud provider. To this end, we go with CoreDNS as a federation DNS provider.

Let’s Federate

Deploy clusters

Our setup is made of 3 clusters:

  • f1: will host the federation control plane
  • c1, c2: federated clusters

At the time of this writing (2nd of June 2017) cluster federation is in beta and is not yet integrated into the official Kubernetes charms. Therefore we will be using a patched version of charms and bundles (source of charms and cdk-addons, charms master and worker, and bundle).

We assume you already have Juju installed and a controller in place. If not have a look at the Juju installation instructions.

Let’s deploy these three clusters.

#> juju add-model f1
#> juju deploy cs:~kos.tsakalozos/bundle/federated-kubernetes-core-2
#> juju add-model c1
#> juju deploy cs:~kos.tsakalozos/bundle/federated-kubernetes-core-2
#> juju add-model c2
#> juju deploy cs:~kos.tsakalozos/bundle/federated-kubernetes-core-2

Here is what the bundle.yaml looks like: http://paste.ubuntu.com/24746868/

Let’s copy and merge the kubeconfig files.

#> juju switch f1
#> juju scp kubernetes-master/0:config ~/.kube/config
#> juju switch c1
#> juju scp kubernetes-master/0:config ~/.kube/config.c1
#> juju switch c2
#> juju scp kubernetes-master/0:config ~/.kube/config.c2

Using this script you can merge the configs:

#> ./merge-configs.py config.c1 c1
#> ./merge-configs.py config.c2 c2
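
If you would rather not use the helper script, kubectl itself can merge kubeconfig files through the KUBECONFIG variable (a sketch; note that the context names will then be whatever the per-cluster configs define, so the script’s renaming step is still handy):

#> KUBECONFIG=~/.kube/config:~/.kube/config.c1:~/.kube/config.c2 kubectl config view --flatten > /tmp/merged
#> mv /tmp/merged ~/.kube/config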

To enable CoreDNS on f1 you just need to:

#> juju switch f1
#> juju config kubernetes-master enable-coredns=True

Have a look at your three contexts:

#> kubectl config get-contexts
CURRENT   NAME         CLUSTER      AUTHINFO   NAMESPACE
          f1           f1           ubuntu
          c1           c1-cluster   c1-user
          c2           c2-cluster   c2-user

Create the federation

At this point you are ready to deploy the federation control plane on f1. To do so you will need a call to kubefed init. Kubefed init will deploy the required services and it will also perform a health check to see that everything is ready. During this last check kubefed init will try to reach a service on a port that is not open. At the time of this writing we have a patch waiting upstream to make the port configurable so you can open it before triggering the federation creation. If you do not want to wait for the patch to be merged, you can call kubefed init with -v 9 to see which port the services were assigned and expose them through Juju while keeping kubefed running. This will become clear shortly.

Create a coredns-provider.conf file with the etcd endpoint of f1 and your zone. You can look at the Juju status to find out the IP of the etcd machine:

#> cat ./coredns-provider.conf 
[Global]
etcd-endpoints = http://54.211.106.117:2379
zones = juju-fed.com.

Call kubefed

#> juju switch f1
#> kubefed init federation --host-cluster-context=f1 --dns-provider="coredns" --dns-zone-name="juju-fed.com." --dns-provider-config=coredns-provider.conf --etcd-persistent-storage=false --api-server-service-type="NodePort" -v 9 --api-server-advertise-address=34.224.101.147 --image=gcr.io/google_containers/hyperkube-amd64:v1.6.2

Note here that api-server-advertise-address=34.224.101.147 is the IP of the worker on your f1 cluster. You selected NodePort, so the federation service is available from all the worker nodes under a random port.
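
If you want to check which port the service was assigned, the federation API server service can also be inspected directly (a sketch; the federation-system namespace and federation-apiserver service name are assumptions based on the kubefed defaults of that era, so adjust to what the verbose output shows):

#> kubectl --context=f1 get svc federation-apiserver -n federation-system -o jsonpath='{.spec.ports[0].nodePort}'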

Kubefed will end up trying to do a health check. You should see on your console something similar to this: “GET https://34.224.101.147:30229/healthz”. The port 30229 is randomly selected and will differ in your case. While keeping kubefed init running, go ahead and open this port from a second terminal:

#> juju run --application kubernetes-worker "open-port 30229"

The federation context should be present in the list of contexts you have.

#> kubectl config get-contexts
CURRENT   NAME         CLUSTER      AUTHINFO     NAMESPACE
          f1           f1           ubuntu
          federation   federation   federation
          c1           c1-cluster   c1-user
          c2           c2-cluster   c2-user

Now you are ready to have other clusters join the federation:

#> kubectl config use-context federation
#> kubefed join c1 --host-cluster-context=f1
#> kubefed join c2 --host-cluster-context=f1

Two clusters should soon appear ready:

#> kubectl get clusters
NAME      STATUS    AGE
c1        Ready     3m
c2        Ready     1m

Congratulations! You are all set, your federation is ready to use.

What works

Set up your default namespace:

#> kubectl --context=federation create ns default

Get this replica set and create it:

#> kubectl --context=federation create -f nginx-rs.yaml

See your replica set created:

#> kubectl --context=federation describe rs/nginx
Name:           nginx
Namespace:      default
Selector:       app=nginx
Labels:         app=nginx
                type=demo
Annotations:    <none>
Replicas:       1 current / 1 desired
Pods Status:    error in fetching pods: the server could not find the requested resource
Pod Template:
  Labels:       app=nginx
  Containers:
   frontend:
    Image:       nginx
    Port:        80/TCP
    Environment: <none>
    Mounts:      <none>
  Volumes:      <none>
Events:
  FirstSeen  LastSeen  Count  From                             SubObjectPath  Type    Reason           Message
  ---------  --------  -----  ----                             -------------  ------  ------           -------
  26s        26s       1      federated-replicaset-controller                 Normal  CreateInCluster  Creating replicaset in cluster c2

Create a service:

#> kubectl --context=federation create -f beacon.yaml

Scale the replica set, and open ports so that the service is available (a sketch of the port-opening step follows):

#> kubectl --context=federation scale rs/nginx --replicas 3
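
The port-opening half of that step can go through Juju, just like the health-check port earlier. The NodePort below is a placeholder; look up whatever port the beacon service was actually assigned in each cluster and open that one (with Juju switched to the matching model):

#> kubectl --context=c2 get svc beacon -o jsonpath='{.spec.ports[0].nodePort}'
#> juju run --application kubernetes-worker "open-port 30080"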

What does not work yet

Federation does not seem to handle NodePort services. To see this, you can deploy a busybox and try to resolve the name of the service you deployed right above.

Use this yaml:

#> kubectl --context=c2 create -f bbox-4-test.yaml
#> kubectl --context=c2 exec -it busybox -- nslookup beacon

This returns a local cluster IP, which is rather unexpected.

There are some configuration options that did not have the effect I was expecting. First, we would want to expose the CoreDNS service as described in the docs. Assigning the same NodePort for both UDP and TCP had no effect; this is rather unfortunate since the issue has already been addressed. The configuration described here and here (although it should be applied automatically, I thought I would give it a try) did not seem to have any effect.

Conclusions

Federation of on-prem clusters has come a long way and is getting significant improvements with every release. Some of the issues described here have already been addressed; I am eagerly waiting for the upcoming v1.7.0 release to see the fixes in action.

Do not hesitate to reach out to the sig-federation group, but most importantly I would be extremely happy and grateful if you pointed out some bug in my configs/code/thoughts ;)

References and Links

  1. Official federations docs, https://kubernetes.io/docs/concepts/cluster-administration/federation/
  2. Juju docs, https://jujucharms.com/
  3. Canonical Distribution of Kubernetes, https://jujucharms.com/canonical-kubernetes/
  4. Federation with host on GCE, https://medium.com/google-cloud/experimenting-with-cross-cloud-kubernetes-cluster-federation-dfa99f913d54
  5. CDK-addons source, https://github.com/juju-solutions/cdk-addons/tree/feature/coredns
  6. Kubernetes charms, https://github.com/juju-solutions/kubernetes/tree/feature/coredns
  7. Kubernetes master charm, https://jujucharms.com/u/kos.tsakalozos/kubernetes-master/31
  8. Kubernetes worker charm, https://jujucharms.com/u/kos.tsakalozos/kubernetes-worker/17
  9. Kubernetes bundle used for federation, https://jujucharms.com/u/kos.tsakalozos/federated-kubernetes-core/bundle/0
  10. Juju getting started, https://jujucharms.com/docs/stable/getting-started
  11. Patch kubefed init, https://github.com/kubernetes/kubernetes/pull/46283
  12. CoreDNS setup instructions, https://kubernetes.io/docs/tasks/federation/set-up-coredns-provider-federation/#setup-coredns-server-in-nameserver-resolvconf-chain
  13. CoreDNS yaml file, https://github.com/juju-solutions/cdk-addons/blob/feature/coredns/coredns/coredns-rbac.yaml#L95
  14. Nodeport UDP/TCP issue, https://github.com/kubernetes/kubernetes/issues/20092
  15. Federation DNS on-prem configuration, https://kubernetes.io/docs/admin/federation/#kubernetes-15-passing-federations-flag-via-config-map-to-kube-dns

Read more
admin

Thursday June 8th, 2017

The MAAS team is happy to announce the introduction of development summaries. We hope this helps to keep our community engaged and informed about the work the team is doing. We’ll cover important announcements, work-in-progress for the next release of MAAS, and bugs fixed in released MAAS versions.

Announcements

With the MAAS 2.2 release out of the door, we are happy to announce that:

  • MAAS 2.3 is now open for development.
  • MAAS is moving to Git in Launchpad – in the coming weeks, the MAAS source will be hosted in a Git repository in Launchpad, once we complete the work of updating all our internal processes (e.g. CI, landers, etc.).

MAAS 2.3 (current development release)

With the team now focusing its efforts on the new development release, MAAS 2.3, work has been underway on the following features and improvements:

  • Started adding support for Django 1.11 – MAAS will continue to be backward compatible with Django 1.8.
  • Adding support for an ‘upstream’ proxy – MAAS-deployed machines will continue to use MAAS’ internal proxy, while allowing MAAS’ proxy to communicate with an upstream proxy.
  • Started adding network beaconing – a new feature to support better network (subnets, VLANs) discovery and allow fabric deduplication.
    • Officially registered IPv4 and IPv6 multicast groups for MAAS beaconing (224.0.0.118 and ff02::15a, respectively).
    • Implemented a mechanism to provide authenticated encryption using the MAAS shared secret.
    • Prototyped initial beaconing multicast join mechanism and receive path.

Libmaas (python-libmaas)

With the continuous improvement of the new MAAS Python library (python-libmaas), we have focused our efforts on the following improvements over the past week:

  • Add support to be able to provide nested objects and object sets.
  • Add support to be able to update any object accessible via the library.
  • Add ability to read interfaces (nested) under Machines, Devices, Rack Controllers and Region Controllers.
  • Add ability to read VLANs (nested) under Fabrics

Bug Fixes

The following issues have been fixed and backported to the MAAS 2.2 branch. They will be available in the next point release of MAAS 2.2 (2.2.1) in the coming weeks:

  • Bug #1694767: RSD composition not setting local disk tags
  • Bug #1694759: RSD Pod refresh shows ComposedNodeState is “Failed”
  • Bug #1695083: Improve NTP IP address selection for MAAS DHCP clients.

Questions?

IRC – Find us on #maas @ freenode.

ML – https://lists.ubuntu.com/mailman/listinfo/maas-devel

Read more
Colin Watson

Here’s a brief changelog for this month.

Bugs

  • Export searchTasks for the top-level bugs collection on the webservice, and implement a global bugs feed to go with it (#434244)

Code

  • Fix git-to-git code imports on xenial
  • Backport upstream serf commit to fix svn-to-bzr code imports on xenial (#1690613)
  • Fix crash when unlinking a bug from a Git-based MP in UpdatePreviewDiffJob
  • Handle revision ID passed to BranchMergeProposal.setStatus for transitions to MERGED

Infrastructure

  • Fix processing of purchased Launchpad commercial subscriptions
  • Some progress towards converting the build system to pip, though there’s quite a bit more work to do there

Registry

  • Remove extra internal slashes from URLs requested by the mirror prober (#1692347)

Snappy

  • Record the branch revision used to build a snap and return it along with other XML-RPC status information (#1679157)
  • Configure a git:// proxy for snap builds (#1663920)
  • Allow configuring a snap to build from the current branch of a Git repository rather than explicitly naming a branch (#1688224)

Soyuz (package management)

  • Precache permissions for archives returned by Person.getVisiblePPAs (#1685202)
  • Drop requirement for source .buildinfo files to be signed
  • Make DistroSeries:+queue link to upload files via the webapp, to help dget users

Read more
K.Tsakalozos

Juju Made the Deadline

Ok… I am exaggerating. Juju did not make the deadline; Panagiotis and his co-authors made the deadline with their hard work. Ah, ok, you caught me lying again… One of Panagiotis’s co-authors (me) did not work hard; actually, he did not work on the paper at all! Normally, I would publicly apologize, but let me first explain why I am in the author list.

Panagiotis is a PhD researcher at the University of Athens. He is also a member of the Madgik lab, which is where I know him from. Panagiotis is interested in Graphs, so he is into Apache Giraph and GraphX for Apache Spark. Knowing how frustrating his work may get, I felt obliged to introduce him to Juju. The goal was to save him some time from deploying and configuring infrastructures and have him focus on real work.

Juju offers an often overlooked feature that proves to be immensely useful to researchers and people who just want to experiment with some software without committing to it. You can deploy an infrastructure on your local machine in less than 10 minutes, take it for a test drive, and then, when you are happy, move to a cloud and test at scale. In the case of the Madgik lab, where Panagiotis is, getting cloud resources includes contacting the IT department and waiting for them to find time and resources. I think I saw a spark in Panagiotis’s eyes when I showed him my laptop, or was it the reflection of the Spark infrastructure running in LXC containers, I don’t remember. He immediately showed me a Spark deployment of his own with a bunch of worker nodes that had taken him a week to set up. After all this time, he was still not sure if that configuration was appropriate for running tests and publishing results. What if he had misconfigured something? What if a minor config change (e.g.: cache size) would skew the results? That is not to say Juju has a magic way to optimally tune the infrastructure for your needs, but we try to choose sensible configuration defaults based on the feedback we get from the community.

Panagiotis’s office looked chaotic: computer towers, screens and keyboards all over the place. Funny how he had prepared a VM for us to work on! Setting up Juju was flawless. Deploying a mapreduce processing bundle and scaling it within the limits of 48GB of RAM and 16 cores was a piece of cake. We even double-checked that after a reboot all nodes would come back up. All this under the eye of George, the head of the lab’s IT. After an hour or so, we said our goodbyes.

A few weeks later, I received an email titled “SocInf 2016 submission” with my name in the author list. Surprised, I asked, “Why?” The response was: “without Juju we wouldn’t have made the deadline!”

So… you will find us at the 2nd International Workshop on Social Influence Analysis, co-located with the International Joint Conference on Artificial Intelligence (IJCAI 2016) in NY on the 9th of July. The paper is titled Pinpointing Influence in Pinterest. Kudos to Panagiotis Liakos, Katia Papakonstantinopoulou, Michael Sioutis, Alex Delis and the Juju Big Data team.

Original article

Originally published at insights.ubuntu.com on June 15, 2016.

Read more
Robin Winslow

Nowadays free software is everywhere – from browsers to encryption software to operating systems.

Even so, it is still relatively rare for the code behind websites and services to be opened up.

Stepping into the open

Three years ago we started to move our website projects to GitHub, and we also took this opportunity to start making them public. We started with the www.ubuntu.com codebase, and over the next couple of years almost all our team’s other sites have followed suit.

canonical-websites org

At this point practically all the web team’s sites are open source, and you can find the code for each site in our canonical-websites organisation.

www.ubuntu.com developer.ubuntu.com www.canonical.com
partners.ubuntu.com design.ubuntu.com maas.io
tour.ubuntu.com snapcraft.io build.snapcraft.io
cn.ubuntu.com jp.ubuntu.com conjure-up.io
docs.ubuntu.com tutorials.ubuntu.com cloud-init.io
assets.ubuntu.com manager.assets.ubuntu.com vanillaframework.io

We’ve tried to make it as easy as possible to get them up and running, with accurate and simple README files. Each of our projects can be run in much the same way, and should work the same across Linux and macOS systems. I’ll elaborate more on how we manage this in a future post.

README example

We also have many supporting projects – Django modules, snap packages, Docker images etc. – which are all openly available in our canonical-webteam organisation.

Reaping the benefits

Opening up our sites in this way means that anyone can help out by making suggestions in issues or directly submitting fixes as pull requests. Both are hugely valuable to our team.

Another significant benefit of opening up our code is that it’s actually much easier to manage:

  • It’s trivial to connect third party services, like Travis, Waffle or Percy;
  • Similarly, our own systems – such as our Jenkins server – don’t need special permissions to access the code;
  • And we don’t need to worry about carefully managing user permissions for read access inside the organisation.

All of these tasks were previously surprisingly time-consuming.

Designing in the open

Shortly after we opened up the www.ubuntu.com codebase, the design team also started designing in the open, as Anthony Dillon recently explained.

Read more
Michael Hall

After a little over 6 years, I am embarking on a new adventure. Today is my last day at Canonical; it’s bittersweet saying goodbye precisely because it has been such a joy and an honor to be working here with so many amazing, talented and friendly people. But I am leaving by choice, and for an opportunity that makes me as excited as leaving makes me sad.

Goodbye Canonical

I’ve worked at Canonical longer than I’ve worked at any company, and I can honestly say I’ve grown more here both personally and professionally than I have anywhere else. It launched my career as a Community Manager, learning from the very best in the industry how to grow, nurture, and excite a world full of people who share the same ideals. I owe so many thanks (and beers) to Jono Bacon, David Planella, Daniel Holbach, Jorge Castro, Nicholas Skaggs, Alan Pope, Kyle Nitzsche and now also Martin Wimpress. I also couldn’t have done any of this without the passion and contributions of everybody in the Ubuntu community who came together around what we were doing.

As everybody knows by now, Canonical has been undergoing significant changes in order to set it down the road to where it needs to be as a company. And while these changes aren’t the reason for my leaving, it did force me to think about where I wanted to go with my future, and what changes were needed to get me there. Canonical is still doing important work, I’m confident it’s going to continue making a huge impact on the technology and open source worlds and I wish it nothing but success. But ultimately I decided that where I wanted to be was along a different path.

Of course I have to talk about the Ubuntu community here. As big of an impact as Canonical had on my life, it’s only a portion of the impact that the community has had. From the first time I attended a Florida LoCo Team event, I was hooked. I had participated in open source projects before, but that was when I truly understood what the open source community was about. Everybody I met, online or in person, went out of their way to make me feel welcome, valuable, and appreciated. In fact, it was the community that led me to work for Canonical in the first place, and it was the community work I did that played a big role in me being qualified for the job. I want to give a special shout out to Daniel Holbach and Jorge Castro, who built me up from a random contributor to a project owner, and to Elizabeth Joseph and Laura Faulty who encouraged me to take on leadership roles in the community. I’ve made so many close and lasting friendships by being a part of this amazing group of people, and that’s something I will value forever. I was a community member for years before I joined Canonical, and I’m not going anywhere now. Expect to see me around on IRC, mailing lists and other community projects for a long time to come.

Hello Endless

Next week I will be joining the team at Endless as their Community Manager. Endless is an order of magnitude smaller than Canonical, and they have a young community that is still getting off the ground. So even though I’ll have the same role I had before, there will be new and exciting challenges involved. But the passion is there, both in the company and the community, to really explode into something big and impactful. In the coming months I will be working to set up the tools, processes and communication that will be needed to help that community grow and flourish. After meeting with many of the current Endless employees, I know that my job will be made easier by their existing commitment to both their own community and their upstream communities.

What really drew me to Endless was the company’s mission. It’s not just about making a great open source project that is shared with the world, they have a specific focus on social good and improving the lives of people who the current technology isn’t supporting. As one employee succinctly put it to me: the whole world, empowered. Those who know me well will understand why this resonates with me. For years I’ve been involved in open source projects aimed at early childhood education and supporting those in poverty or places without the infrastructure that most modern technology requires. And while Ubuntu covers much of this, it wasn’t the primary focus. Being able to work full time on a project that so closely aligned with my personal mission was an opportunity I couldn’t pass up.

Broader horizons

Over the past several months I’ve been expanding the number of communities I’m involved in. This is going to increase significantly in my new role at Endless, where I will be working more frequently with upstream and side-stream projects on areas of mutual benefit and interest. I’ve already started to work more with KDE, and I look forward to becoming active in GNOME and other open source desktops soon.

I will also continue to grow my independent project, Phoenicia, which has a similar mission to Endless but a different technology and audience. Now that it is no longer competing in the XPRIZE competition, some of the restrictions we had to operate under have been lifted, freeing us to investigate new areas of innovation and collaboration. If you’re interested in game development, or in making an impact on the lives of children around the world, come and see what we’re doing.

If anybody wants to reach out to me to chat, you can still reach me at mhall119@ubuntu.com and soon at mhall119@endlessm.com, tweet me at @mhall119, connect on LinkedIn, chat on Telegram or circle me on Google+. And if we’re ever at a conference together give me a shout, I’d love to grab a drink and catch up.

Read more
Colin Ian King

What is new in FWTS 17.05.00?

Version 17.05.00 of the Firmware Test Suite was released this week as part of the regular end-of-month release cadence. So what is new in this release?

  • Alex Hung has been busy bringing the SMBIOS tests in sync with the SMBIOS 3.1.1 standard
  • IBM provided some OPAL (OpenPower Abstraction Layer) firmware tests:
    • Reserved memory DT validation tests
    • Power management DT validation tests
  • The first fwts snap was created (see the install sketch below)
  • Over 40 bugs were fixed

As ever, we are grateful for all the community contributions to FWTS. The full release details are available from the fwts-devel mailing list.
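
If you want to try the snap, installing it should be as simple as the following (a sketch; it assumes snapd is available, and the published channel may differ):

 sudo snap install fwts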

I expect that the next upcoming ACPICA release will be integrated into the 17.06.00 FWTS release next month.

Read more
admin

I’m happy to announce that MAAS 2.2.0 (final) has now been released, and it introduces quite a few exciting features:

  • MAAS Pods – Ability to dynamically create a machine on demand. This is reflected in MAAS’ support for Intel Rack Scale Design.
  • Hardware Testing
  • DHCP Relay Support
  • Unmanaged Subnets
  • Switch discovery and deployment on Facebook’s Wedge 40 & 100.
  • Various improvements and minor features.
  • MAAS Client Library
  • Intel Rack Scale Design support.

For more information, please read the release notes, which are available here.

Availability
MAAS 2.2.0 is currently available in the following MAAS team PPA.
ppa:maas/next
Please note that MAAS 2.2 will replace the MAAS 2.1 series, which will go out of support. We are holding MAAS 2.2 in the above PPA for a week, to give users enough notice that it will replace the 2.1 series. In the following weeks, MAAS 2.2 will be backported into Ubuntu Xenial.
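
A typical way to pick up the release from that PPA (a sketch; it assumes an Ubuntu Xenial host):

sudo add-apt-repository ppa:maas/next
sudo apt update
sudo apt install maas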

Read more
facundo


An idea that had been bouncing around in my head since early last year finally came together, after taking several months to materialize: I will be giving an Introduction to Python seminar together with a company, with the goal of lowering the cost of the course for attendees (the company covers part of it), and in this way being able to do something longer and for more people.

The company I will be running this seminar with is Onapsis, which is quite close to the Python Argentina community: it has long been a sponsor of events, charters the famous "pybuses" to get people to the PyCons, has hosted a meetup, etc., etc.

The seminar is open to the general public, and it will be 16 hours in total, over four Saturday mornings in July, in CABA.

The cost is very affordable, $600, since Onapsis covers part of it; the idea is to make it cheap so that as many people as possible can attend. Even so, seats are limited (the office has a cap), so the sooner you reserve your spot, the better.

At the end of the seminar I will hand out a certificate of attendance, plus the entire course in electronic format.

To make a reservation, send me an email so I can confirm availability and give you the details needed to make the payment (which can be by deposit, bank transfer, credit card, debit card, etc.).

All the details of the course are here.

Read more
facundo

fades 6 released


The latest version of fades is out: the system that automatically handles virtualenvs in the cases you normally run into when writing scripts and small programs, and that even helps you manage larger projects.

This is one of the releases in which we landed the most changes! These are just some of the items from the changelog:

- It installs not only from PyPI but also from remote repositories (GitHub, Bitbucket, Launchpad, etc.) and local directories

    fades -d git+https://github.com/yandex/gixy.git@v0.1.3

    fades -d file://$PATH_TO_PROJECT

- We made a video showing the most relevant fades features

- It selects the best stored virtualenv when more than one matches

- We added a --clean-unused-venvs option to delete all the virtualenvs that have not been used in the last given number of days

    fades --clean-unused-venvs=30

- We added a --pip-options option to pass whatever parameters are needed down to the underlying pip execution

    fades -d requests --pip-options="--no-cache-dir"

The full list of changes is in the formal release notes, this is the complete documentation, and here is how to install and enjoy it.

Read more
Colin Ian King

The Firmware Test Suite (FWTS) has an easy-to-use text-based front-end that is primarily used by the FWTS Live-CD image, but it can also be used in the Ubuntu terminal.

To install and run the front-end use:

sudo apt-get install fwts-frontend
sudo fwts-frontend-text

…and one should see a menu of options:


In this demonstration, the "All Batch Tests" option has been selected:


Tests will be run one by one and a progress bar shows the progress of each test. Some tests run very quickly, others can take several minutes depending on the hardware configuration (such as number of processors).

Once the tests are all complete, the following dialogue box is displayed:


The test has saved several files into the directory /fwts/15052017/1748/ and selecting “Yes” lets one view the results log in a scroll-box:


Exiting this, the FWTS frontend dialog is displayed:


Press enter to exit (note that the Poweroff option is just for the fwts Live-CD image version of fwts-frontend).

The tool dumps various logs; for example, the above run generated:

 ls -alt /fwts/15052017/1748/  
total 1388
drwxr-xr-x 5 root root 4096 May 15 18:09 ..
drwxr-xr-x 2 root root 4096 May 15 17:49 .
-rw-r--r-- 1 root root 358666 May 15 17:49 acpidump.log
-rw-r--r-- 1 root root 3808 May 15 17:49 cpuinfo.log
-rw-r--r-- 1 root root 22238 May 15 17:49 lspci.log
-rw-r--r-- 1 root root 19136 May 15 17:49 dmidecode.log
-rw-r--r-- 1 root root 79323 May 15 17:49 dmesg.log
-rw-r--r-- 1 root root 311 May 15 17:49 README.txt
-rw-r--r-- 1 root root 631370 May 15 17:49 results.html
-rw-r--r-- 1 root root 281371 May 15 17:49 results.log

acpidump.log is a dump of the ACPI tables in a format compatible with the ACPICA acpidump tool. The results.log file is a copy of the results generated by FWTS, and results.html is an HTML-formatted version of the log.
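
To pull a quick pass/fail summary out of the results log, a simple grep is enough (a sketch; the exact wording of the summary lines can vary between FWTS versions):

 grep -E "PASSED|FAILED" /fwts/15052017/1748/results.log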

Read more
facundo

March against the “2x1” ruling


Yesterday I went to the march against the ruling by the shoddy Supreme Court we have these days, in which they intend to apply an old law that is no longer in force in order to reduce the sentences of criminals convicted of crimes against humanity.

It was impressive. And it was necessary. As I read somewhere, there is a time for virtual social networks, but there is also a time to go and put your body on the line.

A panoramic view of the square

Another panoramic view of the square

Not a single genocidal criminal walking free. We do not want those murderous beasts to enjoy even one day of freedom; let them die in an ordinary prison.

The HIJOS column entering the square

“This time we are not going to say ‘thank you for joining us’, because all of us who are here are here because we repudiate the Court’s decision. We are here celebrating, because the people, united, will never be defeated,” said Taty Almeida at one point, and continued: “The human rights organizations say never again to impunity, never again torturers, rapists, child appropriators. Never again privileges for the perpetrators of crimes against humanity. Never again state terrorism. Never again genocidal criminals on the loose. Never again silence. We do not want to live alongside the bloodiest murderers in our history.”

The main speakers

Estela Carlotto also spoke: “The dictatorship is not an event of the distant past. May the judicial establishment hear us, because we will not give up our national and international demand in defense of the rights we have won. Raise your headscarves! For the 30,000 disappeared!”

Headscarves

There were half a million of us. Plus thirty thousand who were not there, but were.
Read more
Colin Ian King

Simple job scripting in stress-ng 0.08.00

The latest release of stress-ng, 0.08.00, contains a new job scripting feature. Jobs allow one to bundle up a set of stress options into a script rather than cramming them all onto the command line. One can now also run multiple invocations of a stressor with the latest version of stress-ng, and combined with job scripts this gives us a powerful way of running more complex stress tests.

The job script commands are essentially the stress-ng long options without the need for the '--' option characters.  One option per line is allowed.

For example:

 $ stress-ng --verbose --tz --timeout 60s --cpu 1 --matrix 1 --icache 1

would become:

 $ cat example.job
verbose
tz
timeout 60
cpu 1
matrix 1
icache 1

One can also add comments using the # character prefix. By default the stressors will be run in parallel, but one can use the "run sequential" command in the job script to run the stressors sequentially.

The following script runs the mmap stressor multiple times using more memory on each run:

 $ cat mmap.job  
run sequential # one job at a time
timeout 2m # run for 2 minutes
verbose # verbose output
#
# run 4 invocations and increase memory each time
#
mmap 1
mmap-bytes 25%
mmap 1
mmap-bytes 50%
mmap 1
mmap-bytes 75%
mmap 1
mmap-bytes 100%

Some of the stress-ng stressors have various "methods" that allow one to modify the way the stressor behaves. The following example shows how job scripts can be used to exercise a system using different stressor methods:

 $ cat /usr/share/stress-ng/example-jobs/matrix-methods.job   
#
# hot-cpu class stressors:
# various options have been commented out, one can remove the
# proceeding comment to enable these options if required.
#
# run the following tests in parallel or sequentially
#
run sequential
# run parallel
#
# verbose
# show all debug, warnings and normal information output.
#
verbose
#
# run each of the tests for 60 seconds
# stop stress test after N seconds. One can also specify the units
# of time in seconds, minutes, hours, days or years with the suf‐
# fix s, m, h, d or y.
#
timeout 1m
# tz
# collect temperatures from the available thermal zones on the
# machine (Linux only). Some devices may have one or more thermal
# zones, where as others may have none.
tz
#
# matrix stressor with examples of all the methods allowed
#
# start N workers that perform various matrix operations on float‐
# ing point values. By default, this will exercise all the matrix
# stress methods one by one. One can specify a specific matrix
# stress method with the --matrix-method option.
#
#
# Method Description
# all iterate over all the below matrix stress methods
# add add two N × N matrices
# copy copy one N × N matrix to another
# div divide an N × N matrix by a scalar
# hadamard Hadamard product of two N × N matrices
# frobenius Frobenius product of two N × N matrices
# mean arithmetic mean of two N × N matrices
# mult multiply an N × N matrix by a scalar
# prod product of two N × N matrices
# sub subtract one N × N matrix from another N × N matrix
# trans transpose an N × N matrix
#
matrix 0
matrix-method all
matrix 0
matrix-method add
matrix 0
matrix-method copy
matrix 0
matrix-method div
matrix 0
matrix-method frobenius
matrix 0
matrix-method hadamard
matrix 0
matrix-method mean
matrix 0
matrix-method mult
matrix 0
matrix-method prod
matrix 0
matrix-method sub
matrix 0
matrix-method trans

Various example job scripts can be found in /usr/share/stress-ng/example-jobs; one can use these as a base for writing more complex job scripts. The example jobs have all the options commented (using the text from the stress-ng manual) to make it easier to see how each stressor can be run.
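
To actually run one of these, point stress-ng at the job file with the new --job option (a quick sketch, using the mmap.job file from earlier):

 $ stress-ng --job mmap.job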

Version 0.08.00 landed in Ubuntu 17.10 Artful Aardvark and is available as a snap and I've got backports in ppa:colin-king/white for older releases of Ubuntu.

Read more
Colin Watson

Well, it’s been a while!  Since we last posted a general update, the Launchpad team has become part of Canonical’s Online Services department, so some of our efforts have gone into other projects.  There’s still plenty happening with Launchpad, though, and here’s a changelog-style summary of what we’ve been up to.

Answers

  • Lock down question title and description edits from random users
  • Prevent answer contacts from editing question titles and descriptions
  • Prevent answer contacts from editing FAQs

Blueprints

  • Optimise SpecificationSet.getStatusCountsForProductSeries, fixing Product:+series timeouts
  • Add sprint deletion support (#2888)
  • Restrict blueprint count on front page to public blueprints

Build farm

  • Add fallback if nominated architecture-independent architecture is unavailable for building (#1530217)
  • Try to load the nbd module when starting launchpad-buildd (#1531171)
  • Default LANG/LC_ALL to C.UTF-8 during binary package builds (#1552791)
  • Convert buildd-manager to use a connection pool rather than trying to download everything at once (#1584744)
  • Always decode build logtail as UTF-8 rather than guessing (#1585324)
  • Move non-virtualised builders to the bottom of /builders; Ubuntu is now mostly built on virtualised builders
  • Pass DEB_BUILD_OPTIONS=noautodbgsym during binary package builds if we have not been told to build debug symbols (#1623256)

Bugs

  • Use standard milestone ordering for bug task milestone choices (#1512213)
  • Make bug activity records visible to anonymous API requests where appropriate (#991079)
  • Use a monospace font for “Add comment” boxes for bugs, to match how the comments will be displayed (#1366932)
  • Fix BugTaskSet.createManyTasks to map Incomplete to its storage values (#1576857)
  • Add basic GitHub bug linking (#848666)
  • Prevent rendering of private team names in bugs feed (#1592186)
  • Update CVE database XML namespace to match current file on cve.mitre.org
  • Fix Bugzilla bug watches to support new versions that permit multiple aliases
  • Sort bug tasks related to distribution series by series version rather than series name (#1681899)

Code

  • Remove always-empty portlet from Person:+branches (#1511559)
  • Fix OOPS when editing a Git repository with over a thousand refs (#1511838)
  • Add Git links to DistributionSourcePackage:+branches and DistributionSourcePackage:+all-branches (#1511573)
  • Handle prerequisites in Git-based merge proposals (#1489839)
  • Fix OOPS when trying to register a Git merge with a target path but no target repository
  • Show an “Updating repository…” indication when there are pending writes
  • Launchpad’s Git hosting backend is now self-hosted
  • Fix setDefaultRepository(ForOwner) to cope with replacing an existing default (#1524316)
  • Add “Configure Code” link to Product:+git
  • Fix Git diff generation crash on non-ASCII conflicts (#1531051)
  • Fix stray link to +editsshkeys on Product:+configure-code when SSH keys were already registered (#1534159)
  • Add support for Git recipes (#1453022)
  • Fix OOPS when adding a comment to a Git-based merge proposal without using AJAX (#1536363)
  • Fix shallow git clones over HTTPS (#1547141)
  • Add new “Code” portlet on Product:+index to make it easier to find source code (#531323)
  • Add missing table around widget row on Product:+configure-code, so that errors are highlighted properly (#1552878)
  • Sort GitRepositorySet.getRepositories API results to make batching reliable (#1578205)
  • Show recent commits on GitRef:+index
  • Show associated merge proposals in Git commit listings
  • Show unmerged and conversation-relevant Git commits in merge proposal views (#1550118)
  • Implement AJAX revision diffs for Git
  • Fix scanning branches with ghost revisions in their ancestry (#1587948)
  • Fix decoding of Git diffs involving non-UTF-8 text that decodes to unpaired surrogates when treated as UTF-8 (#1589411)
  • Fix linkification of references to Git repositories (#1467975)
  • Fix +edit-status for Git merge proposals (#1538355)
  • Include username in git+ssh URLs (#1600055)
  • Allow linking bugs to Git-based merge proposals (#1492926)
  • Make Person.getMergeProposals have a constant query count on the webservice (#1619772)
  • Link to the default git repository on Product:+index (#1576494)
  • Add Git-to-Git code imports (#1469459)
  • Improve preloading of {Branch,GitRepository}.{landing_candidates,landing_targets}, fixing various timeouts
  • Export GitRepository.getRefByPath (#1654537)
  • Add GitRepository.rescan method, useful in cases when a scan crashed

Infrastructure

  • Launchpad’s SSH endpoints (bazaar.launchpad.net, git.launchpad.net, upload.ubuntu.com, and ppa.launchpad.net) now support newer key exchange and MAC algorithms, allowing compatibility with OpenSSH >= 7.0 (#1445619)
  • Make cross-referencing code more efficient for large numbers of IDs (#1520281)
  • Canonicalise path encoding before checking a librarian TimeLimitedToken (#677270)
  • Fix Librarian to generate non-cachable 500s on missing storage files (#1529428)
  • Document the standard DELETE method in the apidoc (#753334)
  • Add a PLACEHOLDER account type for use by SSO-only accounts
  • Add support to +login for acquiring discharge macaroons from SSO via an OpenID exchange (#1572605)
  • Allow managing SSH keys in SSO
  • Re-raise unexpected HTTP errors when talking to the GPG key server
  • Ensure that the production dump is usable before destroying staging
  • Log SQL statements as Unicode to avoid confusing page rendering when the visible_render_time flag is on (#1617336)
  • Fix the librarian to fsync new files and their parent directories
  • Handle running Launchpad from a Git working tree
  • Handle running Launchpad on Ubuntu 16.04 (upgrade currently in progress)
  • Fix delete_unwanted_swift_files to not crash on segments (#1642411)
  • Update database schema for PostgreSQL 9.5 and 9.6
  • Check fingerprints of keys received from the keyserver rather than trusting it implicitly

Registry

  • Make public SSH key records visible to anonymous API requests (#1014996)
  • Don’t show unpublished packages or package names from private PPAs in search results from the package picker (#42298, #1574807)
  • Make Person.time_zone always be non-None, allowing us to easily show the edit widget even for users who have never set their time zone (#1568806)
  • Let latest questions, specifications and products be efficiently calculated
  • Let project drivers edit series and product releases, as series drivers can; project drivers should have series-driver power over all series
  • Fix misleading messages when joining a delegated team
  • Allow team privacy changes when referenced by CodeReviewVote.reviewer or BugNotificationRecipient.person
  • Don’t limit Person:+related-projects to a single batch
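
Since public SSH key records are now readable anonymously, a minimal launchpadlib sketch like this should work; the username is just an illustration:

    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_anonymously('example', 'production')
    # Person.sshkeys no longer requires authentication for public records.
    for key in lp.people['example-user'].sshkeys:
        print(key.keytype, key.comment)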

Snappy

  • Add webhook support for snaps (#1535826; see the sketch after this list)
  • Allow deleting snaps even if they have builds
  • Provide snap builds with a proxy so that they can access external network resources
  • Add support for automatically uploading snap builds to the store (#1572605)
  • Update latest snap builds table via AJAX
  • Add option to trigger snap builds when top-level branch changes (#1593359)
  • Add processor selection in new snap form
  • Add option to automatically release snap builds to store channels after upload (#1597819)
  • Allow manually uploading a completed snap build to the store
  • Upload *.manifest files from builders as well as *.snap (#1608432)
  • Send an email notification for general snap store upload failures (#1632299)
  • Allow building snaps from an external Git repository
  • Move upload to FAILED if its build was deleted (e.g. because of a deleted snap) (#1655334)
  • Consider snap/snapcraft.yaml and .snapcraft.yaml as well as snapcraft.yaml for new snaps (#1659085)
  • Add support for building snaps with classic confinement (#1650946)
  • Fix builds_for_snap to avoid iterating over an unsliced DecoratedResultSet (#1671134)
  • Add channel track support when uploading snap builds to the store (contributed by Matias Bordese; #1677644)
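
A hedged sketch of the snap webhook support; the snap name, delivery URL, and event type are illustrative, and you need edit rights on the snap:

    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_with('example', 'production', version='devel')
    snap = lp.snaps.getByName(owner=lp.me, name='my-snap')
    # Matching events will be POSTed to the given URL.
    snap.newWebhook(delivery_url='https://example.com/notify',
                    event_types=['snap:build:0.1'], active=True)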

Soyuz (package management)

  • Remove some more uses of the confusing .dsc component; add the publishing component to SourcePackage:+index in compensation
  • Add include_meta option to SPPH.sourceFileUrls, paralleling BPPH.binaryFileUrls
  • Kill debdiff after ten minutes or 1GiB of output by default, and make sure we clean up after it properly (#314436)
  • Fix handling of << and >> dep-waits
  • Allow PPA admins to set external_dependencies on individual binary package builds (#671190)
  • Fix NascentUpload.do_reject to not send an erroneous Accepted email (#1530220)
  • Include DEP-11 metadata in Release file if it is present
  • Consistently generate Release entries for uncompressed versions of files, even if they don’t exist on the filesystem; don’t create uncompressed Packages/Sources files on the filesystem
  • Handle Build-Depends-Arch and Build-Conflicts-Arch from SPR.user_defined_fields in Sources generation and SP:+index (#1489044)
  • Make index compression types configurable per-series, and add xz support (#1517510)
  • Use SHA-512 digests for GPG signing where possible (#1556666)
  • Re-sign PPAs with SHA-512
  • Publish by-hash index files (#1430011)
  • Show SHA-256 checksums rather than MD5 on DistributionSourcePackageRelease:+files (#1562632)
  • Add a per-series switch allowing packages in supported components to build-depend on packages in unsupported components, used for Ubuntu 16.04 and later
  • Expand archive signing to kernel modules (contributed by Andy Whitcroft; #1577736)
  • Uniquely index PackageDiff(from_source, to_source) (part of #1475358)
  • Handle original tarball signatures in source packages (#1587667)
  • Add signed checksums for published UEFI/kmod files (contributed by Andy Whitcroft; #1285919)
  • Add support for named authentication tokens for private PPAs
  • Show explicit add-apt-repository command on Archive:+index (#1547343)
  • Use a per-archive OOPS timeline in archivepublisher scripts
  • Link to package versions on DSP:+index using fmt:url rather than just a relative link to the version, to avoid problems with epochs (#1629058)
  • Fix RepositoryIndexFile to gzip without timestamps
  • Fix Archive.getPublishedBinaries API call to have a constant query count (#1635126; see the sketch after this list)
  • Include the package name in package copy job OOPS reports and emails (#1618133)
  • Remove headers from Contents files (#1638219)
  • Notify the Changed-By address for PPA uploads if the .changes contains “Launchpad-Notify-Changed-By: yes” (#1633608)
  • Accept .debs containing control.tar.xz (#1640280)
  • Add Archive.markSuiteDirty API call to allow requesting that a given archive/suite be published
  • Don’t allow cron-control to interrupt publish-ftpmaster part-way through (#1647478)
  • Optimise non-SQL time in PublishingSet.requestDeletion (#1682096)
  • Store uploaded .buildinfo files (#1657704)
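
As a rough illustration of the Archive API entries above, a launchpadlib sketch; the PPA reference is hypothetical:

    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_anonymously('example', 'production', version='devel')
    archive = lp.archives.getByReference(reference='~owner/ubuntu/ppa')
    # getPublishedBinaries now keeps a constant query count (#1635126).
    for bpph in archive.getPublishedBinaries(status='Published')[:5]:
        print(bpph.binary_package_name, bpph.binary_package_version)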

Translations

  • Allow TranslationImportQueue to import entries from file objects rather than having to read arbitrarily-large files into memory (#674575)

Miscellaneous

  • Use gender-neutral pronouns where appropriate
  • Self-host the Ubuntu webfonts (#1521472)
  • Make the beta and privacy banners float over the rest of the page when scrolling
  • Upgrade to pytz 2016.4 (#1589111)
  • Publish Launchpad’s code revision in an X-Launchpad-Revision header (see the sketch after this list)
  • Truncate large picker search results rather than refusing to display anything (#893796)
  • Sync up the lists footer with the main webapp footer a bit (#1679093)
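
For the X-Launchpad-Revision item above, a small check with Python's requests library (assuming only that the header is present on ordinary responses):

    import requests

    # The revision of the running Launchpad code is exposed as a header.
    response = requests.head('https://launchpad.net/')
    print(response.headers.get('X-Launchpad-Revision'))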

Read more
facundo


Those of us who put together presentations showing little programs or small chunks of code have always run into the same problem: how do you display that code properly colored?

By "properly colored" I don't mean painted up like a teenager going out dancing, or decorated with little flowers, suns, and/or war planes, but something typical in the programming world, where editors give the words that make up the code different colors depending on what kind of word they are: one color for variables, another for strings, another for function names, another for...

I won't go into detail about what that coloring is (known as "syntax highlighting"), but here's an example:

Example of syntax-highlighted code

Anyway, back to getting colored code into LibreOffice. I discussed it quite a bit with several people at the time; the best option seemed to be capturing an image of the code and inserting that, but it's awful, because it stops looking right after the slightest resize, and if on top of that you need to edit anything in that text, it's impossible.

While searching I also found Coooder, a LibreOffice extension that did exactly that. The verb "did" in the previous sentence is in the past tense because it only works with LibreOffice 3.3 through 3.6 (I currently have 5.1).

I finally found a way to do it! It's not the most direct one, but the result is the one I was looking for: colored text inside LibreOffice. Great!

The steps fall into two big parts:

  • generate a document in RTF format
  • insert that RTF document into the presentation

How to generate the RTF document:

  • Open the code with gvim
  • Type :TOhtml, which will open another window with the HTML corresponding to our colored text.
  • Type :saveas /tmp/cod.html, which will save that HTML to the path specified there
  • Close any open LibreOffice (otherwise the next step fails :/).
  • From a terminal, run unoconv -f rtf /tmp/cod.html, which will leave us a file at /tmp/cod.rtf with precisely our code, in RTF format.
  • Open LibreOffice Impress
  • Go to the menu, Insert, File; a couple of clicks on "next" and the text is in.
  • Select the text we just inserted, and change its font to some monospaced one.

Voilà!
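
For the record, the RTF-generation half can be scripted; here's a minimal sketch with Python's subprocess module (the helper name code_to_rtf is mine), assuming vim and unoconv are on the PATH and reusing the same /tmp paths as the steps above; vim will briefly take over the terminal while it renders the HTML:

    import subprocess

    def code_to_rtf(source):
        # Batch-drive vim: :TOhtml opens a second window with the
        # highlighted HTML, :saveas! writes it out, :qa! quits everything.
        subprocess.run(
            ['vim',
             '+TOhtml',
             '+saveas! /tmp/cod.html',
             '+qa!',
             source],
            check=True)
        # LibreOffice must be closed at this point, or unoconv fails.
        subprocess.run(['unoconv', '-f', 'rtf', '/tmp/cod.html'], check=True)

    code_to_rtf('example.py')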

Read more