Canonical Voices

James Henstridge

While working on a feature for snapd, we had a need to perform a “secure bind mount”. In this context, “secure” meant:

  1. The source and/or target of the mount is owned by a less privileged user.
  2. User processes will continue to run while we’re performing the mount (so solutions that involve suspending all user processes are out).
  3. While we can’t prevent the user from moving the mount point, they should not be able to trick us into mounting to locations they don’t control (e.g. by replacing the path with a symbolic link).

The main problem is that the mount system call uses string path names to identify the mount source and target. While we can perform checks on the paths before mounting, we have no way to guarantee that the paths don’t point to another location by the time we issue the mount() system call: a classic time-of-check-to-time-of-use race condition.

One suggestion was to modify the kernel to add an MS_NOFOLLOW flag to prevent symbolic link attacks. This turns out to be harder than it would appear, since the kernel is documented as ignoring any flags other than MS_BIND and MS_REC when performing a bind mount. So even if a patched kernel recognised MS_NOFOLLOW, there would be no way to distinguish its behaviour from that of an unpatched kernel. Fixing this properly would probably require a new system call, which is a rabbit hole I don’t want to dive down.

So what can we do using the tools the kernel gives us? The common way to reuse a reference to a file between system calls is the file descriptor. We can securely open a file descriptor for a path using the following algorithm:

  1. Break the path into segments, and check that none are empty, ".", or "..".
  2. Open the root directory with open("/", O_PATH|O_DIRECTORY).
  3. Open the first segment with openat(parent_fd, "segment", O_PATH|O_NOFOLLOW|O_DIRECTORY).
  4. Repeat step 3 for each of the remaining path segments, closing the parent descriptor as we go (a Python sketch follows).
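
In Python, that walk might look like the following minimal sketch (Linux-only; note that with O_PATH|O_NOFOLLOW a symlink in the final segment is captured as the link itself rather than followed):

    import os

    def secure_open(path):
        # Split the path and reject empty, "." and ".." segments.
        segments = path.lstrip("/").split("/")
        if any(seg in ("", ".", "..") for seg in segments):
            raise ValueError("unsafe path: %r" % path)
        # Start at the root directory...
        fd = os.open("/", os.O_PATH | os.O_DIRECTORY)
        try:
            for i, seg in enumerate(segments):
                flags = os.O_PATH | os.O_NOFOLLOW
                # Intermediate segments must be directories; the final
                # one may be a regular file (for file bind mounts).
                if i < len(segments) - 1:
                    flags |= os.O_DIRECTORY
                # ...and walk down one segment at a time, closing each
                # parent descriptor as we go.
                next_fd = os.open(seg, flags, dir_fd=fd)
                os.close(fd)
                fd = next_fd
        except OSError:
            os.close(fd)
            raise
        return fd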

Now we just need to find a way to use these file descriptors with the mount system call. I came up with two strategies to achieve this.

Use the current working directory

The first idea I tried was to make use of the fact that the mount system call accepts relative paths. We can use the fchdir system call to change to a directory identified by a file descriptor, and then refer to it as ".". Putting those together, we can perform a secure bind mount as a multi-step process (a sketch in code follows the list):

  1. fchdir to the mount source directory.
  2. Perform a bind mount from "." to a private stash directory.
  3. fchdir to the mount target directory.
  4. Perform a bind mount from the private stash directory to ".".
  5. Unmount the private stash directory.
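
A rough sketch of those five steps in Python; mount(2) and umount(2) are reached through ctypes (the os module does not wrap them), and stash_dir is assumed to be a private directory that only we control:

    import ctypes
    import os

    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    MS_BIND = 4096  # from <sys/mount.h>

    def _check(ret):
        # Raise OSError when a libc call reports failure.
        if ret != 0:
            err = ctypes.get_errno()
            raise OSError(err, os.strerror(err))

    def secure_bind_mount_via_cwd(source_fd, target_fd, stash_dir):
        # source_fd and target_fd come from secure_open() above.
        os.fchdir(source_fd)                     # step 1
        _check(libc.mount(b".", stash_dir.encode(), None,
                          ctypes.c_ulong(MS_BIND), None))   # step 2
        os.fchdir(target_fd)                     # step 3
        _check(libc.mount(stash_dir.encode(), b".", None,
                          ctypes.c_ulong(MS_BIND), None))   # step 4
        _check(libc.umount(stash_dir.encode()))  # step 5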

While this works, it has a few downsides. It requires a third intermediate location to stash the mount. It could interfere with anything else that relies on the working directory. It also only works for directory bind mounts, since you can’t fchdir to a regular file.

Faced with these downsides, I started thinking about whether there were any simpler options available.

Use magic /proc symbolic links

For every open file descriptor in a process, there is a corresponding file in /proc/self/fd/. These files appear to be symbolic links that point at the file associated with the descriptor. So what if we pass these /proc/self/fd/NNN paths to the mount system call?

The obvious question: if these paths are symbolic links, is this any different from passing the real paths directly? It turns out that there is a difference, because the kernel does not resolve these symbolic links in the standard fashion. Rather than recursively resolving link targets, the kernel short-circuits the process and uses the path structure associated with the file descriptor. So it will follow file moves, and can even refer to paths in other mount namespaces. Furthermore, the kernel keeps track of deleted paths, so we will get a clean error if the user deletes a directory after we’ve opened it.

So in pseudo-code, the secure mount operation becomes:

source_fd = secure_open("/path/to/source", O_PATH);
target_fd = secure_open("/path/to/target", O_PATH);
mount("/proc/self/fd/${source_fd}", "/proc/self/fd/${target_fd}",
      NULL, MS_BIND, NULL);
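
And as a runnable Python sketch, reusing the secure_open() walk from earlier and calling mount(2) through ctypes:

    import ctypes
    import os

    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    MS_BIND = 4096  # from <sys/mount.h>

    def secure_bind_mount(source, target):
        source_fd = secure_open(source)
        target_fd = secure_open(target)
        try:
            # The magic /proc links resolve straight to the open files,
            # so the user can no longer swap the paths underneath us.
            ret = libc.mount(b"/proc/self/fd/%d" % source_fd,
                             b"/proc/self/fd/%d" % target_fd,
                             None, ctypes.c_ulong(MS_BIND), None)
            if ret != 0:
                err = ctypes.get_errno()
                raise OSError(err, os.strerror(err))
        finally:
            os.close(source_fd)
            os.close(target_fd)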

This resolves all the problems I had with the first solution: it doesn’t alter the working directory, it can do file bind mounts, and doesn’t require a protected third location.

The only downside I’ve encountered is that if I wanted to flip the bind mount to read-only, I couldn’t use "/proc/self/fd/${target_fd}" to refer to the mount point. My best guess is that it continues to refer to the directory shadowed by the mount point, rather than the mount point itself. So it is necessary to re-open the mount point, which is another opportunity for shenanigans. One possibility would be to read /proc/self/fdinfo/NNN to determine the mount ID associated with the file descriptor and then double check it against /proc/self/mountinfo.
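
A sketch of that double check, with field positions as documented in proc(5); treat it as illustrative rather than battle-tested:

    def fd_mount_id(fd):
        # The "mnt_id:" field of /proc/self/fdinfo/<fd> names the mount
        # the descriptor belongs to (available since Linux 3.15).
        with open("/proc/self/fdinfo/%d" % fd) as f:
            for line in f:
                if line.startswith("mnt_id:"):
                    return int(line.split()[1])
        raise LookupError("no mnt_id for fd %d" % fd)

    def fd_is_mount_point(fd, expected_mount_point):
        # Cross-check against /proc/self/mountinfo: per proc(5), the
        # first field is the mount ID and the fifth the mount point.
        mnt_id = fd_mount_id(fd)
        with open("/proc/self/mountinfo") as f:
            for line in f:
                fields = line.split()
                if int(fields[0]) == mnt_id:
                    return fields[4] == expected_mount_point
        return False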

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.4.0 beta 2 is now released and is available for Ubuntu Bionic.
MAAS Availability
MAAS 2.4.0 beta 2 is currently available in Bionic’s Archive or in the following PPA:
ppa:maas/next

MAAS 2.4.0 (beta2)

New Features & Improvements

MAAS Internals optimisation

Continuing with MAAS’ internal surgery, a few more improvements have been made:

  • Backend improvements
    • Improve the image download process, ensuring rack controllers start downloading images immediately after the region has finished downloading them.
    • Reduce the service monitor interval to 30 seconds. The monitor tracks the status of the various services provided alongside MAAS (DNS, NTP, Proxy).
  • UI performance optimizations for machines, pods, and zones, including better filtering of node types.

KVM pod improvements

Continuing with the improvements for KVM pods, beta 2 adds the ability to:

  • Define a default storage pool

This feature allows users to select the default storage pool to use when composing machines, in case multiple pools have been defined. Otherwise, MAAS will pick the storage pool automatically, depending on which pool has the most available space.

  • API – Allow allocating machines with different storage pools

Allows users to request a machine with multiple storage devices from different storage pools. This feature uses storage tags to automatically map a storage pool in libvirt to a storage tag in MAAS.

UI Improvements

  • Remove remaining YUI in favor of AngularJS.

As of beta 2, MAAS has now fully dropped the use of YUI for the web interface. The last sections using YUI were the Settings page and the login page; both have now been transitioned to AngularJS.

  • Re-organize Settings page

The MAAS settings have now been reorganized into multiple tabs.

Minor improvements

  • API for default DNS domain selection

Adds the ability to define the default DNS domain. This is currently only available via the API.

  • Vanilla framework upgrade

We would like to thank the Ubuntu web team for their hard work upgrading MAAS to the latest version of the Vanilla framework. MAAS is looking better and more consistent every day!

Bug fixes

Please refer to the following for all 37 bug fixes in this release, which address issues with MAAS across the board:

https://launchpad.net/maas/+milestone/2.4.0beta2


Read more
Colin Watson

Summary

Mohamed Alaa reported that Launchpad’s Bing site search implementation had a cross-site-scripting vulnerability.  This was introduced on 2018-03-29, and fixed on 2018-04-10.  We have not found any evidence of this bug being actively exploited by attackers; the rest of this post is an explanation of the problem for the sake of transparency.

Details

Some time ago, Google announced that they would be discontinuing their Google Site Search product on 2018-04-01.  Since this served as part of the backend for Launchpad’s site search feature (“Search Launchpad” on the front page), we began to look around for a replacement.  We eventually settled on Bing Custom Search, implemented appropriate support in Launchpad, and switched over to it on 2018-03-29.

Unfortunately, we missed one detail.  Google Site Search’s XML API returns excerpts of search results as pre-escaped HTML, using <b> tags to indicate where search terms match.  This makes complete sense given its embedding in XML; it’s hard to see how that API could do otherwise.  The Launchpad integration code accordingly uses TAL code along these lines, using the structure keyword to explicitly indicate that the excerpts in question do not require HTML-escaping (like most good web frameworks, TAL’s default is to escape all variable content, so successful XSS attacks on Launchpad have historically been rare):

<div class="summary" tal:content="structure page/summary" />

However, Bing Custom Search’s JSON API returns excerpts of search results without any HTML escaping.  Again, in the context of the API in question, this makes complete sense as a default behaviour (though a textFormat=HTML switch is available to change this); but, in the absence of appropriate handling, this meant that those excerpts were passed through to the TAL code above without escaping.  As a result, if you could craft search terms that match a portion of an existing page on Launchpad that shows scripting tags (such as a bug about an XSS vulnerability in another piece of software hosted on Launchpad), and convince other people to follow a suitable search link, then you could cause that code to be executed in other users’ browsers.

The fix was, of course, to simply escape the data returned by Bing Custom Search.  Thanks to Mohamed Alaa for their disclosure.
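
As a minimal illustration (this is a sketch, not Launchpad's actual fix), escaping the excerpt before it reaches a structure-marked template slot is enough to neutralise injected markup:

    import html

    def safe_summary(excerpt):
        # Bing Custom Search returns excerpts unescaped by default
        # (unless textFormat=HTML is requested), so escape them before
        # the template treats them as trusted HTML.
        return html.escape(excerpt)

    print(safe_summary('<script>alert(1)</script> matching text'))
    # &lt;script&gt;alert(1)&lt;/script&gt; matching text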

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.4.0 beta 1 and python-libmaas 0.6.0 have now been released and are available for Ubuntu Bionic.
MAAS Availability
MAAS 2.4.0 beta 1 is currently available in Bionic -proposed waiting to be published into Ubuntu, or in the following PPA:
ppa:maas/next

MAAS 2.4.0 (beta1)

Important announcements

Debian package maas-dns no longer needed

The Debian package ‘maas-dns’ has now been made a transitional package. This package provided some post-installation configuration to prepare bind to be managed by MAAS, but it required maas-region-api to be installed first.

In order to streamline the installation and make it easier for users to install HA environments, the configuration of bind has now been integrated into the ‘maas-region-api’ package itself, and ‘maas-dns’ has been made a dummy transitional package that can now be removed.

New Features & Improvements

MAAS Internals optimization

Major internal surgery on MAAS 2.4 continues to improve various areas not visible to the user. These updates will advance the overall performance of MAAS in larger environments. These improvements include:

  • Database query optimizations

Further reductions in the number of database queries, significantly cutting the queries made by the boot source cache image import process from over 100 down to under five.

  • UI optimizations

MAAS is being optimized to reduce the amount of data sent over the websocket API to render the UI. This is targeted at processing data only for viewable information, improving various legacy areas. Currently, the work done for this release includes:

  • Only load historic script results (e.g. old commissioning/testing results) when requested / accessed by the user, instead of always making them available over the websocket.

  • Only load node objects in listing pages when the specific object type is requested. For instance, only load machines when accessing the machines tab instead of also loading devices and controllers.

  • Change the UI mechanism to request OS information only on initial page load rather than every 10 seconds.

KVM pod improvements

Continuing with the improvements from alpha 2, this new release provides more updates to KVM pods:

  • Added overcommit ratios for CPU and memory.

When composing or allocating machines, previous versions of MAAS would allow the user to request as many resources as they wanted, regardless of the available resources. This created issues when dynamically allocating machines, as it could allow users to create an effectively unlimited number of machines even when the physical host was already overcommitted. This feature allows administrators to control the amount of resources they are willing to overcommit.

  • Added ability to filter which pods or pod types to avoid when allocating machines

Provides users with the ability to select which pods or pod types not to allocate resources from. This is particularly useful when dynamically allocating machines and MAAS has a large number of pods.

DNS UI Improvements

MAAS 2.0 introduced the ability to manage DNS, not only allowing the creation of new domains, but also the creation of resource records such as A, AAAA, CNAME, etc. However, most of this functionality has only been available over the API, as the UI only allowed adding and removing domains.

As of 2.4, MAAS adds the ability to manage not only DNS domains but also the resource records within them:

  • Added ability to edit domains (e.g. TTL, name, authoritative).

  • Added ability to create and delete resource records (A, AAAA, CNAME, TXT, etc).

  • Added ability to edit resource records.

Navigation UI improvements

MAAS 2.4 beta 1 is changing the top-level navigation:

  • Rename ‘Zones’ to ‘AZs’

  • Add ‘Machines, Devices, Controllers’ to the top-level navigation instead of ‘Hardware’.

Minor improvements

A few notable improvements being made available in MAAS 2.4 include:

  • Add ability to force the boot type for IPMI machines.

Hardware manufacturers have been upgrading their BMC firmware versions to be more compliant with the Intel IPMI 2.0 spec. Unfortunately, the IPMI 2.0 spec has made changes that provide a non-backward-compatible user experience. For example, if the administrator configured their machine to always PXE boot over EFI, and the user executed an IPMI command without specifying the boot type, the machine would use the value configured in the BIOS. However, with these new changes, the user is required to always specify a boot type, avoiding a fallback to the BIOS.

As such, MAAS now allows the selection of a boot type (auto, legacy, efi) to force the machine to always PXE boot with the desired type (on the next boot only).

  • Add ability, via the API, to skip the BMC configuration on commissioning.

Provides an API option to skip the BMC auto configuration during commissioning for IPMI systems. This option helps admins keep credentials provided over the API when adding new nodes.

Bug fixes

Please refer to the following for all 32 bug fixes in this release.

https://launchpad.net/maas/+milestone/2.4.0beta1


Read more
facundo


fades

After almost a year, Nico and I released a new version of fades.

What's new in this release?

  • Check that everything requested is actually available on PyPI before starting to install it
  • Ignore duplicated dependencies
  • Several improvements and fixes in the messages fades shows in verbose mode
  • We now forbid misusing fades: installing it on legacy Python, and running it from inside another virtualenv
  • A ton of improvements related to the project itself (not directly visible to the end user), and some other small fixes

Give it a try.

The fades logo

infoauth

infoauth is a small but handy Python module and script to dump/load tokens to/from disk.

This is what it does:

  • dumps tokens to a file on disk, pickled and zipped
  • makes the file read-only, and readable only by you
  • loads the tokens back from that file on disk

When is this module useful? Say you have a script or program that needs some secret tokens (mail authentication, Twitter tokens, the info to connect to a database, etc.), but you don't want to include those tokens in the code because the code is public; with this module you would do:

    tokens = infoauth.load(os.path.expanduser("~/.my-tokens"))

Note that the file will end up readable only by you, and outside the project directory (so you don't run the risk of sharing it by accident).

WARNING: infoauth does NOT protect your secrets with a password or anything like that; this module does NOT secure your secrets in any way. Yes, the tokens are scrambled (as they are pickled and compressed), and other people may not be able to access them easily (the file is readable only by you), but there is no protection beyond that. Use it at your own risk.

So, how do you use it from a Python program? It's easy; to load the data:

    import infoauth
    auth = infoauth.load(os.path.expanduser("~/.my-mail-auth"))
    # ...
    mail.auth(auth['user'], auth['password'])

To dump it:

    import infoauth
    secrets = {'some-stuff': 'foo', 'code': 67}
    infoauth.dump(secrets, os.path.expanduser("~/.secrets"))

Note that since dumping the tokens is normally done only once, it's probably handier to do it from the command line, as shown next...

So, how do you use it from the command line? To show the info:

    $ infoauth show ~/.my-mail-auth
    password: ...
    user: ...

And to dump data to a file:

    $ infoauth create ~/.secrets some-stuff=foo code=67

Note that when creating the file from the command line we have the limitation that all stored values will be text strings; if you want to store other data types (integers, lists, whatever), you should use the programmatic interface shown above.

This is the project page, and of course it is on PyPI, so you can use it without any trouble from fades (wink, wink).

Read more
Marcus Haslam

Introducing the new Snapcraft branding

Early developments of the Snapcraft brand mark

If you’re a regular visitor to Snapcraft.io or any of its associated sites, you will have noticed a recent change to the logo and overall branding, which has been in development over the past few months. We have developed a stand-alone brand for Snapcraft, the command line tool for writing and publishing software as a snap. One of the challenges we faced was how to create a brand for Snapcraft that stands out in its own right yet fits in with the existing Ubuntu brand and is part of the extended family. To achieve this, we took reference from the Suru visual language. The Suru philosophy stems from our brand values, alluding to Japanese culture. Working with paper metaphors, we were inspired by origami as a solid and tangible foundation.

We identified the key attributes of:

Simple – to package leveraging your existing tools

Integrated – easily with build and CI infrastructure

Effortless – roll back versions

Easy – integrate with build and CI infrastructures

We worked with these whilst keeping with the overall Ubuntu brand values of:

Freedom

Reliable

Precise

Collaborative

We started exploring the origami language and felt the idea of a bird fitted well with the attributes. We worked on simplifying the design until it was as minimal as we could make it whilst retaining the elegance of a bird in flight.

A new colour scheme was developed to have a clean, fresh look while sitting comfortably with the primary and secondary Ubuntu colour palettes. We used the Ubuntu font for consistency with the overall parent brand, and in this instance we used the thin weight alongside the light to create a dynamic word mark reflecting the values of freedom, precision, collaboration and reliability.


Colour Palette

Lock-up

We have a number of different lock-ups we can use for different circumstances.


Read more
Colin Ian King

Kernel Commits with "Fixes" tag

Over the past 5 years there has been a steady increase in the number of kernel bug fix commits that use the "Fixes" tag.  Kernel developers use this annotation on a commit to reference the older commit that originally introduced the bug, which is obviously very useful for bug tracking purposes.  What is interesting is that there has been a steady take-up of this annotation among developers.

With the 4.15 release, 1859 of the 16223 commits (11.5%) were tagged as "Fixes", so that's a fair amount of work going into bug fixing.  I suspect there are more commits that are bug fixes but don't carry the "Fixes" tag, so it's hard to tell for certain how many commits are fixes without deeper analysis.  Probably over time this tag will be widely adopted for all bug fixes, the trend line will level out, and we will have a better idea of the proportion of commits per release devoted to fixing issues.  Let's see how this looks in another 5 years' time; I'll keep you posted!
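
For the curious, the counts can be reproduced against a kernel git checkout with something like this rough sketch (the revision range is just an example):

    import subprocess

    def fixes_stats(rev_range):
        # Count all commits in the range, and those whose message
        # carries a "Fixes:" tag (git --grep anchors ^ per line).
        def count(*extra):
            out = subprocess.run(["git", "rev-list", *extra, rev_range],
                                 capture_output=True, text=True,
                                 check=True).stdout
            return len(out.splitlines())
        total = count()
        fixes = count("--grep=^Fixes: ")
        return fixes, total, 100.0 * fixes / total

    # For v4.14..v4.15 this should land near the 1859 / 16223 (11.5%)
    # figures quoted above.
    print(fixes_stats("v4.14..v4.15"))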

Read more
facundo

Working in Hungary


I spent a week in Budapest, working at a sprint with other people from my own team and, in general, from other teams.

The trip was long but uneventful... except for the detail that my suitcase got lost on the way there :(. When I went to file the claim they checked, located it in Frankfurt (where the layover was), and told me it would arrive that night. They even gave me a form so the hotel could receive the suitcase on my behalf. Obviously, when I checked in I explained the situation. At ten that night there was a knock at my room's door: someone from the hotel with my suitcase \o/.

Even so, I had to go out that afternoon dressed as I was (thin trousers and boat shoes)... and I froze, even though I had a fleece and a jacket. Naty, Matías, Guillo and I went out for a stroll and walked for a couple of hours in the late afternoon, before sunset, because afterwards we had the company's welcome cocktail. Although it was "daytime", it was very overcast, and so it was very cold...

The car covered in ice; that's how cold it was

The water froze mid-stream (?)

In general I didn't sightsee much, because the days were grey and cold, and by the time the workday ended (between 17:30 and 18:00) it was already dark. Except on Friday, when we finished at 16:00 and the sun even came out. And on Saturday, of course, when I went out for a walk in the morning and around midday. Unlike the first days, it was about 12°C by then; we couldn't have asked for more (?)

The first days we wore all the clothes we had

Nerding out at a craft beer bar; indoors the temperatures were a different story, of course

On Saturday I walked around the Danube area, climbed a small hill topped by the Liberty Statue (originally erected in 1947 to commemorate the Soviet liberation of Hungary during the Second World War, ending the Nazi occupation), went to the city's central market, and walked quite a lot back and forth.

People were generally polite. Most don't speak English, even in tourist areas and in places to eat or to buy "tourist" things, so at times you fall back on the classic exchange of gestures and assorted sounds. Or you end up speaking Italian, as happened to us in an ice-cream shop :p.

Liberty Statue

Of course, on the other days we also walked here and there, but generally at night and with all the shops (except those related to eating and drinking) closed... Budapest really is a different city before and after 18:00 (because at six most shops close, and by then it's already dark...).

But that obviously didn't stop us from going out to eat, and I devoted myself to the goulashes. Goulash, which indeed originated in Hungary, is simply a meat stew, from which many variants arise... with potato, without potato, with small spätzle, big ones, or none at all, with or without onion, etc... always with meat, cooked for several hours, barely spicy (which is why it comes with a little sauce on the side to heat it up, as we do with locro), and VERY tasty.

I went through quite a few goulashes

All the photos, here.

Read more
facundo


We have a few Python Argentina events in the first half of the year. Let's go through them chronologically.

On Wednesday April 4th at 19:00 we're holding a Python Argentina meetup at Devecoop. We'll have a couple of technical talks, and we may run a "Python Clinic"; we'll see how we put it together. If you're not signed up to the Python Buenos Aires meetup, register so you get news about these meetups, can sign up easily, and so on.

From April 28th to May 1st we have a new edition of PyCamp, again in Baradero. Registration is already open, including discounts for Python Argentina members. You can find all the info on the event page.

We also have two PyDays, both in May. Yes, two, for now... there will be more during the year. The first in La Plata, with the exact Saturday still to be defined, and the second in Corrientes, on Saturday the 19th. I'll bring more news as the dates approach.

On another front, we're putting together this year's PyCon Argentina in Buenos Aires. We don't have a date yet (we're still looking for a venue) but it will be in September, October or November.

All of this is possible partly thanks to the Asociación Civil de Python Argentina, which gives us a structure to operate within, so that AFIP doesn't lock us up for moving money around... so if you want to support us, consider becoming a member, or help us by passing this flyer around at your workplace so that the company/cooperative considers it too. Thanks!

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.4.0 alpha 2 has now been released and is available for Ubuntu Bionic.
MAAS Availability
MAAS 2.4.0 alpha 2 is available in the Bionic -proposed archive or in the following PPA:
ppa:maas/next

MAAS 2.4.0 (alpha2)

Important announcements

NTP services now provided by Chrony

Starting with 2.4 Alpha 2, and in common with changes being made to Ubuntu Server, MAAS replaces ‘ntpd’ with Chrony for the NTP protocol. MAAS will handle the upgrade process and automatically resume NTP service operation.

Vanilla CSS Framework Transition

MAAS 2.4 is undergoing a transition to a new version of the Vanilla CSS framework, which will bring a fresher look to the MAAS UI. This transition is currently a work in progress and not all of the UI has been fully updated, so please expect to see some inconsistencies in this new release.

New Features & Improvements

NTP services now provided by Chrony.

Starting from MAAS 2.4 alpha 2, chrony is the default NTP service, replacing ntpd. This work has been done to align with the Ubuntu Server and Security teams' decision to support chrony instead of ntpd. MAAS will continue to provide NTP services exactly the same way, handling the upgrade process transparently, and users will not be affected by the change. This means that:

  • MAAS will configure chrony as peers on all Region Controllers
  • MAAS will configure chrony as a client of peers for all Rack Controllers
  • Machines will use the Rack Controllers as they do today

MAAS Internals optimization

MAAS 2.4 is currently undergoing major surgery to improve various areas of operation that are not visible to the user. These updates will improve the overall performance of MAAS in larger environments. These improvements include:

  • AsyncIO based event loop
    • MAAS has an event loop which performs various internal actions. In older versions of MAAS, the event loop was managed by the default twisted event loop. MAAS now uses an asyncio based event loop, driven by uvloop, which is targeted at improving internal performance.

  • Improved daemon management
    • MAAS has changed the way daemons are run to allow users to see both ‘regiond’ and ‘rackd’ as processes in the process list.
    • As part of these changes, regiond workers are now managed by a master regiond process. In older versions of MAAS each worker was directly run by systemd. The master process is now in charge of ensuring workers are running at all times, and re-spawning new workers in case of failures. This also allows users to see the worker hierarchy in the process list.
  • Ability to increase the number of regiond workers
    • Following the improved way MAAS daemons are run, further internal changes have been made to allow the number of regiond workers to be increased automatically. This allows MAAS to scale to handle more internal operations in larger environments.
    • While this capability is already available, it is not yet available by default. It will become available in the following milestone release.
  • Database query optimizations
    • In the process of inspecting the internal operations of MAAS, it was discovered that multiple unnecessary database queries are performed for various operations. Optimising these requires internal improvements to reduce the footprint of these operations. Some areas that have been addressed in this release include:
      • When saving node objects (e.g. making any update of a machine, device, rack controller, etc), MAAS validated changes across various fields. This required an increased number of queries for fields, even when they were not being updated. MAAS now tracks specific fields that change and only performs queries for those fields.
        • Example: to update a power state, MAAS would previously perform 11 queries. After these improvements, only 1 query is performed.
      • On every transaction, MAAS performed 2 queries to update the timestamp. This has now been consolidated into a single query per transaction.
    • These changes greatly improve MAAS performance and database utilisation in larger environments. More improvements will continue to be made as we examine further areas of MAAS.
  • UI optimisations
    • MAAS is now being optimised to reduce the amount of data loaded in the websocket API to render the UI. This is targeted at only processing data for viewable information, improving various legacy areas. Currently, the work done in this area includes:
      • Script results are only loaded for viewable nodes in the machine listing page, reducing the overall amount of data loaded.
      • The node object is updated in the websocket only when something has changed in the database, reducing the data transferred to the clients as well as the amount of internal queries.

Audit logging

Continuing with the audit logging improvements, alpha2 now adds audit logging for all user actions that affect Hardware Testing & Commissioning.

KVM pod improvements

MAAS’ KVM pod support was initially developed as a feature to help developers quickly iterate and test new functionality while developing MAAS. It has, however, become a feature that allows not only developers but also administrators to make better use of resources across their datacenters. Since the feature was initially created for developers, some capabilities were lacking. As such, in 2.4 we are improving the usability of KVM pods:

  • Pod AZ’s.
    MAAS now allows setting the physical zone for a pod. This helps administrators by conceptually placing their KVM pods in an AZ, which enables them to request/allocate machines on demand based on their AZ. All VMs created from a pod will inherit its AZ.

  • Pod tagging
    MAAS now adds the ability to set tags for a pod. This allows administrators to use tags to allow or prevent the creation of VMs inside a pod. For example, if the administrator would like a machine with a tag named ‘virtual’, MAAS will filter out all physical machines and only consider other VMs or a KVM pod for machine allocation.

Bug fixes

Please refer to the following for all bug fixes in this release.

https://launchpad.net/maas/+milestone/2.4.0alpha2

Read more
abeato

In chapter 5 of his mind-blowing “The Road to Reality”, Penrose devotes a section to complex powers, that is, to the solutions to

$$w^z~~~\text{with}~~~w,z \in \mathbb{C}$$

In this post I develop a bit more what he exposes and explore what the solutions look like with the help of some simple Python scripts. The scripts can be found in this github repo, and all the figures in this post can be replicated by running

git clone https://github.com/alfonsosanchezbeato/exponential-spiral.git
cd exponential-spiral; ./spiral_examples.py

The scripts make use of numpy and matplotlib, so make sure those are installed before running them.

Now, let’s develop the math behind this. The values for \(w^z\) can be found by using the exponential function as

$$w^z=e^{z\log{w}}=e^{z~\text{Log}~w}e^{2\pi nzi}$$

In this equation, “log” is the multi-valued complex natural logarithm, while “Log” is one of its branches, specifically the principal value, whose imaginary part lies in the interval \((−\pi, \pi]\). In the equation we reflect the fact that \(\log{w}=\text{Log}~w + 2\pi ni\) with \(n \in \mathbb{Z}\). This shows the remarkable fact that, in the general case, the equation has infinitely many solutions. For the rest of the discussion we will separate \(w^z\) as follows:

$$w^z=e^{z~\text{Log}~w}e^{2\pi nzi}=C \cdot F_n$$

with constant \(C=e^{z~\text{Log}~w}\) and the rest being the sequence \(F_n=e^{2\pi nzi}\). Since \(C\) is a complex constant that multiplies \(F_n\), its only influence is to rotate and scale all solutions equally. Noticeably, \(w\) appears only in this constant, which shows that the \(z\) values are what really determines the number and general shape of the solutions. Therefore, we will concentrate on analyzing the behavior of \(F_n\), seeing what solutions we find when we restrict \(z\) to different domains.

Starting by restricting \(z\) to integers (\(z \in \mathbb{Z}\)), it is easy to see that there is only one resulting solution, as the factor \(F_n=e^{2\pi nzi}=1\) in this case (it just rotates the solution \(2\pi\) radians an integer number of times, leaving it unmodified). As expected, a complex number raised to an integer power has only one solution.

If we let \(z\) be a rational number (\(z=p/q\), being \(p\) and \(q\) integers chosen so we have the canonical form), we obtain

$$F_n=e^{2\pi\frac{pn}{q} i}$$

which makes the sequence \(F_n\) periodic with period \(q\); that is, there are \(q\) solutions to the equation. So we have two solutions for \(w^{1/2}\), three for \(w^{1/3}\), etc., as expected, since those are the numbers of solutions for square roots, cube roots and so on. The values will be the vertices of a regular polygon in the complex plane. For instance, in figure 1 the solutions for \(2^{1/5}\) are displayed.

Figure 1

Fig. 1: The five solutions to \(2^{1/5}\)
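
The solutions can also be computed directly with numpy; a small sketch, independent of the repo's scripts:

    import numpy as np

    def power_values(w, z, ns):
        # w**z = C * F_n with C = exp(z Log w) and F_n = exp(2*pi*n*z*i).
        C = np.exp(z * np.log(w))  # np.log is the principal branch (Log)
        n = np.asarray(list(ns))
        return C * np.exp(2j * np.pi * n * z)

    # The five solutions of 2**(1/5), the vertices of a regular pentagon:
    print(power_values(2, 1 / 5, range(5)))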

If \(z\) is real, \(e^{2\pi nzi}\) is no longer periodic and takes infinitely many values on the unit circle; therefore \(w^z\) has infinitely many values, which lie on a circle of radius \(|C|\).

In the more general case, \(z \in \mathbb{C}\), that is, \(z=a+bi\) with \(a\) and \(b\) real numbers, we have

$$F_n=e^{-2\pi bn}e^{2\pi ani}.$$

There is now a scaling factor, \(e^{-2\pi bn}\), that makes the modulus of the solutions vary with \(n\), scattering them across the complex plane, while \(e^{2\pi ani}\) rotates them as \(n\) changes. The result is an infinite number of solutions for \(w^z\) that lie on an equiangular spiral in the complex plane. The spiral can be seen if we extend the domain of \(F\) to \(\mathbb{R}\), that is

$$F(t)=e^{-2\pi bt}e^{2\pi ati}~~~\text{with}~~~t \in \mathbb{R}.$$

In figure 2 we can see one example which shows some solutions to \(2^{0.4-0.1i}\), plus the spiral that passes over them.

Fig. 2: Roots and spiral for \(2^{0.4-0.1i}\)
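
Sampling the spiral itself follows the same formula; again, just a sketch:

    import numpy as np

    def spiral(z, t):
        # F(t) = exp(-2*pi*b*t) * exp(2*pi*a*t*i) for z = a + b*i.
        a, b = z.real, z.imag
        return np.exp(-2 * np.pi * b * t) * np.exp(2j * np.pi * a * t)

    t = np.linspace(-2.0, 6.0, 1000)
    curve = spiral(0.4 - 0.1j, t)  # passes through the roots at integer t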

In fact, in Penrose’s book it is stated that these values are found in the intersection of two equiangular spirals, although he leaves finding them as an exercise for the reader (problem 5.9).

Let’s see then if we can find more spirals that cross these points. We are searching for functions that have the same value as \(F(t)\) when \(t\) is an integer. We can easily verify that the family of functions

$$F_k'(t)=F(t)e^{2\pi kti}~~~\text{with}~~~k \in \mathbb{Z}$$

are compatible with this restriction, as \(e^{2\pi kti}=1\) in that case (integer \(t\)). Figures 3 and 4 represent again some solutions to \(2^{0.4-0.1i}\), \(F(t)\) (which is the same as the spiral for \(k=0\)), plus the spirals for \(k=-1\) and \(k=1\) respectively. We can see there that the solutions lie in the intersection of two spirals indeed.

Fig. 3
Fig. 3: Roots for \(2^{0.4-0.1i}\) plus spirals for k=0 and k=-1


Fig. 4
Fig. 4: Roots for \(2^{0.4-0.1i}\) plus spirals for k=0 and k=1

If we superpose these 3 spirals, the ones for \(k=1\) and \(k=-1\) also cross at places other than the complex powers, as can be seen in figure 5. But, if we choose two consecutive numbers for \(k\), the two spirals will cross only at the solutions to \(w^z\). See, for instance, figure 6, where the spirals for \(k=\{-2,-1\}\) are plotted. We see that any pair of such spirals fulfills Penrose’s description.

Fig. 5
Fig. 5: Roots for \(2^{0.4-0.1i}\) plus spirals for k=-1,0,1


Fig. 6
Fig. 6: Roots for \(2^{0.4-0.1i}\) plus spirals for k=-1,-2

In general, the number of places at which two spirals cross depends on the difference between their \(k\)-number. If we have, say, \(F_k’\) and \(F_l’\) with \(k>l\), they will cross when

$$t=\ldots,0,\frac{1}{k-l},\frac{2}{k-l},\ldots,\frac{k-l-1}{k-l},1,1+\frac{1}{k-l},\ldots$$

That is, they will cross when \(t\) is an integer (at the solutions to \(w^z\)) and also at \(k-l-1\) points between consecutive solutions.

Let’s now look at another interesting special case: when \(z=bi\), that is, when it is pure imaginary. In this case, \(e^{2\pi ati}\) is \(1\), and there is no turning in the complex plane as \(t\) grows. We end up with the spiral \(F(t)\) degenerating to a half-line that starts at the origin (which is approached as \(t \to \infty\) if \(b>0\)). This can be appreciated in figure 7, where the line and the spirals for \(k=-1\) and \(k=1\) are plotted for \(20^{0.1i}\). The two spirals are mirrored around the half-line.

Fig. 7
Fig. 7: Roots for \(20^{0.1i}\), \(F(t)\), and spirals for k=-1,1

Digging more into this case, it turns out that a pure imaginary number raised to a pure imaginary power can produce a real result. For instance, for \(i^{0.1i}\), we see in figure 8 that the roots lie on the positive half of the real line.

Fig. 8
Fig. 8: Roots for \(i^{0.1i}\), \(F(t)\), and spirals for k=-1,1

That something like this can produce real numbers is a curiosity that has historically intrigued mathematicians (\(i^i\) has real values too!). And with this I finish the post. It is really amusing to start playing with the values of \(w\) and \(z\); if you want to do so, you can use the Python scripts I pointed to at the beginning of the post. I hope you enjoyed the post as much as I did writing it.

Read more
facundo


During the summer break we took a few days off with the family to travel around.

Yes, that was ages ago. It's just that processing the photos took us forever... it doesn't scale; for the next long holidays I think I'll put the family to reviewing photos during the holidays themselves, because otherwise we come home with 1238941 photos, and reviewing a little bit over dinner on successive days, we never finish... to the point that they're still not ready! But well, I'm not waiting any longer; out goes this post, the photos will come later...


Beach

Between Christmas and New Year we went camping in Mar Azul. We were aiming for the campsite we had gone to when Felipe was little, but it no longer exists, and the only one left is the Camping de Los Ingenieros, which was quite good (although the bathroom near us never had hot water, and in general the place could use more tables).

We had a great time, although it was very hot and at times stifling (we barely survived under some thin patches of shade). Of course, there's the sea to get into, but there's a whole stretch of hours during which we weren't going to take the kids to the beach to get scorched under the sun.

Getting ready to go to the beach

Having lunch at the campsite

We went with a tent, obviously, and realised it's getting a bit small for the four of us. That, plus the fact that one of the frame poles snapped and tore the roof, is pushing us to buy a new, bigger tent; we'll have to see what's out there...

The beach at Mar Azul is very pretty and quite wide, and although at times there were a fair number of people, we were never packed in (though sometimes it was hard to set up a big tejo court...).

Sunset at the beach, the day we arrived

Playing tejo with Felipe


East of the Sierras

In January we went to Córdoba for a couple of weeks (deliberately out of phase with "weekends" and "fortnights", so the travel wouldn't drive us crazy). We spent a few days in El Espinillo, and then crossed the sierras and stayed a few more days in La Población.

A minute of calm :)

The first place, in El Espinillo, was in a complex of three cabins with a shared pool. The pool was great, since those days were very hot, so the kids got in several times and enjoyed it a lot. Moni and I didn't get in as much, but we took the chance to sunbathe. In fact, since the people from the other cabins sometimes left early, I had a couple of days when I could sunbathe naked in complete peace.

Besides the pool, we also got our "water time" in several nearby rivers (Del Espinillo, Del Medio, and Los Reartes)... some with more shade, others with more grass around them, some with a stone bottom, others with a little sandy beach... many combinations; the important thing was that we could get into the water for a while, sunbathe, enjoy the green, and have a good time.

The kids enjoying the pool to the fullest

In a stream somewhere around there

One day we visited Villa General Belgrano. It was nice because we visited the Museo Politemático Castillo Romano and strolled around for a while, but the restaurant we always used to go to is no longer what it was (the craft beer watery, the food meh, the service left something to be desired), and in general it's a city that is "too touristy" for us to really enjoy.

One outing we did love in those days was the Parque Recreativo de La Serranita, a terrific place with a thousand games and activities. We did minigolf and archery, played footpool and tejo, raced marbles, went down a giant slide, walked a maze, did climbing and balance activities, solved riddles, and lots more... we spent about six hours in there (not counting lunch); highly recommended for families!

Practising mini-golf

Felipe attacking an ice cream with gusto

Another visit we enjoyed was to Nico and Jesi in La Quintana, where we spent a terrific afternoon chatting and drinking mate. Getting there was tricky because a torrential rain caught us on the way, and when we reached La Quintana the roads were in bad shape... not only because of the mud, but also because there had been so much rain that every street was a small stream.

On top of that, I was lucky enough to witness (and almost take part in!) the installation of the first working LibreRouter prototype on its antenna, :D

Visiting La Quintana

At the Mercedes Sosa monument


The Sierras, West Side

Halfway through the holidays we changed places, making a short trip across the sierras to the second place we had rented.

It was a medium-sized house, fully equipped, in the middle of a huge plot of land, near a much bigger house where the owner lived. The scenery was really pretty, everything well kept, green everywhere, gorgeous.

The little ones, contemplating the landscape

Group photo at one of the buildings of the Villa

The days of this second phase were generally cooler, which led to fewer "aquatic" activities, but we made no less of the holidays: besides a couple of dips in the pool, and a few in rivers and streams, we lazed around plenty and did some sightseeing.

As for splashing around, we went once to a bathing spot at Paso de las Tropas, near Nono, with the idea of having lunch, spending the afternoon, and then heading to the Museo Rocsen, but the hours slipped by and by then we were too tired, so we went back and left the museum for next time. We also found a nice spot on a stream close to where we were staying (we followed a little road by car until it ran out, then walked a stretch along the stream until we reached a nice area to spend the afternoon). The first time we actually stayed at a spot with a small waterfall, but the second time, since there were already people there when we arrived, we kept walking a bit further and found an even better place :)

In a little stream with some gorgeous waterfalls

Preparing lunch beside the water

As for the outings, we went several times to the square in San Javier, a few kilometres to the north. Not just for the supermarket shopping and such, but because there was a crafts fair every day, so we visited several times, and we even stumbled by chance into a puppet show that was very good. It was also an area with several nice places to eat, and even a small beer garden/ice-cream shop, which we went to more than once :)

We also drove quite a few kilometres south and visited the Sierra Pura olive oil factory. We loved it. They welcome you, show you the trees while explaining the differences between the types of olive they grow, then show you the machines they use for production and explain the whole process, answering whatever questions come up. Since there are no appointments or schedules, you arrive and they give you the tour shortly after, whether you're 3, 7, or 15 people. At the end there's a tasting of all the oils they make (three blends, three varietals, and many flavoured ones), and of course you can buy right there at a small discount :)

Malena, enjoying the outdoors

Anyway. Holidays, outings, rest. Back in the jungle now, and have been for a while...

Read more
Colin Ian King

Linux Kernel Module Growth

The Linux kernel grows at an amazing pace, each kernel release adds more functionality, more drivers and hence more kernel modules.  I recently wondered what the trend was for kernel module growth per release, so I performed module builds on kernels v2.6.24 through to v4.16-rc2 for x86-64 to get a better idea of growth rates:


From the resulting data, the rate of growth is relatively linear, with about 89 modules added per kernel release, which is not surprising as the size of the kernel is also growing at a fairly linear rate.  It is interesting to see that the number of modules has easily more than tripled in the 10 years between v2.6.24 and v4.16-rc2, a rate of about 470 new modules per year.  At this rate, Linux will see its 10,000th module land in around the year 2025.
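
Reproducing the module count for a given build is straightforward; a small sketch (the build tree path is hypothetical):

    import pathlib

    def count_modules(build_dir):
        # Every module built as =m leaves a .ko file in the build tree.
        return sum(1 for _ in pathlib.Path(build_dir).rglob("*.ko"))

    print(count_modules("/home/user/linux"))  # hypothetical checkout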

Read more
Dustin Kirkland

One of the many excellent suggestions from last year's HackerNews thread, Ask HN: What do you want to see in Ubuntu 17.10?, was to refresh the Ubuntu server's command line installer:

We're pleased to introduce this new installer, which will be the default Server installer for 18.04 LTS, and solicit your feedback.

Follow the instructions below to download the current daily image and install it into a KVM.  Alternatively, you could write it to a flash drive and install a physical machine, or try it in the virtual machine of your choice (VMware, VirtualBox, etc.).

$ wget http://cdimage.ubuntu.com/ubuntu-server/daily-live/current/bionic-live-server-amd64.iso
$ qemu-img create -f raw target.img 10G
$ kvm -m 1024 -boot d -cdrom bionic-live-server-amd64.iso -hda target.img
...
$ kvm -m 1024 target.img

For those too busy to try it themselves at the moment, I've taken a series of screenshots below, for your review.

Finally, you can provide feedback, bugs, patches, and feature requests against the Subiquity project in Launchpad:

Cheers,
Dustin

Read more
Dustin Kirkland

February 2008, Canonical's office in Lexington, MA
10 years ago today, I joined Canonical, on the very earliest version of the Ubuntu Server Team!

And in the decade since, I've had the tremendous privilege to work with so many amazing people, and the opportunity to contribute so much open source software to the Ubuntu ecosystem.

Marking the occasion, I've reflected about much of my work over that time period and thought I'd put down in writing a few of the things I'm most proud of (in chronological order)...  Maybe one day, my daughters will read this and think their daddy was a real geek :-)

1. update-motd / motd.ubuntu.com (September 2008)

Throughout the history of UNIX, the "message of the day" was always manually edited and updated by the local system administrator.  Until Ubuntu's message-of-the-day.  In fact, I received an email from Dennis Ritchie and Jon "maddog" Hall confirming this, in April 2010.  This started as a feature request for the Landscape team, but has turned out to be tremendously useful and informative to all Ubuntu users.  Just last year, we launched motd.ubuntu.com, which provides even more dynamic information about important security vulnerabilities and general news from the Ubuntu ecosystem.  Mathias Gug helped me with the design and publication.

2. manpages.ubuntu.com (September 2008)

This was the first public open source project I worked on, in my spare time at Canonical.  I had a local copy of the Ubuntu archive and I was thinking about what sorts of automated jobs I could run on it.  So I wrote some scripts that extracted the manpages out of each package, formatted them as HTML, and published them into a structured set of web directories.  10 years later, it's still up and running, serving thousands of hits per day.  In fact, this was one of the ways we were able to shrink the Ubuntu minimal image, by removing the manpages, since they're readable online.  Colin Watson and Kees Cook helped me with the initial implementation, and Matthew Nuzum helped with the CSS and Ubuntu theme in the HTML.

3. Byobu (December 2008)

If you know me at all, you know my passion for the command line UI/UX that is "Byobu".  Byobu was born as the "screen-profiles" project, over lunch at Google in Mountain View, in December of 2008, at the Ubuntu Developer Summit.  Around the lunch table, several of us (including Nick Barcet, Dave Walker, Michael Halcrow, and others), shared our tips and tricks from our own ~/.screenrc configuration files.  In Cape Town, February 2010, at the suggestion of Gustavo Niemeyer, I ported Byobu from Screen to Tmux.  Since Ubuntu Servers don't generally have GUIs, Byobu is designed to be a really nice interface to the Ubuntu command line environment.

4. eCryptfs / Ubuntu Encrypted Home Directories (October 2009)

I was familiar with eCryptfs from its inception in 2005, in the IBM Linux Technology Center's Security Team, sitting next to Michael Halcrow, who was the original author.  When I moved to Canonical, I helped Michael maintain the userspace portion of eCryptfs (ecryptfs-utils) and shepherded it into Ubuntu.  eCryptfs was super powerful, with hundreds of options and supported configurations, but all of that proved far too difficult for users at large.  So I set out to simplify it drastically, with an opinionated set of basic defaults.  I started with a simple command to mount a "Private" directory inside of your home directory, where you could stash your secrets.  A few months later, on a long flight to Paris, I managed to hack a new PAM module, pam_ecryptfs.c, that actually encrypted your entire home directory!  This was pretty revolutionary at the time -- predating Apple's FileVault or Microsoft's Bitlocker, even.  Today, tens of millions of Ubuntu users have used eCryptfs to secure their personal data.  I worked closely with Tyler Hicks, Kees Cook, Jamie Strandboge, Michael Halcrow, Colin Watson, and Martin Pitt on this project over the years.

5. ssh-import-id (March 2010)

With the explosion of virtual machines and cloud instances in 2009 / 2010, I found myself constantly copying public SSH keys around.  Moreover, given Canonical's globally distributed nature, I also regularly found myself asking someone for their public SSH keys, so that I could give them access to an instance, perhaps for some pair programming or assistance debugging.  As it turns out, everyone I worked with had a Launchpad.net account, and had their public SSH keys available there.  So I created (at first) a simple shell script to securely fetch and install those keys.  Scott Moser helped clean up that earliest implementation.  Eventually, I met Casey Marshall, who helped rewrite it entirely in Python.  Moreover, we contacted the maintainers of GitHub, and asked them to expose user public SSH keys via the API -- which they did!  Now, ssh-import-id is integrated directly into Ubuntu's new subiquity installer and used by many other tools, such as cloud-init and MAAS.

6. Orchestra / MAAS (August 2011)

In 2009, Canonical purchased 5 Dell laptops, which became the Ubuntu Server team's first "cloud".  These laptops were our very first lab for deploying and testing Eucalyptus clouds.  I was responsible for those machines at my house for a while, and I automated their installation with PXE, TFTP, DHCP, DNS, and a ton of nasty debian-installer preseed data.  That said -- it worked!  As it turned out, Scott Moser and Mathias Gug had both created similar setups at their houses for the same reason.  I was mentoring a new hire at Canonical named Andres Rodriguez at the time; he took over our part-time hacks, and we worked together to create the Orchestra project.  Orchestra itself was short lived, severely limited by Cobbler as a foundation technology, and so it was killed by Canonical.  But, six months later, a new project was created, based on the same general concept -- physical machine provisioning at scale -- with an entire squad of engineers led by...Andres Rodriguez :-)  MAAS today is easily one of the most important projects in the Ubuntu ecosystem and one of the most successful products in Canonical's portfolio.

7. pollinate / pollen / entropy.ubuntu.com (February 2014)

In 2013, I set out to protect Ubuntu at large from a set of attacks stemming from insufficient entropy at first boot.  This was especially problematic in virtual machine instances, in public clouds, where every instance is, by design, exactly identical to many others.  Moreover, the first thing that instance does is usually ... generate SSH keys.  This isn't hypothetical -- it's quite real.  Raspberry Pi's running Debian were deemed susceptible to this exact problem in November 2015.  So I designed and implemented a client (a shell script that runs at boot and fetches some entropy from one or more sources), as well as a high-performance server (golang).  The client is the 'pollinate' script, which runs on the first boot of every Ubuntu server, and the server is the cluster of physical machines processing hundreds of requests per minute at entropy.ubuntu.com.  Many people helped review the design and implementation, including Kees Cook, Jamie Strandboge, Seth Arnold, Tyler Hicks, James Troup, Scott Moser, Steve Langasek, Gustavo Niemeyer, and others.

8. The Orange Box (May 2014)

In December of 2011, in my regular 1:1 with my manager, Mark Shuttleworth, I told him about these new "Intel NUCs", which I had bought and placed around my house.  I had 3, each of which was running Ubuntu, and attached to a TV around the house, as a media player (music, videos, pictures, etc).  In their spare time, though, they were OpenStack Nova nodes, capable of running a couple of virtual machines.  Mark immediately asked, "How many of those could you fit into a suitcase?"  Within 24 hours, Mark had reached out to the good folks at TranquilPC and introduced me to my new mission -- designing the Orange Box.  I worked with the Tranquil folks through Christmas, and we took our first delivery of 5 of these boxes in January of 2014.  Each chassis held 10 little Intel NUC servers, and a switch, as well as a few peripherals.  Effectively, it's a small data center that travels.  We spent the next 4 months working on the hardware under wraps and then unveiled them at the OpenStack Summit in Atlanta in May 2014.  We've gone through a couple of iterations on the hardware and software over the last 4 years, and these machines continue to deliver tremendous value, from live demos on the booth, to customer workshops on premises, or simply accelerating our own developer productivity by "shipping them a lab in a suitcase".  I worked extensively with Dan Poler on this project, over the course of a couple of years.

9. Hollywood (December 2014)

Perhaps the highlight of my professional career came in October of 2016.  Watching Saturday Night Live with my wife Kim, we were laughing at a skit that poked fun at another of my favorite shows, Mr. Robot.  On the computer screen behind the main character, I clearly spotted Hollywood!  Hollywood is just a silly, fun little project I created on a plane one day, mostly to amuse Kim.  But now, it's been used in Saturday Night Live, NBC Dateline News, and an Experian TV commercial!  Even Jess Frazelle created a Docker container for it.
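
If you'd like to look busy too, hollywood has long been packaged in the Ubuntu archive; something like this should do it:

$ sudo apt install hollywood
$ hollywood    # fills a split-screen byobu session with impressive-looking nonsense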

10. petname / golang-petname / python-petname (January 2015)

From "warty warthog" to "bionic beaver", we've always had a focus on fun, and user experience here in Ubuntu.  How hard is it to talk to your colleague about your Amazon EC2 instance, "i-83ab39f93e"?  Or your container "adfxkenw"?  We set out to make something a little more user-friendly with our "petnames".  Petnames are randomly generated "adjective-animal" names, which are easy to pronounce, spell, and remember.  I curated and created libraries that are easily usable in Shell, Golang, and Python.  With the help of colleagues like Stephane Graber and Andres Rodriguez, we now use these in many places in the Ubuntu ecosystem, such as LXD and MAAS.

If you've read this post, thank you for indulging me in a nostalgic little trip down memory lane!  I've had an amazing time designing, implementing, creating, and innovating with some of the most amazing people in the entire technology industry.  And here's to a productive, fun future!

Cheers,
:-Dustin

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.4.0 alpha 1 and python-libmaas 0.6.0 have now been released and are available for Ubuntu Bionic.
MAAS Availability
MAAS 2.4.0 alpha 1 is available in the Bionic -proposed archive or in the following PPA:
ppa:maas/next
 
Python-libmaas Availability
Libmaas is available in the Ubuntu Bionic archive or you can download the source from:

MAAS 2.4.0 (alpha1)

Important announcements

Dependency on tgt (iSCSI) has now been dropped

Starting from MAAS 2.3, the way MAAS runs ephemeral environments and performs deployments was changed away from using iSCSI. Instead, we introduced the ability to do the same using a squashfs image. That completely removed the need for tgt, but we didn't drop the dependency in 2.3. As of 2.4, however, tgt has now been completely removed.

Dependency on apache2 has now been dropped in the debian packages

Starting from MAAS 2.0, MAAS made the UI available on port 5240 and deprecated the use of port 80. However, so as not to break users upgrading from the previous LTS, MAAS continued to depend on apache2 to provide a reverse proxy allowing users to connect via port 80.

The MAAS snap, however, changed that behavior, no longer providing access to MAAS via port 80. To keep the debian package consistent with the snap, starting from MAAS 2.4 it no longer depends on apache2 to provide reverse proxy capability on port 80.

Python libmaas (0.6.0) now available in the Ubuntu Archive

I’m happy to announce that the new MAAS Client Library is now available in the Ubuntu Archives for Bionic. Libmaas is an asyncio based client library that provides a nice interface to interact with MAAS. More details below.

New Features & Improvements

Machine Locking

MAAS now adds the ability to lock machines, which prevents the user from performing actions that could change a machine's state. This gives MAAS a mechanism to guard against potentially catastrophic actions. For example, it will prevent mistakenly powering off machines, or mistakenly releasing machines that could bring workloads down.
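
As a rough sketch of the CLI (verbs as I understand the 2.4 API; verify against maas $PROFILE machine --help on your install):

$ maas admin machine lock $SYSTEM_ID comment="hosting production workload"
$ maas admin machine unlock $SYSTEM_ID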

Audit logging

MAAS 2.4 now allows administrators to audit users' actions, with the introduction of audit logging. The audit logs are available to administrators via the MAAS CLI/API, giving them a centralized location to access these logs.
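
For example, assuming the audit entries are surfaced through the existing events endpoint (my reading of the in-progress docs; verify against your install):

$ maas admin events query level=AUDIT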

Documentation is in the process of being published. For raw access please refer to the following link:

https://github.com/CanonicalLtd/maas-docs/pull/766/commits/eb05fb5efa42ba850446a21ca0d55cf34ced2f5d

Commissioning Harness – Supporting firmware upgrade and hardware specific scripts

The commissioning harness has been expanded with various improvements to help administrators write their own firmware upgrade and hardware-specific scripts. These improvements address many of the challenges administrators face when performing such tasks at scale. They include:

  • Ability to auto-select all the firmware upgrade/storage hardware changes (API only, UI will be available soon)

  • Ability to run scripts only for the hardware they are intended to run on.

  • Ability to reboot the machine while on the commissioning environment without disrupting the commissioning process.

This allows administrators to:

  • Create a hardware-specific script by declaring which machines it needs to run on, specifying the PCI ID, modalias, vendor or model of the machine or device (see the metadata sketch after this list).

  • Create firmware upgrade scripts that require a reboot before the machine finishes the commissioning process, by describing this in the script's metadata.

  • Define where a script can obtain proprietary firmware and/or proprietary tools needed to perform any of the required operations.
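
To make the above concrete, here is a rough sketch of what such a script's embedded metadata might look like; the field names and values follow my reading of the in-progress documentation, so treat them as illustrative rather than authoritative:

#!/bin/bash
# --- Start MAAS 1.0 script metadata ---
# name: 50-nic-firmware-upgrade
# title: Upgrade NIC firmware
# description: Upgrade the firmware on a specific NIC model
# script_type: commissioning
# hardware_type: network
# for_hardware: pci:8086:1563
# may_reboot: True
# packages: {apt: [ethtool]}
# --- End MAAS 1.0 script metadata ---
echo "firmware upgrade steps for the matched device would go here"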

Minor improvements – Gather information about BIOS & firmware

MAAS now gathers more information about the underlying system, such as the model, serial, BIOS and firmware information of a machine (where available). It also gathers this information for storage devices and network interfaces.

MAAS Client Library (python-libmaas)

New upstream release – 0.6.0

A new upstream release is now available in the Ubuntu Archive for Bionic. The new release includes the following changes:

  • Add/read/update/delete storage devices attached to machines.

  • Configure partitions and mount points

  • Configure Bcache

  • Configure RAID

  • Configure LVM

Known issues & workarounds

LP: #1748712  – 2.4.0a1 upgrade failed with old node event data

It has been reported that an upgrade to MAAS 2.4.0a1 failed due to old data from a non-existent node being stored in the database. This could have been caused by an older devel version of MAAS leaving an entry in the node event table. A workaround is provided in the bug report.

If you hit this issue, please update the bug report immediately so that the MAAS developers can investigate.

Bug fixes

Please refer to the following for all bug fixes in this release.

https://launchpad.net/maas/+milestone/2.4.0alpha1

Read more
K. Tsakalozos

In this post we will see how to automate the deployment of an ASP.NET Core application on an on-prem Kubernetes cluster. We will base our work on the excellent blog “Deploying ASP.NET Core apps on App Engine” by Mete Atamel. Our contribution is a) showing how to target public as well as private clouds for Kubernetes deployments, and b) automating the delivery of your software through a basic CI/CD pipeline based on Jenkins.

What will we be using

  • An Ubuntu 16.04 machine
  • git command-line client (installed by sudo apt install git)
  • Internet access
  • $0, yes this is going to be a “look how much you can do for free” kind of blog post

Quick outline

  • Create an ASP.NET Core application on the Ubuntu machine and push the code to GitHub.
  • Package the application in a Docker container and upload it to Docker Hub
  • Spin up a Kubernetes cluster using the Canonical distribution. Here we will deploy the cluster locally, but you can use any private or public cloud you might have access to.
  • Deploy your application on the Kubernetes cluster and expose it.
  • Deploy Jenkins next to Kubernetes and automate the delivery of your app.

Let’s not waste any time; we have a long way ahead of us.

ASP.NET Core applications on Linux

Installing .NET on Ubuntu 16.04 is as simple as adding a repository and getting dotnet-sdk-2.1.4:

$ curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg 
$ sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg
$ sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
$ sudo apt-get install apt-transport-https
$ sudo apt-get update
$ sudo apt-get install dotnet-sdk-2.1.4

We can now create our application. It is going to be the template razor application as described in Mete’s codelab and we will not even bother to change the application’s name!

$ mkdir -p ~/workspace/dotnet
$ cd ~/workspace/dotnet
$ dotnet new razor -o HelloWorldAspNetCore
$ cd HelloWorldAspNetCore
$ dotnet run

You should see the application in your browser at http://localhost:5000.

Our code will live in a public GitHub repository, so go ahead and create an account if you do not already have one (https://github.com). Click the “New repository” button to create a public repository named HelloWorldAspNetCore.

Creating a repository for our code.

Let’s add a .gitignore file at the root of our project and push our code:

$ cd ~/workspace/dotnet/HelloWorldAspNetCore
$ wget https://raw.githubusercontent.com/OmniSharp/generator-aspnet/master/templates/gitignore.txt
$ mv gitignore.txt .gitignore
$ git init
$ git add .
$ git commit -m "Initial commit"
$ git remote add origin https://github.com/ktsakalozos/HelloWorldAspNetCore.git
$ git push -u origin master

You can now relax! Your code is safe with GitHub.

Package your application in a Docker container

Installing docker requires adding the respective repository and apt-getting docker-ce:

$ sudo apt-get install apt-transport-https ca-certificates curl \
software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
$ sudo apt-get update
$ sudo apt-get install docker-ce
$ sudo usermod -a -G docker $USER
$ newgrp docker

While we are in the process of setting up docker we might as well register with Docker Hub and create a repository to store our images. Go to https://hub.docker.com and create an account. I will be here waiting :).

After logging in to Docker Hub click on the “Create Repository” button and create a new public repository. That is where we will be pushing our docker images. For the rest of this blog the docker user is “kjackal” and the repository is named “hello-dotnet”. This will make more sense to you shortly.

New docker repository form.

To package our application we first need to ask dotnet to compile our code and gather any dependencies into a folder for later deployment. This is done with:

$ dotnet publish -c Release

Our docker container should package everything under bin/Release/netcoreapp2.0/publish/. We create a `Dockerfile` at the root of our project with the following contents:

$ cd ~/workspace/dotnet/HelloWorldAspNetCore/
$ cat ./Dockerfile
FROM gcr.io/google-appengine/aspnetcore:2.0
ADD ./bin/Release/netcoreapp2.0/publish/ /app
ENV ASPNETCORE_URLS=http://*:${PORT}
WORKDIR /app
ENTRYPOINT [ "dotnet", "HelloWorldAspNetCore.dll"]

Building the container and testing that it works:

$ docker build -t kjackal/hello-dotnet:v1 .
$ docker run -p 8080:8080 kjackal/hello-dotnet:v1

You should see the output at http://localhost:8080.

It is time to push the first version to Docker Hub:

$ docker login
$ docker push kjackal/hello-dotnet:v1

The Dockerfile should be part of the code so we commit it and push it to the git repository:

$ git add Dockerfile
$ git commit -m "Adding dockerfile"
$ git push origin master

Deploy a Kubernetes Cluster

For the on-prem Kubernetes deployment we will go with Canonical’s solution. The reason (apart from me being biased) is that Canonical offers a seamless and effortless transition from a toy deployment running on your laptop to a full-blown production-grade Kubernetes deployed on private clouds, public clouds, or even bare metal.

In this blog we show how to deploy Kubernetes and Jenkins on your localhost (laptop/desktop) — just make sure you have at least 8GB of RAM. We first need to have LXD running. LXD provides a really powerful type of container based on the same technologies as Docker. Contrary to Docker, LXD containers more closely resemble virtual machines (VMs). Let’s simplify things and assume from now on that LXD containers are VMs that boot instantly and have no performance overhead!

In the following snippet we install LXD. While initialising (/snap/bin/lxd init), go with the defaults, but do not enable IPv6: when asked “What IPv6 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]?”, reply with “none”.

$ sudo snap install lxd
$ sudo usermod -a -G lxd $USER
$ newgrp lxd
$ /snap/bin/lxd init

At this point we can use either Juju or conjure-up to deploy Kubernetes. Conjure-up is essentially a wizard sitting on top of Juju.

As already mentioned by Tim, installing Kubernetes with conjure-up is as simple as:

$ sudo snap install conjure-up --classic
$ conjure-up

Canonical Kubernetes comes in two flavours:

  1. kubernetes-core is a cut-down version installed on two machines, in our case two LXD containers running on our localhost.
  2. canonical-kubernetes is the full production-grade deployment with features such as HA and monitoring.

We will go for kubernetes-core; on the next screen select localhost as the cloud provider. Follow the wizard’s steps to the end and wait for the deployment to finish. For a headless deployment you can run conjure-up kubernetes-core localhost.

To review the status of our deployment have a look at:

$ juju status

To get into one of the LXD containers/machines we use juju ssh, for example:

$ juju ssh kubernetes-master/0

Inside kubernetes-master under /home/ubuntu you will find a config file you can use for accessing your cluster. We can fetch that file with:

$ juju scp kubernetes-master/0:config .

Conjure-up has already copied the Kubernetes config locally and installed kubectl for us. How nice!

Automate the deployment process, CI/CD

We will be showing a few Jenkins jobs to automate the process of building, packaging and deploying our application. The intention here is to show everything that happens under the hood and not hide behind a flashy UI.

First things first, we need a Jenkins machine.

$ juju deploy jenkins

We deploy Jenkins next to our Kubernetes cluster. This will take some time; you can check the progress of the deployment with juju status.

Next we need to set a password for Jenkins and expose it so we can access its UI on port 8080. Exposing Jenkins is not needed in the localhost deployment, which uses LXD containers, but we show it here for completeness.

$ juju config jenkins password='your_secure_password'
$ juju expose jenkins

Before we start crafting our jobs we need to configure Jenkins a bit more. I can tell you beforehand that our jobs will need to run sudo without being asked for a password. The easiest way to achieve that is to edit /etc/sudoers on the Jenkins machine. Here is how we append a line to the sudoers file with Juju:

$ juju run --unit jenkins/0 -- 'sudo echo "jenkins ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers'

We also know that our Jenkins jobs need to talk to Kubernetes. To this end, Jenkins will need the kubeconfig file. We take the file from kubernetes-master and place it on our Jenkins machine under /var/tmp:

$ juju scp kubernetes-master/0:config .
$ juju scp config jenkins/0:/var/tmp/

Last part of the configuration, I promise! We know our application will be exposed using Kubernetes NodePort on port 31576. We need to make sure there is no firewall blocking that port and requests can reach it:

$ juju run --application kubernetes-worker -- open-port 31576

We are now ready to create our three main jobs:

Three jobs we will be creating
  • The first Jenkins job is “Install dependencies”. This job is just a shell script installing all the software packages needed to a) talk to Kubernetes, b) build our ASP.NET application, and c) package everything into a docker container. Place the following in a shell script job and run it once:
echo "Installing kubectl"
sudo snap install kubectl --classic
echo "Installing dotnet"
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg
sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt-get install apt-transport-https -y
sudo apt-get update
sudo apt-get install dotnet-sdk-2.1.4 -y
echo "Installing docker"
sudo apt-get install apt-transport-https ca-certificates curl \
software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
sudo apt-get update
sudo apt-get install docker-ce -y
  • The second Jenkins job bootstraps the service in Kubernetes. It creates a deployment using the v1 image we created above, and it makes sure all operations are recorded (the --record param). This deployment is exposed using NodePort 31576. Place the following in a Jenkins job and run it once, remembering to update the docker user name (kjackal):
sudo /snap/bin/kubectl --kubeconfig=/var/tmp/config run hello-dotnet \
--image=kjackal/hello-dotnet:v1 --port=8080 --record
echo "apiVersion: v1
kind: Service
metadata:
name: hello-dotnet
spec:
type: NodePort
ports:
- port: 8080
nodePort: 31576
name: http
selector:
run: hello-dotnet" > /tmp/expose.yaml
sudo /snap/bin/kubectl --kubeconfig=/var/tmp/config apply -f /tmp/expose.yaml

Use juju status to find the IP of a kubernetes worker and open a browser at http://<kubernetes-worker-ip>:31576. Your application is served from Kubernetes! But we are not done yet.
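
If you would rather not eyeball the status output, one way to grab a worker’s address is via Juju’s hook tools (a small sketch):

$ juju run --unit kubernetes-worker/0 -- unit-get public-address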

  • Let’s create a third job (“Build and release”) that pulls your code from GitHub, compiles it, puts it in a container, and deploys that container. Replace the repository and the docker username in the following snippet, then create a Jenkins job:
rm -rf HelloWorldAspNetCore
git clone https://github.com/ktsakalozos/HelloWorldAspNetCore.git
cd HelloWorldAspNetCore
dotnet publish -c Release
sudo docker login -u kjackal -p ${DOCKER_PASS}
sudo docker build -t kjackal/hello-dotnet:${DOCKER_TAG} .
sudo docker push kjackal/hello-dotnet:${DOCKER_TAG}
sudo /snap/bin/kubectl --kubeconfig=/var/tmp/config set image deployment/hello-dotnet hello-dotnet=kjackal/hello-dotnet:${DOCKER_TAG}

Two parameters are needed: ${DOCKER_TAG} is a string, and ${DOCKER_PASS} holds the docker password of the user (kjackal in this case). You have to tick the checkbox indicating this is a parameterized job and add the two parameters. We are ready! Trigger the job and wait for it to finish. Your code should find its way to our Kubernetes cluster.

You do not believe me? Have a look at the rollout history of our deployment.

$ kubectl rollout history deployment/hello-dotnet

Release from GitHub

Everything looks great so far. Every time we want to release, we log in to Jenkins and trigger the “Build and release” job... Let’s try something different: let’s trigger a release from our code by creating a tag.

Create a “Release from Github” job on Jenkins and have it run periodically every 5 minutes:

Trigger job every 5 minutes

We want this job to look for new tags, and if it detects one, to perform the usual compile, package, deploy cycle. Here is what one such job might look like:

#!/bin/bash
REPO="https://github.com/ktsakalozos/HelloWorldAspNetCore.git"
rm -rf ./HelloWorldAspNetCore
git clone $REPO
cd HelloWorldAspNetCore
# Initialise the list of known tags on the first run
if [ ! -f /var/tmp/known-tags ]; then
  git tag > /var/tmp/known-tags
  echo "Initialising. List of preexisting git tags:"
  cat /var/tmp/known-tags
  exit 1
fi
mv /var/tmp/known-tags /var/tmp/known-tags.old
git tag > /var/tmp/known-tags
diff /var/tmp/known-tags /var/tmp/known-tags.old
if [ $? == '0' ]; then
  echo "No new git tags detected."
  exit 2
fi
# We have new tags; build and deploy the most recent one
last_tag=$(grep -v -f /var/tmp/known-tags.old /var/tmp/known-tags | tail -n 1)
git checkout tags/${last_tag}
echo "Building ${last_tag}"
dotnet publish -c Release
sudo docker login -u kjackal -p <replace_with_docker_password>
sudo docker build -t kjackal/hello-dotnet:${last_tag} .
sudo docker push kjackal/hello-dotnet:${last_tag}
sudo /snap/bin/kubectl --kubeconfig=/var/tmp/config set image deployment/hello-dotnet hello-dotnet=kjackal/hello-dotnet:${last_tag}

Make sure you run this job once so it gets initialised. Afterwards, every 5 minutes, this job will look at the available tags and fail if no new tags are present.

Let’s create a new tag/release now. Go to your GitHub repository, click releases (as shown below), and fill in the release form.

Here is where you find the releases you have.
Release form on Github

Within 5 minutes your release will reach Kubernetes, without you needing to log in to Jenkins. Automagically!

A few points to note:

  1. There might be Jenkins plugins for this, but we said we would be looking at what happens under the hood without hiding behind a GUI.
  2. You should place your Jenkins jobs on GitHub along with your code.

Where to go from here

So far you have seen the parts of a basic, yet fully functional, CI/CD pipeline that delivers a .NET application onto any Kubernetes cluster. Each of the steps shown above is subject to improvement and tailoring based on your needs.

  • The ASP.NET application would normally have automated tests. You would want to run these tests in your CI. Travis is a great tool for this purpose, and depending on the size and nature of your project it could be free. Alternatively, you could set up Jenkins to run these tests as often as you please and report back the results.
  • Look into Helm if you intend to distribute your application instead of offering it as a service hosted by you.
  • Experiment with release strategies and find the one that best suits your needs. Make sure you read through Kubernetes deployment strategies.
  • You could consider using the auto scaling features of Kubernetes.
  • The Kubernetes cluster here is on LXD containers. You should use a cloud, either private (e.g. OpenStack) or public. With conjure-up and Juju the cluster deployment process remains the same regardless of the targeted cloud. You have no excuse.
  • Finally, as you move to a cloud make sure you deploy the Canonical Distribution of Kubernetes instead of kubernetes-core. You will get a more robust deployment with HA features, logging and monitoring.

Automated Delivery of ASP.NET Core Apps on On-Prem Kubernetes was originally published in ITNEXT on Medium.

Read more
Dustin Kirkland

  • To date, we've shaved the Bionic (18.04 LTS) minimal images down by over 53% since Ubuntu 14.04 LTS, trimming nearly 100 packages and thousands of files.
  • Feedback welcome here: https://ubu.one/imgSurvey
In last year's "Ask HN: What do you want to see in Ubuntu 17.10?" post on Hacker News, and the subsequent treatment of the data, we noticed a recurring request for "lighter, smaller, more minimal" Ubuntu images.

This is particularly useful for container images (Docker, LXD, Kubernetes, etc.), embedded device environments, and anywhere a developer wants to bootstrap an Ubuntu system from the smallest possible starting point.  Smaller images generally:
  • are subject to fewer security vulnerabilities and subsequent updates
  • reduce overall network bandwidth consumption
  • and require less on-disk storage
First, a definition...
"The Ubuntu Minimal Image is the smallest base upon which a user can apt install any package in the Ubuntu archive."
By design, Ubuntu Minimal Images specifically lack the creature comforts, user interfaces and user design experience that have come to define the Ubuntu Desktop and Ubuntu Cloud images.
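
You can see that definition in action by pulling a minimal image into the container runtime of your choice and apt-installing something on top of it; for example, with Docker (tag name assumed):

$ docker run --rm -it ubuntu:bionic sh -c "apt update && apt install -y htop"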

Read more
Colin Ian King

stress-ng V0.09.15

It has been a while since my last post about stress-ng so I thought it would be useful to provide an update on the changes since V0.08.09.

I have been focusing on making stress-ng more portable so it can build with various versions of clang and gcc as well as run against a wide range of kernels.   The portability shims and config detection added to stress-ng allow it to build and run on a wide range of Linux systems, as well as GNU/HURD, Minix, Debian kFreeBSD, various BSD systems, OpenIndiana and OS X.

Enabling stress-ng to work on a wide range of architectures and kernels with a range of compiler versions has helped me to find and fix various corner case bugs.  Also, static analysis with a variety of tools has helped to drive up the code quality. As ever, I thoroughly recommend using static analysis tools on any project to find bugs.

Since V0.08.09 I've added the following stressors:

  • inode-flags - exercise inode flags using the FS_IOC_GETFLAGS/FS_IOC_SETFLAGS ioctls (see ioctl_iflags(2) for more details)
  • sockdiag - exercise the Linux sock_diag netlink socket diagnostics
  • branch - exercise branch prediction
  • swap - exercise adding and removing variously sized swap partitions
  • ioport - exercise I/O port read/writes to try and cause CPU I/O bus delays
  • hrtimers - high resolution timer stressor
  • physpage - exercise the lookup of a physical page address and page count of a virtual page
  • mmapaddr - mmap pages to randomly unused VM addresses and exercise mincore and segfault handling
  • funccall - exercise function calling with a range of function arguments types and sizes, for benchmarking stack/CPU/cache and compiler.
  • tree - BSD tree (red/black and splay) stressor, good for exercising memory/cache
  • rawdev - exercise raw block device I/O reads
  • revio - reverse file offset random writes, causes lots of fragmentation and hence many file extents
  • mmap-fixed - stress fixed address mmaps, with a wide range of VM addresses
  • enosys - exercise a wide range of random system call numbers that are not wired up, hence generating ENOSYS errors
  • sigpipe - stress SIGPIPE signal generation and handling
  • vm-addr - exercise a wide range of VM addresses for fixed address mmaps with thorough address bit patterns stressing

Stress-ng has nearly 200 stressors, and many of these have various stress methods that can be selected to perform specific stress testing.  These are all documented in the manual.  I've also updated the stress-ng project page with various links to academic papers and presentations that have used stress-ng in various ways to stress computer systems.  It is useful to find out how stress-ng is being used so that I can shape this tool in the future.
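
For example, selecting a stressor and one of its methods from the manual looks like this:

# run 4 instances of the cpu stressor using the fft method for 60 seconds,
# with a brief metrics summary at the end
$ stress-ng --cpu 4 --cpu-method fft --timeout 60s --metrics-brief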

As ever, patches for fixes and improvements are always appreciated.  Keep on stressing!

Read more
Dustin Kirkland


I'm the proud owner of a new Dell XPS 13 Developer Edition (9360) laptop, pre-loaded from the Dell factory with Ubuntu 16.04 LTS Desktop.

Kudos to the Dell and Canonical teams that have engineered a truly remarkable developer desktop experience.  You should also check out the post from Dell's senior architect behind the XPS 13, Barton George.

As it happens, I'm also the proud owner of a long loved, heavily used, 1st Generation Dell XPS 13 Developer Edition laptop :-)  See this post from May 7, 2012.  You'll be happy to know that machine is still going strong.  It's now my wife's daily driver.  And I use it almost every day, for any and all hacking that I do from the couch, after hours, after I leave the office ;-)

Now, this latest XPS edition is a real dream of a machine!

From a hardware perspective, this newer XPS 13 sports an Intel i7-7660U 2.5GHz processor and 16GB of memory.  While that's mildly exciting to me (as I've long used i7's and 16GB), here's what I am excited about...

The 500GB NVMe storage and a whopping 1239 MB/sec I/O throughput!

kirkland@xps13:~$ sudo hdparm -tT /dev/nvme0n1
/dev/nvme0n1:
Timing cached reads: 25230 MB in 2.00 seconds = 12627.16 MB/sec
Timing buffered disk reads: 3718 MB in 3.00 seconds = 1239.08 MB/sec

And on top of that, this is my first QHD+ touch screen laptop display, sporting a magnificent 3200x1800 resolution.  The graphics are nothing short of spectacular.  Here's nearly 4K of Hollywood hard "at work" :-)


The keyboard is super comfortable.  I like it a bit better than the 1st generation.  Unlike your Apple friends, we still have our F-keys, which is important to me as a Byobu user :-)  The placement of the PgUp, PgDn, Home, and End keys (as Fn + Up/Down/Left/Right) takes a while to get used to.


The speakers are decent for a laptop, and the microphone is excellent.  The webcam is placed in an odd location (lower left of the screen), but it has quite nice resolution and focus quality.


And Bluetooth and WiFi, well, they "just work".  I got 98.2 Mbits/sec of throughput over WiFi.

kirkland@xps:~$ iperf -c 10.0.0.45
------------------------------------------------------------
Client connecting to 10.0.0.45, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.0.0.149 port 40568 connected with 10.0.0.45 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.1 sec 118 MBytes 98.2 Mbits/sec

There's no external display port, so you'll need something like this USB-C-to-HDMI adapter to project to a TV or monitor.


There's 1x USB-C port, 2x USB-3 ports, and an SD-Card reader.


One of the USB-3 ports can be used to charge your phone or other devices, even while your laptop is suspended.  I use this all the time, to keep my phone topped up while I'm aboard planes, trains, and cars.  To do so, you'll need to enable "USB PowerShare" in the BIOS.  Here's an article from Dell's KnowledgeBase explaining how.


Honestly, I have only one complaint...  And that's that there is no Trackstick mouse (which is available on some Dell models).  I'm not a huge fan of the Touchpad.  It's too sensitive, and my palms are always touching it inadvertently.  So I need to use an external mouse to be effective.  I'll continue to provide this feedback to the Dell team, in the hopes that one day I'll have my perfect developer laptop!  Otherwise, this machine is a beauty.  I'm sure you'll love it too.

Cheers,
Dustin

Read more