Canonical Voices

Colin Ian King

New features in forkstat V0.02.00

The forkstat mascot
Forkstat is a tiny utility I wrote a while ago to monitor process activity via the process events connector. Recently Philipp Gesang sent me a patch adding a new -l option that switches to line buffered output to reduce the delay on output when redirecting stdout, which is a useful addition to the tool.  During some spare time I looked at the original code and noticed that I had overlooked some of the lesser used process event types:
  • STAT_PTRC - ptrace attach/detach events
  • STAT_UID - UID (and GID) change events
  • STAT_SID - SID change events
..so I've now added support for these events too.
    I've also added some extra per-process information on each event. The new -x "extra info" option will now also display the UID of the process and where possible the TTY it is associated with.  This allows one to easily detect who is responsible for generating the process events.

    The following example shows forkstat being used to detect when a process is being traced using ptrace:

     sudo ./forkstat -x -e ptrce  
    Time Event PID UID TTY Info Duration Process
    11:42:31 ptrce 17376 0 pts/15 attach strace -p 17350
    11:42:31 ptrce 17350 1000 pts/13 attach top
    11:42:37 ptrce 17350 1000 pts/13 detach

    Process 17376 runs strace on process 17350 (top). We can see the ptrace attach event on the process and then, a few seconds later, the detach event.  We can see that strace was being run from pts/15 by root.  Using forkstat we can now snoop on users who are snooping on other users' processes.

    I use forkstat mainly to capture busy process fork/exec/exit activity that tools such as ps and top cannot see because of the very short duration of some processes or threads. Sometimes processes are created so rapidly that one needs to run forkstat with a high priority to capture all the events, and so the new -r option will run forkstat with a high real time scheduling priority to try to capture all the events.
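
    As a rough example (assuming the fork, exec and exit event names work with -e in the same way as the ptrce example above; that combination is my assumption rather than something shown in the original), the new options can be combined like this:

     sudo forkstat -x -l -r -e fork,exec,exit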

    These new features landed in forkstat V0.02.00 for Ubuntu 17.10 Artful Aardvark.

    Read more
    Alan Griffiths

    making-mesa

    In order to trace a problem[1] in the Mir stack I needed to build mesa to facilitate debugging. As the options needed were not entirely obvious I’m blogging the recipe here so I can find it again next time.

    $ apt source libgl1-mesa-dri
    $ cd mesa-17.0.3
    $ QUILT_PATCHES=debian/patches/ quilt push -a
    $ sudo mk-build-deps -i
    $ ./configure --with-gallium-drivers= --with-egl-platforms=mir,drm,rs
    $ make -j6
    $ sudo make install && sudo ldconfig
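
    A quick sanity check (not part of the original recipe) is to confirm that the dynamic linker now resolves the freshly installed libraries:

    $ ldconfig -p | grep -E 'libEGL|libGL'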

    [1] The stack breaking with EGL clients when the Mir server is running on a second host Mir server that is using the mesa-kms plugin and mesa is using the intel i965 driver. (LP: #1699484)

    Read more
    facundo

The cloud at home


    I finally have a project up and running that started a while ago but came together piece by piece, rather slowly. Even so, it's still not 100% finished, although there isn't much left to do.

    Have you ever read the phrase "there is no cloud, it's just someone else's computer"? Well, this project was based on buying a small computer and putting it in my house, :)

    The cloud

    What for? Basically to run two jobs...

    One is Magicicada (the server side), the file synchronisation service reborn from the ashes of Ubuntu One. So I have both my desktop computer and my laptop with a couple of directories synchronised between them, whether I'm at home or away, which is very useful to me. It also serves as a backup for a great many files (even ones I don't sync to the laptop, like photos and videos).

    The other job I set running on my "personal cloud" is cdpetron, the automatic CDPedia generator. It's a process that takes many days to finish and also makes fairly heavy use of the disk, so having it running on my desktop is quite annoying.

    What hardware did I put all this on? If you're imagining a datacenter, it's nothing like that. It's a mini computer: the Gigabyte Brix GB-BXBT-1900.

    Mini computer

    As you can see in the specifications it's fairly modest: a Celeron, room for one memory DIMM and a 2.5" disk (neither included), and a few ports, such as Ethernet (which I obviously use all the time), HDMI and USB (which I only used during installation) and a couple more I didn't use at all.

    I fitted this little machine with 8GB of RAM (which holds up fine even with everything running at once) and a classic 750GB spinning hard disk, which should give me room to work with for quite a while.

    Why at home and not on a remote server or something more "cloudy"? Basically, cost.

    Renting a VPS is relatively cheap, with a decent disk, one or two cores and good memory. That's how I run my blog, the linkode server, and a few other things. But once you start growing in disk space, it gets very expensive. At one point I was renting a VPS with enough disk to build CDPedia there, but it cost me a lot of money, and the moment that disk also became too small, I cancelled it. On top of that, add the synchronised and backup files I keep in Magicicada, which amount to more than 200GB.

    The maths is simple: all the hardware I bought (the little computer, disk, memory, a couple of cables) cost me less than paying for one year of the "piece of cloud" I would need...

    Does having this at home have any downsides? It takes up some space and uses electricity, but it's tiny, and since it has no fans it makes no noise.

    There is one factor, though, that clearly is a downside: it doesn't give me an "offsite backup". That is, if something happens that affects all the computers in my house (a fire, lightning, a burglary, whatever), this backup is affected too. To mitigate this problem I'm thinking of freezing periodic backups somewhere else.

    Read more
    Colin Ian King

    The stress-ng logo
    The latest release of stress-ng contains a mechanism to measure latencies via a cyclic latency test.  Essentially this is just a loop that cycles around performing high precision sleeps and measures the (extra overhead) latency taken to perform the sleep compared to the expected time.  This loop runs with either the Round-Robin (rr) or First-In-First-Out (fifo) real time scheduling policies.

    The cyclic test can be configured to specify the sleep time (in nanoseconds), the scheduling type (rr or fifo),  the scheduling priority (1 to 100) and also the sleep method (explained later).

    The first 10,000 latency measurements are used to compute various latency statistics:
    • mean latency (aka the 'average')
    • modal latency (the most 'popular' latency)
    • minimum latency
    • maximum latency
    • standard deviation
    • latency percentiles (25%, 50%, 75%, 90%, 95.40%, 99.0%, 99.5%, 99.9% and 99.99%)
    • latency distribution (enabled with the --cyclic-dist option)
    The latency percentiles indicate the latency below which a given percentage of the samples fall.  For example, the 99% percentile for the 10,000 samples is the latency at or below which 9,900 of the samples fall.

    The latency distribution is shown when the --cyclic-dist option is used; one has to specify the distribution interval in nanoseconds and up to the first 100 values in the distribution are output.

    For an idle machine, one can invoke just the cyclic measurements with stress-ng as follows:

     sudo stress-ng --cyclic 1 --cyclic-policy fifo \
    --cyclic-prio 100 --cyclic-method clock_ns \
    --cyclic-sleep 20000 --cyclic-dist 1000 -t 5
    stress-ng: info: [27594] dispatching hogs: 1 cyclic
    stress-ng: info: [27595] stress-ng-cyclic: sched SCHED_FIFO: 20000 ns delay, 10000 samples
    stress-ng: info: [27595] stress-ng-cyclic: mean: 5242.86 ns, mode: 4880 ns
    stress-ng: info: [27595] stress-ng-cyclic: min: 3050 ns, max: 44818 ns, std.dev. 1142.92
    stress-ng: info: [27595] stress-ng-cyclic: latency percentiles:
    stress-ng: info: [27595] stress-ng-cyclic: 25.00%: 4881 us
    stress-ng: info: [27595] stress-ng-cyclic: 50.00%: 5191 us
    stress-ng: info: [27595] stress-ng-cyclic: 75.00%: 5261 us
    stress-ng: info: [27595] stress-ng-cyclic: 90.00%: 5368 us
    stress-ng: info: [27595] stress-ng-cyclic: 95.40%: 6857 us
    stress-ng: info: [27595] stress-ng-cyclic: 99.00%: 8942 us
    stress-ng: info: [27595] stress-ng-cyclic: 99.50%: 9821 us
    stress-ng: info: [27595] stress-ng-cyclic: 99.90%: 22210 us
    stress-ng: info: [27595] stress-ng-cyclic: 99.99%: 36074 us
    stress-ng: info: [27595] stress-ng-cyclic: latency distribution (1000 us intervals):
    stress-ng: info: [27595] stress-ng-cyclic: latency (us) frequency
    stress-ng: info: [27595] stress-ng-cyclic: 0 0
    stress-ng: info: [27595] stress-ng-cyclic: 1000 0
    stress-ng: info: [27595] stress-ng-cyclic: 2000 0
    stress-ng: info: [27595] stress-ng-cyclic: 3000 82
    stress-ng: info: [27595] stress-ng-cyclic: 4000 3342
    stress-ng: info: [27595] stress-ng-cyclic: 5000 5974
    stress-ng: info: [27595] stress-ng-cyclic: 6000 197
    stress-ng: info: [27595] stress-ng-cyclic: 7000 209
    stress-ng: info: [27595] stress-ng-cyclic: 8000 100
    stress-ng: info: [27595] stress-ng-cyclic: 9000 50
    stress-ng: info: [27595] stress-ng-cyclic: 10000 10
    stress-ng: info: [27595] stress-ng-cyclic: 11000 9
    stress-ng: info: [27595] stress-ng-cyclic: 12000 2
    stress-ng: info: [27595] stress-ng-cyclic: 13000 2
    stress-ng: info: [27595] stress-ng-cyclic: 14000 1
    stress-ng: info: [27595] stress-ng-cyclic: 15000 9
    stress-ng: info: [27595] stress-ng-cyclic: 16000 1
    stress-ng: info: [27595] stress-ng-cyclic: 17000 1
    stress-ng: info: [27595] stress-ng-cyclic: 18000 0
    stress-ng: info: [27595] stress-ng-cyclic: 19000 0
    stress-ng: info: [27595] stress-ng-cyclic: 20000 0
    stress-ng: info: [27595] stress-ng-cyclic: 21000 1
    stress-ng: info: [27595] stress-ng-cyclic: 22000 1
    stress-ng: info: [27595] stress-ng-cyclic: 23000 0
    stress-ng: info: [27595] stress-ng-cyclic: 24000 1
    stress-ng: info: [27595] stress-ng-cyclic: 25000 2
    stress-ng: info: [27595] stress-ng-cyclic: 26000 0
    stress-ng: info: [27595] stress-ng-cyclic: 27000 1
    stress-ng: info: [27595] stress-ng-cyclic: 28000 1
    stress-ng: info: [27595] stress-ng-cyclic: 29000 2
    stress-ng: info: [27595] stress-ng-cyclic: 30000 0
    stress-ng: info: [27595] stress-ng-cyclic: 31000 0
    stress-ng: info: [27595] stress-ng-cyclic: 32000 0
    stress-ng: info: [27595] stress-ng-cyclic: 33000 0
    stress-ng: info: [27595] stress-ng-cyclic: 34000 0
    stress-ng: info: [27595] stress-ng-cyclic: 35000 0
    stress-ng: info: [27595] stress-ng-cyclic: 36000 1
    stress-ng: info: [27595] stress-ng-cyclic: 37000 0
    stress-ng: info: [27595] stress-ng-cyclic: 38000 0
    stress-ng: info: [27595] stress-ng-cyclic: 39000 0
    stress-ng: info: [27595] stress-ng-cyclic: 40000 0
    stress-ng: info: [27595] stress-ng-cyclic: 41000 0
    stress-ng: info: [27595] stress-ng-cyclic: 42000 0
    stress-ng: info: [27595] stress-ng-cyclic: 43000 0
    stress-ng: info: [27595] stress-ng-cyclic: 44000 1
    stress-ng: info: [27594] successful run completed in 5.00s


    Note that stress-ng needs to be invoked using sudo to enable the Real Time FIFO scheduling for the cyclic measurements.

    The above example uses the following options:

    • --cyclic 1
      • starts one instance of the cyclic measurements (1 is always recommended)
    • --cyclic-policy fifo 
      • use the real time First-In-First-Out scheduling for the cyclic measurements
    • --cyclic-prio 100 
      • use the maximum scheduling priority  
    • --cyclic-method clock_ns
      • use the clock_nanosleep(2) system call to perform the high precision duration sleep
    • --cyclic-sleep 20000 
      • sleep for 20000 nanoseconds per cyclic iteration
    • --cyclic-dist 1000 
      • enable latency distribution statistics with an interval of 1000 nanoseconds between each data point.
    • -t 5
      • run for just 5 seconds
    From the run above, we can see that 99.5% of latencies were less than 9821 nanoseconds and most clustered around the 4880 nanosecond modal point. The distribution data shows that there is some clustering around the 5000 nanosecond point and the samples tail off with a bit of a long tail.

    Now for the interesting part. Since stress-ng is packed with many different stressors we can run these while performing the cyclic measurements, for example, we can tell stress-ng to run *all* the virtual memory related stress tests and see how this affects the latency distribution using the following:

     sudo stress-ng --cyclic 1 --cyclic-policy fifo \  
    --cyclic-prio 100 --cyclic-method clock_ns \
    --cyclic-sleep 20000 --cyclic-dist 1000 \
    --class vm --all 1 -t 60s

    ..the above invokes all the stressors in the vm class at the same time (with just one instance of each stressor) for 60 seconds.
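
    As a variation (a sketch of my own rather than a run from this post), the cyclic measurements can equally be paired with CPU-bound stressors instead of the vm class, for example four CPU workers running the matrixprod method:

     sudo stress-ng --cyclic 1 --cyclic-policy rr \
    --cyclic-prio 100 --cyclic-method clock_ns \
    --cyclic-sleep 20000 --cyclic-dist 1000 \
    --cpu 4 --cpu-method matrixprod -t 60

    Comparing the resulting percentiles against the idle-machine run above gives a feel for how much extra scheduling latency the CPU load adds.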

    The --cyclic-method option specifies the delay mechanism used on each of the 10,000 cyclic iterations.  The default (and recommended) method is clock_ns, using the high precision delay.  The available cyclic delay methods are:
    • clock_ns (use the clock_nanosleep() sleep)
    • posix_ns (use the POSIX nanosleep() sleep)
    • itimer (use a high precision clock timer and pause to wait for a signal to measure latency)
    • poll (busy spin-wait on clock_gettime() to eat cycles for a delay)
    All the delay mechanisms use the CLOCK_REALTIME system clock for timing.

    I hope this is plenty of cyclic measurement functionality to get some useful latency benchmarks against various kernel components when using some or a mix of the stress-ng stressors.  Let me know if I am missing some other cyclic measurement options and I can see if I can add them in.

    Keep stressing and measuring those systems!

    Read more
    Dustin Kirkland

    Thank you to Oracle Cloud for inviting me to speak at this month's CloudAustin Meetup hosted by Rackspace.

    I very much enjoyed deploying Canonical Kubernetes on Ubuntu in the Oracle Cloud, and then exploring Kubernetes a bit, how it works, the architecture, and a simple workload within.  I'm happy to share my slides below, and you can download a PDF here:
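
    For readers who want to try something similar themselves, here is a minimal sketch of one way to deploy the Canonical Distribution of Kubernetes on an Ubuntu machine (this assumes the conjure-up snap and its canonical-kubernetes spell, and is not necessarily the exact setup used for the talk):

     sudo snap install conjure-up --classic
    conjure-up canonical-kubernetes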


    If you're interested in learning more, check out:
    It was a great audience, with plenty of good questions, pizza, and networking!

    I'm pleased to share my slide deck here.

    Cheers,
    Dustin

    Read more
    admin

    The purpose of this update is to keep our community engaged and informed about the work the team is doing. We’ll cover important announcements, work-in-progress for the next release of MAAS, and bug fixes in released MAAS versions.

    MAAS Sprint

    The Canonical MAAS team sprinted at Canonical’s London offices this week. The purpose was to review the previous development cycle & release (MAAS 2.2), as well as discuss and finalize the plans and goals for the next development release cycle (MAAS 2.3).

    MAAS 2.3 (current development release)

    The team has been working on the following features and improvements:

    • New Feature – support for ‘upstream’ proxy (API only) – Support for upstream proxies has landed in trunk. This iteration contains API-only support. The team continues to work on the matching UI support for this feature.
    • Codebase transition from bzr to git – This week the team focused its efforts on updating all processes for the upcoming transition to Git. The progress so far is:
      • Prepared the MAAS CI infrastructure to fully support Git once the transition is complete.
      • Started working on creating new processes for auto-testing and landing PRs.
    • Django 1.11 transition – The team continues to work through the Django 1.11 transition; we’re down to 130 unittest failures!
    • Network Beaconing & better network discovery – Prototype beacons have now been sent and received! The next steps will be to work on the full protocol implementation, followed by making use of beaconing to enhance rack registration. This will provide a better out-of-the-box experience for MAAS; interfaces which share network connectivity will no longer be assumed to be on separate fabrics.
    • Started the removal of ‘tgt’ as a dependency – We have started the removal of ‘tgt’ as a dependency. This simplifies the boot process by not loading ephemeral images from tgt, but rather having the initrd download and load the ephemeral environment.
    • UI Improvements
      • Performance Improvements – Improved the loading of elements in the Device Discovery, Node listing and Events pages, greatly improving UI performance.
      • LP #1695312 – The button to edit dynamic range says ‘Edit’ while it should say ‘Edit reserved range’
      • Remove auto-save on blur for the Fabric details summary row. Applied static content when not in edit mode.

    Bug Fixes

    The following issues have been fixed and backported to MAAS 2.2 branch. This will be available in the next point release of MAAS 2.2 (2.2.1) in the coming weeks:

    • LP: #1678339 – allow physical (and bond) interfaces to be placed on VLANs with a known 802.1q tag.
    • LP: #1652298 – Improve loading of elements in the device discovery page

    Read more
    Sciri

    Note: Community TFTP documentation is on the Ubuntu Wiki but this short guide adds extra steps to help secure and safeguard your TFTP server.

    Every Data Centre Engineer should have a TFTP server somewhere on their network, whether it be running on a production host or on their own notebook for disaster recovery. And since TFTP is lightweight, with no user authentication, care should be taken to prevent access to or overwriting of critical files.

    The following example is similar to the configuration I run on my personal Ubuntu notebook and home Ubuntu servers. This allows me to do switch firmware upgrades and backup configuration files regardless of environment since my notebook is always with me.

    Step 1: Install TFTP and TFTP server

    $ sudo apt update; sudo apt install tftp-hpa tftpd-hpa

    Step 2: Configure TFTP server

    The default configuration below allows switches and other devices to download files but, if you have predictable filenames, then anyone can download those files if you configure TFTP Server on your notebook. This can lead to dissemination of copyrighted firmware images or config files that may contain passwords and other sensitive information.

    # /etc/default/tftpd-hpa
    
    TFTP_USERNAME="tftp"
    TFTP_DIRECTORY="/var/lib/tftpboot"
    TFTP_ADDRESS=":69"
    TFTP_OPTIONS="--secure"

    Instead of keeping any files directly in the /var/lib/tftpboot base directory I’ll use mktemp to create incoming and outgoing directories with hard-to-guess names. This prevents guessing common filenames.

    First create an outgoing directory owned by root mode 755. Files in this directory should be owned by root to prevent unauthorized or accidental overwriting. You wouldn’t want your expensive Cisco IOS firmware image accidentally or maliciously overwritten.

    $ cd /var/lib/tftpboot
    $ sudo chmod 755 $(sudo mktemp -d XXXXXXXXXX --suffix=-outgoing)

    Next create an incoming directory owned by tftp, mode 700. This allows tftpd-hpa to create files in this directory if configured to do so.

    $ sudo chown tftp:tftp $(sudo mktemp -d XXXXXXXXXX --suffix=-incoming)
    $ ls -1
    ocSZiwPCkH-outgoing
    UHiI443eTG-incoming

    Configure tftpd-hpa to allow creation of new files. Simply add --create to TFTP_OPTIONS in /etc/default/tftpd-hpa.

    # /etc/default/tftpd-hpa
    
    TFTP_USERNAME="tftp"
    TFTP_DIRECTORY="/var/lib/tftpboot"
    TFTP_ADDRESS=":69"
    TFTP_OPTIONS="--secure --create"

    And lastly restart tftpd-hpa.

    $ sudo /etc/init.d/tftpd-hpa restart
    [ ok ] Restarting tftpd-hpa (via systemctl): tftpd-hpa.service.
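
    As a quick smoke test (a sketch; substitute the randomly named incoming directory you generated above), you can upload a file from the notebook itself using the tftp-hpa client:

    $ echo "hello" > /tmp/tftp-test.txt
    $ tftp 127.0.0.1 -c put /tmp/tftp-test.txt UHiI443eTG-incoming/tftp-test.txt
    $ ls -l /var/lib/tftpboot/UHiI443eTG-incoming/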

    Step 3: Firewall rules

    If you have a software firewall enabled you’ll need to allow access to port 69/udp. Either add this rule to your firewall scripts if you manually configure iptables or run the following UFW command:

    $ sudo ufw allow tftp
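
    If you prefer, you can scope the rule to a trusted subnet rather than opening the port to everyone (192.168.0.0/24 here is just an example network; substitute your own):

    $ sudo ufw allow from 192.168.0.0/24 to any port 69 proto udp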

    Step 4: Transfer files

    Before doing a firmware upgrade or other possibly destructive maintenance I always backup my switch config and firmware.

    cisco-switch#copy running-config tftp://192.168.0.1/UHiI443eTG-incoming/config-cisco-switch
    Address or name of remote host [192.168.0.1]? 
    Destination filename [UHiI443eTG-incoming/config-cisco-switch]? 
     
     !!
    3554 bytes copied in 0.388 secs (9160 bytes/sec)
    cisco-switch#copy flash:?
    flash:c1900-universalk9-mz.SPA.156-3.M2.bin flash:ccpexp flash:cpconfig-19xx.cfg flash:home.shtml
    flash:vlan.dat
    
    cisco-switch#copy flash:c1900-universalk9-mz.SPA.156-3.M2.bin tftp://192.168.0.1/UHiI443eTG-incoming/c1900-universalk9-mz.SPA.156-3.M2.bin 
    Address or name of remote host [192.168.0.1]? 
    Destination filename [UHiI443eTG-incoming/c1900-universalk9-mz.SPA.156-3.M2.bin]? 
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    85258084 bytes copied in 172.692 secs (493700 bytes/sec)

    Files in incoming will be owned by tftp mode 666 (world writable) by default. Remember to move those files to your outgoing directory and change ownership to root mode 644 for safe keeping.
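
    For example, using the directory names from this guide, that housekeeping step looks like:

    $ sudo mv /var/lib/tftpboot/UHiI443eTG-incoming/config-cisco-switch \
        /var/lib/tftpboot/ocSZiwPCkH-outgoing/
    $ sudo chown root:root /var/lib/tftpboot/ocSZiwPCkH-outgoing/config-cisco-switch
    $ sudo chmod 644 /var/lib/tftpboot/ocSZiwPCkH-outgoing/config-cisco-switch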

    Once you’re sure your switch config and firmware is safely backed up it’s safe to copy new firmware to flash or do any other required destructive maintenance.

    Step 5: Prevent TFTP access

    It’s good practice on a notebook to deny services when not actively in-use. Assuming you have a software firewall be sure to deny access to your TFTP server when on the road or when connected to hostile networks.

    $ sudo ufw deny tftp
    Rule updated
    Rule updated (v6)
    $ sudo ufw status
    Status: active
    
    To Action From
    -- ------ ----
    CUPS ALLOW Anywhere 
    OpenSSH DENY Anywhere 
    69/udp DENY Anywhere 
    CUPS (v6) ALLOW Anywhere (v6) 
    OpenSSH (v6) DENY Anywhere (v6) 
    69/udp (v6) DENY Anywhere (v6)

    Read more
    facundo


    A couple of weeks ago the paperwork with the Inspección General de Justicia finally went through with nothing to revise or amend... the Asociación Civil Python Argentina is now legally constituted!

    The statute, all stamped

    We are now working flat out to get the CUIT from AFIP, which will let us open a bank account. That way the folks organising PyCon will be able to give sponsors the green light to put in money.

    Beyond helping with the organisation of PyCon and other events, there are four things we want to push during the first couple of years:

    • Travel grants: because we believe there is a lot of value in people getting to know each other, so we will try to help people travel to events organised around the country
    • Interpreting at events: if notable speakers who don't speak Spanish come to visit, do whatever we can so that most people can understand them
    • Discounts on courses: we are weighing up a couple of approaches
    • The PyAr website and other infrastructure: we need to take a leap in seriousness when it comes to maintaining the various services the group provides

    For that (and for operating costs) we are basically going to need money :) The Association will be funded in two main ways...

    One is through member dues. The idea is that the members, who would benefit directly and indirectly from the Civil Association, chip in a small amount each month to help get things done.

    The other mechanism is direct contributions from companies (from which we expect a somewhat larger amount, possibly annual).

    We'll tell you more about the mechanisms, amounts, and so on in due course. Stay tuned!

    Read more
    Alan Griffiths

    Mir release 0.26.3

    Mir 0.26.3 for all!

    By itself Mir 0.26.3 isn’t a very interesting release, just a few minor bugfixes: [https://launchpad.net/mir/0.26/0.26.3]

    The significant thing with Mir 0.26.3 is that we are making this version available across the latest releases of Ubuntu as well as 17.10 (Artful Aardvark). That is: Ubuntu 17.04 (Zesty Zapus), Ubuntu 16.10 (Yakkety Yak) and, last but not least, Ubuntu 16.04 LTS (Xenial Xerus).

    This is important to those developing Mir based snaps. Having Mir 0.26 in the 16.04LTS archive removes the need to build Mir based snaps using the “stable-phone-overlay” PPA.

    Read more
    Stéphane Graber

    LXD logo

    Introduction

    As you may know, LXD uses unprivileged containers by default.
    The difference between an unprivileged container and a privileged one is whether the root user in the container is the “real” root user (uid 0 at the kernel level).

    The way unprivileged containers are created is by taking a set of normal UIDs and GIDs from the host, usually at least 65536 of each (to be POSIX compliant) and mapping those into the container.

    The most common example and what most LXD users will end up with by default is a map of 65536 UIDs and GIDs, with a host base id of 100000. This means that root in the container (uid 0) will be mapped to the host uid 100000 and uid 65535 in the container will be mapped to uid 165535 on the host. UID/GID 65536 and higher in the container aren’t mapped and will return an error if you attempt to use them.

    From a security point of view, that means that anything which is not owned by the users and groups mapped into the container will be inaccessible. Any such resource will show up as being owned by uid/gid “-1” (rendered as 65534 or nobody/nogroup in userspace). It also means that should there be a way to escape the container, even root in the container would find itself with just as much privileges on the host as a nobody user.
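
    A quick way to see this map from inside a container (a sketch, assuming a container named “test” that uses the default map) is to read /proc/self/uid_map, which lists the container uid, the host uid it maps to and the size of the range:

    stgraber@castiana:~$ lxc exec test -- cat /proc/self/uid_map
             0     100000      65536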

    LXD does offer a number of options related to unprivileged configuration:

    • Increasing the size of the default uid/gid map
    • Setting up per-container maps
    • Punching holes into the map to expose host users and groups

    Increasing the size of the default map

    As mentioned above, in most cases, LXD will have a default map that’s made of 65536 uids/gids.

    In most cases you won’t have to change that. There are however a few cases where you may have to:

    • You need access to uid/gid higher than 65535.
      This is most common when using network authentication inside of your containers.
    • You want to use per-container maps.
      In which case you’ll need 65536 available uid/gid per container.
    • You want to punch some holes in your container’s map and need access to host uids/gids.

    The default map is usually controlled by the “shadow” set of utilities and files. On systems where that’s the case, the “/etc/subuid” and “/etc/subgid” files are used to configure those maps.

    On systems that do not have a recent enough version of the “shadow” package, LXD will assume that it doesn’t have to share uid/gid ranges with anything else and will therefore assume control of a billion uids and gids, starting at the host uid/gid 100000.

    But the common case is a system with a recent version of shadow.
    An example of what the configuration may look like is:

    stgraber@castiana:~$ cat /etc/subuid
    lxd:100000:65536
    root:100000:65536
    
    stgraber@castiana:~$ cat /etc/subgid
    lxd:100000:65536
    root:100000:65536

    The maps for “lxd” and “root” should always be kept in sync. LXD itself is restricted by the “root” allocation. The “lxd” entry is used to track what needs to be removed if LXD is uninstalled.

    Now if you want to increase the size of the map available to LXD, simply edit both of the files and bump the last value from 65536 to whatever size you need. I tend to bump it to a billion just so I don’t ever have to think about it again:

    stgraber@castiana:~$ cat /etc/subuid
    lxd:100000:1000000000
    root:100000:1000000000
    
    stgraber@castiana:~$ cat /etc/subgid
    lxd:100000:1000000000
    root:100000:1000000000

    After altering those files, you need to restart LXD to have it detect the new map:

    root@vorash:~# systemctl restart lxd
    root@vorash:~# cat /var/log/lxd/lxd.log
    lvl=info msg="LXD 2.14 is starting in normal mode" path=/var/lib/lxd t=2017-06-14T21:21:13+0000
    lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." t=2017-06-14T21:21:13+0000
    lvl=info msg="Kernel uid/gid map:" t=2017-06-14T21:21:13+0000
    lvl=info msg=" - u 0 0 4294967295" t=2017-06-14T21:21:13+0000
    lvl=info msg=" - g 0 0 4294967295" t=2017-06-14T21:21:13+0000
    lvl=info msg="Configured LXD uid/gid map:" t=2017-06-14T21:21:13+0000
    lvl=info msg=" - u 0 1000000 1000000000" t=2017-06-14T21:21:13+0000
    lvl=info msg=" - g 0 1000000 1000000000" t=2017-06-14T21:21:13+0000
    lvl=info msg="Connecting to a remote simplestreams server" t=2017-06-14T21:21:13+0000
    lvl=info msg="Expiring log files" t=2017-06-14T21:21:13+0000
    lvl=info msg="Done expiring log files" t=2017-06-14T21:21:13+0000
    lvl=info msg="Starting /dev/lxd handler" t=2017-06-14T21:21:13+0000
    lvl=info msg="LXD is socket activated" t=2017-06-14T21:21:13+0000
    lvl=info msg="REST API daemon:" t=2017-06-14T21:21:13+0000
    lvl=info msg=" - binding Unix socket" socket=/var/lib/lxd/unix.socket t=2017-06-14T21:21:13+0000
    lvl=info msg=" - binding TCP socket" socket=[::]:8443 t=2017-06-14T21:21:13+0000
    lvl=info msg="Pruning expired images" t=2017-06-14T21:21:13+0000
    lvl=info msg="Updating images" t=2017-06-14T21:21:13+0000
    lvl=info msg="Done pruning expired images" t=2017-06-14T21:21:13+0000
    lvl=info msg="Done updating images" t=2017-06-14T21:21:13+0000
    root@vorash:~#

    As you can see, the configured map is logged at LXD startup and can be used to confirm that the reconfiguration worked as expected.

    You’ll then need to restart your containers to have them start using your newly expanded map.
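
    If you have a number of containers, a one-liner along these lines (assuming a reasonably recent lxc client with csv output support) will restart them all:

    stgraber@castiana:~$ lxc restart $(lxc list -c n --format csv)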

    Per container maps

    Provided that you have a sufficient amount of uid/gid allocated to LXD, you can configure your containers to use their own, non-overlapping allocation of uids and gids.

    This can be useful for two reasons:

    1. You are running software which alters kernel resource ulimits.
      Those user-specific limits are tied to a kernel uid and will cross container boundaries leading to hard to debug issues where one container can perform an action but all others are then unable to do the same.
    2. You want to know that should there be a way for someone in one of your containers to somehow get access to the host that they still won’t be able to access or interact with any of the other containers.

    The main downsides to using this feature are:

    • It’s somewhat wasteful with using 65536 uids and gids per container.
      That being said, you’d still be able to run over 60000 isolated containers before running out of system uids and gids.
    • It’s effectively impossible to share storage between two isolated containers as everything written by one will be seen as -1 by the other. There is ongoing work around virtual filesystems in the kernel that will eventually let us get rid of that limitation.

    To have a container use its own distinct map, simply run:

    stgraber@castiana:~$ lxc config set test security.idmap.isolated true
    stgraber@castiana:~$ lxc restart test
    stgraber@castiana:~$ lxc config get test volatile.last_state.idmap
    [{"Isuid":true,"Isgid":false,"Hostid":165536,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":165536,"Nsid":0,"Maprange":65536}]

    The restart step is needed to have LXD remap the entire filesystem of the container to its new map.
    Note that this step will take a varying amount of time depending on the number of files in the container and the speed of your storage.

    As can be seen above, after restart, the container is shown to have its own map of 65536 uids/gids.

    If you want LXD to allocate more than the default 65536 uids/gids to an isolated container, you can bump the size of the allocation with:

    stgraber@castiana:~$ lxc config set test security.idmap.size 200000
    stgraber@castiana:~$ lxc restart test
    stgraber@castiana:~$ lxc config get test volatile.last_state.idmap
    [{"Isuid":true,"Isgid":false,"Hostid":165536,"Nsid":0,"Maprange":200000},{"Isuid":false,"Isgid":true,"Hostid":165536,"Nsid":0,"Maprange":200000}]

    If you’re trying to allocate more uids/gids than are left in LXD’s allocation, LXD will let you know:

    stgraber@castiana:~$ lxc config set test security.idmap.size 2000000000
    error: Not enough uid/gid available for the container.

    Direct user/group mapping

    The fact that all uids/gids in an unprivileged container are mapped to a normally unused range on the host means that sharing of data between host and container is effectively impossible.

    Now, what if you want to share your user’s home directory with a container?

    The obvious answer to that is to define a new “disk” entry in LXD which passes your home directory to the container:

    stgraber@castiana:~$ lxc config device add test home disk source=/home/stgraber path=/home/ubuntu
    Device home added to test

    So that was pretty easy, but did it work?

    stgraber@castiana:~$ lxc exec test -- bash
    root@test:~# ls -lh /home/
    total 529K
    drwx--x--x 45 nobody nogroup 84 Jun 14 20:06 ubuntu

    No. The mount is clearly there, but it’s completely inaccessible to the container.
    To fix that, we need to take a few extra steps:

    • Allow LXD’s use of our user uid and gid
    • Restart LXD to have it load the new map
    • Set a custom map for our container
    • Restart the container to have the new map apply
    stgraber@castiana:~$ printf "lxd:$(id -u):1\nroot:$(id -u):1\n" | sudo tee -a /etc/subuid
    lxd:201105:1
    root:201105:1
    
    stgraber@castiana:~$ printf "lxd:$(id -g):1\nroot:$(id -g):1\n" | sudo tee -a /etc/subgid
    lxd:200512:1
    root:200512:1
    
    stgraber@castiana:~$ sudo systemctl restart lxd
    
    stgraber@castiana:~$ printf "uid $(id -u) 1000\ngid $(id -g) 1000" | lxc config set test raw.idmap -
    
    stgraber@castiana:~$ lxc restart test

    At which point, things should be working in the container:

    stgraber@castiana:~$ lxc exec test -- su ubuntu -l
    ubuntu@test:~$ ls -lh
    total 119K
    drwxr-xr-x 5  ubuntu ubuntu 8 Feb 18 2016 data
    drwxr-x--- 4  ubuntu ubuntu 6 Jun 13 17:05 Desktop
    drwxr-xr-x 3  ubuntu ubuntu 28 Jun 13 20:09 Downloads
    drwx------ 84 ubuntu ubuntu 84 Sep 14 2016 Maildir
    drwxr-xr-x 4  ubuntu ubuntu 4 May 20 15:38 snap
    ubuntu@test:~$ 
    
    

    Conclusion

    User namespaces, the kernel feature that makes those uid/gid mappings possible, is a very powerful tool which finally made containers on Linux safe by design. It is however not the easiest thing to wrap your head around, and all of that uid/gid map math can quickly become a major issue.

    In LXD we’ve tried to expose just enough of those underlying features to be useful to our users while doing the actual mapping math internally. This makes things like the direct user/group mapping above significantly easier than it otherwise would be.

    Going forward, we’re very interested in some of the work around uid/gid remapping at the filesystem level, this would let us decouple the on-disk user/group map from that used for processes, making it possible to share data between differently mapped containers and alter the various maps without needing to also remap the entire filesystem.

    Extra information

    The main LXD website is at: https://linuxcontainers.org/lxd
    Development happens on Github at: https://github.com/lxc/lxd
    Discussion forum: https://discuss.linuxcontainers.org
    Mailing-list support happens on: https://lists.linuxcontainers.org
    IRC support happens in: #lxcontainers on irc.freenode.net
    Try LXD online: https://linuxcontainers.org/lxd/try-it

    Read more
    admin

    Thursday June 8th, 2017

    The MAAS team is happy to announce the introduction of development summaries. We hope this helps to keep our community engaged and informed about the work the team is doing. We’ll cover important announcements, work-in-progress for the next release of MAAS, and bugs fixed in released MAAS versions.

    Announcements

    With the MAAS 2.2 release out of the door, we are happy to announce that:

    • MAAS 2.3 is now opened for development.
    • MAAS is moving to Git in Launchpad – In the coming weeks, MAAS source will be hosted under a Git repository in Launchpad, once we complete the work of updating all our internal processes (e.g. CI, landers, etc).

    MAAS 2.3 (current development release)

    With the team now focusing efforts on the new development release, MAAS 2.3, the team has been working on the following features and improvements:

    • Started adding support for Django 1.11 – MAAS will continue to be backward compatible with Django 1.8.
    • Adding support for ‘upstream’ proxy – MAAS deployed machines will continue to use MAAS’ internal proxy, while allowing MAAS’ proxy to communicate with an upstream proxy.
    • Started adding network beaconing – New feature to support better network (subnets, VLANs) discovery and allow fabric deduplication.
      • Officially registered IPv4 and IPv6 multicast groups for MAAS beaconing (224.0.0.118 and ff02::15a, respectively).
      • Implemented a mechanism to provide authenticated encryption using the MAAS shared secret.
      • Prototyped initial beaconing multicast join mechanism and receive path.

    Libmaas (python-libmaas)

    With the continuous improvement of the new MAAS Python Library (python-libmaas), we have focused our efforts on the following improvements the past week:

    • Add support to be able to provide nested objects and object sets.
    • Add support to be able to update any object accessible via the library.
    • Add ability to read interfaces (nested) under Machines, Devices, Rack Controllers and Region Controllers.
    • Add ability to read VLANs (nested) under Fabrics.

    Bug Fixes

    The following issues have been fixed and backported to MAAS 2.2 branch. This will be available in the next point release of MAAS 2.2 (2.2.1) in the coming weeks:

    • Bug #1694767: RSD composition not setting local disk tags
    • Bug #1694759: RSD Pod refresh shows ComposedNodeState is “Failed”
    • Bug #1695083: Improve NTP IP address selection for MAAS DHCP clients.

    Questions?

    IRC – Find us on #maas @ freenode.

    ML – https://lists.ubuntu.com/mailman/listinfo/maas-devel

    Read more
    Colin Watson

    Here’s a brief changelog for this month.

    Bugs

    • Export searchTasks for the top-level bugs collection on the webservice, and implement a global bugs feed to go with it (#434244)

    Code

    • Fix git-to-git code imports on xenial
    • Backport upstream serf commit to fix svn-to-bzr code imports on xenial (#1690613)
    • Fix crash when unlinking a bug from a Git-based MP in UpdatePreviewDiffJob
    • Handle revision ID passed to BranchMergeProposal.setStatus for transitions to MERGED

    Infrastructure

    • Fix processing of purchased Launchpad commercial subscriptions
    • Some progress towards converting the build system to pip, though there’s quite a bit more work to do there

    Registry

    • Remove extra internal slashes from URLs requested by the mirror prober (#1692347)

    Snappy

    • Record the branch revision used to build a snap and return it along with other XML-RPC status information (#1679157)
    • Configure a git:// proxy for snap builds (#1663920)
    • Allow configuring a snap to build from the current branch of a Git repository rather than explicitly naming a branch (#1688224)

    Soyuz (package management)

    • Precache permissions for archives returned by Person.getVisiblePPAs (#1685202)
    • Drop requirement for source .buildinfo files to be signed
    • Make DistroSeries:+queue link to upload files via the webapp, to help dget users

    Read more
    Robin Winslow

    Nowadays free software is everywhere – from browsers to encryption software to operating systems.

    Even so, it is still relatively rare for the code behind websites and services to be opened up.

    Stepping into the open

    Three years ago we started to move our website projects to Github, and we also took this opportunity to start making them public. We started with the www.ubuntu.com codebase, and over the next couple of years almost all our team’s other sites have followed suit.

    canonical-websites org

    At this point practically all the web team’s sites are open source, and you can find the code for each site in our canonical-websites organisation.

    www.ubuntu.com developer.ubuntu.com www.canonical.com
    partners.ubuntu.com design.ubuntu.com maas.io
    tour.ubuntu.com snapcraft.io build.snapcraft.io
    cn.ubuntu.com jp.ubuntu.com conjure-up.io
    docs.ubuntu.com tutorials.ubuntu.com cloud-init.io
    assets.ubuntu.com manager.assets.ubuntu.com vanillaframework.io

    We’ve tried to make it as easy as possible to get them up and running, with accurate and simple README files. Each of our projects can be run in much the same way, and should work the same across Linux and macOS systems. I’ll elaborate more on how we manage this in a future post.
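
    As a hedged example, grabbing one of the sites to poke around in looks something like this (the repository name is assumed from the site name; check the canonical-websites organisation for the exact one):

    git clone https://github.com/canonical-websites/www.ubuntu.com.git
    cd www.ubuntu.com
    cat README.md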

    README example

    We also have many supporting projects – Django modules, snap packages, Docker images etc. – which are all openly available in our canonical-webteam organisation.

    Reaping the benefits

    Opening up our sites in this way means that anyone can help out by making suggestions in issues or directly submitting fixes as pull requests. Both are hugely valuable to our team.

    Another significant benefit of opening up our code is that it’s actually much easier to manage:

    • It’s trivial to connect third party services, like Travis, Waffle or Percy;
    • Similarly, our own systems – such as our Jenkins server – don’t need special permissions to access the code;
    • And we don’t need to worry about carefully managing user permissions for read access inside the organisation.

    All of these tasks were previously surprisingly time-consuming.

    Designing in the open

    Shortly after we opened up the www.ubuntu.com codebase, the design team also started designing in the open, as Anthony Dillon recently explained.

    Read more
    Michael Hall

    After a little over 6 years, I am embarking on a new adventure. Today is my last day at Canonical, it’s bitter sweet saying goodbye precisely because it has been such a joy and an honor to be working here with so many amazing, talented and friendly people. But I am leaving by choice, and for an opportunity that makes me as excited as leaving makes me sad.

    Goodbye Canonical

    I’ve worked at Canonical longer than I’ve worked at any company, and I can honestly say I’ve grown more here both personally and professionally than I have anywhere else. It launched my career as a Community Manager, learning from the very best in the industry how to grow, nurture, and excite a world full of people who share the same ideals. I owe so many thanks (and beers) to Jono Bacon, David Planella, Daniel Holbach, Jorge Castro, Nicholas Skaggs, Alan Pope, Kyle Nitzsche and now also Martin Wimpress. I also couldn’t have done any of this without the passion and contributions of everybody in the Ubuntu community who came together around what we were doing.

    As everybody knows by now, Canonical has been undergoing significant changes in order to set it down the road to where it needs to be as a company. And while these changes aren’t the reason for my leaving, it did force me to think about where I wanted to go with my future, and what changes were needed to get me there. Canonical is still doing important work, I’m confident it’s going to continue making a huge impact on the technology and open source worlds and I wish it nothing but success. But ultimately I decided that where I wanted to be was along a different path.

    Of course I have to talk about the Ubuntu community here. As big of an impact as Canonical had on my life, it’s only a portion of the impact that the community has had. From the first time I attended a Florida LoCo Team event, I was hooked. I had participated in open source projects before, but that was when I truly understood what the open source community was about. Everybody I met, online or in person, went out of their way to make me feel welcome, valuable, and appreciated. In fact, it was the community that lead me to work for Canonical in the first place, and it was the community work I did that played a big role in me being qualified for the job. I want to give a special shout out to Daniel Holbach and Jorge Castro, who built me up from a random contributor to a project owner, and to Elizabeth Joseph and Laura Faulty who encouraged me to take on leadership roles in the community. I’ve made so many close and lasting friendships by being a part of this amazing group of people, and that’s something I will value forever. I was a community member for years before I joined Canonical, and I’m not going anywhere now. Expect to see me around on IRC, mailing lists and other community projects for a long time to come.

    Hello Endless

    Next week I will be joining the team at Endless as their Community Manager. Endless is an order of magnitude smaller than Canonical, and they have a young community that is still getting off the ground. So even though I’ll have the same role I had before, there will be new and exciting challenges involved. But the passion is there, both in the company and the community, to really explode into something big and impactful. In the coming months I will be working to set up the tools, processes and communication that will be needed to help that community grow and flourish. After meeting with many of the current Endless employees, I know that my job will be made easier by their existing commitment to both their own community and their upstream communities.

    What really drew me to Endless was the company’s mission. It’s not just about making a great open source project that is shared with the world, they have a specific focus on social good and improving the lives of people who the current technology isn’t supporting. As one employee succinctly put it to me: the whole world, empowered. Those who know me well will understand why this resonates with me. For years I’ve been involved in open source projects aimed at early childhood education and supporting those in poverty or places without the infrastructure that most modern technology requires. And while Ubuntu covers much of this, it wasn’t the primary focus. Being able to work full time on a project that so closely aligned with my personal mission was an opportunity I couldn’t pass up.

    Broader horizons

    Over the past several months I’ve been expanding the number of communities I’m involved in. This is going to increase significantly in my new role at Endless, where I will be working more frequently with upstream and side-stream projects on areas of mutual benefit and interest. I’ve already started to work more with KDE, and I look forward to becoming active in GNOME and other open source desktops soon.

    I will also continue to grow my independent project, Phoenicia, which has a similar mission to Endless but a different technology and audience. Now that this is no longer competing in the XPRIZE competition, it releases some restrictions that we had to operate under and frees us to investigate new areas of innovation and collaboration. If you’re interested in game development, or making an impact on the lives of children around the world, come and see what we’re doing.

    If anybody wants to reach out to me to chat, you can still reach me at mhall119@ubuntu.com and soon at mhall119@endlessm.com, tweet me at @mhall119, connect on LinkedIn, chat on Telegram or circle me on Google+. And if we’re ever at a conference together give me a shout, I’d love to grab a drink and catch up.

    Read more
    Colin Ian King

    What is new in FWTS 17.05.00?

    Version 17.05.00 of the Firmware Test Suite was released this week as part of  the regular end-of-month release cadence. So what is new in this release?

    • Alex Hung has been busy bringing the SMBIOS tests in-sync with the SMBIOS 3.1.1 standard
    • IBM provided some OPAL (OpenPower Abstraction Layer) Firmware tests:
      • Reserved memory DT validation tests
      • Power management DT Validation tests
    • The first fwts snap was created
    •  Over 40 bugs were fixed
    As ever, we are grateful for all the community contributions to FWTS.  The full release details are available from the fwts-devel mailing list.
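
    Since the release notes mention the first fwts snap, installing it is a one-liner (whether the snap needs extra confinement flags such as --devmode depends on how it is published, so check the store listing if the plain install is rejected):

    sudo snap install fwts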

    I expect that the next upcoming ACPICA release will be integrated into the 17.06.00 FWTS release next month.

    Read more
    admin

    I’m happy to announce that MAAS 2.2.0 (final) has now been released, and it introduces quite a few exciting features:

    • MAAS Pods – Ability to dynamically create a machine on demand. This is reflected in MAAS’ support for Intel Rack Scale Design.
    • Hardware Testing
    • DHCP Relay Support
    • Unmanaged Subnets
    • Switch discovery and deployment on Facebook’s Wedge 40 & 100.
    • Various improvements and minor features.
    • MAAS Client Library
    • Intel Rack Scale Design support.

    For more information, please read the release notes, which are available here.

    Availability
    MAAS 2.2.0 is currently available in the following MAAS team PPA.
    ppa:maas/next
    Please note that MAAS 2.2 will replace the MAAS 2.1 series, which will go out of support. We are holding MAAS 2.2 in the above PPA for a week to give users enough notice that it will replace the 2.1 series. In the following weeks, MAAS 2.2 will be backported into Ubuntu Xenial.
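
    As a minimal sketch, picking the release up from that PPA on an Ubuntu Xenial system looks like:

    sudo add-apt-repository ppa:maas/next
    sudo apt update
    sudo apt install maas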

    Read more
    facundo


    An idea I'd been turning over in my head since early last year finally came together, after taking months to materialise: I'm going to give an Introduction to Python Seminar together with a company, with the aim of lowering the cost of the course for attendees (the company covers part of it) and so being able to do something longer and open to more people.

    The company I'm running this Seminar with is Onapsis, which is quite close to the Python Argentina community: it has been an event sponsor for a long time, puts on the famous "pybuses" to get to the PyCons, hosted a meetup, etc., etc.

    The Seminar is open to the general public and will be 16 hours in total, over four Saturday mornings in July, in CABA.

    The cost is very affordable, $600, since Onapsis covers part of it, and the idea is to make it cheap so that as many people as possible can come. Even so, places are limited (the office has a capacity limit), so the sooner you reserve your spot, the better.

    At the end of the Seminar I will hand out a certificate of attendance and the whole course in electronic format.

    To make a reservation you need to send me an email so I can confirm availability and give you the details for the payment (which can be made by deposit, bank transfer, credit card, debit, etc.).

    All the details of the course are here.

    Read more
    facundo

    fades 6 released


    The latest version of fades is out: the system that automatically handles virtualenvs in the situations you typically run into when writing scripts and small programs, and that even helps you manage large projects.

    This is one of the releases where we packed in the most changes! These are just some of the items from the changelog:

    - It installs not only from PyPI but also from remote repositories (GitHub, Bitbucket, Launchpad, etc.) and local directories

        fades -d git+https://github.com/yandex/gixy.git@v0.1.3

        fades -d file://$PATH_TO_PROJECT

    - We made a video showing fades' most relevant features

    - It picks the best virtualenv among those stored when more than one matches

    - We added a --clean-unused-venvs option to delete all the virtualenvs that have not been used in the last given number of days

        fades --clean-unused-venvs=30

    - We added a --pip-options flag to pass whatever parameters are needed down to the underlying pip execution

        fades -d requests --pip-options="--no-cache-dir"

    The full list of changes is in the formal release notes, this is the complete documentation, and here is how to install it and enjoy it.

    Read more
    Colin Ian King

    The Firmware Test Suite (FWTS) has an easy to use text based front-end that is primarily used by the FWTS Live-CD image but it can also be used in the Ubuntu terminal.

    To install and run the front-end use:

     sudo apt-get install fwts-frontend  
    sudo fwts-frontend-text

    ..and one should see a menu of options:


    In this demonstration, the "All Batch Tests" option has been selected:


    Tests will be run one by one and a progress bar shows the progress of each test. Some tests run very quickly, others can take several minutes depending on the hardware configuration (such as number of processors).

    Once the tests are all complete, the following dialogue box is displayed:


    The test has saved several files into the directory /fwts/15052017/1748/ and selecting Yes one can view the results log in a scroll-box:


    Exiting this, the FWTS frontend dialog is displayed:


    Press enter to exit (note that the Poweroff option is just for the fwts Live-CD image version of fwts-frontend).

    The tool dumps various logs, for example, the above run generated:

     ls -alt /fwts/15052017/1748/  
    total 1388
    drwxr-xr-x 5 root root 4096 May 15 18:09 ..
    drwxr-xr-x 2 root root 4096 May 15 17:49 .
    -rw-r--r-- 1 root root 358666 May 15 17:49 acpidump.log
    -rw-r--r-- 1 root root 3808 May 15 17:49 cpuinfo.log
    -rw-r--r-- 1 root root 22238 May 15 17:49 lspci.log
    -rw-r--r-- 1 root root 19136 May 15 17:49 dmidecode.log
    -rw-r--r-- 1 root root 79323 May 15 17:49 dmesg.log
    -rw-r--r-- 1 root root 311 May 15 17:49 README.txt
    -rw-r--r-- 1 root root 631370 May 15 17:49 results.html
    -rw-r--r-- 1 root root 281371 May 15 17:49 results.log

    acpidump.log is a dump of the ACPI tables in a format compatible with the ACPICA acpidump tool.  The results.log file is a copy of the results generated by FWTS and results.html is an HTML formatted version of the log.
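
    The same tests can also be driven directly from the fwts command line without the front-end; for example (klog and oops are two of the standard batch tests), with the results landing in results.log in the current directory:

     sudo fwts --show-tests
    sudo fwts klog oops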

    Read more