Canonical Voices

facundo

Wandering around Chile


Last weekend Moni and I went off to Chile for a bit of sightseeing.

It's the first trip we've taken without kids since that week back then (if it even counts, given that at the time we were still just the two of us).

The big advantage of going without kids is that we basically didn't have to look after anyone but ourselves :D. We went wherever we wanted, ate whenever we wanted, got up and went to bed whenever we wanted, and so on. You know, what a couple without kids does...

Just the two of us, on the piano staircase

We walked our legs off. We made Santiago our base, but spent one whole day in Valparaíso.

We took the bus to Valparaíso and back. When we arrived, the first thing we did was catch a local bus up to La Sebastiana, the house Neruda kept in that city. We toured it with the help of an audio guide. Great stuff! We loved it.

La Sebastiana seen from outside

Afterwards we went out and walked a ton: strolling around, stopping for lunch, then walking on, exploring the hills, going up in a funicular... lots of stairs in general, lots of ups and downs. You should have seen our calves afterwards...

We ended up at the port, where we browsed a street market for a while and had an afternoon snack. Then a trolleybus to the city centre, and the bus back to Santiago.

A random corner of Valparaíso

The shoe staircase, with a great quote

Valparaíso seen from above

We spent the other three days in Santiago, although none of them were full days because of the travel in and out of Chile.

We did plenty of walking there too, and not only through shopping malls, as seems to be the custom of the Argentinians over there (?). The whole city was heavily shaped by the Fiestas Patrias, which are a very big deal in Chile.

The cordillera, always present

Mural next to La Piojera

We wandered around the centre for a while, but it happened to be Saturday and almost everything was closed. We also went to the Mercado Central, where we ate delicious fish and seafood dishes.

We also climbed Cerro Santa Lucía, a lovely walk we didn't quite finish because we didn't have enough time left to go up and see the whole castle at the top. But we covered that whole area and its surroundings, as well as the Bellavista neighbourhood, where we had lunch at Cervecería Kunstmann. Highly recommended!

That day's outing had started at La Chascona, Neruda's house in Santiago. The visit (and the audio-guided tour) is, here too, very much worth it. This house, like the one I mentioned earlier, and the one at Isla Negra that we still owe ourselves, are maintained and opened to the public by the Fundación Neruda, and they do an excellent job.

Details of La Chascona

If I have to single out something ugly about Santiago and Valparaíso, besides the former being a "big city" (which I tend not to like to begin with), it's the cabling: the tangle of cables on the street poles is astonishing, and it's hard to understand how there can be so many or what they're all for. Apparently they get installed and then never removed once they fall out of use; it also seems this will get fixed eventually.

The other thing I noticed is that everything is scrawled over, graffiti/vandalism style. And not just walls. Everything. Buses, windows of private homes, shops small and large. Everything.

All the photos from the trip are here. Moni and I need to do more of these getaways; they're so refreshing :)

Read more
jdstrand

Finding your VMs and containers via DNS resolution so you can ssh into them can be tricky. I was talking with Stéphane Graber today about this and he reminded me of his excellent article: Easily ssh to your containers and VMs on Ubuntu 12.04.

These days, libvirt provides the `virsh domifaddr` command, and LXD has a slightly different way of finding the IP address.

Here is an updated `~/.ssh/config` that I’m now using (thank you Stéphane for the update for LXD):

Host *.lxd
    #User ubuntu
    #StrictHostKeyChecking no
    #UserKnownHostsFile /dev/null
    ProxyCommand nc $(lxc list -c s4 $(echo %h | sed "s/\.lxd//g") | grep RUNNING | cut -d' ' -f4) %p
 
Host *.vm
    #StrictHostKeyChecking no
    #UserKnownHostsFile /dev/null
    ProxyCommand nc $(virsh domifaddr $(echo %h | sed "s/\.vm//g") | awk -F'[ /]+' '{if (NR>2 && $5) print $5}') %p

You may want to uncomment `StrictHostKeyChecking` and `UserKnownHostsFile` depending on your environment (see `man ssh_config` for details).
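To see what the inner command substitution in each ProxyCommand is doing, you can run the suffix-stripping step by hand. This is a minimal sketch; `bar.lxd` is a hypothetical host name:

```shell
# ssh expands %h to the full host name (e.g. "bar.lxd"); the sed call
# strips the ".lxd" suffix to recover the instance name that
# `lxc list` expects. ("bar.lxd" is a made-up example host.)
host="bar.lxd"
name=$(echo "$host" | sed "s/\.lxd//g")
echo "$name"    # prints: bar
```

The `.vm` case works the same way, with `s/\.vm//g` instead.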

With the above, I can ssh in with:

$ ssh foo.vm uptime
16:37:26 up 50 min, 0 users, load average: 0.00, 0.00, 0.00
$ ssh bar.lxd uptime
21:37:35 up 12:39, 2 users, load average: 0.55, 0.73, 0.66

Enjoy!


Filed under: canonical, ubuntu, ubuntu-server

Read more
Alan Griffiths

Mir support for Wayland

I’ve seen some confusion about how Mir is supporting Wayland clients on the Phoronix forums. What we are doing is teaching the Mir server library to talk Wayland in addition to its original client-server protocol. That’s analogous to me learning to speak another language (such as Dutch).

This is not anything like XMir or XWayland. Those are both implementations of an X11 server running as a client of a Mir or Wayland server. (XMir is a client of a Mir server, and XWayland is a client of a Wayland server.) They both introduce a third process that acts as a “translator” between the client and server.

The Wayland support is directly in the Mir server and doesn’t rely on a translator. Mir’s understanding of Wayland is going to start pretty limited (like my Dutch). At present it understands enough “conversational Wayland” for a client to render content and for the server to composite it as a window. We need to teach it more “verbs” (e.g. to support the majority of window management requests), but there is a limited range of things that do work.

Once Mir’s support for Wayland clients is on a par with the support for “native” Mir clients we will likely phase out support for the latter.

We’re still testing things prior to the Mir 1.0 release, and Mir 1.0 will not support “everything Wayland”. If you are curious you can install a preview of the current development version from the “Mir Staging” PPA.

Read more
admin

MAAS 2.3.0 (alpha3)

New Features & Improvements

Hardware Testing (backend only)

MAAS now includes an improved hardware testing framework. This new framework allows MAAS to test individual components of a single machine, and provides better feedback to the user for each of those tests. This feature introduces:

  • Ability to define a custom testing script with a YAML definition – Each custom test can be defined with YAML that will provide information about the test. This information includes the script name, description, required packages, and other metadata about what information the script will gather. This information can then be displayed in the UI.

  • Ability to pass parameters – Adds the ability to pass specific parameters to the scripts. For example, in upcoming beta releases, users would be able to select which disks they want to test if they don’t want to test all disks.

  • Running tests individually – Improves how hardware tests are run per component, allowing MAAS to run tests against any individual component (such as a single disk).

  • Adding additional performance tests

    • Added a CPU performance test with 7z.

    • Added a storage performance test with fio.

Please note that individual results for each of the components are currently only available over the API. Upcoming beta releases will include various UI improvements that will allow the user to better surface and interact with these new features.

Rack Controller Deployment in Whitebox Switches

MAAS now has the ability to install and configure a MAAS rack controller on a machine once it has been deployed. As of today, this feature is only available when MAAS detects that the machine is a whitebox switch; as such, all MAAS certified whitebox switches will be deployed with a MAAS rack controller. Currently certified switches include the Wedge 100 and the Wedge 40.

Please note that this feature makes use of the MAAS snap to configure the rack controller on the deployed machine. Since snap store mirrors are not yet available, the machine will need access to the internet to be able to install the MAAS snap.

Improved DNS Reloading

This new release introduces various improvements to the DNS reload mechanism. This allows MAAS to be smarter about when to reload DNS after changes have been automatically detected or made.

UI – Controller Versions & Notifications

MAAS now surfaces the version of each running controller, and notifies users of any version mismatch between the region and rack controllers. This helps administrators identify mismatches when upgrading MAAS on a multi-node cluster, such as an HA setup.

Issues fixed in this release

  • #1702703    Cannot run maas-regiond without /bin/maas-rack
  • #1711414    [2.3, snap] Cannot delete a rack controller running from the snap
  • #1712450    [2.3] 500 error when uploading a new commissioning script
  • #1714273    [2.3, snap] Rack Controller from the snap fails to power manage on IPMI
  • #1715634    ‘tags machines’ takes 30+ seconds to respond with list of 9 nodes
  • #1676992    [2.2] Zesty ISO install fails on region controller due to postgresql not running
  • #1703035    MAAS should warn on version skew between controllers
  • #1708512    [2.3, UI] DNS and Description Labels misaligned on subnet details page
  • #1711700    [2.x] MAAS should avoid updating DNS if nothing changed
  • #1712422    [2.3] MAAS does not report form errors on script upload
  • #1712423    [2.3] 500 error when clicking the ‘Upload’ button with no script selected.
  • #1684094    [2.2.0rc2, UI, Subnets] Make the contextual menu language consistent across MAAS
  • #1688066    [2.2] VNC/SPICE graphical console for debugging purpose on libvirt pod created VMs
  • #1707850    [2.2] MAAS doesn’t report cloud-init failures post-deployment
  • #1711714    [2.3] cloud-init reporting not configured for deployed ubuntu core systems
  • #1681801    [2.2, UI] Device discovery – Tooltip misspelled
  • #1686246    [CLI help] set-storage-layout says Allocated when it should say Ready
  • #1621175    BMC acc setup during auto-enlistment fails on Huawei model RH1288 V3

For full details please visit:

https://launchpad.net/maas/+milestone/2.3.0alpha3

Read more
Robin Winslow

I’ve been thinking about the usability of command-line terminals a lot recently.

Command-line interfaces remain mystifying to many people. Usability hobbyists seem as inclined to ask why the terminal exists as they are to ask how to optimise it. I’ve also had it suggested to me that the discipline of User Experience (UX) has little to offer the Command-Line Interface (CLI), because the habits of terminal users are too ingrained or instinctive to be defined and optimised by usability experts.

As an experienced terminal user with a keen interest in usability, I disagree that usability has little to offer the CLI experience. I believe that the experience can be improved through the application of usability principles just as much as for more graphical domains.

Steps to learn a new CLI tool

To help demystify the command-line experience, I’m going to lay out some of the patterns of thinking and behaviour that define my use of the CLI.

New CLI tools I’ve learned recently include snap, kubectl and nghttp2, and I’ve also dabbled in writing command-line tools myself.

Below I’ll map out an example of the steps I might go through when discovering a new command-line tool, as a basis for exploring how these tools could be optimised for CLI users.

  1. Install the tool
    • First, I might try apt install {tool} (or brew install {tool} on a mac)
    • If that fails, I’ll probably search the internet for “Install {tool}” and hope to find the official documentation
  2. Check it is installed, and if tab-complete works
    • Type the first few characters of the command name (sna for snap) followed by <tab> <tab>, to see if the command name auto-completes, signifying that the system is aware of its existence
    • Hit space, and then <tab> <tab> again, to see if it shows me a list of available sub-commands, indicating that tab completion is set up correctly for the tool
  3. Try my first command
    • I’m probably following some documentation at this point, which will tell me the first command to run (e.g. snap install {something}), so I’ll try that out and expect prompt, succinct feedback showing me that it’s working
    • For basic tools, this may complete my initial interaction with the tool. For more complex tools like kubectl or git I may continue playing with it
  4. Try to do something more complex
    • Now I’m likely no longer following a tutorial, instead I’m experimenting on my own, trying to discover more about the tool
    • If what I want to do seems complex, I’ll straight away search the internet for how to do it
    • If it seems simpler, I’ll start looking for a list of subcommands to achieve my goal
    • I start with {tool} <tab> <tab> to see if it gives me a list of subcommands, in case it will be obvious what to do next from that list
    • If that fails I’ll try, in order, {tool} <enter>, {tool} -h, {tool} --help, {tool} help or {tool} /?
    • If none of those work then I’ll try man {tool}, looking for a Unix manual entry
    • If that fails then I’ll fall back to searching the internet again
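The fallback sequence in step 4 can be sketched as a small shell function. `find_help` and its ordering are my own invention for illustration, not a standard utility:

```shell
# Try each conventional help invocation in turn and report the first
# one that exits successfully. The candidate list mirrors the order
# described above: bare invocation, -h, --help, then a help subcommand.
find_help() {
    tool="$1"
    for attempt in "$tool" "$tool -h" "$tool --help" "$tool help"; do
        if $attempt >/dev/null 2>&1; then
            echo "$attempt"
            return 0
        fi
    done
    return 1
}

find_help ls    # a bare "ls" already succeeds, so this prints: ls
```

For a tool whose bare invocation prints usage and exits non-zero, the loop would fall through to the `-h`/`--help` attempts instead.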

UX recommendations

Considering my own experience of CLI tools, I am reasonably confident the following recommendations make good general practice guidelines:

  • Always implement a --help option on the main command and all subcommands, and if appropriate print out some help when no options are provided ({tool} <enter>)
  • Provide both short- (e.g. -h) and long- (e.g. --help) form options, and make them guessable
  • Carefully consider the naming of all subcommands and options, use familiar words where possible (e.g. help, clean, create)
  • Be consistent with your naming – have a clear philosophy behind your use of subcommands vs options, verbs vs nouns etc.
  • Provide helpful, readable output at all times – especially when there’s an error (npm I’m looking at you)
  • Use long-form options in documentation, to make commands more self-explanatory
  • Make the tool easy to install with common software management systems (snap, apt, Homebrew, or sometimes NPM or pip)
  • Provide tab-completion. If it can’t be installed with the tool, make it easy to install and document how to set it up in your installation guide
  • Command outputs should use the appropriate output streams (STDOUT and STDERR) and should be as user-friendly and succinct as possible, and ideally make use of terminal colours

Some of these recommendations are easier to implement than others. Ideally every tool should have carefully considered subcommands and options, and implement --help. But writing auto-complete scripts is a significant undertaking.
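For a sense of scale, here is roughly what a minimal bash completion script looks like; `mytool` and its subcommand list are hypothetical:

```shell
# Minimal bash tab-completion sketch for a hypothetical "mytool" command
# with three subcommands. compgen filters the word list against what the
# user has typed so far; "complete -F" registers the function for mytool.
_mytool_completions() {
    local cur="${COMP_WORDS[COMP_CWORD]}"
    COMPREPLY=( $(compgen -W "create list help" -- "$cur") )
}
complete -F _mytool_completions mytool
```

Real-world completion scripts also handle option flags and per-subcommand arguments, which is where the significant effort comes in.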

Similarly, packaging your tool as a snap is significantly easier than, for example, adding software to the official Ubuntu software sources.

Although I believe all of the above to be good general advice, I would very much welcome research to highlight the relative importance of addressing each concern.

Outstanding questions

There are a number of further questions for which the answers don’t seem obvious to me, but I’d love to somehow find out the answers:

  • Once users have learned the short-form options (e.g. -h) do they ever use the long-form (e.g. --help)?
  • Do users prefer subcommands (mytool create {something}) or options (mytool --create {something})?
  • For multi-level commands, do users prefer {tool} {object} {verb} (e.g. git remote add {remote_name}), or {tool} {verb} {object} (e.g. kubectl get pod {pod_name}), or perhaps {tool} {verb}-{object} (e.g. juju remove-application {app_name})?
  • What patterns exist for formatting command output? What’s the optimal length for users to read, and what types of formatting do users find easiest to understand?

If you know of either authoritative recommendations or existing research on these topics, please let me know in the comments below.

I’ll try to write a more in-depth follow-up to this post when I’ve explored a bit further on some of these topics.

Read more
Anthony Dillon

Webteam development summary

Iteration 6

covering the 14th to the 25th of August

This iteration saw a lot of work on tutorials.ubuntu.com and on the migration of design.ubuntu.com from WordPress to a fresh new Jekyll site project. Research and planning into the new snapcraft.io site continued, along with the beginnings of its development framework.

On Vanilla Framework, we put a lot of emphasis on polishing the existing components and porting the old theme concept patterns into the code base.

Websites issues: 66 closed, 33 opened (551 in total)

Some highlights include:
– Fixing content of card touching card edge in tutorials – https://github.com/canonical-websites/tutorials.ubuntu.com/issues/312
– Migrate canonical.com to Vanilla: Polish and custom patterns – https://github.com/canonical-websites/www.canonical.com/issues/172
– Prepare for deploy of design.ubuntu.com – https://github.com/canonical-websites/design.ubuntu.com/issues/54
– Redirects from https://www.ubuntu.com/usn/ to https://usn.ubuntu.com/usn were broken – https://github.com/canonical-websites/www.ubuntu.com/issues/2128
– design.ubuntu.com/web-style-guide: build page and then hide pages – https://github.com/canonical-websites/design.ubuntu.com/issues/66
– Snapcraft prototype: Snap page – https://github.com/canonical-websites/snapcraft.io/issues/346
– Create Flask skeleton application – https://github.com/canonical-websites/snapcraft-flask/issues/2

Vanilla Framework issues: 24 closed, 16 opened (43 in total)

Some highlights include:
– Combine the entire suite of brochure theme patterns to Vanilla’s code base – https://github.com/vanilla-framework/vanilla-framework/issues/1177
– Many improvements to the documentation theme – https://github.com/vanilla-framework/vanilla-docs-theme/issues/45
– External link icon seems stretched – https://github.com/vanilla-framework/vanilla-framework/issues/1058
– .p-heading–icon pattern remove text color – https://github.com/vanilla-framework/vanilla-framework/issues/1272
– Remove margin rules on card content – https://github.com/vanilla-framework/vanilla-framework/issues/1277

All of these projects are open source, so please file issues if you find any bugs, or better yet, propose a pull request. See you in two weeks for the next update from the web team here at Canonical.

Read more
admin

Hello MAASters! This is the development summary for the past couple of weeks:

MAAS 2.3 (current development release)

  • Hardware Testing Phase 2
    • Added parameters form for script parameters validation.
    • Accept and validate results from nodes.
    • Added hardware testing 7zip CPU benchmarking builtin script.
    • WIP – ability to send parameters to test scripts and process results of individual components. (e.g. will provide the ability for users to select which disk they want to test, and capture results accordingly)
    • WIP – disk benchmark test via Fio.
  • Network beaconing & better network discovery
    • MAAS controllers now send out beacon advertisements every 30 seconds, regardless of whether or not any solicitations were received.
  • Switch Support
    • Backend changes to automatically detect switches (during commissioning) and make use of the new switch model.
    • Introduce base infrastructure for NOS drivers, similar to the power management one.
    • Install the Rack Controller when deploying a supported Switch (Wedge 40, Wedge 100)
    • UI – Add a switch listing tab behind a feature flag.
  • Minor UI improvements
    • The version of MAAS installed on each controller is now reported on the controller details page.
  • python-libmaas
    • Added ability to power on, power off, and query the power state of a machine.
    • Added PowerState enum to make it easy to check the current power state of a machine.
    • Added ability to reference the children and parent interfaces of an interface.
    • Added ability to reference the owner of a node.
    • Added base level `Node` object that `Machine`, `Device`, `RackController`, and `RegionController` extend from.
    • Added `as_machine`, `as_device`, `as_rack_controller`, and `as_region_controller` to the `Node` object, allowing you to convert a `Node` into the type you need to perform an action on.
  • Bug fixes:
    • LP: #1676992 – force Postgresql restart on maas-region-controller installation.
    • LP: #1708512 – Fix DNS & Description misalignment
    • LP: #1711714 – Add cloud-init reporting for deployed Ubuntu Core systems
    • LP: #1684094 – Make context menu language consistent for IP ranges.
    • LP: #1686246 – Fix docstring for set-storage-layout operation
    • LP: #1681801 – Device discovery – Tooltip misspelled
    • LP: #1688066 – Add Spice graphical console to pod created VMs
    • LP: #1711700 – Improve DNS reloading so it happens only when required.
    • LP: #1712423, #1712450, #1712422 – Properly handle a ScriptForm being sent an empty file.
    • LP: #1621175 – Generate passwords for BMCs with non-spec-compliant password policies
    • LP: #1711414 – Fix deleting a rack when it is installed via the snap
    • LP: #1702703 – Can’t run region controller without a rack controller installed.

Read more
Colin Ian King

Static analysis on the Linux kernel

There is a wealth of powerful static analysis tools available nowadays for analyzing C source code. These tools help find bugs by analyzing the source code without actually having to execute it. Over the past year or so I have been running a number of these static analysis tools on linux-next every weekday to find kernel bugs.

Typically each tool can take 10-25+ hours of compute time to analyze the kernel source; fortunately I have a large server at hand to do this.  The automated analysis creates an Ubuntu server VM, installs the required static analysis tools, clones linux-next and then runs the analysis.  The VMs are configured to minimize write activity to the host and run with 48 threads and plenty of memory to try to speed up the analysis process.

At the end of each run, the output from the previous run is diffed against the new output to generate a list of new and fixed issues. I then manually wade through these and try to fix some of the low-hanging fruit when I can find free time to do so.
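The diff step can be sketched with `comm` over sorted report files; the file names and contents below are made up for illustration, not real tool output:

```shell
# Build two tiny sample reports standing in for the previous run's and
# today's static analysis output, then split the differences into
# new vs fixed issues. comm requires sorted input.
printf 'drivers/foo.c:10: leak\ndrivers/bar.c:22: null deref\n' | sort > old-report.txt
printf 'drivers/bar.c:22: null deref\nfs/baz.c:5: dead code\n' | sort > new-report.txt

comm -13 old-report.txt new-report.txt   # only in the new report: new issues
comm -23 old-report.txt new-report.txt   # only in the old report: fixed issues
```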

I've been gathering statistics from the CoverityScan builds for the past 12 months tracking the number of defects found, outstanding issues and number of defects eliminated:

As one can see, a lot of defects are getting fixed by the Linux developers and the overall trend of outstanding issues is downwards, which is good to see. The defect rate in linux-next is currently 0.46 issues per 1000 lines (out of over 13 million lines being scanned). A typical defect rate for a project this size is 0.5 issues per 1000 lines. Some of these issues are false positives or very minor / insignificant issues that will not cause any run-time problems at all, so don't be too alarmed by the statistics.

Using a range of static analysis tools is useful because each one has its own strengths and weaknesses. For example, smatch and sparse are designed for sanity checking the kernel source, so they have some smarts that detect kernel-specific semantic issues. CoverityScan is a commercial product, but it allows open source projects the size of the Linux kernel to be built daily; its web-based bug tracking tool is very easy to use, and it manages to reliably find bugs that other tools can't reach. Cppcheck is useful as it scans all the code paths by forcibly trying all the #ifdef'd variations of the code, which helps with the more obscure CONFIG mixes.

Finally, I use clang's scan-build and the latest version of gcc to try to find the more typical warnings caught by the static analysis built into modern open source compilers.

The more typical issues being found by static analysis are ones that don't generally appear at run time, such as in corner cases like error handling code paths, resource leaks or resource failure conditions, uninitialized variables or dead code paths.

My intention is to continue this process of daily checking and I hope to report back next September to review the CoverityScan trends for another year.

Read more
facundo

Encuentro 5.0


After much (too much) time, I'm pleased to announce a new release of Encuentro (which, as you probably know, is a little program that lets you search, download and watch content from Canal Encuentro, Paka Paka, BACUA, Educ.ar, Decime quien sos vos, TED and others).

Encuentro (new logo and typography!)

This new release is justified mainly by two things: renewal, and a visual refresh.

The renewal is because we (together with Diego Mascialino, who helped me a lot) had to rework half of the backends, since some had changed so much that they no longer worked, and we had to put in quite a bit of effort to get them going again.

The visual refresh comes courtesy of Cecilia Schiebel, who, purely out of kindness (that is, it was her idea, she volunteered, and she refused to charge me a single peso), redid the look of the website and created a new icon and images for the program.

There are also a couple more fixes and improvements, but nothing major.

Details of the new version, installers and so on are on the official page.

Enjoy!

Read more
Sergio Schvezov

UbuConLA 2017 Summit Summary

These are my notes about UbuConLA: a bit of the social activities, and thoughts on the talks related to the snappy ecosystem.

Arriving

I left late on Thursday for a planned night arrival in Lima; Leo was going to arrive around two hours earlier, flying in from Costa Rica. Once the plane I was on, coming from Córdoba, Argentina, landed in Lima, I got onto the ad-hoc Telegram group we had created and mentioned that I had landed, only to discover that Leo was still trying to Uber out of there.

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.3.0 Alpha 2 has now been released. It is currently available in Ubuntu Artful, in a PPA for Xenial, and as a snap.

PPA Availability
For those running Ubuntu Xenial who would like to use Alpha 2, please use the following PPA:

Snap Availability
For those running from the snap, or who would like to test the snap, please use the Beta channel on the default track:

sudo snap install maas --devmode --beta

MAAS 2.3.0 (alpha2)

Important announcements

Advanced Network for CentOS & Windows

The MAAS team is happy to announce that MAAS 2.3 now supports the ability to perform network configuration for CentOS and Windows. The network configuration is performed via cloud-init. MAAS CentOS images now use the latest available version of cloud-init that includes these features.

New Features & Improvements

CentOS Networking support

MAAS can now perform machine network configuration for CentOS, giving CentOS networking feature parity with Ubuntu. The following can now be configured for MAAS deployed CentOS images:

  • Static network configuration.

  • Bonds, VLAN and bridge interfaces.

Thanks to the cloud-init team for improving the network configuration support for CentOS.

Windows Networking support

MAAS can now configure NIC teaming (bonding) and VLAN interfaces for Windows deployments. This uses the native NetLBFO in Windows 2008+. Contact us for more information (https://maas.io/contact-us).

Network Discovery & Beaconing

MAAS now sends out encrypted beacons to facilitate network discovery and monitoring. Beacons are sent using IPv4 and IPv6 multicast (and unicast) to UDP port 5240. When registering a new controller, MAAS uses the information gathered from the beaconing protocol to ensure that newly registered interfaces on each controller are associated with existing known networks in MAAS.

UI improvements

Minor UI improvements have been made:

  • Renamed “Device Discovery” to “Network Discovery”.

  • Discovered devices where MAAS cannot determine the hostname now show the hostname as “unknown” and greyed out instead of using the MAC address manufacturer as the hostname.

Issues fixed in this release

Issues fixed in this release are detailed at:

https://launchpad.net/maas/+milestone/2.3.0alpha1

Read more
Anthony Dillon

Cookie notification component

We’ve all seen the annoying cookie notification which website owners are legally obliged to include on their sites. We can’t avoid them, so let’s have some fun with them.

Previously, for Canonical’s sites, the cookie notification was a shared CSS file and JavaScript file that we injected into the head of each site. This resulted in a cookie policy notification appearing at the bottom of the site.

Job done, I hear you say. But the styling of the notification was becoming dated and after the redesign of the Vanilla notification pattern we wanted to align them. So we decided to create a small service/component to serve cookie notifications in a consistent and more manageable way.

The JavaScript

We started by developing a small JavaScript function in ES6 that took an optional object to set two options: a custom notification message and an auto destroy timeout value in milliseconds.

To initialise the cookie notification component you simply add the following to your site’s footer.

var options = {
  'content': 'We use cookies to improve your experience. By your continued use of this site you accept such use. This notice will disappear by itself. To change your settings please see our policy.',
  'duration': 3000
};
ubuntu.cookiePolicy.setup(options);


The styling

We wanted to use Vanilla’s notification styling as the base for the cookie message. We could simply copy the SCSS or even the compiled CSS into the component and call it a day. But that would get out of date quickly.

To avoid this, we made Vanilla framework an npm dependency of the cookie project and include just the notification styling. Importantly, this gave us a version of Vanilla that we could use to indicate if the package needs to be updated and the effects of the update. We could also update the project’s styling simply by running npm update.

Serving the cookie notification

We decided to deliver this project with Bower as our package manager of choice, as we wanted to dictate the location that the Bower components are installed, which is defined in our .bowerrc file.

One tradeoff of using Bower as a package manager is that the built CSS and JS need to be committed into the GitHub repository. This is due to the way Bower installs packages: it effectively clones the project into the components directory. As a result, pull requests are not as clean, but this can be worked around by mentioning which files to ignore in the pull request description.

This means we don’t need the target site or app to support Sass or ES6 compilation. Simply embed the built CSS and JS file into the head of the site or app.

<link rel="stylesheet" type="text/css" media="screen" href="/bower-components/canonical-cookie-policy/build/css/cookie-policy.css" />
<script src="/bower-components/canonical-cookie-policy/build/js/cookie-policy.js"></script>


Conclusion

This process has helped make our small components much more manageable by splitting them into their own small projects, giving each a home and a test HTML file which can be run locally to watch the component develop on its own.

We plan to continue splitting out components into their own projects when it makes sense, and to deliver them in this fashion.

Read more
Dustin Kirkland

Earlier this month, I spoke at ContainerDays, part of the excellent DevOpsDays series of conferences -- this one in lovely Portland, Oregon.

I gave a live demo of Kubernetes running directly on bare metal.  I was running it on an 11-node Ubuntu Orange Box -- but I used the exact same tools Canonical's world class consulting team uses to deploy Kubernetes onto racks of physical machines.
You see, the ability to run Kubernetes on bare metal, behind your firewall is essential to the yin-yang duality of Cloud Native computing.  Sometimes, what you need is actually a Native Cloud.
Deploying Kubernetes into virtual machines in the cloud is rather easy and straightforward; there are dozens of tools now that can handle that.

But there's only one tool today that can deploy the exact same Kubernetes to AWS, Azure, and GCE, as well as VMware, OpenStack, and bare metal machines. That tool is conjure-up, which acts as a command line front end to several essential Ubuntu tools: MAAS, LXD, and Juju.

I don't know if the presentation was recorded, but I'm happy to share with you my slides for download, and embedded here below.  There are a few screenshots within that help convey the demo.




Cheers,
Dustin

Read more
admin

Hello MAASters! This is the development summary for the past couple of weeks:

MAAS 2.3 (current development release)

The team is preparing and testing the next official release, MAAS 2.3 alpha2. It is currently undergoing a heavy round of testing and will be announced separately at the beginning of the upcoming week. In the past three weeks, the team has delivered:

  • Support for CentOS Network configuration
    We have completed the work to support CentOS Advanced Networking, which gives users the ability to configure VLAN, bond, and bridge interfaces, bringing it to feature parity with Ubuntu. This will be available in MAAS 2.3 alpha 2.
  • Support for Windows Network configuration
    MAAS can now configure NIC teaming (bonding) and VLAN interfaces for Windows deployments. This uses the native NetLBFO in Windows 2008+. Contact us for more information [1].
  • Hardware Testing Phase 2

    • Testing scripts now define a type field that tells MAAS which component will be tested and where the resulting metrics apply. This may be node, cpu, memory, or storage, and defaults to node.
    • Completed work to support the definition and parsing of a YAML-based description for custom test scripts. This allows the user to define the test’s title, description, and the metrics the test will output, which MAAS can then parse and eventually display in the UI/API.
  • Network beaconing & better network discovery

    • Beaconing is now fully functional for controller registration and interface updates!
    • When registering or updating a new controller (either the first standalone controller, or a secondary/HA controller), new interfaces that have been determined to be on an existing VLAN will not cause a new fabric to be created in MAAS.
  • Switch modeling
    • The basic database model for the new switching model has been implemented.
    • On-going progress of presenting switches in the node listing is under way.
    • Work is in-progress to allow MAAS to deploy a rack controller which will be utilized when deploying a new switch with MAAS.
  • Minor UI improvements
    • Renamed “Device Discovery” to “Network Discovery”.
    • Discovered devices where MAAS cannot determine the hostname now just show the hostname as “unknown” and grayed out instead of using the MAC address manufacturer as the hostname.
  • Bug fixes:
    • LP: #1704444 – MAAS API returns 500 internal server error instead of raising actual error.
    • LP: #1705501 – django warning on install
    • LP: #1707971 – MAAS becomes unstable after rack controller restarts
    • LP: #1708052 – Quick erase doesn’t remove md superblock
    • LP: #1710681 – Cannot delete an Ubuntu image, “Update Selection” is disabled

MAAS 2.2.2 Released in the Ubuntu Archive!

MAAS 2.2.2 has now also been released in the Ubuntu Archive. For more details on MAAS 2.2.2, please see [2].

 

[1]: https://maas.io/contact-us

[2]: https://lists.ubuntu.com/archives/maas-devel/2017-August/002663.html

Read more
facundo

Two new experiences


On one hand, last Wednesday the 9th we held a PyAr event (at the Onapsis offices) that we had never done before: a Consultorio Python, a sort of Python clinic.

Co-organized by Nico and me, the idea was to replicate in person a bit of what happens day to day on the IRC channel: someone comes along with a doubt or question, and a few people who know more or have more experience think it over for a while to resolve it.

Part of the fun was not only solving the problem, but also letting newer people see that doing so "is not magic": we all search the internet, we all follow dead-end lines of reasoning, we all work out the answer step by step, and so on...

It was great. People dared to participate, we worked through several problems that had nothing to do with each other, etc. All while eating little sandwiches and drinking some beers, courtesy of Onapsis (thanks!). It was fun, and the feedback from people was also good; they liked it.

A little photo Chechu tweeted:

Consultorio Python

On the other hand, last Sunday at the PASO primaries I was a polling station official for the first time. Station president, no less.

I arrived at the school at seven thirty, we went in, and they gave me the ballot box with things inside, a bag of ballots, a pouch with various forms, and some cold/dry food rations.

My table was on the first floor, so I went up, and right away the party poll watchers started arriving: mostly general ones, plus a Cambiemos watcher who stayed with me at my table all day. The substitute assigned to my table ended up as president of one of the tables downstairs, whose two officials didn't show up (well, both officials of both downstairs tables failed to show!).

I opened the ballot box, which contained a box with supplies and various forms. I prepared the voting room, rearranging the tables and sorting the ballots by list number. Some watchers helped, but in general they left me alone (which I preferred).

Then I prepared the main voter roll, plus another for cross-checking, and gave one of the control copies to the watcher. I filled out the opening record at eight in the morning, and voted :). It was pretty funny, I signed my own voting receipt :p.

My receipt showing I voted, signed by myself :p

Then people started coming in, slowly at first. Around nine, more began to arrive. And right then we got saturated: from 9:30 to 15:00 we never stopped. Luckily the watcher and I (plus another who replaced her for half an hour while she went to vote elsewhere) had a good dynamic going and had no dead time (the waits were largely due to people taking their time inside the voting room).

Also, once an hour the general watchers came by and we inspected the voting room to check that everything was fine. I also went in and checked every so often. It was necessary; people do nasty things: at one point the Cambiemos ballots were stolen, at another the Frente de Izquierda ballots were stolen, and it also happened that someone had put the 1Pais ballots on top of the Unidad Ciudadana ones, covering them.

From three o'clock on it was calmer. Until about a quarter past five, when a few more people came for half an hour, but at the very end nobody showed up, to the point that I started tidying everything up ten minutes before six. We tallied the votes we had recorded (a couple of times; we had some differences, but we sorted them out), and then closed the table.

I grabbed everything, went into the voting room, and the count began. I opened the ballot box and we counted all the envelopes (I counted them in groups of ten and passed them to the watcher, who verified there were ten, as I had asked her to). Once we saw we had the same number of envelopes as recorded votes, I started opening them.

At this point several watchers came over wanting to help me open the envelopes, which I kept refusing. I opened each envelope, saw what was inside, and then passed the contents to the usual watcher or to another who helped a bit (there were a couple more watchers, but they stayed out of it), and they piled them up by list (in two big groups: full-ticket ballots and split votes). I also set aside incomplete or odd envelopes: blank ones, ones with partial votes, or ones that looked null.

With all the envelopes open, I started counting all the ballots list by list (twice), and we recorded everything on a poster-sized tally sheet the Cambiemos watchers had brought (that poster sheet was a tremendous help!). We recorded the full tickets and the split votes, party by party, then the partial votes (with some offices left blank), then the remaining blanks, and finally we examined the null votes.

To finish, we added everything up, cell by cell (a watcher with a calculator, though I kept finishing first in my head and confirming it was right). And we added up all the columns. And it came out perfectly balanced :D.

Form with the vote totals

Then everyone started filling out their forms. Me too: the scrutiny record, the scrutiny certificate, and the telegram. Then I began packing everything up (it gets handed over to the postal service in a very prescribed way). Then signatures everywhere (me, as station president, signing everything I did and everything the watchers did, and they in turn signing each other's work and what I filled out), and we finally wrapped up.

I left the school at nine at night. Dead tired, but happy; it was a great experience.

I must admit I didn't see a single suspicious move from the watchers at any point. That said, I always kept them in check and didn't let them get involved in things I felt I had to resolve myself, and whose results I had to be able to vouch for. Even if it made us slower (especially during the vote count).

In October I'll do it again, this time in the "real" election, making the most of all the experience I gained :D

Read more
Inayaili de León Persson

Vanilla Framework has a new website

We’re happy to announce the long overdue Vanilla Framework website, where you can find all the relevant links and resources needed to start using Vanilla.

 

The homepage of vanillaframework.io

 

When you visit the new site, you will also see the new Vanilla logo, which follows the same visual principles as other logos within the Ubuntu family, like Juju, MAAS and Landscape.

We wanted to make sure that the Vanilla site showcased what you can do with the framework, so we kept it nice and clean, using only Vanilla patterns.

We plan on extending it to include more information about Vanilla and how it works, how to prototype using Vanilla, and the design principles behind it, so stay tuned.

And remember you can follow Vanilla on Twitter, ask questions on Slack and file new pattern proposals and issues on GitHub.

Read more
Alan Griffiths

Mir related ppas

Mir staging ppa

A few weeks ago I was reminded of the “Mir staging ppa” and saw that, with the discontinuation of Unity8, it had fallen into disuse. After considering deleting it, I eventually removed some Ubuntu series that are no longer supported and added MirAL trunk.

That means that for Xenial, Zesty and Artful you can get the latest development version of Mir and MirAL like this:

sudo add-apt-repository ppa:mir-team/staging
sudo apt-get update
sudo apt-get upgrade

miral-release ppa

For the Internet of Things we are now using UbuntuCore16 and snaps, but occasionally it is useful to have debs too. For Mir it is easy to use the underlying 16.04 LTS (Xenial) archive, but MirAL isn’t in that archive.

To make up for that I’ve created a “miral-release” ppa containing the latest release of MirAL built for both Xenial and Zesty. Viz:

sudo add-apt-repository ppa:alan-griffiths/miral-release
sudo apt-get update
sudo apt-get upgrade

 

Read more
facundo

Choosing heroes


Two great players, two great plays.

The first (in chronological order) is this great goal by Messi that allowed Barça to beat Real Madrid (their arch-rival) in injury time, in a match in April of this year; click for the video:

Messi's play

The second play is this block by Manu Ginóbili on one of the star players of the moment (James Harden), which allowed the San Antonio Spurs to win game five against Houston in the Western Conference Semifinals of this year's NBA playoffs, in May; click for the video:

Manu's play

Watch both plays, they're fantastic. Now watch Messi's and Manu's celebrations.

You know what "bugs" me? That Messi goes and makes the sign of the cross.

Sorry, but I choose my heroes secular.

Read more
admin

Hello MAASters! The MAAS development summaries are back!

Over the past three weeks the team has made good progress in three main areas: the development of 2.3, maintenance for 2.2, and our new and improved Python library (libmaas).

MAAS 2.3 (current development release)

The first official MAAS 2.3 release has been prepared. It is currently undergoing a heavy round of testing and will be announced separately once completed. In the past three weeks, the team has:

  • Completed Upstream Proxy UI
    • Improved the UI to better configure the different proxy modes.
    • Added the ability to configure an upstream proxy.
  • Network beaconing & better network discovery
  • Started Hardware Testing Phase 2
      • UX team has completed the initial wireframes and gathered feedback.
      • Started changes to collect and gather better test results.
  • Started Switch modeling
      • Started changes to support switch and switch port modeling.
  • Bug fixes
    • LP: #1703403 – regiond workers can use too many postgres connections
    • LP: #1651165 – Unable to change disk name using maas gui
    • LP: #1702690 – [2.2] Commissioning a machine prefers minimum kernel over commissioning global
    • LP: #1700802 – [2.x] maas cli allocate interfaces=<label>:ip=<ADDRESS> errors with Unknown interfaces constraint
    • LP: #1703713 – [2.3] Devices don’t have a link from the DNS page
    • LP: #1702976 – Cavium ThunderX lacks power settings after enlistment apparently due to missing kernel
    • LP: #1664822 – Enable IPMI over LAN if disabled
    • LP: #1703713 – Fix missing link on domain details page
    • LP: #1702669 – Add index on family(ip) for each StaticIPAddress to improve execution time of the maasserver_routable_pairs view.
    • LP: #1703845 – Set the re-check interval for rack to region RPC connections to the lowest value when a RPC connection is closed or lost.

MAAS 2.2 (current stable release)

  • Last week, MAAS 2.2 was SRU’d into the Ubuntu Archives and to our latest LTS release, Ubuntu Xenial, replacing the MAAS 2.1 series.
  • This week, a new MAAS 2.2 point release has also been prepared. It is currently undergoing heavy testing. Once testing is completed, it will be released in a separate announcement.

Libmaas

Last week, the team worked on increasing libmaas’s capabilities:

  • Added ability to create machines.
  • Added ability to commission machines.
  • Added ability to manage MAAS networking definitions, including subnets, fabrics, spaces, VLANs, IP ranges, static routes, and DHCP.

Read more
facundo

In your face, round planet


A Python exercise. The goal is to produce a series of timestamps, based on a cron-style record that defines the periodicity, from a starting point up to "now".

The problem is that "now" is Buenos Aires time, while the server is in the Netherlands (or could be anywhere).

We solve it with pytz and croniter. Let's see...

Let's start an interactive interpreter inside a virtualenv with the two libs I just mentioned (and import them, plus datetime):

    $ fades -d pytz -d croniter
    *** fades ***  2017-07-26 18:27:20,009  INFO     Hi! This is fades 6.0, automatically managing your dependencies
    *** fades ***  2017-07-26 18:27:20,009  INFO     Need to install a dependency with pip, but no builtin, doing it manually...
    *** fades ***  2017-07-26 18:27:22,979  INFO     Installing dependency: 'pytz'
    *** fades ***  2017-07-26 18:27:24,431  INFO     Installing dependency: 'croniter'
    Python 3.5.2 (default, Nov 17 2016, 17:05:23)
    [GCC 5.4.0 20160609] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import croniter
    >>> import pytz
    >>> import datetime

Note that the server's clock is "complicated" (at the moment of writing this, here in Buenos Aires it is 18:09):

    >>> datetime.datetime.now()
    datetime.datetime(2017, 7, 26, 23, 9, 51, 476140)
    >>> datetime.datetime.utcnow()
    datetime.datetime(2017, 7, 26, 21, 9, 56, 707279)

Let's instantiate croniter, set to repeat every day at 20:00 (on purpose, so that when we iterate from a week ago up to "now" it should only reach yesterday: right now it's a bit past 18:00 here, but UTC, or the server's time, is already past 20:00...):

    >>> cron = croniter.croniter("0 20 * * * ", datetime.datetime(year=2017, month=7, day=20))

Let's get the current UTC time, attaching metadata that says it is, precisely, UTC:

    >>> utc_now = pytz.utc.localize(datetime.datetime.utcnow())
    >>> utc_now
    datetime.datetime(2017, 7, 26, 21, 15, 27, 508732, tzinfo=<UTC>)

Let's get a timezone for Buenos Aires, and the "now" from before, but computed for this part of the planet:

    >>> bsas_tz = pytz.timezone("America/Buenos_Aires")
    >>> bsas_now = utc_now.astimezone(bsas_tz)
    >>> bsas_now
    datetime.datetime(2017, 7, 26, 18, 15, 27, 508732, tzinfo=<DstTzInfo 'America/Buenos_Aires' -03-1 day, 21:00:00 STD>)

Now let's loop, asking cron for the dates and printing them, as long as each one is not later than now (note that to compare it, we have to localize it with the same timezone):

    >>> while True:
    ...     next_ts = cron.get_next(datetime.datetime)
    ...     bsas_next_ts = bsas_tz.localize(next_ts)
    ...     if bsas_next_ts > bsas_now:
    ...         break
    ...     print(bsas_next_ts)
    ...
    2017-07-20 20:00:00-03:00
    2017-07-21 20:00:00-03:00
    2017-07-22 20:00:00-03:00
    2017-07-23 20:00:00-03:00
    2017-07-24 20:00:00-03:00
    2017-07-25 20:00:00-03:00

We got dates starting on July 20th, and we have "several days at 20:00" up to yesterday, because it's not yet "today at 20:00". Done!
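As a footnote, the same exercise can be sketched with the standard library alone on modern Python, since Argentina uses a fixed UTC-3 offset with no DST. The daily_at_20 helper below is a hypothetical stand-in for croniter's "0 20 * * *" iteration (not part of the original recipe), and "now" is pinned to the moment of the original session so the output is reproducible:

```python
from datetime import datetime, timedelta, timezone

# Buenos Aires is UTC-3 year-round (no DST), so a fixed offset is enough here.
BSAS = timezone(timedelta(hours=-3))

def daily_at_20(start, now):
    """Yield aware datetimes at 20:00 Buenos Aires time, from `start`
    (inclusive) up to `now` -- a stand-in for croniter's "0 20 * * *"."""
    ts = start.astimezone(BSAS).replace(hour=20, minute=0,
                                        second=0, microsecond=0)
    while ts <= now.astimezone(BSAS):
        if ts >= start:
            yield ts
        ts += timedelta(days=1)

# Pin "now" to the original session: 21:15 UTC on 2017-07-26, i.e. 18:15
# in Buenos Aires -- before 20:00, so the series stops at July 25th.
start = datetime(2017, 7, 20, tzinfo=BSAS)
now = datetime(2017, 7, 26, 21, 15, tzinfo=timezone.utc)
for ts in daily_at_20(start, now):
    print(ts.isoformat())
```

With a real cron expression or a DST-observing timezone you would still want croniter and pytz (or zoneinfo); this sketch only covers the fixed daily-at-20:00 case.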

Read more