Canonical Voices

Dustin Kirkland

Introducing the Canonical Livepatch Service
Howdy!

Ubuntu 16.04 LTS’s 4.4 Linux kernel includes an important new security capability -- the ability to modify the running Linux kernel code, without rebooting, through a mechanism called kernel livepatch.

Today, Canonical has publicly launched the Canonical Livepatch Service -- an authenticated, encrypted, signed stream of Linux livepatches that apply to the 64-bit Intel/AMD architecture of the Ubuntu 16.04 LTS (Xenial) Linux 4.4 kernel, addressing the highest and most critical security vulnerabilities, without requiring a reboot in order to take effect.  This is particularly amazing for Container hosts -- Docker, LXD, etc. -- as all of the containers share the same kernel, and thus all instances benefit.



I’ve tried to answer below some questions that you might have. If you have others, you’re welcome
to add them in the comments below or on Twitter with the hashtag #Livepatch.

Retrieve your token from ubuntu.com/livepatch

Q: How do I enable the Canonical Livepatch Service?

A: Three easy steps, on a fully up-to-date 64-bit Ubuntu 16.04 LTS system.
  1. Go to https://ubuntu.com/livepatch and retrieve your livepatch token.
  2. Install the canonical-livepatch snap:
       $ sudo snap install canonical-livepatch
  3. Enable the service with your token:
       $ sudo canonical-livepatch enable [TOKEN]

And you’re done! You can check the status at any time using:

  $ canonical-livepatch status --verbose

      Q: What are the system requirements?

      A: The Canonical Livepatch Service is available for the generic and low latency flavors of the 64-bit Intel/AMD (aka, x86_64, amd64) builds of the Ubuntu 16.04 LTS (Xenial) kernel, which is a Linux 4.4 kernel. Canonical livepatches work on Ubuntu 16.04 LTS Servers and Desktops, on physical machines, virtual machines, and in the cloud. Safety, security, and stability firmly depend on unmodified Ubuntu kernels and network access to the Canonical Livepatch Service (https://livepatch.canonical.com:443). You will also need to apt update/upgrade to the latest version of snapd (at least 2.15).
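
      As a quick pre-flight sketch, you can confirm a system meets these requirements (the expected values in the comments are illustrative):

      $ uname -r                   # expect a Xenial 4.4 kernel, e.g. 4.4.0-NN-generic
      $ dpkg --print-architecture  # expect amd64
      $ snap version               # snapd should be at least 2.15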

      Q: What about other architectures?

      A: The upstream Linux livepatch functionality is currently limited to the 64-bit x86 architecture. IBM is working on support for POWER8 and s390x (LinuxONE mainframe), and there’s also active upstream development on ARM64, so we do plan to support these eventually. The livepatch plumbing for 32-bit ARM and 32-bit x86 is not under upstream development at this time.

      Q: What about other flavors?

      A: We are providing the Canonical Livepatch Service for the generic and low latency (telco) flavors of the Linux kernel at this time.

      Q: What about other releases of Ubuntu?

      A: The Canonical Livepatch Service is provided for Ubuntu 16.04 LTS’s Linux 4.4 kernel. Older releases of Ubuntu will not work, because they’re missing the Linux kernel support. Interim releases of Ubuntu (e.g. Ubuntu 16.10) are targeted at developers and early adopters, rather than Long Term Support users or systems that require maximum uptime.  We will consider providing livepatches for the HWE kernels in 2017.

      Q: What about derivatives of Ubuntu?

      A: Canonical livepatches are fully supported on the 64-bit Ubuntu 16.04 LTS Desktop, Cloud, and Server operating systems. On other Ubuntu derivatives, your mileage may vary! These are not part of our automated continuous integration quality assurance testing framework for Canonical Livepatches. Canonical Livepatch safety, security, and stability will firmly depend on unmodified Ubuntu generic kernels and network access to the Canonical Livepatch Service.

      Q: How does Canonical test livepatches?

      A: Every livepatch is rigorously tested in Canonical's in-house CI/CD (Continuous Integration / Continuous Delivery) quality assurance system, which tests hundreds of combinations of livepatches, kernels, hardware, physical machines, and virtual machines.  Once a livepatch passes CI/CD and regression tests, it's rolled out on a canary testing basis, first to a tiny percentage of the Ubuntu Community users of the Canonical Livepatch Service. Based on the success of that microscopic rollout, a moderate rollout follows.  And assuming those also succeed, the livepatch is delivered to all free Ubuntu Community and paid Ubuntu Advantage users of the service.  Systemic failures are automatically detected and raised for inspection by Canonical engineers.  Ubuntu Community users of the Canonical Livepatch Service who want to eliminate the small chance of being randomly chosen as a canary should enroll in the Ubuntu Advantage program (starting at $12/month).

      Q: What kinds of updates will be provided by the Canonical Livepatch Service?

      A: The Canonical Livepatch Service is intended to address high and critical severity Linux kernel security vulnerabilities, as identified by Ubuntu Security Notices and the CVE database. Note that there are some limitations to the kernel livepatch technology -- some Linux kernel code paths cannot be safely patched while running. We will do our best to supply Canonical Livepatches for high and critical vulnerabilities in a timely fashion whenever possible. There may be occasions when the traditional kernel upgrade and reboot might still be necessary. We’ll communicate that clearly through the usual mechanisms -- USNs, Landscape, Desktop Notifications, Byobu, /etc/motd, etc.

      Q: What about non-security bug fixes, stability, performance, or hardware enablement updates?

      A: Canonical will continue to provide Linux kernel updates addressing bugs, stability issues, performance problems, and hardware compatibility on our usual cadence -- about every 3 weeks. These updates can be easily applied using ‘sudo apt update; sudo apt upgrade -y’, using the Desktop “Software Updates” application, or Landscape systems management. These standard (non-security) updates will still require a reboot, as they always have.

      Q: Can I rollback a Canonical Livepatch?

      A: Currently, rolling back (removing) an already-inserted livepatch module is disabled in Linux 4.4. This is because we need a way to determine whether we are currently executing inside a patched function before safely removing it. We can, however, safely apply new livepatches on top of each other, and even repatch functions over and over.

      Q: What about low and medium severity CVEs?

      A: We’re currently focusing our Canonical Livepatch development and testing resources on high and critical security vulnerabilities, as determined by the Ubuntu Security Team.  We'll livepatch other CVEs opportunistically.

      Q: Why are Canonical Livepatches provided as a subscription service?

      A: The Canonical Livepatch Service provides a secure, encrypted, authenticated connection to ensure that only properly signed livepatch kernel modules -- and most importantly, the right modules -- are delivered directly to your system, with extremely high-quality testing wrapped around it.

      Q: But I don’t want to buy UA support!

      A: You don’t have to! Canonical is providing the Canonical Livepatch Service to community users of Ubuntu, at no charge for up to 3 machines (desktop, server, virtual machines, or cloud instances). A randomly chosen subset of the free users of Canonical Livepatches will receive their Canonical Livepatches slightly earlier than the rest of the free users or UA users, as a lightweight canary testing mechanism, benefiting all Canonical Livepatch users (free and UA). Once those canary livepatches apply safely, all Canonical Livepatch users will receive their live updates.

      Q: But I don’t have an Ubuntu SSO account!

      A: An Ubuntu SSO account is free, and provides services similar to those Google, Microsoft, and Apple provide for Android, Windows, and Mac devices, respectively. You can create your Ubuntu SSO account here.

      Q: But I don’t want to log in to ubuntu.com!

      A: You don’t have to! Canonical Livepatch is absolutely not required to maintain the security of any Ubuntu desktop or server! You may continue to freely and anonymously ‘sudo apt update; sudo apt upgrade; sudo reboot’ as often as you like, receiving all of the same updates and simply rebooting after kernel updates, as you always have with Ubuntu.

      Q: But I don't have Internet access to livepatch.canonical.com:443!

      A: You should think of the Canonical Livepatch Service much like you think of Netflix, Pandora, or Dropbox.  It's an Internet streaming service for security hotfixes for your kernel.  You have access to the stream of bits when you can connect to the service over the Internet.  On the flip side, your machines are already thoroughly secured, since they're so heavily firewalled off from the rest of the world!

      Q: Where’s the source code?

      A: The source code of livepatch modules can be found here.  The source code of the canonical-livepatch client is part of Canonical's Landscape system management product and is commercial software.

      Q: What about Ubuntu Core?

      A: Canonical Livepatches for Ubuntu Core are on the roadmap, and may be available in late 2016, for 64-bit Intel/AMD architectures. Canonical Livepatches for ARM-based IoT devices depend on upstream support for livepatches.

      Q: How does this compare to Oracle Ksplice, RHEL Live Patching and SUSE Live Patching?

      A: While the concepts are largely the same, the technical implementations and the commercial terms are very different:

      • Oracle Ksplice uses its own technology, which is not in upstream Linux.
      • RHEL and SUSE currently use their own homegrown kpatch/kgraft implementations, respectively.
      • Canonical Livepatching uses the upstream Linux Kernel Live Patching technology.
      • Ksplice is free, but unsupported, for Ubuntu Desktops, and only available for Oracle Linux and RHEL servers with an Oracle Linux Premier Support license ($2299/node/year).
      • It’s a little unclear how to subscribe to RHEL Kernel Live Patching, but it appears that you need to first be a RHEL customer, and then enroll in the SIG (Special Interest Group) through your TAM (Technical Account Manager), which requires a Red Hat Enterprise Linux Server Premium Subscription at $1299/node/year.  (I'm happy to be corrected and update this post)
      • SUSE Live Patching is available as an add-on to SUSE Linux Enterprise Server 12 Priority Support subscription at $1,499/node/year, but does come with a free music video.
      • Canonical Livepatching is available for every Ubuntu Advantage customer, starting at our entry level UA Essential for $150/node/year, and available for free to community users of Ubuntu.

      Q: What happens if I run into problems/bugs with Canonical Livepatches?

      A: Ubuntu Advantage customers will file a support request at support.canonical.com where it will be serviced according to their UA service level agreement (Essential, Standard, or Advanced). Ubuntu community users will file a bug report on Launchpad and we'll service it on a best effort basis.

      Q: Why does canonical-livepatch client/server have a proprietary license?

      A: The canonical-livepatch client is part of the Landscape family of tools available to Canonical support customers. We are enabling free access to the Canonical Livepatch Service for Ubuntu community users as a mark of our appreciation for the broader Ubuntu community, and in exchange for occasional, automatic canary testing.

      Q: How do I build my own livepatches?

      A: It’s certainly possible for you to build your own Linux kernel livepatches, but it requires considerable skill, time, and computing power to produce, and even more effort to comprehensively test. Rest assured that this is the real value of using the Canonical Livepatch Service! That said, Chris Arges blogged a howto for the curious a while back:

      http://chrisarges.net/2015/09/21/livepatch-on-ubuntu.html

      Q: How do I get notifications of which CVEs are livepatched and which are not?

      A: You can, at any time, query the status of the canonical-livepatch daemon using: ‘canonical-livepatch status --verbose’. This command will show any livepatches successfully applied, any outstanding/unapplied livepatches, and any error conditions. Moreover, you can monitor the Ubuntu Security Notices RSS feed and the ubuntu-security-announce mailing list.

      Q: Isn't livepatching just a big ole rootkit?

      A: Canonical Livepatches inject kernel modules to replace sections of binary code in the running kernel. This requires the CAP_SYS_MODULE capability, the same capability required to modprobe any module into the Linux kernel. If you already have that capability (root does, by default, on Ubuntu), then you already have the ability to arbitrarily modify the kernel, with or without Canonical Livepatches. If you’re an Ubuntu sysadmin and you want to disable module loading (and thereby also disable Canonical Livepatches), simply ‘echo 1 | sudo tee /proc/sys/kernel/modules_disabled’. Note that this is a one-way toggle: module loading stays disabled until the next reboot.

      Keep the uptime!
      :-Dustin

      Read more
      Alan Griffiths

      miral-workspaces

      “Workspaces” have arrived on MirAL trunk (lp:miral).

      We won’t be releasing 1.3 with this feature just yet (as we want some experience with this internally first). But if you build from source there’s an example to play with (bin/miral-app).
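
For the curious, a rough sketch of building from source and launching that example, assuming the usual bzr checkout and CMake workflow (exact paths may differ):

$ bzr branch lp:miral
$ cd miral && mkdir build && cd build
$ cmake .. && make
$ bin/miral-app    # the example shell to play with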

      As always, bug reports and other suggestions are welcome.

      Note that the miral-shell doesn’t have transitions and other effects like fully featured desktop environments.

      Read more
      facundo

      Resumen veraniego de películas


      Many, many movies watched. Even so, I'm not getting into a rhythm of watching more; it's hard for me to find those couple of hours when the kids are calm and I'm not too tired :p

      • Alice Through the Looking Glass: +0. Fun, a bit of a flash, but not much more than a collection of interesting moments.
      • All Is Lost: -0. The survival of someone hit by a string of bad luck; watch it only if you're into the whole "being alone and surviving however you can" thing.
      • Captain America: Civil War: +0. The typical fight between superheroes, but it didn't feel heavy to me; as a bonus it has an interesting theme to think about, the control of governments over weapons.
      • Clouds of Sils Maria: -0. Although it has many interesting conversations, the story itself has no rhythm and goes nowhere.
      • Danny Collins: +0. Nice story, what happens is not entirely expected, moving, well put together.
      • El Ardor: +0. Good story, good setting, and I think it gives a good picture of a reality we know very little about.
      • Ex Machina: -0. I didn't like it, but I'm not sure why. Did it lack suspense? Too flat? What it proposes about artificial intelligence is fine, though (I would have liked more depth, but well, it's a movie for the masses, not a documentary).
      • Fantastic Four: -0. A different point of view on the classic, but bleh.
      • Home Sweet Hell: -0. It has its very funny parts, but the story never quite comes together.
      • La Vénus à la fourrure: +1. The dynamic between two people, the line between reality and fiction. I loved it.
      • Laggies: +0. A bit slow, but a good story, good development; I liked how it shows the evolution of the main character's decision.
      • Match: +0. Nice story, good performances. Powerful.
      • Nina: +1. My deepest respect for Zoe Saldaña. Marvelous. Dazzling. I'd like to know what a Nina Simone fan thinks of this movie.
      • Pan: -0. A different, fairly refreshed take on the classic; it never quite pulled me in.
      • Pixels: +0. Fun, a time-passer; I liked it, Adam Sandler and all. It's no big deal, mind you, but it's mostly enjoyable for the old video games...
      • Predestination: +1. Very good story; you don't quite get what it's about until it has hooked you, and by then you've already fallen into the (good) trap.
      • Stealing Beauty: -0. A nice story, wonderful photography, but it lacks "consistency"; it's very ethereal, I don't know. And slow.
      • The November Man: -0. A wannabe action and spy movie, not much.
      • The Right Kind of Wrong: +0. Just a romantic comedy, a time-passer, but fun.
      • Time Lapse: +1. The story isn't very deep, but it handles temporality (or the jumps in it...) very well.
      • Under the Skin: -1. There is a story in there, but the movie is EXTREMELY slow :(.
      • VANish: -0. Rough, violent, and raw. But nothing more.
      • Vice: -0. With some hints of an interesting theme, where they could have gone deeper into the conceptual side of the robots, but the movie goes in a different direction.


      A ton to watch! And that's even though I'm not finding a good place to learn about trailers as they come out. For now I'm using this YouTube channel, but it doesn't have everything. IMDb was also suggested to me, but although it has some things the other doesn't, it has very little and doesn't seem very well organized.

      • Amateur (2016; Thriller) Martin (Esteban Lamothe) is a lonely television director, who becomes obsessed with his neighbor, Isabel (Jazmin Stuart), when he finds an amateur porn video in which she participates. But Isabel is the wife of Battaglia (Alejandro Awada), the owner of the television station where Martin works. As a strange love encounter takes place between Martin and Isabel, he discovers a secret that puts them both in danger. [D: Sebastian Perillo; A: Alejandro Awada, Esteban Lamothe, Jazmín Stuart]
      • Blade Runner 2049 (2017; Sci-Fi) Thirty years after the events of the first film, a new blade runner, LAPD Officer K (Ryan Gosling), unearths a long-buried secret that has the potential to plunge what's left of society into chaos. K's discovery leads him on a quest to find Rick Deckard (Harrison Ford), a former LAPD blade runner who has been missing for 30 years. [D: Denis Villeneuve; A: Ryan Gosling, Ana de Armas, Jared Leto]
      • Colossal (2016; Action, Sci-Fi, Thriller) A woman discovers that severe catastrophic events are somehow connected to the mental breakdown from which she's suffering. [D: Nacho Vigalondo; A: Dan Stevens, Anne Hathaway, Jason Sudeikis]
      • DxM (2015; Action, Sci-Fi, Thriller) A group of brilliant young students discover the greatest scientific breakthrough of all time: a wireless neural network, connected via a quantum computer, capable of linking the minds of each and every one of us. They realise that quantum theory can be used to transfer motor-skills from one brain to another, a first shareware for human motor-skills. They freely spread this technology, believing it to be a first step towards a new equality and intellectual freedom. But they soon discover that they themselves are part of a much greater and more sinister experiment as dark forces emerge that threaten to subvert this technology into a means of mass-control. MindGamers takes the mind-bender thriller to the next level with an immersive narrative and breath-taking action. [D: Andrew Goth; A: Dominique Tipper, Sam Neill, Tom Payne]
      • Elle (2016; Comedy, Drama, Thriller) Michèle seems indestructible. Head of a successful video game company, she brings the same ruthless attitude to her love life as to business. Being attacked in her home by an unknown assailant changes Michèle's life forever. When she resolutely tracks the man down, they are both drawn into a curious and thrilling game-a game that may, at any moment, spiral out of control. [D: Paul Verhoeven; A: Isabelle Huppert, Laurent Lafitte, Anne Consigny]
      • Frank & Lola (2016; Crime, Drama, Mystery, Romance, Thriller) A psychosexual noir love story, set in Las Vegas and Paris, about love, obsession, sex, betrayal, revenge and, ultimately, the search for redemption. [D: Matthew Ross; A: Imogen Poots, Michael Shannon, Michael Nyqvist]
      • Ghost in the Shell (2017; Action, Drama, Sci-Fi, Thriller) Based on the internationally acclaimed sci-fi manga series, "Ghost in the Shell" follows the Major, a special ops, one-of-a-kind human cyborg hybrid, who leads the elite task force Section 9. Devoted to stopping the most dangerous criminals and extremists, Section 9 is faced with an enemy whose singular goal is to wipe out Hanka Robotic's advancements in cyber technology. [D: Rupert Sanders; A: Scarlett Johansson, Michael Pitt, Michael Wincott]
      • Guardians of the Galaxy Vol. 2 (2017; Action, Sci-Fi) Set to the backdrop of 'Awesome Mixtape #2,' Marvel's Guardians of the Galaxy Vol. 2 continues the team's adventures as they traverse the outer reaches of the cosmos. The Guardians must fight to keep their newfound family together as they unravel the mysteries of Peter Quill's true parentage. Old foes become new allies and fan-favorite characters from the classic comics will come to our heroes' aid as the Marvel cinematic universe continues to expand. [D: James Gunn; A: Chris Sullivan, Pom Klementieff, Chris Pratt]
      • Kiki, el amor se hace (2016; Comedy) Through five stories, the movie addresses sex and love: Paco and Ana are a married couple looking to reignite the passion of their long-unsatisfying sex life; Jose Luis tries to win back the affection of his wife Paloma, confined to a wheelchair after an accident that limited her mobility; Mª Candelaria and Antonio are a couple trying everything to become parents, but she has the problem of never reaching orgasm when making love with him; Álex tries to satisfy Natalia's fantasies, while she starts to doubt whether he will ever propose; and finally, Sandra is a single woman permanently searching for a man to fall in love with. They all love, fear, live, and explore their diverse sexual paraphilias and the different sides of sexuality, trying to find the road to happiness. [D: Paco León; A: Natalia de Molina, Álex García, Jacobo Sánchez]
      • Life (2017; Horror, Sci-Fi, Thriller) Six astronauts aboard the space station study a sample collected from Mars that could provide evidence for extraterrestrial life on the Red Planet. The crew determines that the sample contains a large, single-celled organism - the first example of life beyond Earth. But... things aren't always what they seem. As the crew begins to conduct research, and their methods end up having unintended consequences, the life form proves more intelligent than anyone ever expected. [D: Daniel Espinosa; A: Rebecca Ferguson, Jake Gyllenhaal, Ryan Reynolds]
      • Little Murder (2011; Crime, Drama, Thriller) In post-Katrina New Orleans, a disgraced detective encounters the ghost of a murdered woman who wants to help him identify her killer. [D: Predrag Antonijevic; A: Josh Lucas, Terrence Howard, Lake Bell]
      • Logan (2017; Action, Drama, Sci-Fi) In the near future, a weary Logan cares for an ailing Professor X in a hide out on the Mexican border. But Logan's attempts to hide from the world and his legacy are up-ended when a young mutant arrives, being pursued by dark forces. [D: James Mangold; A: Doris Morgado, Hugh Jackman, Dafne Keen]
      • Passengers (2016; Adventure, Drama, Romance, Sci-Fi) The spaceship, Starship Avalon, in its 120-year voyage to a distant colony planet known as the "Homestead Colony" and transporting 5,258 people has a malfunction in one of its sleep chambers. As a result one hibernation pod opens prematurely and the one person that awakes, Jim Preston (Chris Pratt) is stranded on the spaceship, still 90 years from his destination. [D: Morten Tyldum; A: Jennifer Lawrence, Chris Pratt, Michael Sheen]
      • Personal Shopper (2016; Drama, Mystery, Thriller) Revolves around a ghost story that takes place in the fashion underworld of Paris. [D: Olivier Assayas; A: Kristen Stewart, Lars Eidinger, Sigrid Bouaziz]
      • Pirates of the Caribbean: Dead Men Tell No Tales (2017; Action, Adventure, Comedy, Fantasy) Captain Jack Sparrow finds the winds of ill-fortune blowing even more strongly when deadly ghost pirates led by his old nemesis, the terrifying Captain Salazar, escape from the Devil's Triangle, determined to kill every pirate at sea...including him. Captain Jack's only hope of survival lies in seeking out the legendary Trident of Poseidon, a powerful artifact that bestows upon its possessor total control over the seas. [D: Joachim Rønning, Espen Sandberg; A: Kaya Scodelario, Johnny Depp, Javier Bardem]
      • Spider-Man: Homecoming (2017; Action, Adventure, Sci-Fi) A young Peter Parker/Spider-Man, who made his sensational debut in Captain America: Civil War, begins to navigate his newfound identity as the web-slinging superhero in Spider-Man: Homecoming. Thrilled by his experience with the Avengers, Peter returns home, where he lives with his Aunt May under the watchful eye of his new mentor Tony Stark. Peter tries to fall back into his normal daily routine - distracted by thoughts of proving himself to be more than just your friendly neighborhood Spider-Man - but when the Vulture emerges as a new villain, everything that Peter holds most important will be threatened. [D: Jon Watts; A: Robert Downey Jr., Tom Holland, Angourie Rice]
      • T2 Trainspotting (2017; Comedy, Drama) First there was an opportunity......then there was a betrayal. Twenty years have gone by. Much has changed but just as much remains the same. Mark Renton (Ewan McGregor) returns to the only place he can ever call home. They are waiting for him: Spud (Ewen Bremner), Sick Boy (Jonny Lee Miller), and Begbie (Robert Carlyle). Other old friends are waiting too: sorrow, loss, joy, vengeance, hatred, friendship, love, longing, fear, regret, diamorphine, self-destruction and mortal danger, they are all lined up to welcome him, ready to join the dance. [D: Danny Boyle; A: Ewan McGregor, Logan Gillies, Ben Skelton]
      • The Discovery (2017; Romance, Sci-Fi) Writer-director Charlie McDowell returns to Sundance this year with a thriller about a scientist (played by Robert Redford) who uncovers scientific proof that there is indeed an afterlife. His son is portrayed by Jason Segel, who's not too sure about his father's "discovery", and Rooney Mara plays a mystery woman who has her own reasons for wanting to find out more about the afterlife. [D: Charlie McDowell; A: Rooney Mara, Riley Keough, Robert Redford]
      • The Whole Truth (2016; Drama, Thriller) Defense attorney Richard Ramsay takes on a personal case when he swears to his widowed friend, Loretta Lassiter, that he will keep her son Mike out of prison. Charged with murdering his father, Mike initially confesses to the crime. But as the trial proceeds, chilling evidence about the kind of man that Boone Lassiter really was comes to light. While Ramsay uses the evidence to get his client acquitted, his new colleague Janelle tries to dig deeper - and begins to realize that the whole truth is something she alone can uncover. [D: Courtney Hunt; A: Keanu Reeves, Renée Zellweger, Gugu Mbatha-Raw]
      • The Comedian (2016; Comedy) A look at the life of an aging insult comic named Jack Burke. [D: Taylor Hackford; A: Robert De Niro, Leslie Mann, Harvey Keitel]
      • The Mummy (2017; Action, Adventure, Fantasy, Horror) Though safely entombed in a crypt deep beneath the unforgiving desert, an ancient princess whose destiny was unjustly taken from her is awakened in our current day, bringing with her malevolence grown over millennia, and terrors that defy human comprehension. [D: Alex Kurtzman; A: Tom Cruise, Sofia Boutella, Russell Crowe]
      • Valerian and the City of a Thousand Planets (2017; Action, Adventure, Sci-Fi) Rooted in the classic graphic novel series, Valerian and Laureline- visionary writer/director Luc Besson advances this iconic source material into a contemporary, unique and epic science fiction saga. Valerian (Dane DeHaan) and Laureline (Cara Delevingne) are special operatives for the government of the human territories charged with maintaining order throughout the universe. Valerian has more in mind than a professional relationship with his partner- blatantly chasing after her with propositions of romance. But his extensive history with women, and her traditional values, drive Laureline to continuously rebuff him. Under directive from their Commander (Clive Owen), Valerian and Laureline embark on a mission to the breathtaking intergalactic city of Alpha, an ever-expanding metropolis comprised of thousands of different species from all four corners of the universe. Alpha's seventeen million inhabitants have converged over time- uniting their talents, technology and resources for the betterment of all. Unfortunately, not everyone on Alpha shares in these same objectives; in fact, unseen forces are at work, placing our race in great danger. [D: Luc Besson; A: Dane DeHaan, Cara Delevingne, Ethan Hawke]
      • Vampyres (2015; Horror) Faithful to the sexy, twisted 1974 cult classic by Joseph Larraz, Vampyres is an English-language remake pulsating with raw eroticism, wicked sado-masochism and bloody, creative gore. Victor Matellano (Wax (2014, Zarpazos! A Journey through Spanish Horror, 2013) directs this tale set in a stately English manor inhabited by two older female vampires and with their only cohabitant being a man imprisoned in the basement. Their lives and lifestyle are upended when a trio of campers come upon their lair and seek to uncover their dark secrets, a decision that has sexual and blood-curdling consequences. [D: Víctor Matellano; A: Marta Flich, Almudena León, Alina Nastase]
      • Zero Days (2016; Documentary) Documentary detailing claims of American/Israeli jointly developed malware Stuxnet being deployed not only to destroy Iranian enrichment centrifuges but also to threaten attacks against Iranian civilian infrastructure. Addresses the obvious potential blowback of this possibly being deployed against the US by Iran in retaliation. [D: Alex Gibney; A: David Sanger, Emad Kiyaei, Eric Chien]
      • Collateral Beauty (2016; Drama, Romance) When a successful New York advertising executive suffers a great tragedy, he retreats from life. While his concerned friends try desperately to reconnect with him, he seeks answers from the universe by writing letters to Love, Time and Death. But it's not until his notes bring unexpected personal responses that he begins to understand how these constants interlock in a life fully lived, and how even the deepest loss can reveal moments of meaning and beauty [D: David Frankel; A: Will Smith, Edward Norton, Kate Winslet]
      • Passage to Mars (2016; Documentary, Adventure) The journals of a real NASA Arctic expedition unveil the adventure of a six-man crew aboard an experimental vehicle designed to prepare for the first human exploration of Mars. A voyage of fears and survival, hopes and dreams, through the beauties and the deadly dangers of two worlds: the High Arctic and Mars, a planet that might hide the secret of our origins. [D: Jean-Christophe Jeauffre; A: Zachary Quinto, Charlotte Rampling, Pascal Lee]


      Finally, the count of pending movies by date:

      (Apr-2011)    4
      (Aug-2011)   11   4
      (Jan-2012)   17  11   3
      (Jul-2012)   15  14  11
      (Nov-2012)   11  11  11   6
      (Feb-2013)   15  14  14   8   2
      (Jun-2013)   16  15  15  15  11   2
      (Sep-2013)   18  18  18  17  16   8
      (Dec-2013)   14  14  12  12  12  12   4
      (Apr-2014)        9   9   8   8   8   3
      (Jul-2014)           10  10  10  10  10   5   1
      (Nov-2014)               24  22  22  22  22   7
      (Feb-2015)                   13  13  13  13  10
      (Jun-2015)                       16  16  15  13  11
      (Dec-2015)                           21  19  19  18
      (May-2016)                               26  25  23
      (Sep-2016)                                   19  19
      (Feb-2017)                                       26
      Total:      121 110 103 100  94  91  89 100  94  97

      Read more
      Alan Griffiths

      MirAL 1.2

      There’s a new MirAL release (1.2.0) available in ‘Zesty Zapus’ (Ubuntu 17.04) and the so-called “stable phone overlay” PPA for ‘Xenial Xerus’ (Ubuntu 16.04 LTS). MirAL is a project aimed at simplifying the development of Mir servers, particularly by providing a stable ABI and sensible default behaviors.

      Unsurprisingly, given the project’s original goal, the ABI is unchanged.

      Since my last update the integration of libmiral into QtMir has progressed and libmiral has been used in the latest updates to Unity8.

      The changes in 1.2.0 are:

      A new libmirclientcpp-dev package

      This is a “C++ wrapper for libmirclient” and has been split from libmiral-dev.

      Currently it comprises RAII wrappers for some Mir client library types: MirConnection, MirWindowSpec, MirWindow and MirWindowId. In addition, the WindowSpec wrapper provides named constructors and function chaining to enable code like the following:

      auto const window = mir::client::WindowSpec::
          for_normal_window(connection, 50, 50, mir_pixel_format_argb_8888)
          .set_buffer_usage(mir_buffer_usage_software)
          .set_name(a_window.c_str())
          .create_window();
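
      If you want to compile a snippet like this outside the MirAL tree, here's a minimal sketch of a build command; note that “mirclientcpp” as the pkg-config name is my assumption, so check the .pc file shipped by libmirclientcpp-dev:

      $ g++ -std=c++14 window.cpp $(pkg-config --cflags --libs mirclientcpp) -o window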
      

      Refresh the “Building and Using MirAL” doc

      This has been rewritten (and renamed) to reflect the presence of MirAL in the Ubuntu archives and make installation (rather than “build it yourself”) the default approach.

      Bug fixes

      • [libmiral] Chrome-less shell hint does not work any more (LP: #1658117)
      • “$ miral-app -kiosk” fails with “Unknown command line options:
        --desktop_file_hint=miral-shell.desktop” (LP: #1660933)
      • [libmiral] Fix focus and movement rules for Input Method and Satellite
        windows. (LP: #1660691)
      • [libmirclientcpp-dev] WindowSpec::set_state() wrapper for mir_window_spec_set_state()
        (LP: #1661256)

      Read more

      Snappy Libertine

      Libertine is a software suite for running X11 apps in non-X11 environments and installing deb-based applications on a system without dpkg. Snappy is a package management system that confines applications from one another. Wouldn’t it be cool to run libertine as a snap?

      Yes. Yes it would.

      snapd

      The first thing to install is snapd itself. You can find installation instructions for many Linux distros at snapcraft.io, but here’s the simple command if you’re on a debian-based operating system:

      $ sudo apt install snapd

      Ubuntu users may be surprised to find that snapd is already installed on their systems. snapd is the daemon for handling all things snappy: installing, removing, handling interface connections, etc.
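
      Before continuing, it's worth making sure the daemon is actually up; a quick check on a systemd-based system:

      $ systemctl status snapd   # the daemon should be active
      $ snap list                # the CLI should be able to talk to it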

      lxd

      We use lxd as our container backend for libertine in the snap. lxd is essentially a layer on top of lxc to give a better user experience. Fortunately for us, lxd has a snap all ready to go. Unfortunately, the snap version of lxd is incompatible with the deb-based version, so you’ll need to completely remove that before continuing. Skip this step if you never installed lxd:

      $ sudo apt remove --purge lxd lxd-client
      $ sudo zpool destroy lxd                 # if you use zfs
      $ sudo ip link set lxdbr0 down           # take down the bridge (lxdbr0 is the default)
      $ sudo brctl delbr lxdbr0                # delete the bridge

      For installing, in-depth instructions can be found in this blog post by one of the lxd devs. In short, we’re going to create a new group called lxd, add ourselves to it, and then add our own user ID and group ID to map to root within the container.

      $ sudo groupadd --system lxd                                 # Create the group on your system
      $ sudo usermod -G lxd -a $USER                               # Add the current user
      $ newgrp lxd                                                 # Update current session with new group
      $ echo root:`id --user ${USER}`:1 | sudo tee -a /etc/subuid  # Setup subuid to map correctly (tee -a, since a plain >> would fail without root)
      $ echo root:`id --group ${USER}`:1 | sudo tee -a /etc/subgid # Setup subgid to map correctly
      $ sudo snap install lxd                                      # Actually install the snap!

      We also need to initialize lxd manually. For me, the defaults all work great. The important pieces here are setting up a new network bridge and a new filestore for lxd to use. You can optionally use zfs if you have it installed (zfsutils-linux should do it on Ubuntu). Generally, I just hit “return” as fast as the questions show up and everything turns out alright. If anything goes wrong, you may need to manually delete zpools, network bridges, or reinstall the lxd snap. No warranties here.

      $ sudo lxd init
      Do you want to configure a new storage pool (yes/no) [default=yes]?
      Name of the new storage pool [default=default]:
      Name of the storage backend to use (dir or zfs) [default=zfs]:
      Create a new ZFS pool (yes/no) [default=yes]?
      Would you like to use an existing block device (yes/no) [default=no]?
      Would you like LXD to be available over the network (yes/no) [default=no]?
      Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
      Would you like to create a new network bridge (yes/no) [default=yes]?
      What should the new bridge be called [default=lxdbr0]?
      What IPv4 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]?

      You should now be able to run lxd.lxc list without errors. It may warn you about running lxd init, but don’t worry about that if your initialization succeeded.
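
      As an optional smoke test (assuming the default “ubuntu:” image remote is reachable), you can spin up and remove a throwaway container:

      $ lxd.lxc launch ubuntu:16.04 test
      $ lxd.lxc list                  # “test” should show up, eventually with an IP
      $ lxd.lxc delete --force test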

      libertine

      Now we’re onto the easy part. libertine is only available from edge channels in the app store, but we’re fairly close to having a version that we could push into more stable channels. For the latest and greatest libertine:

      $ sudo snap install --edge libertine
      $ sudo snap connect libertine:libertined-client libertine:libertined

      If we want libertine to work fully, we need to jump through a couple of hoops. For starters, dbus-activation is not fully functional at this time for snaps. Lucky for us, we can fake this by either running the d-bus service manually (/snap/bin/libertined), or by adding the following file at /usr/share/dbus-1/services/com.canonical.libertine.Service.service:

      /usr/share/dbus-1/services/com.canonical.libertine.Service.service
      [D-BUS Service]
      Name=com.canonical.libertine.Service
      Exec=/snap/bin/libertine.libertined --cache-output

      Personally, I always create the file, which will allow libertined to start automatically on the session bus whenever a user calls it. Hopefully d-bus activation will be fixed sooner rather than later, but this works fine for now.
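
      You can check that the session bus now knows how to activate the service; this uses busctl, and the grep is just a convenience:

      $ busctl --user list --activatable | grep libertine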

      Another issue is that existing deb-based libertine binaries may conflict with the snap binaries. We can fix this by adjusting PATH in our .bashrc file:

      $HOME/.bashrc
      # ...
      export PATH=/snap/bin:$PATH

      This will give higher priority to snap binaries (which should be the default, IMO). One more thing to fix before running full-force is to add an environment variable to /etc/environment such that the correct libertine binary is picked up in Unity 8:

      /etc/environment
      # ...
      UBUNTU_APP_LAUNCH_LIBERTINE_LAUNCH=/snap/bin/libertine-launch

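      To verify both tweaks took effect, here's a quick sketch to run from a new shell (the expected results are what I'd anticipate, not guaranteed):

      $ which libertine-container-manager   # expect a /snap/bin path
      $ grep LIBERTINE /etc/environment     # should show the variable added above
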
      OK! Now we’re finally ready to start creating containers and installing packages:

      $ libertine-container-manager create -i my-container
      # ... (this could take a few minutes)
      $ libertine-container-manager install-package -i my-container -p xterm
      # ... (and any other packages you may want)

      If you want to launch your apps in Unity 7 (why not?):

      $ libertine-launch -i my-container xterm
      # ... (lots of output, hopefully an open window!)

      When running Unity 8, your apps should show up in the app drawer with all the other applications. This will all depend on libertined running, so make sure that it runs at startup!

      I’ve been making a lot of improvements on the snap lately, especially as the ecosystem continues to mature. One day we plan for a much smoother experience, but this current setup will let us work out some of the kinks and find issues. If you want to switch back to the deb-based libertine, you can just install it through apt and remove the change to /etc/environment.

      Read more
      Leo Arias

      There is a huge announcement coming: snaps now run in Ubuntu 14.04 Trusty Tahr.

      Take a moment to note how big this is. Ubuntu 14.04 is a long-term release that will be supported until 2019. Ubuntu 16.04 is also a long-term release that will be supported until 2021. We have many, many users in both releases, some of whom will stay there until we drop support. Before this snappy new world, all those users were stuck with the versions of all their programs released in 2014 or 2016, getting only updates for security and critical issues. Just try to remember how your favorite program looked 5 years ago; maybe it didn't even exist. We were used to choosing between stability and cool new features.

      Well, a new world is possible. With snaps you can have a stable base system with frequent updates for every program, without the risk of breaking your machine. And now if you are a Trusty user, you can just start taking advantage of all this. If you are a developer, you have to prepare only one release and it will just work in all the supported Ubuntu releases.

      Awesome, right? The Ubuntu devs have been doing a great job. snapd has already landed in the Trusty archive, and we have been running many manual and automated tests on it. So we would now like to invite the community to test it, explore weird paths, and try to break it. We will appreciate it very much, and all of those Trusty users out there will love it when they receive loads of new high-quality free software on their oldie machines.

      So, how to get started?

      If you are already running Trusty, you will just have to install snapd:

      $ sudo apt update && sudo apt install snapd
      

      Reboot your system after that in case you had a kernel update pending, and to get the paths for the new snap binaries set up.
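
      After the reboot, you can sanity-check the installation (the exact versions reported will vary):

      $ snap version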

      If you are running a different Ubuntu release, you can install Trusty in a virtual machine. Just make sure that you install the 14.04.5 desktop image: http://releases.ubuntu.com/14.04/ubuntu-14.04.5-desktop-amd64.iso

      Once you have Trusty with snapd ready, try a few commands:

      $ snap list
      $ sudo snap install hello-world
      $ hello-world
      $ snap find something
      

      screenshot of snaps running in Trusty

      Keep searching for snaps until you find one that's interesting. Install it, try it, and let us know how it goes.

      If you find something wrong, please report a bug with the trusty tag. If you are new to the Ubuntu community or get lost on the way, come and join us in Rocket Chat.

      And after a good session of testing, sit down, relax, and get ohmygiraffe. With love from popey:

      $ sudo snap install ohmygiraffe
      $ ohmygiraffe
      

      screenshot of ohmygiraffe

      Read more
      facundo

      Vacaciones en Neuquén


      In January the family and I took a couple of weeks off and went on vacation in Neuquén. As always, we made the trip in two days, but the novelty was that we didn't go alone: we travelled in a "two-car caravan", us in one car and my mom and Diana in the other.

      Our home base, as on previous trips, was the house Diana and Gus are building for themselves in Piedra del Águila. We stayed there several days and did a bit of everything.

      Stopping for lunch on the road, not a single tree!

      Obviously, one of the highlights was the food :p. It's a classic by now: the mud oven Di built is a winner. There we made a pork leg with vegetables, pork ribs and pork shoulder, also with vegetables (throwing in four or five corn cobs with their husks on and leaving them for an hour or so was a constant!), homemade pizzas, everything.

      To work off all that food (?) we went out quite a bit. Some outings were just to relax, like a short afternoon stroll to the lake shore (we got into the water, which was lovely), or a day on the bank of the Limay river, just below the Pichí Picún Leufú dam, where we also had lunch. Even the dogs had a good time, Mafalda (as best she could among the stones, she's very old now) and Fidel. We had fun throwing stones with Gus, Felu and even Male! And of course: we rested, slept, waded in the water, etc.

      We also went walking in the hills around Piedra del Águila, climbing quite a bit, wandering along the tops, dodging thistles and assorted prickly things, and coming down very carefully. Male handled it like a champ. Felu ran around like crazy. It was great, even with the tremendous wind at the top (it could knock you off balance!).

      At the top of the mountain

      As for indoor activities, the highlight was playing several rounds of tute cabrero. Even Felipe learned to play, and he almost won one!! I got lucky and won a couple, and the last one we played I won all on my own, because I pulled off a capote when there were only three of us left and we were all on the verge of going out.

      We also snooped around a lot and got in the way at the print shop, where Gus tries to work normally while we're visiting. The kids entertain themselves binding little stacks of paper, I'm fascinated by the automation of the machines, Moni sorts and collates invoices, etc. Poor Gus.

      The kids also helped a bit in the vegetable garden, harvesting some homegrown strawberries (they were astonishingly tasty). And of course there was a round of Felu, Male and Gus spraying each other with the garden hose...

      Having lunch by the Limay

      Some pehuenes near an oddly shaped mountain, on the way to Villa Pehuenia

      One day we took off to El Chocón with my mom.

      We visited the town museum again, since the kids are growing up and get new things out of it. And truth be told, we grown-ups also learn something new with every visit.

      Careful, it'll eat you

      Lunch was a complication. We went to the campground restaurant (we had gone two years earlier and it was good) and found out they had craft beer: great! But we saw the menu was very short. We decided to stay anyway, but when it was time to order they only had a steak sandwich ($250!!), ravioli, and little else. So we drank our beers and juices, and left.

      We found another restaurant that looked super fancy, but we drove in anyway: at the door, where the opening hours should be, it said "we open when we arrive, we close when we leave". OK, I was tempted to leave them a little note saying "I'll go leave my money somewhere else".

      In the end we stopped at a grocery store, bought the makings of little sandwiches, and had lunch under some small trees :)

      With Felu, visiting the statue of the eagle in Piedra del (what else?) Águila

      We also made a longer trip, this time with Diana and Gus. We went as far as Villa Pehuenia, where we spent the night and barely looked around. We visited the lake and drank some mates there, and had a nice meal at a pretty little place.

      At the lake in Villa Pehuenia

      The next day we left early for Chile. We had a tremendous wait to cross: three hours on the Argentine side until all the paperwork was done. On the Chilean side we sorted everything out in an hour (including my having to go back to the Argentine offices to get a number corrected).

      We only stayed a couple of days, enough to get to know a few places and see whether a longer stay would be worth it. We rented a nice cabin in Villarrica, away from the center. The town center is very pretty, and we walked around it quite a bit (there's a biiiiiig semi-artisanal market where we bought some nice things for the house), went out to eat, shopped, etc. It was quite crowded.

      Having lunch in Temuco

      The Villarrica volcano

      One day we went to Temuco, a considerably bigger city about 80 km away. We walked around the center there too, bought a couple of things, had a very good lunch (at Vicuña Mackenna 530: some very good soups, one mushroom and one shrimp, and a spectacular eggplant lasagna, plus a green salad), and visited a Mapuche museum.

      Next to the Mapuche museum, on the same grounds but in the open air, there was a medieval fair: people teaching fencing with swords, telling stories, selling all kinds of medieval-style things (clothes, weapons, books, you name it).

      Felipe in a square in Temuco

      Felipe was blown away when he walked into the fair and saw a girl with elf-style ears :), though we also got hooked on the fencing class, and on another spot where a "forest gnome" was telling a story full of riddles.

      Coming back to Argentina, on the Chilean side they gave us trouble because a stamp (something to do with the car) was missing from the papers from our entry into the country. It was missing for us, for Gus and Diana, and for another person further down the line. Apparently they slipped up or forgot when we crossed two days earlier. Anyway, we protested a bit and that was that, they gave us the OK (?). We were braced for a 3- or 4-hour line on the Argentine side, like two days before when we made the trip the other way, but there was nobody! It seems that being Sunday morning we got lucky; we sorted everything out in half an hour and headed for Aluminé.

      Moni and Male in Aluminé

      In Aluminé we had two rooms reserved in a hostel that turned out to be terrific (Diana and Gus already knew it). The rooms were nice and the breakfast homemade, but the best parts were the grounds and the grills, and a fully communal quincho (with an indoor grill, fridge, oven, burners, microwave and plenty of tables).

      The day after we arrived we went rafting, which turned out to be quite an experience! Felu paddled a bit and everything, Male rode in the middle and got a little scared as we broke through the rapids; even so, halfway through the trip the two of them got into the river with me, Diana and Gus. Mind you, the water was very cold; luckily the guide (who was great, telling us about the river and the region's nature as we went) lent one shirt to Malena and another (his own!) to Felipe, so they wouldn't get cold while wet.

      Attacking the rapids

      Felu, experienced paddler

      After the rafting itself we stayed to enjoy the early evening by the river, and then headed back because I had chickens to grill.

      The next day we started back toward Piedra del Águila, but on the way we detoured a bit to visit Lanín National Park (although we couldn't see much of the volcano, it was very cloudy), and then we also went to see some cave paintings, of which almost nothing remained after vandalism by stupid humans.

      We spent one full day in Piedra, and the following day we set off for Buenos Aires, arriving after an overnight stop in Catriló.

      At the top, looking for the cave paintings

      The little ones at the lake in Villa Pehuenia

      A terrific vacation. Lots of photos here.

      Read more
      Stéphane Graber

      LXD logo

      LXD on other operating systems?

      While LXD and especially its API have been designed in a mostly OS-agnostic way, the only OS supported for the daemon right now is Linux (and a rather recent Linux at that).

      However since all the communications between the client and daemon happen over a REST API, there is no reason why our default client wouldn’t work on other operating systems.

      And it does. We in fact gate changes to the client on having it build and pass unit tests on Linux, Windows and MacOS.

      This means that you can run one or more LXD daemons on Linux systems on your network and then interact with those remotely from any Linux, Windows or MacOS machine.

      Setting up your LXD daemon

      We’ll be connecting to the LXD daemon over the network, so you’ll need to make sure it’s listening and has a password configured so that new clients can add themselves to the trust store.

      This can be done with:

      lxc config set core.https_address "[::]:8443"
      lxc config set core.trust_password "my-password"

      In my case, that remote LXD can be reached at “djanet.maas.mtl.stgraber.net”; you’ll want to replace that with your LXD server’s FQDN or IP in the commands used below.

      Windows client

      Pre-built native binaries

      Our Windows CI service builds a tarball for every commit. You can grab the latest one here:
      https://ci.appveyor.com/project/lxc/lxd/branch/master/artifacts

      Then unpack the archive and open a command prompt in the directory where you unpacked the lxc.exe binary.

      Build from source

      Alternatively, you can build it from source, by first installing Go using the latest MSI based installer from https://golang.org/dl/ and then Git from https://git-scm.com/downloads.

      And then in a command prompt, run:

      git config --global http.https://gopkg.in.followRedirects true
      go get -v -x github.com/lxc/lxd/lxc

      Use Ubuntu on Windows (“bash”)

      For this, you need to use Windows 10 and have the Windows subsystem for Linux enabled.
      With that done, start an Ubuntu shell by launching “bash”. And you’re done.
      The LXD client is installed by default in the Ubuntu 16.04 image.

      Interact with the remote server

      Regardless of which method you picked, you’ve now got access to the “lxc” command and can add your remote server.
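
      For example, using the hostname from earlier (replace it with your own server), adding the remote and pointing commands at it looks roughly like this; you’ll be prompted to confirm the server’s certificate and to enter the trust password:

      lxc remote add djanet djanet.maas.mtl.stgraber.net
      lxc list djanet:
      lxc launch ubuntu:16.04 djanet:my-container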

      Using the native build does have a few restrictions to do with Windows terminal escape codes, breaking things like the arrow keys and password hiding. The Ubuntu on Windows way uses the Linux version of the LXD client and so doesn’t suffer from those limitations.

      MacOS client

      Even though we do have MacOS CI through Travis, they don’t host artifacts for us, so we don’t have prebuilt binaries for people to download.

      Build from source

      Similarly to the Windows instructions, you can build the LXD client from source, by first installing Go using the latest DMG based installer from https://golang.org/dl/ and then Git from https://git-scm.com/downloads.

      Once that’s done, open a new Terminal window and run:

      export GOPATH=~/go
      go get -v -x github.com/lxc/lxd/lxc
      sudo ln -s ~/go/bin/lxc /usr/local/bin/

      At which point you can use the “lxc” command.

      Conclusion

      The LXD client can be built on all the main operating systems and on just about every architecture. This makes it very easy for anyone to interact with existing LXD servers, whether they’re themselves using a Linux machine or not.

      Thanks to our pretty strict backward compatibility rules, the version of the client doesn’t really matter. Older clients can talk to newer servers and newer clients can talk to older servers. Obviously in both cases some features will not be available, but normal container workflow operations will work fine.

      Extra information

      The main LXD website is at: https://linuxcontainers.org/lxd
      Development happens on Github at: https://github.com/lxc/lxd
      Mailing-list support happens on: https://lists.linuxcontainers.org
      IRC support happens in: #lxcontainers on irc.freenode.net
      Try LXD online: https://linuxcontainers.org/lxd/try-it

      Read more
      Leo Arias

      After a little break, on the first Friday of February we resumed the Ubuntu Testing Days.

      This session was pretty interesting, because after setting some of the bases last year we are now ready to dig deep into the most important projects that will define the future of Ubuntu.

      We talked about Ubuntu Core, a snap package that is the base of the operating system. Because it is a snap, it gets the same benefits as all the other snaps: automatic updates, rollbacks in case of error during installation, read-only mount of the code, isolation from other snaps, multiple channels on the store for different levels of stability, etc.

      The features, philosophy and future of Core were presented by Michael Vogt and Zygmunt Krynicki, and then Federico Giménez did a great demo of how to create an image and test it in QEMU.

      Click the image below to watch the full session.


      There are plenty of resources in the Ubuntu websites related to Ubuntu Core.

To get started, we recommend following this guide to run the operating system in a virtual machine.

After that, if you are feeling brave and want to help Michael, Zygmunt and Federico, you can download the candidate image instead, from http://cdimage.ubuntu.com/ubuntu-core/16/candidate/pending/ubuntu-core-16-amd64.img.xz. This is the image that's currently being tested, so if you find something wrong or weird, please report a bug in Launchpad.
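If you want to give the candidate image a quick spin in QEMU, something along these lines should work (a sketch only; the memory size and SSH port forward are my own choices, not from the testing guide):

xz -d ubuntu-core-16-amd64.img.xz
qemu-system-x86_64 -smp 2 -m 1500 \
  -netdev user,id=net0,hostfwd=tcp::8022-:22 \
  -device virtio-net-pci,netdev=net0 \
  ubuntu-core-16-amd64.img

Once it boots, the first-run device initialization should walk you through connecting an Ubuntu SSO account, after which you can ssh in through the forwarded port.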

      Finally, if you want to learn more about the snaps that compose the image and take a peek at the things that we'll cover in the following testing days, you can follow the tutorial to create your own Core image.

In this session we were also joined by Robert Wolff, who works on 96boards at Linaro. He has an awesome show every Thursday called Open Hours. At 96boards they are building open Linux boards for prototyping and embedded computing. Anybody can jump into the Open Hours to learn more about this cool work.

The great news Robert brought is that both Open Hours and Ubuntu Testing Days will be focused on Ubuntu Core this month. He will be our guest again next Friday, February 10th, when he will be talking about the DragonBoard 410c. My good friend Oliver Grawert will also join us to talk about the work he has been doing to enable Ubuntu on this board.

Great topics ahead, and a whole new world of possibilities now that we are mixing free software with open hardware and affordable prototyping tools. Remember, every Friday at http://ubuntuonair.com/. Don't miss it.

      Read more
      Stéphane Graber

      LXD logo

      What’s Ubuntu Core?

      Ubuntu Core is a version of Ubuntu that’s fully transactional and entirely based on snap packages.

Most of the system is read-only. All installed applications come from snap packages, and all updates are done using transactions, meaning that should anything go wrong at any point during a package or system update, the system can revert to the previous state and report the failure.
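That roll-back is also exposed directly through snapd, so you can trigger it by hand too. For instance, if an update of the core snap misbehaved (a sketch, assuming the previous revision is still on disk):

snap revert core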

      The current release of Ubuntu Core is called series 16 and was released in November 2016.

Note that on Ubuntu Core systems, only snap packages using confinement can be installed (no “classic” snaps), and that a good number of snaps will not fully work in this environment or will require some manual intervention (creating users and groups, …). Ubuntu Core gets improved on a weekly basis as new releases of snapd and the “core” snap are put out.

      Requirements

      As far as LXD is concerned, Ubuntu Core is just another Linux distribution. That being said, snapd does require unprivileged FUSE mounts and AppArmor namespacing and stacking, so you will need the following:

      • An up to date Ubuntu system using the official Ubuntu kernel
      • An up to date version of LXD
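On an Ubuntu 16.04 host, for example, something like this should cover both requirements (using the backports pocket is just one way to get a recent LXD; adjust to taste):

sudo apt update
sudo apt dist-upgrade -y
sudo apt install -t xenial-backports lxd lxd-client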

      Creating an Ubuntu Core container

      The Ubuntu Core images are currently published on the community image server.
      You can launch a new container with:

      stgraber@dakara:~$ lxc launch images:ubuntu-core/16 ubuntu-core
      Creating ubuntu-core
      Starting ubuntu-core

The container will take a few seconds to start, first running a first-stage loader that determines which read-only image to use and sets up the writable layers. You don’t want to interrupt the container at that stage, and “lxc exec” will likely just fail, as pretty much nothing is available at that point.

      Seconds later, “lxc list” will show the container IP address, indicating that it’s booted into Ubuntu Core:

      stgraber@dakara:~$ lxc list
      +-------------+---------+----------------------+----------------------------------------------+------------+-----------+
      |     NAME    |  STATE  |          IPV4        |                      IPV6                    |    TYPE    | SNAPSHOTS |
      +-------------+---------+----------------------+----------------------------------------------+------------+-----------+
      | ubuntu-core | RUNNING | 10.90.151.104 (eth0) | 2001:470:b368:b2b5:216:3eff:fee1:296f (eth0) | PERSISTENT | 0         |
      +-------------+---------+----------------------+----------------------------------------------+------------+-----------+

      You can then interact with that container the same way you would any other:

      stgraber@dakara:~$ lxc exec ubuntu-core bash
      root@ubuntu-core:~# snap list
      Name       Version     Rev  Developer  Notes
      core       16.04.1     394  canonical  -
      pc         16.04-0.8   9    canonical  -
      pc-kernel  4.4.0-45-4  37   canonical  -
      root@ubuntu-core:~#

      Updating the container

If you’ve been tracking the development of Ubuntu Core, you’ll know that the versions above are pretty old. That’s because the disk images used as the source for the Ubuntu Core LXD images are only refreshed every few months. Ubuntu Core systems automatically update once a day and then automatically reboot into the new version (reverting if that fails).

      If you want to immediately force an update, you can do it with:

      stgraber@dakara:~$ lxc exec ubuntu-core bash
      root@ubuntu-core:~# snap refresh
      pc-kernel (stable) 4.4.0-53-1 from 'canonical' upgraded
      core (stable) 16.04.1 from 'canonical' upgraded
      root@ubuntu-core:~# snap version
      snap 2.17
      snapd 2.17
      series 16
      root@ubuntu-core:~#

      And then reboot the system and check the snapd version again:

      root@ubuntu-core:~# reboot
      root@ubuntu-core:~# 
      
      stgraber@dakara:~$ lxc exec ubuntu-core bash
      root@ubuntu-core:~# snap version
      snap 2.21
      snapd 2.21
      series 16
      root@ubuntu-core:~#

You can get a history of all snapd interactions with:

      stgraber@dakara:~$ lxc exec ubuntu-core snap changes
      ID  Status  Spawn                 Ready                 Summary
      1   Done    2017-01-31T05:14:38Z  2017-01-31T05:14:44Z  Initialize system state
      2   Done    2017-01-31T05:14:40Z  2017-01-31T05:14:45Z  Initialize device
      3   Done    2017-01-31T05:21:30Z  2017-01-31T05:22:45Z  Refresh all snaps in the system

      Installing some snaps

Let’s start with the simplest snap of all, the good old Hello World:

      stgraber@dakara:~$ lxc exec ubuntu-core bash
      root@ubuntu-core:~# snap install hello-world
      hello-world 6.3 from 'canonical' installed
      root@ubuntu-core:~# hello-world
      Hello World!

      And then move on to something a bit more useful:

      stgraber@dakara:~$ lxc exec ubuntu-core bash
      root@ubuntu-core:~# snap install nextcloud
      nextcloud 11.0.1snap2 from 'nextcloud' installed

      Then hit your container over HTTP and you’ll get to your newly deployed Nextcloud instance.
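For example, from the host (substituting the IP address that “lxc list” reports for your container), you should see the Nextcloud first-run page:

stgraber@dakara:~$ curl -s http://10.90.151.104/ | grep -i nextcloud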

      If you feel like testing the latest LXD straight from git, you can do so with:

      stgraber@dakara:~$ lxc config set ubuntu-core security.nesting true
      stgraber@dakara:~$ lxc exec ubuntu-core bash
      root@ubuntu-core:~# snap install lxd --edge
      lxd (edge) git-c6006fb from 'canonical' installed
      root@ubuntu-core:~# lxd init
      Name of the storage backend to use (dir or zfs) [default=dir]: 
      
      We detected that you are running inside an unprivileged container.
      This means that unless you manually configured your host otherwise,
      you will not have enough uid and gid to allocate to your containers.
      
      LXD can re-use your container's own allocation to avoid the problem.
      Doing so makes your nested containers slightly less safe as they could
      in theory attack their parent container and gain more privileges than
      they otherwise would.
      
      Would you like to have your containers share their parent's allocation (yes/no) [default=yes]? 
      Would you like LXD to be available over the network (yes/no) [default=no]? 
      Would you like stale cached images to be updated automatically (yes/no) [default=yes]? 
      Would you like to create a new network bridge (yes/no) [default=yes]? 
      What should the new bridge be called [default=lxdbr0]? 
      What IPv4 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 
      What IPv6 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 
      LXD has been successfully configured.

And because container inception never gets old, let’s run Ubuntu Core 16 inside Ubuntu Core 16:

      root@ubuntu-core:~# lxc launch images:ubuntu-core/16 nested-core
      Creating nested-core
      Starting nested-core 
      root@ubuntu-core:~# lxc list
      +-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
      |    NAME     |  STATE  |         IPV4        |                       IPV6                    |    TYPE    | SNAPSHOTS |
      +-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
      | nested-core | RUNNING | 10.71.135.21 (eth0) | fd42:2861:5aad:3842:216:3eff:feaf:e6bd (eth0) | PERSISTENT | 0         |
      +-------------+---------+---------------------+-----------------------------------------------+------------+-----------+

      Conclusion

      If you ever wanted to try Ubuntu Core, this is a great way to do it. It’s also a great tool for snap authors to make sure their snap is fully self-contained and will work in all environments.

      Ubuntu Core is a great fit for environments where you want to ensure that your system is always up to date and is entirely reproducible. This does come with a number of constraints that may or may not work for you.

And lastly, a word of warning. These images are considered good enough for testing, but they aren’t officially supported at this point. We are working towards getting fully supported Ubuntu Core LXD images on the official Ubuntu image server in the near future.

      Extra information

      The main LXD website is at: https://linuxcontainers.org/lxd
      Development happens on Github at: https://github.com/lxc/lxd
      Mailing-list support happens on: https://lists.linuxcontainers.org
      IRC support happens in: #lxcontainers on irc.freenode.net
      Try LXD online: https://linuxcontainers.org/lxd/try-it

      Read more
      Alan Griffiths

      mircade-snap

      mircade, miral-kiosk and snapcraft.io

      mircade is a proof-of-concept game launcher for use with miral-kiosk. It looks for installed games, works out if they use a toolkit supported by Mir and allows the user to play them.

      miral-kiosk is a proof-of-concept Mir server for kiosk style use. It has very basic window management designed to support a single fullscreen application.

snapcraft.io is a packaging system that allows you to package applications (as “snaps”) in a way that runs on multiple Linux distributions. You first need to have snapcraft installed on your target system (I used a dragonboard with Ubuntu Core, as described in my previous article).

      The mircade snap takes mircade and a few open games from the Ubuntu archive to create an “arcade style” snap for playing these games.

      Setting up the Mir snaps

      The mircade snap is based on the “Mir Kiosk Snaps” described here.

Mir support on Ubuntu Core is currently a work in progress, so the exact incantations for installing the mir-libs and mir-kiosk snaps to work with mircade vary slightly from the referenced articles (to work around bugs) and will (hopefully) change in the near future. Here’s what I found works at the time of writing:

      $ snap install mir-libs --channel edge
      $ snap install mir-kiosk --channel edge --devmode
      $ snap connect mir-kiosk:mir-libs mir-libs:mir-libs
      $ sudo reboot

      Installing the mircade-snap

      I found that installing the mircade snap sometimes ran out of space on the dragonboard /tmp filesystem. So…

      $ TMPDIR=/writable/ snap install mircade --devmode --channel=edge
      $ snap connect mircade:mir-libs mir-libs:mir-libs
      $ snap disconnect mircade:mir;snap connect mircade:mir mir-kiosk:mir
      $ snap disable mircade;sudo /usr/lib/snapd/snap-discard-ns mircade;snap enable mircade

      Using mircade on the dragonboard

      At this point you should see an orange screen with the name of a game. You can change the game by touching/clicking the top or bottom of the screen (or using the arrow keys). Start the current game by touching/clicking the middle of the screen or pressing enter.

      Read more
      Christian Brauner

      lxc exec vs ssh

      Recently, I’ve implemented several improvements for lxc exec. In case you didn’t know, lxc exec is LXD‘s client tool that uses the LXD client api to talk to the LXD daemon and execute any program the user might want. Here is a small example of what you can do with it:

      asciicast

One of our main goals is to make lxc exec feel as similar to ssh as possible, since ssh is the standard way of running commands remotely, whether interactively or non-interactively. Making lxc exec behave nicely was tricky.

      1. Handling background tasks

      A long-standing problem was certainly how to correctly handle background tasks. Here’s an asciinema illustration of the problem with a pre LXD 2.7 instance:

      asciicast

      What you can see there is that putting a task in the background will lead to lxc exec not being able to exit. A lot of sequences of commands can trigger this problem:

      chb@conventiont|~
      > lxc exec zest1 bash
      root@zest1:~# yes &
      y
      y
      y
      .
      .
      .
      

Nothing will save you now. yes will simply write to stdout till the end of time, as quickly as it can…
The root of the problem lies with stdout being kept open, which is necessary to ensure that any data written by the process the user started is actually read and sent back over the websocket connection we established.
As you can imagine, this becomes a major annoyance when you e.g. run a shell session in which you want to run a process in the background and then quickly exit. Sorry, you are out of luck. Well, you were.
The first, naive approach is obviously to simply close stdout as soon as you detect that the foreground program (e.g. the shell) has exited. Not quite as good an idea as one might think… The problem becomes obvious when you run quickly-executing programs like:

      lxc exec -- ls -al /usr/lib
      

where the lxc exec process (and the associated forkexec process (don’t worry about it now, just remember that Go + setns() are not on speaking terms…)) exits before all buffered data in stdout has been read. In this case you will get truncated output, and no one wants that. After a few approaches to the problem, which involved disabling pty buffering (wasn’t pretty, I tell you, and also didn’t work predictably) and other weird ideas, I managed to solve it by employing a few poll() “tricks” (in some sense of the word “trick”). Now you can finally run background tasks and cleanly exit. To wit:
      asciicast

      2. Reporting exit codes caused by signals

ssh is a wonderful tool. One thing I never really liked, however, was the fact that when the command run by ssh received a signal, ssh would always report -1, aka exit code 255. This is annoying when you’d like to know which signal caused the program to terminate. This is why I recently implemented the standard shell convention of reporting any signal-caused exit as 128 + n, where n is the number of the signal that caused the executing program to exit. For example, on SIGKILL (signal 9) you would see 128 + 9 = 137 (calculating the exit codes for other deadly signals is left as an exercise to the reader). So you can do:

      chb@conventiont|~
      > lxc exec zest1 sleep 100
      

Now, send SIGKILL to the executing program (not to lxc exec itself, as SIGKILL is not forwardable):

kill -KILL $(pidof sleep)
      

      and finally retrieve the exit code for your program:

      chb@conventiont|~
      > echo $?
      137
      

Voila. This obviously only works nicely when a) the exit code doesn’t breach the 8-bit wall of computing and b) the executing program doesn’t use 137 to indicate success (which would be… interesting(?)). Neither argument seems too convincing to me. The former because most deadly signals don’t breach that range; the latter because (i) that’s the user’s problem, (ii) these exit codes are actually reserved (I think), and (iii) you’d have the same problem running the program locally or otherwise.
The main advantage I see in this is the ability to report fine-grained exit statuses for executed programs. Note that by no means can we report back all instances where the executing program was killed by a signal; e.g. when your program handles SIGTERM and exits cleanly, there’s no easy way for LXD to detect this and report that the program was killed by a signal. You will simply receive success, aka exit code 0.

      3. Forwarding signals

This is probably the least interesting feature (or maybe it isn’t, no idea), but I found it quite useful. As you saw in the SIGKILL case before, I was explicit in pointing out that one must send SIGKILL to the executing program, not to the lxc exec command itself. This is due to the fact that SIGKILL cannot be handled in a program; the only thing the program can do is die… like right now… this instant… sofort… (you get the idea). But a lot of other signals, SIGTERM, SIGHUP, and of course SIGUSR1 and SIGUSR2, can be handled. So when you send a signal that can be handled to lxc exec instead of the executing program, newer versions of LXD will forward the signal to the executing process. This is pretty convenient in scripts and so on.
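A quick illustration of the forwarding (same container as before; 143 = 128 + 15, i.e. SIGTERM):

chb@conventiont|~
> lxc exec zest1 -- sleep 100 &
> kill -TERM $!    # signal lxc exec itself; LXD forwards SIGTERM to the sleep
> wait $!; echo $?
143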

In any case, I hope you found this little lxc exec post/rant useful. Enjoy LXD, it’s a crazy beautiful beast to play with. Give it a try online at https://linuxcontainers.org/lxd/try-it/ and, for all you developers out there: check out https://github.com/lxc/lxd and send us patches.

Read more

      Stéphane Graber

      LXD logo

      Introduction

      So far all my blog posts about LXD have been assuming an Ubuntu host with LXD installed from packages, as a snap or from source.

      But LXD is perfectly happy to run on any Linux distribution which has the LXC library available (version 2.0.0 or higher), a recent kernel (3.13 or higher) and some standard system utilities available (rsync, dnsmasq, netcat, various filesystem tools, …).

In fact, you can find packages in a number of other Linux distributions (let me know if I missed one).

We have also had several reports of LXD being used on CentOS and Fedora, where users built it from source using the distribution’s liblxc (or, in the case of CentOS, from an external repository).

      One distribution we’ve seen a lot of requests for is Debian. A native Debian package has been in the works for a while now and the list of missing dependencies has been shrinking quite a lot lately.

      But there is an easy alternative that will get you a working LXD on Debian today!
      Use the same LXD snap package as I mentioned in a previous post, but on Debian!

      Requirements

      • A Debian “testing” (stretch) system
      • The stock Debian kernel without apparmor support
      • If you want to use ZFS with LXD, then the “contrib” repository must be enabled and the “zfsutils-linux” package installed on the system

      Installing snapd and LXD

      Getting the latest stable LXD onto an up to date Debian testing system is just a matter of running:

      apt install snapd
      snap install lxd

If you’ve never used snapd before, you’ll have to either log out and log back in to update your PATH, or just update your existing session’s PATH with:

      . /etc/profile.d/apps-bin-path.sh

      And now it’s time to configure LXD with:

      root@debian:~# lxd init
      Name of the storage backend to use (dir or zfs) [default=dir]:
      Create a new ZFS pool (yes/no) [default=yes]?
      Name of the new ZFS pool [default=lxd]:
      Would you like to use an existing block device (yes/no) [default=no]?
      Size in GB of the new loop device (1GB minimum) [default=15]:
      Would you like LXD to be available over the network (yes/no) [default=no]?
      Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
      Would you like to create a new network bridge (yes/no) [default=yes]?
      What should the new bridge be called [default=lxdbr0]?
      What IPv4 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
      What IPv6 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
      LXD has been successfully configured.

      And finally, you can start using LXD:

      root@debian:~# lxc launch images:debian/stretch debian
      Creating debian
      Starting debian
      
      root@debian:~# lxc launch ubuntu:16.04 ubuntu
      Creating ubuntu
      Starting ubuntu
      
      root@debian:~# lxc launch images:centos/7 centos
      Creating centos
      Starting centos
      
      root@debian:~# lxc launch images:archlinux archlinux
      Creating archlinux
      Starting archlinux
      
      root@debian:~# lxc launch images:gentoo gentoo
      Creating gentoo
      Starting gentoo

      And enjoy your fresh collection of Linux distributions:

      root@debian:~# lxc list
      +-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
      |   NAME    |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |
      +-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
      | archlinux | RUNNING | 10.250.240.103 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe40:7b1b (eth0) | PERSISTENT | 0         |
      +-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
      | centos    | RUNNING | 10.250.240.109 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe87:64ff (eth0) | PERSISTENT | 0         |
      +-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
      | debian    | RUNNING | 10.250.240.111 (eth0) | fd42:46d0:3c40:cca7:216:3eff:feb4:e984 (eth0) | PERSISTENT | 0         |
      +-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
      | gentoo    | RUNNING | 10.250.240.164 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe27:10ca (eth0) | PERSISTENT | 0         |
      +-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
      | ubuntu    | RUNNING | 10.250.240.80 (eth0)  | fd42:46d0:3c40:cca7:216:3eff:fedc:f0a6 (eth0) | PERSISTENT | 0         |
      +-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+

      Conclusion

      The availability of snapd on other Linux distributions makes it a great way to get the latest LXD running on your distribution of choice.

      There are still a number of problems with the LXD snap which may or may not be a blocker for your own use. The main ones at this point are:

      • All containers are shutdown and restarted on upgrades
      • No support for bash completion

If you want non-root users to have access to the LXD daemon, simply make sure that an “lxd” group exists on your system, add whoever you want to manage LXD to that group, then restart the LXD daemon.
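A minimal sketch of those steps (the username is a placeholder, and the exact unit name for the snap’s daemon may differ on your system):

root@debian:~# groupadd --system lxd
root@debian:~# usermod -aG lxd myuser
root@debian:~# systemctl restart snap.lxd.daemon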

      Extra information

      The snapd website can be found at: http://snapcraft.io

      The main LXD website is at: https://linuxcontainers.org/lxd
      Development happens on Github at: https://github.com/lxc/lxd
      Mailing-list support happens on: https://lists.linuxcontainers.org
      IRC support happens in: #lxcontainers on irc.freenode.net
      Try LXD online: https://linuxcontainers.org/lxd/try-it

      Read more
      Stéphane Graber

      LXD logo

      Introduction

      For those who haven’t heard of Kubernetes before, it’s defined by the upstream project as:

      Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

      It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.

It is important to note the “applications” part in there. Kubernetes deploys a set of single-application containers and connects them together. Those containers will typically run a single process and so are very different from the full system containers that LXD itself provides.

      This blog post will be very similar to one I published last year on running OpenStack inside a LXD container. Similarly to the OpenStack deployment, we’ll be using conjure-up to setup a number of LXD containers and eventually run the Docker containers that are used by Kubernetes.

      Requirements

This post assumes you’ve got a working LXD setup providing containers with network access, and that you have at least 10GB of space for the containers to use and at least 4GB of RAM.

      Outside of configuring LXD itself, you will also need to bump some kernel limits with the following commands:

      sudo sysctl fs.inotify.max_user_instances=1048576  
      sudo sysctl fs.inotify.max_queued_events=1048576  
      sudo sysctl fs.inotify.max_user_watches=1048576  
      sudo sysctl vm.max_map_count=262144
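Those sysctl settings don’t survive a reboot; if you want them to, you can also drop them in a file (the file name here is just my choice):

cat <<EOF | sudo tee /etc/sysctl.d/99-conjure-up-kubernetes.conf
fs.inotify.max_user_instances = 1048576
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144
EOF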

      Setting up the container

      Similarly to OpenStack, the conjure-up deployed version of Kubernetes expects a lot more privileges and resource access than LXD would typically provide. As a result, we have to create a privileged container, with nesting enabled and with AppArmor disabled.

      This means that not very much of LXD’s security features will still be in effect on this container. Depending on how you feel about this, you may choose to run this on a different machine.

Note that all of this still remains better than instructions that would have you install everything directly on your host machine, if only by making it very easy to remove it all in the end.

      lxc init ubuntu:16.04 kubernetes -c security.privileged=true -c security.nesting=true -c linux.kernel_modules=ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
      printf "lxc.cap.drop=\nlxc.aa_profile=unconfined\n" | lxc config set kubernetes raw.lxc -
      lxc config device add kubernetes mem unix-char path=/dev/mem
      lxc start kubernetes

      Then we need to add a couple of PPAs and install conjure-up, the deployment tool we’ll use to get Kubernetes going.

      lxc exec kubernetes -- apt update
      lxc exec kubernetes -- apt dist-upgrade -y
      lxc exec kubernetes -- apt install squashfuse -y
      lxc exec kubernetes -- ln -s /bin/true /usr/local/bin/udevadm
      lxc exec kubernetes -- snap install conjure-up --classic

And the last setup step is to configure LXD networking inside the container:

lxc exec kubernetes -- lxd init

Answer with the default for all questions, except for:

• Use the “dir” storage backend (“zfs” doesn’t work in a nested container)
• Do NOT configure IPv6 networking (conjure-up/juju don’t play well with it)

      And that’s it for the container configuration itself, now we can deploy Kubernetes!

      Deploying Kubernetes with conjure-up

      As mentioned earlier, we’ll be using conjure-up to deploy Kubernetes.
It’s a nice, user-friendly tool that interfaces with Juju to deploy complex services.

      Start it with:

lxc exec kubernetes -- sudo -u ubuntu -i conjure-up

• Select “Kubernetes Core”
• Then select “localhost” as the deployment target (uses LXD)
• And hit “Deploy all remaining applications”

      This will now deploy Kubernetes. The whole process can take well over an hour depending on what kind of machine you’re running this on. You’ll see all services getting a container allocated, then getting deployed and finally interconnected.

Once the deployment is done, a few post-install steps will run. These will import some initial images, set up SSH authentication, configure networking and finally give you the IP address of the dashboard.

      Interact with your new Kubernetes

We can ask Juju to deploy a new Kubernetes workload, in this case 5 instances of “microbot”:

      root@kubernetes:~# sudo -u ubuntu -i
      ubuntu@kubernetes:~$ juju run-action kubernetes-worker/0 microbot replicas=5
      Action queued with id: 1d1e2997-5238-4b86-873c-ad79660db43f

      You can then grab the service address from the Juju action output:

      ubuntu@kubernetes:~$ juju show-action-output 1d1e2997-5238-4b86-873c-ad79660db43f
      results:
       address: microbot.10.97.218.226.xip.io
      status: completed
      timing:
       completed: 2017-01-13 10:26:14 +0000 UTC
       enqueued: 2017-01-13 10:26:11 +0000 UTC
       started: 2017-01-13 10:26:12 +0000 UTC

      Now actually using the Kubernetes tools, we can check the state of our new pods:

ubuntu@kubernetes:~$ kubectl.conjure-up-kubernetes-core-be8 get pods
NAME                             READY     STATUS              RESTARTS   AGE
default-http-backend-w9nr3       1/1       Running             0          21m
microbot-1855935831-cn4bs        0/1       ContainerCreating   0          18s
microbot-1855935831-dh70k        0/1       ContainerCreating   0          18s
microbot-1855935831-fqwjp        0/1       ContainerCreating   0          18s
microbot-1855935831-ksmmp        0/1       ContainerCreating   0          18s
microbot-1855935831-mfvst        1/1       Running             0          18s
nginx-ingress-controller-bj5gh   1/1       Running             0          21m

      After a little while, you’ll see everything’s running:

ubuntu@kubernetes:~$ ./kubectl get pods
NAME                             READY     STATUS    RESTARTS   AGE
default-http-backend-w9nr3       1/1       Running   0          23m
microbot-1855935831-cn4bs        1/1       Running   0          2m
microbot-1855935831-dh70k        1/1       Running   0          2m
microbot-1855935831-fqwjp        1/1       Running   0          2m
microbot-1855935831-ksmmp        1/1       Running   0          2m
microbot-1855935831-mfvst        1/1       Running   0          2m
nginx-ingress-controller-bj5gh   1/1       Running   0          23m

      At which point, you can hit the service URL with:

      ubuntu@kubernetes:~$ curl -s http://microbot.10.97.218.226.xip.io | grep hostname
       <p class="centered">Container hostname: microbot-1855935831-fqwjp</p>

      Running this multiple times will show you different container hostnames as you get load balanced between one of those 5 new instances.

      Conclusion

As with OpenStack, conjure-up combined with LXD makes it very easy to deploy rather complex software in a very self-contained way.

This isn’t the kind of setup you’d want to run in a production environment, but it’s great for developers, demos and anyone who wants to try these technologies without investing in hardware.

      Extra information

      The conjure-up website can be found at: http://conjure-up.io
      The Juju website can be found at: http://www.ubuntu.com/cloud/juju

      The main LXD website is at: https://linuxcontainers.org/lxd
      Development happens on Github at: https://github.com/lxc/lxd
      Mailing-list support happens on: https://lists.linuxcontainers.org
      IRC support happens in: #lxcontainers on irc.freenode.net
      Try LXD online: https://linuxcontainers.org/lxd/try-it

      Read more
      facundo

Water Park


Between Christmas and New Year's we took a few days of vacation with the family.

This time we went, for the first time, to a water park.

The truth is we had a great time. I was a little apprehensive about whether Malena would enjoy it (Felipe, being older, surely would). They both had a blast, and so did Moni and I.

Moni and Male enjoying themselves

The first day we arrived in the late afternoon and it was cloudy and cool, so there was nobody in the water park proper. We didn't go in either; instead we went straight to the hot-spring pools, so we stayed nice and warm :)

Hot-spring pools

But what we enjoyed most was the water park proper, with all its variety of rides for jumping into the water. At first Male stayed on the children's rides, but after the first day she also went down the big ramp a lot.

The children's rides

Felu and Male on the big ramp

Felu went down almost everything (except the wildest one, which was almost free fall), and even rode the big ones a ton of times, in a loop: down, up, down, up, down...

Felipe on the ride that spins you around

We also took the opportunity to explore and get to know Concepción del Uruguay. One of the afternoons, relatives of Moni even came over from Concordia, and we went to the beaches of Banco Pelay, where we got into the river and played in the sand until nightfall, and then headed into town to eat some pizzas :)
http://www.turismoentrerios.com/cdeluruguay/pelay.htm

Moni with cousin Sandra and aunt Rosa

Having lunch with the family

The short getaway to the water park proved to be a great way to disconnect. We'll surely do it again.

      Read more
      deviceguy

      Movin' on...

A year has gone by since I started work with Canonical. As it turns out, I must be on my way. Where to? Not really sure at this moment; there seem to be plenty of companies using Qt & QML these days. \0/


      But saying that, I am open to suggestions. LinkedIn
       
There are plenty of IoT devices using sensors around. Heck, even the Moto Z phone has some great uses for sensor gestures, similar to what I wrote for QtSensors while I was at Nokia.

But there's a lack of companies that allow freelance or remote work. For the last few years I have worked remotely for Jolla and Canonical, both fantastic companies that really have it together for working remotely.

I am still surprised that only a handful of companies regularly allow remote work. I do not miss the stuffy offices with windows that don't open, nor the long daily commute, which sometimes meant riding a motorcycle through hail! (I do not suggest this for anyone.)

Of course, I am still the maintainer of QtSensors and QtSystemInfo for the Qt Project, and of the Sensor Framework for Mer, and I'm always dreaming up new ways to use sensors. I'm still keeping tabs on the QtNetwork bearer classes.

      Although I had to send back the Canonical devices, I still have Ubuntu on my Nexus 4. I still have my Jolla phones and tablet.

That said, I still have this blog here, and besides spending my time looking for a new programming gig, I am (always) preparing to release a new album at http://llornkcor.com, and I'm always willing to work with anyone needing music/audio/soundtrack work.

      Read more