Canonical Voices

Prakash

In 2015, many Linux and open-source veterans still distrust Microsoft’s conversion to open source. In 2006, when Sam Ramji, who had overseen BEA Systems’ move to open-source software, became Director of Platform Technology Strategy for Microsoft’s Open Source Software Labs, no one believed Microsoft was doing more than paying lip service to open source. They were wrong. Now, years after leaving Microsoft, Ramji is returning to a major open-source leadership role as the new CEO of the Cloud Foundry Foundation.

Read More: http://www.zdnet.com/article/sam-ramiji-takes-lead-at-the-open-source-cloud-foundry-foundation/

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20150210 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Vivid Development Kernel

Our Vivid kernel has been rebased to v3.18.5 upstream stable. It’s
been uploaded to the archive, 3.18.0-13.14. Please test and let us
know your results.
We would like to push the v3.19 based kernel we have up to the archive
soon. We are just cleaning up some DKMS drivers before we do so. For anyone interested in getting an early preview, we have a v3.19 based
kernel available for testing in our ckt PPA.
-----
Important upcoming dates:
Thurs Feb 19 – 14.04.2 Point Release (~1 week away, yes this was
delayed)
Thurs Feb 26 – Beta 1 Freeze (~2 weeks away)
Thurs Mar 26 – Final Beta (~6 weeks away)
Thurs Apr 09 – Kernel Freeze (~8 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Utopic/Trusty/Precise/Lucid

Status for the main kernels, until today:

  • Lucid – Kernel Prep Week
  • Precise – Kernel Prep Week
  • Trusty – Kernel Prep Week
  • Utopic – Kernel Prep Week

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    The current cycle has ended. Waiting for the next cycle to start on Feb 08.
    cycle: 06-Feb through 28-Feb
    ====================================================================
    06-Feb Last day for kernel commits for this cycle
    08-Feb – 14-Feb Kernel prep week.
    15-Feb – 28-Feb Bug verification; Regression testing; Release


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more
facundo


These past months I put a fair bit of energy into movies... partly because I didn't watch that many series (I didn't start any new ones), and partly because I had plenty of time alone at home (I love taking advantage of that to settle down and watch a movie in peace, with the ambient light and the volume I feel like, without having to pause it every ten minutes for something).

Mind you, I also watched some of these with the family, :). And one at the cinema (it's obvious which one, right?).

  • Cloud Atlas: +1. A GREAT story, told in a fairly complex way. The movie is long, and Tom Hanks is in it, but even so it's worth it.
  • Elysium: +1. I liked it quite a bit; the mix of well-done futurism, both in the technology and in the portrayal of society, always appeals to me.
  • John Dies at the End: -1. So bizarre it gets boring... it has two or three interesting flashes, but it isn't worth its weight in air.
  • Movie 43: -0. Some of its shorts have funny parts, but most of it is just weirdness. If you like bizarre movies in the Cha Cha Cha vein, give it a chance. What I don't get is how movies like this round up so many first-rate actors...
  • Now You See Me: +1. Very good how the plot and its resolution come together... as in any magic act, watch out for the distractions...
  • Pearl Jam Twenty: +1. A very good movie! Not just the story of Pearl Jam itself, but how it is told, shown, etc.
  • RED 2: +0. Very violent, as I expected, but also fun (as I also expected); popcorn fare, and that's fine.
  • Red Lights: +1. Very good performances in an interesting story. Everything fits together.
  • Robot & Frank: +0. A nice comedy, or gentle drama. Interesting, not very deep, but fine.
  • Sin retorno: +0. A good story, good performances; it keeps you hooked until the end.
  • The East: +1. I liked it a lot, not only for the subject matter but for how it portrays the feelings and attitudes of the people involved.
  • The Hobbit: The Battle of the Five Armies: +0. It wraps things up well and has bridges to the rest of the story; the catch is that, in the end, it's just a children's story.
  • The Man with the Iron Fists: -0. If you like over-the-top Eastern martial-arts movies, watch it, it's very well made. Me, I can now confirm: never again one of these.
  • The Numbers Station: +0. A good action and suspense movie.
  • The Wolverine: -0. It's all a bit more of the same... the story isn't all bad, but bleh.
  • Upside Down: +0. A light romantic comedy, but very interesting as science fiction. Everything is a bit rushed at the end; otherwise it deserved a higher score...
  • World War Z: +0. Pretty good for a zombie movie...

Scene from Elysium


I don't have that many new ones noted down. There were more, really, from watching trailers, but while preparing this post I realized I had already noted four of them before :)

  • Avengers: Age of Ultron (2015; Action, Adventure, Fantasy, Sci-Fi) When Tony Stark tries to jumpstart a dormant peacekeeping program, things go awry and Earth's Mightiest Heroes, including Iron Man, Captain America, Thor, The Incredible Hulk, Black Widow and Hawkeye, are put to the ultimate test as the fate of the planet hangs in the balance. As the villainous Ultron emerges, it is up to The Avengers to stop him from enacting his terrible plans, and soon uneasy alliances and unexpected action pave the way for a global adventure. [D: Joss Whedon; A: Scarlett Johansson, Hayley Atwell, Chris Evans]
  • Clouds of Sils Maria (2014; Drama) At the peak of her international career, Maria Enders is asked to perform in a revival of the play that made her famous twenty years ago. But back then she played the role of Sigrid, an alluring young girl who disarms and eventually drives her boss Helena to suicide. Now she is being asked to step into the other role, that of the older Helena. She departs with her assistant to rehearse in Sils Maria, a remote region of the Alps. A young Hollywood starlet with a penchant for scandal is to take on the role of Sigrid, and Maria finds herself on the other side of the mirror, face to face with an ambiguously charming woman who is, in essence, an unsettling reflection of herself. [D: Olivier Assayas; A: Juliette Binoche, Kristen Stewart, Chloë Grace Moretz]
  • Fantastic Four (2015; Action, Fantasy, Sci-Fi) FANTASTIC FOUR, a contemporary re-imagining of Marvel's original and longest-running superhero team, centers on four young outsiders who teleport to an alternate and dangerous universe, which alters their physical form in shocking ways. Their lives irrevocably upended, the team must learn to harness their daunting new abilities and work together to save Earth from a former friend turned enemy. [D: Josh Trank; A: Kate Mara, Miles Teller, Toby Kebbell]
  • Home Sweet Hell (2015; Comedy, Drama) Don Champagne seems to have it all: a successful business, a perfect house, perfect kids and a perfect wife. Unfortunately, when his wife, Mona (Katherine Heigl), learns of Don's affair with a pretty new salesgirl (Jordana Brewster), this suburban slice of heaven spirals out of control. Don soon realizes that Mona will stop at nothing, including murder, to maintain their storybook life where "perception is everything". [D: Anthony Burns; A: Katherine Heigl, Jordana Brewster, Patrick Wilson]
  • Match (2014; Comedy, Drama, Music) Tobi Powell (Patrick Stewart), an aging Juilliard dance professor with a colorful and international past, is interviewed by a woman and her husband (Carla Gugino & Matthew Lillard) for a dissertation she's writing about the history of dance in New York in the 1960's. As the interview proceeds, it becomes increasingly clear that there are ulterior motives to the couple's visit. Explosive revelation is followed by questions about truth versus belief. MATCH is a story about responsibility, artistic commitment...and love. [D: Stephen Belber; A: Patrick Stewart, Carla Gugino, Matthew Lillard]
  • VANish (2015; Action, Crime, Horror, Thriller) A kidnapped young woman is forced on a road trip full of murder and mayhem that takes place entirely in her captor's getaway van. [D: Bryan Bockbrader; A: Maiara Walsh, Tony Todd, Danny Trejo]
  • Vice (2015; Action, Adventure, Sci-Fi, Thriller) Julian Michaels (Bruce Willis) has designed the ultimate resort: VICE, where anything goes and the customers can play out their wildest fantasies with artificial inhabitants who look, think and feel like humans. When an artificial (Ambyr Childers) becomes self-aware and escapes, she finds herself caught in the crossfire between Julian's mercenaries and a cop (Thomas Jane) who is hell-bent on shutting down Vice, and stopping the violence once and for all. [D: Brian A Miller; A: Ambyr Childers, Thomas Jane, Bryan Greenberg]
  • Danny Collins (2015; Comedy, Drama) Inspired by a true story, Al Pacino stars as aging 1970s rocker Danny Collins, who can't give up his hard-living ways. But when his manager (Christopher Plummer) uncovers a 40 year-old undelivered letter written to him by John Lennon, he decides to change course and embarks on a heartfelt journey to rediscover his family, find true love and begin a second act. [D: Dan Fogelman; A: Melissa Benoist, Al Pacino, Jennifer Garner]
  • Ex Machina (2015; Drama, Sci-Fi) Caleb, a 26 year old coder at the world's largest internet company, wins a competition to spend a week at a private mountain retreat belonging to Nathan, the reclusive CEO of the company. But when Caleb arrives at the remote location he finds that he will have to participate in a strange and fascinating experiment in which he must interact with the world's first true artificial intelligence, housed in the body of a beautiful robot girl. [D: Alex Garland; A: Oscar Isaac, Alicia Vikander, Domhnall Gleeson]
  • Into the Woods (2014; Adventure, Comedy, Fantasy, Musical) Into the Woods is a modern twist on the beloved Brothers Grimm fairy tales in a musical format that follows the classic tales of Cinderella, Little Red Riding Hood, Jack and the Beanstalk, and Rapunzel-all tied together by an original story involving a baker and his wife, their wish to begin a family and their interaction with the witch who has put a curse on them. [D: Rob Marshall; A: Anna Kendrick, Daniel Huttlestone, James Corden]
  • Pan (2015; Adventure, Comedy, Family, Fantasy) The story of an orphan who is spirited away to the magical Neverland. There, he finds both fun and dangers, and ultimately discovers his destiny -- to become the hero who will be forever known as Peter Pan. [D: Joe Wright; A: Amanda Seyfried, Rooney Mara, Hugh Jackman]
  • Predestination (2014; Action, Drama, Mystery, Sci-Fi, Thriller) PREDESTINATION chronicles the life of a Temporal Agent sent on an intricate series of time-travel journeys designed to ensure the continuation of his law enforcement career for all eternity. Now, on his final assignment, the Agent must pursue the one criminal that has eluded him throughout time. [D: Michael Spierig, Peter Spierig; A: Ethan Hawke, Sarah Snook, Christopher Kirby]
  • Terminator Genisys (2015; Action, Adventure, Sci-Fi, Thriller) After finding himself in a new time-line, Kyle Reese teams up with John Connor's mother Sarah and an aging terminator to try and stop the one thing that the future fears, "Judgement Day". [D: Alan Taylor; A: Emilia Clarke, J.K. Simmons, Arnold Schwarzenegger]

Scene from the Ex Machina trailer


Finally, the count of pending movies by date:

(Jan-2009)    1
(May-2009)
(Oct-2009)
(Mar-2010)   16   4
(Sep-2010)   18  18   9   2   1
(Dec-2010)   12  12  12   5   1
(Apr-2011)   23  23  23  22  17   4
(Aug-2011)   11  11  11  11  11  11   4
(Jan-2012)   21  18  17  17  17  17  11   3
(Jul-2012)   15  15  15  15  15  15  14  11
(Nov-2012)       12  12  11  11  11  11  11   6
(Feb-2013)           19  19  16  15  14  14   8   2
(Jun-2013)               19  18  16  15  15  15  11
(Sep-2013)                   18  18  18  18  17  16
(Dec-2013)                       14  14  12  12  12
(Apr-2014)                            9   9   8   8
(Jul-2014)                               10  10  10
(Nov-2014)                                   24  22
(Feb-2015)                                       13
Total:      117 113 118 121 125 121 110 103 100  94

Read more
Prakash

If you have been eyeing the OnePlus One, there's good news!

Buy the OnePlus One without an invite at Amazon India.

This deal runs tomorrow only: Tuesday, 10th Feb, at 10 AM.

 


Read more
Prakash

Raspberry Pi 2 is here.

  • A 900MHz quad-core ARM Cortex-A7 CPU
  • 1GB LPDDR2 SDRAM
  • Compatible with Raspberry Pi 1
  • $35

And it now runs Snappy Ubuntu Core.

 


Read more
Nicholas Skaggs

Unity 8 Desktop Testing

While much of the excitement around unity8 and the next generation of ubuntu has revolved around mobile, again I'd like to point your attention to the desktop. The unity8 desktop is starting to evolve and gain more "desktopy" features. This includes things like window management and keyboard shortcuts for unity8, and MIR enhancements with things like native library support for rendering and support for X11 applications.

I hosted a session with Stephen Webb at UOS last year where we discussed the status of running unity8 on the desktop. During the session I mentioned my personal goal of having some brave community members running unity8 as their default desktop this cycle. Now, it's still a bit early to realize that goal, but it is getting much closer! To help get there, I would encourage you to have a look at unity8 on your desktop and start running it. The development teams are ready for feedback and eager to get it into shape on the desktop.

So how do you get it? Check out the unity8 desktop wiki page which explains how you can run unity8, even if you are on a stable version of ubuntu like the LTS. Install it locally in an lxc container and you can login to a unity8 desktop on your current pc. Check it out! After you finish playing, please don't forget to file bugs for anything you might find. The wiki page has you covered there as well. Enjoy unity8!

Read more
Daniel Holbach

Did you always want to write an app for Ubuntu and thought that HTML5 might be a good choice? Well picked!


We now have training materials up on developer.ubuntu.com which will get you started in all things related to Ubuntu devices. The great thing is that you just write this app once and it’ll work on the phone, the desktop and whichever device Ubuntu is going to run next on.

The example used in the materials is an RSS reader written by my friend Adnane Belmadiaf. If you go through the steps one by one you’ll notice how easy it is to get stuff done. :-)

This is also a good workshop you could give in your LUG or LoCo or elsewhere. Maybe next weekend at Ubuntu Global Jam too? :-)

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20150203 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Vivid Development Kernel (jsalisbury for ogasawara)

Our Vivid kernel has been rebased to v3.18.4 upstream stable. It’s
been uploaded to the archive, 3.18.0-12.13. Please test and let us
know your results. We will be rebasing to v3.18.5 shortly and uploading
as well. We’ll also be rebasing our unstable branch to v3.19-rc7 and will
upload to our ckt PPA shortly.
-----
Important upcoming dates:
Thurs Feb 5 – 14.04.2 Point Release (~2 days away)
Thurs Feb 26 – Beta 1 Freeze (~3 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Utopic/Trusty/Precise/Lucid

Status for the main kernels, until today:

  • Lucid – None
  • Precise – None
  • Trusty – None
  • Utopic – None

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    The current cycle has ended. Waiting for the next cycle to start on Feb 08.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more
Dustin Kirkland

Gratuitous picture of my pets, the day after we rescued them
The PetName libraries (Shell, Python, Golang) can generate infinite combinations of human readable UUIDs


Some Background

In March 2014, when I first started looking after MAAS as a product manager, I raised a minor feature request in Bug #1287224, noting that the random, 5-character hostnames that MAAS generates are not ideal. You can't read them or pronounce them or remember them easily. I'm talking about hostnames like: sldna, xwknd, hwrdz or wkrpb. From that perspective, they're not very friendly. Certainly not very Ubuntu.

We're not alone in that respect. Amazon, like most virtual machine and container systems, generates forgettable instance names like i-15a4417c.


Meanwhile, there is a reasonably well-known concept -- Zooko's Triangle -- which says that names should be:
  • Human-meaningful: The quality of meaningfulness and memorability to the users of the naming system. Domain names and nicknaming are naming systems that are highly memorable
  • Decentralized: The lack of a centralized authority for determining the meaning of a name. Instead, measures such as a Web of trust are used.
  • Secure: The quality that there is one, unique and specific entity to which the name maps. For instance, domain names are unique because there is just one party able to prove that they are the owner of each domain name.
And, of course we know what XKCD has to say on a somewhat similar matter :-)

So I proposed a few different ways of automatically generating those names, modeled mostly after Ubuntu's own beloved code-naming scheme -- Adjective Animal. To get the number of combinations high enough to cover any reasonable MAAS deployment, though, we used Adjective Noun instead of Adjective Animal.

I collected an Adjective list and a Noun list from a blog run by moms, in the interest of having a nice, soft, friendly, non-offensive source of words.

For the most part, the feature served its purpose. We now get memorable, pronounceable names. However, we get a few oddballs in there from time to time. Most are humorous. But some combinations would prove, in fact, to be inappropriate, or perhaps even offensive to some people.

Accepting that, I started thinking about other solutions.

In the meantime, I realized that Docker had recently launched something similar, their NamesGenerator, which pairs an Adjective with a Famous Scientist's Last Name (except they have explicitly blacklisted boring_wozniak, because "Steve Wozniak is not boring", of course!).
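Docker's scheme is easy to sketch. A minimal Python illustration follows; the word lists here are tiny stand-ins, since the real NamesGenerator hard-codes its lists in Docker's Golang source:

```python
import random

# Tiny stand-in word lists; the real NamesGenerator ships roughly
# 57 adjectives and 76 famous scientists' surnames.
ADJECTIVES = ["boring", "clever", "elegant", "furious"]
SURNAMES = ["wozniak", "curie", "turing", "lovelace"]

def docker_style_name(rng=random):
    """Pair a random adjective with a surname, retrying on the one
    blacklisted combination ("Steve Wozniak is not boring")."""
    while True:
        name = "%s_%s" % (rng.choice(ADJECTIVES), rng.choice(SURNAMES))
        if name != "boring_wozniak":
            return name
```

The retry loop is the whole trick: generate, check against the blacklist, and try again on a hit.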


Similarly, GitHub itself now also "suggests" random repo names.



I liked one part of the Docker approach better -- the use of proper names, rather than random nouns.

On the other hand, their approach is hard-coded into the Docker Golang source itself, and not easily usable or portable elsewhere.

Moreover, there are only a few dozen Adjectives (57) and Names (76), yielding only about 4K combinations (4,332) -- not nearly enough for MAAS's purposes, where we're shooting for 16M+ with minimal collisions (ie, covering a Class A network).

Introducing the PetName Libraries

I decided to scrap the Nouns list, and instead build a Names list. I started with Last Names (like Docker), but instead focused on First Names, and built a list of about 6,000 names from public census data.  I also built a new list of nearly 38,000 Adjectives.

The combination actually works pretty well! While smelly-Susan isn't particularly charming, it's certainly not an ad hominem attack targeted at any particular Susan! That 6,000 x 38,000 gives us well over 228 million unique combinations!

Moreover, I also thought about how I could actually make it infinitely extensible... The simple rules of English allow Adjectives to modify Nouns, while Adverbs can recursively modify other Adverbs or Adjectives.   How convenient!

So I built a word list of Adverbs (13,000) as well, and added support for specifying the "number" of words in a PetName.
  1. If you want 1, you get a random Name 
  2. If you want 2, you get a random Adjective followed by a Name 
  3. If you want 3 or more, you get N-2 Adverbs, an Adjective and a Name 
Oh, and the separator is now optional, and can be any character or string, with a default of a hyphen, "-".
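Those rules fit in a few lines of Python. This is a minimal sketch with toy stand-in lists (the real petname packages ship lists thousands of entries long; the words below are borrowed from the example outputs later in this post):

```python
import random

# Toy stand-ins for the real word lists (~13,000 adverbs,
# ~38,000 adjectives, ~6,000 names).
ADVERBS = ["happily", "listlessly", "onwardly"]
ADJECTIVES = ["itchy", "flaky", "vibrant"]
NAMES = ["Marvin", "Megan", "Chandler"]

def petname(words=2, sep="-", rng=random):
    """1 word: a Name; 2 words: an Adjective and a Name;
    3+ words: (N-2) Adverbs, an Adjective and a Name,
    joined by an arbitrary separator string."""
    if words <= 1:
        parts = [rng.choice(NAMES)]
    else:
        parts = ([rng.choice(ADVERBS) for _ in range(words - 2)]
                 + [rng.choice(ADJECTIVES), rng.choice(NAMES)])
    return sep.join(parts)
```

Since adverbs recursively modify adjectives or other adverbs, the same three lists extend to any word count.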

In fact:
  • 2 words will generate over 221 million unique combinations, over 2^27 combinations
  • 3 words will generate over 2.8 trillion unique combinations, over 2^41 combinations (more than 32-bit space)
  • 4 words can generate over 2^55 combinations
  • 5 words can generate over 2^68 combinations (more than 64-bit space)
Interestingly, you need 10 words to cover 128-bit space!  So it's

unstoutly-clashingly-assentingly-overimpressibly-nonpermissibly-unfluently-chimerically-frolicly-irrational-wonda

versus

b9643037-4a79-412c-b7fc-80baa7233a31
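The arithmetic behind those counts is easy to check, assuming the list sizes given above (13,000 adverbs, 38,000 adjectives, 6,000 names):

```python
import math

ADVERBS, ADJECTIVES, NAMES = 13000, 38000, 6000

def bits(words):
    """Entropy in bits of an N-word petname (N >= 2):
    (N-2) adverbs, one adjective, one name."""
    combos = ADVERBS ** (words - 2) * ADJECTIVES * NAMES
    return math.log2(combos)

# 2 words ~ 2^27, 3 ~ 2^41, 5 clears 64 bits, and 10 is the
# smallest word count that clears 128 bits (9 falls just short).
```

Each extra adverb adds about log2(13000), roughly 13.7 bits, which is why it takes 10 words to pass the 128-bit mark.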

Shell

So once the algorithm was spec'd out, I built and packaged a simple shell utility and text word lists, called petname, which are published at:
The packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:petname/ppa
$ sudo apt-get update

And:
$ sudo apt-get install petname
$ petname
itchy-Marvin
$ petname -w 3
listlessly-easygoing-Radia
$ petname -s ":" -w 5
onwardly:unflinchingly:debonairly:vibrant:Chandler

Python

That's only really useful from the command line, though. In MAAS, we'd want this in a native Python library. So it was really easy to create python-petname, source now published at:
The packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:python-petname/ppa
$ sudo apt-get update

And:
$ sudo apt-get install python-petname
$ python-petname
flaky-Megan
$ python-petname -w 4
mercifully-grimly-fruitful-Salma
$ python-petname -s "" -w 2
filthyLaurel

Using it in your own Python code looks as simple as this:

$ python
>>> import petname
>>> foo = petname.Generate(3, "_")
>>> print(foo)
boomingly_tangible_Mikayla

Golang


In the way that NamesGenerator is useful to Docker, I thought a Golang library might be useful for us in LXD (and perhaps even usable by Docker or others too), so I created:
Of course you can use "go get" to fetch the Golang package:

$ export GOPATH=$HOME/go
$ mkdir -p $GOPATH
$ export PATH=$PATH:$GOPATH/bin
$ go get github.com/dustinkirkland/golang-petname

And also, the packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:golang-petname/ppa
$ sudo apt-get update

And:
$ sudo apt-get install golang-petname
$ golang-petname
quarrelsome-Cullen
$ golang-petname -words=1
Vivian
$ golang-petname -separator="|" -words=10
snobbily|oracularly|contemptuously|discordantly|lachrymosely|afterwards|coquettishly|politely|elaborate|Samir

Using it in your own Golang code looks as simple as this:

package main

import (
    "flag"
    "fmt"
    "math/rand"
    "time"

    "github.com/dustinkirkland/golang-petname"
)

func main() {
    flag.Parse()
    rand.Seed(time.Now().UnixNano())
    fmt.Println(petname.Generate(2, ""))
}
Gratuitous picture of my pets, 7 years later.
Cheers,
happily-hacking-Dustin

Read more
XiaoGuo, Liu

Creating an online account for an Ubuntu Scope

Many web services require signing in to a personal account before you can access that account's data. The article "How to create a Weibo Scope using online accounts" explains in detail how to use the online account facility provided by the Ubuntu platform to access personal Weibo data from within a Scope and display it. It includes a simple example program that fetches pictures and information from a personal Weibo account and displays them.

 

       

Read more
Prakash

Who says you can’t have fast, good and cheap? The Document Foundation’s latest release of the most popular open-source office suite, LibreOffice 4.4, is quite fast on Linux, Mac OS X, and Windows; it works well on all three desktop operating systems, and it won’t cost you a penny.

Read More: http://www.zdnet.com/article/the-best-open-source-office-suite-libreoffice-4-4-gets-new-release/

Download Here: http://www.libreoffice.org/download/libreoffice-fresh/?version=4.4.0

Read more
beuno

After a few weeks of being coffee-deprived, I decided to disassemble my espresso machine and see if I could figure out why it leaked water while on, and didn't have enough pressure to produce drinkable coffee.
I live a bit on the edge of where other people do, so my water supply is from my own pump, 40 meters into the ground. It's as hard as water gets. That was my main suspicion. I read a bit about it on the interwebz and learned about descaling, which I'd never heard about. I tried some of the home-made potions but nothing seemed to work.
Long story short, I'm enjoying a perfect espresso as I write this.

I wanted to share a bit with the internet people about what was hard to solve, and couldn't find any instructions on. All I really did was disassemble the whole thing completely, part by part, clean them, and make sure to put it back together tightening everything that seemed to need pressure.
I don't have the time and energy to put together a step-by-step walk-through, so here's the 2 tips I can give you:

1) Remove ALL the screws. That'll get you 95% of the way there. You'll need a Phillips head, a Torx head, a flat head and some small-ish pliers.
2) The knob that releases the steam looks unremovable and blocks you from getting the top lid off. It doesn't screw off, you just need to pull upwards with some strength and care. It comes off cleanly and will go back on easily. Here's a picture to prove it:

DeLongi eco310.r

Hope this helps somebody!

Read more
Daniel Holbach

In a recent conversation we thought it’d be a good idea to share tips and tricks, suggestions and ideas with users of Ubuntu devices. Because it’d help to have it available immediately on the phone, an app could be a good idea.

I had a quick look at it and after some discussion with Rouven in my office space, it looked like hyde could fit the bill nicely. To edit the content, just write a bit of Markdown, generate the HTML (nice and readable templates – great!) and done.

Unfortunately I’m not a CSS or HTML wizard, so if you could help out making it more Ubuntu-y, that’d be great! Also: if you’re interested in adding content, that’d be great.

I pushed the code for it up on Launchpad, there are also the first bugs open already. Let’s make it look pretty and let’s share our knowledge with new Ubuntu devices users. :-)

Oh, and let’s see that we translate the content as well! :-)

Read more
jdstrand

Most of this has been discussed on mailing lists, blog entries, etc, while developing Ubuntu Touch, but I wanted to write up something that ties together these conversations for Snappy. This will provide background for the conversations surrounding hardware access for snaps that will be happening soon on the snappy-devel mailing list.

Background

Ubuntu Touch has several goals that all apply to Snappy:

  • we want system-image upgrades
  • we want to replace the distro archive model with an app store model for Snappy systems
  • we want developers to be able to get their apps to users quickly
  • we want a dependable application lifecycle
  • we want the system to be easy to understand and to develop on
  • we want the system to be secure
  • we want an app trust model where users are in control and express that control in tasteful, easy to understand ways

Snappy adds a few things to the above (that pertain to this conversation):

  • we want the system to be bulletproof (transactional updates with rollbacks)
  • we want the system to be easy to use for system builders
  • we want the system to be easy to use and understand for admins

Let’s look at what all these mean more closely.

system-image upgrades

  • we want system-image upgrades
  • we want the system to be bulletproof (transactional updates with rollbacks)

We want system-image upgrades so updates are fast, reliable and so people (users, admins, snappy developers, system builders, etc) always know what they have and can depend on it being there. In addition, if an upgrade goes bad, we want a mechanism to be able to rollback the system to a known good state. In order to achieve this, apps need to work within the system and live in their own area and not modify the system in unpredictable ways. The Snappy FHS is designed for this and the security policy enforces that apps follow it. This protects us from malware, sure, but at least as importantly, it protects us from programming errors and well-intentioned clever people who might accidentally break the Snappy promise.

app store

  • we want to replace the distro archive model with an app store model
  • we want developers to be able to get their apps to users quickly

Ubuntu is a fantastic distribution and we have a wonderfully rich archive of software that is refreshed on a cadence. However, the traditional distro model has a number of drawbacks, and arguably the most important one is that software developers have an extremely high barrier to overcome to get their software into users' hands on their own time-frame. The app store model greatly helps developers and users desiring new software because it gives developers the freedom and ability to get their software out there quickly and easily, which is why Ubuntu Touch is doing this now.

In order to enable developers in the Ubuntu app store, we’ve developed a system where a developer can upload software and have it available to users in seconds with no human review, intervention or snags. We also want users to be able to trust what’s in Ubuntu’s store, so we’ve created store policies that understand the Ubuntu snappy system such that apps do not require any manual review so long as the developer follows the rules. However, the Ubuntu Core system itself is completely flexible -- people can install apps that are tightly confined, loosely confined, unconfined, whatever (more on this, below). In this manner, people can develop snaps for their own needs and distribute them however they want.

It is the Ubuntu store policy that dictates what is in the store. The existing store policy is in place to improve the situation, and is based on our experiences with the traditional distro model and attempts to build app store-like experiences on top of it (eg, MyApps).

application lifecycle

  • dependable application lifecycle

This has not been discussed as much with Snappy for Ubuntu Core, but Touch needs to have a good application lifecycle model such that apps cannot run unconstrained and unpredictably in the background. In other words, we want to avoid problems with battery drain and slow systems on Touch. I think we’ve done a good job so far on Touch, and this story is continuing to evolve.

(I mention application lifecycle in this conversation for completeness and because application lifecycle and security work together via the app’s application id)

security

  • we want the system to be secure
  • we want an app trust model where users are in control and express that control in tasteful, easy to understand ways

Everyone wants a system that they trust and that is secure, and security is one of the core tenets of Snappy systems. For Ubuntu Touch, we’ve created a system that is secure, that is easy to use and understand by users, and that still honors relevant, meaningful Linux traditions. For Snappy, we’ll be adding several additional security features (eg, seccomp, controlled abstract socket communication, firewalling, etc).

Our security story and app store policies give us something that is between Apple and Google. We have a strong security story that has a number of similarities to Apple, but a lightweight store policy akin to Google Play. In addition to that, our trust model is that apps not needing manual review are untrusted by the OS and have limited access to the system. On Touch we use tasteful, contextual prompting so the user may trust the apps to do things beyond what the OS allows on its own (a simple example: an app needs access to location; the user is prompted at time of use whether the app may access it, the user answers, and the decision is remembered for next time).

Snappy for Ubuntu Core is different not only because the UI supports a CLI, but also because we’ve defined a Snappy for Ubuntu Core user that is able to run the ‘snappy’ command as someone who is an admin, a system builder, a developer and/or someone otherwise knowledgeable enough to make a more informed trust decision. (This will come up again later, below)

easy to use

  • we want the system to be easy to understand and to develop on
  • we want the system to be easy to use for system builders
  • we want the system to be easy to use and understand for admins

We want a system that is easy to use and understand. It is key that developers are able to develop on it, that system builders are able to get their work done, and that admins can install and use the apps from the store.

For Ubuntu Touch, we’ve made a system that is easy to understand and to develop on with a simple declarative permissions model. We’ll refine that for Snappy and make it easy to develop on too. Remember, the security policy is there not just so we can be ‘super secure’ but because it is what gives us the assurances needed for system upgrades, a safe app store and an altogether bulletproof system.

As mentioned, the system we have designed is super flexible. Specifically, the underlying system supports:

  1. apps working wholly within the security policy (aka, ‘common’ security policy groups and templates)
  2. apps declaring specific exceptions to the security policy
  3. apps declaring to use restricted security policy
  4. apps declaring to run (effectively) unconfined
  5. apps shipping hand-crafted policy (that can be strict or lenient)

(Keep in mind the Ubuntu App Store policy will auto-accept apps falling under ‘1’ and trigger manual review for the others)
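As a rough illustration, these choices surface in a snap's packaging metadata. The key names below are a sketch based on the snappy packaging format of the time (caps, security-template, security-policy) and may change:

services:
  - name: myservice
    start: bin/myservice
    caps:
      - networking                   # option '1': stay within common policy groups
    # security-template: unconfined  # option '4': run effectively unconfined
    # security-policy:               # option '5': ship hand-crafted policy
    #   apparmor: meta/myservice.apparmor

Only the first form would pass automated store review; the commented-out alternatives would trigger manual review.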

The above all works today (though it isn’t always friendly; we’re working on that) and the developer is in control. As such, Snappy developers have a plethora of options and can create snaps with security policy for their needs. When the developer wants to ship the app and make it available to all Snappy users via the Ubuntu App Store, the developer may choose to work within the system to have automated reviews, or choose not to and manage the process via manual reviews/a commercial relationship with Canonical.

Moving forward

The above works really well for Ubuntu Touch, but today there is too much friction with regard to hardware access. We will make this experience better without compromising on any of our goals. How do we put this all together, today, so people can get stuff done with snappy without sacrificing our goals, making it harder on ourselves in the future or otherwise opening Pandora’s box? We don’t want to relax our security policy, because then we couldn’t make the bulletproof assurances we are striving for, and it would be hard to tighten the security later. We could add some temporary security policy that grants only certain accesses (eg, serial devices) but, while useful, this is too inflexible. We also don’t want apps to declare the accesses themselves in a way that automatically adds the necessary security policy, because this (potentially) privileged access is then hidden from the Snappy for Ubuntu Core user.

The answer is simple when we remember that the Snappy for Ubuntu Core user (ie, the one who is able to run the snappy command) is knowledgeable enough to make the trust decision for giving an app access to hardware. In other words, let the admin/developer/system builder be in control.

immediate term

The first thing we are going to do is unblock people and adjust snappy to give the snappy core user the ability to add specific device access to snap-specific security policy. In essence you’ll install a snap, then run a command to give the snap access to a particular device, then you’re done. This simple feature will unblock developers and snappy users immediately while still supporting our trust-model and goals fully. Plus it will be worth implementing since we will likely always want to support this for maximum flexibility and portability (since people can use traditional Linux APIs).
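As a sketch of how this could look from the command line (the subcommand name and device path here are hypothetical, not a committed interface):

sudo snappy install myapp.sideload
sudo snappy hw-assign myapp.sideload /dev/ttyUSB0

After the second command, the snap-specific security policy would permit access to that one device and nothing more.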

The user experience for this will be discussed and refined on the mailing list in the coming days.

short term

After that, we’ll build on this and explore ways to make the developer and user experience better through integration with the OEM part and ways of interacting with the underlying system, so that the user doesn’t necessarily have to know the device name to add but can instead be given smart choices (this can have tie-ins to the web interface for snappy too). We’ll want to be thinking about hotpluggable devices as well.

Since this all builds on the concept of the immediate term solution, it also supports our trust-model and goals fully and is relatively easy to implement.

future

Once we have the above in place, we should have a reasonable experience for snaps needing traditional device access. This will give us time to evaluate how people are accessing hardware and see if we can make things even better by using frameworks and/or a hardware abstraction layer. In this manner, snaps can program to an easy to use API and the system can mediate access to the underlying hardware via that API.


Filed under: canonical, security, ubuntu, ubuntu-server, uncategorized

Read more
Sergio Schvezov

Preliminary support for dtb override from OEM snaps

Today, support landed in the always-in-motion PPA ppa:snappy-dev/tools for overriding the dtb provided by the platform's device part with one provided by the OEM snap.

The package.yaml for the oem snap has been extended a bit to support this; an example follows for extending the am335x-boneblack platform.


name: mydevice.sergiusens
vendor: sergiusens
icon: meta/icon.png
version: 1.0
type: oem

branding:
    name: My device
    subname: Sergiusens Inc.

store:
    oem-key: 123456

hardware:
    dtb: mydtb.dtb

The hardware/dtb key in the yaml holds the path to the dtb within the package, so in this case I put mydtb.dtb in the root of the snap.

After that it’s just a snappy build away:

snappy build .

In order to get this properly provisioned, we first need the latest ubuntu-device-flash from ppa:snappy-dev/tools, so let’s get it:

sudo add-apt-repository ppa:snappy-dev/tools 
sudo apt update
sudo apt install ubuntu-device-flash

And now we are ready to flash

sudo ubuntu-device-flash core \
    --platform am335x-boneblack \
    --size 4 \
    --install mydevice_sergiusens_1.0_all.snap \
    --output bbb_custom.img

If everything went well, the boot partition will hold your custom dtb instead of the default one. Specifying --platform is required for this.

Please note that some of these things described here are subject to change.

Read more
Daniel Holbach

What do Kinshasa, Omsk, Paris, Mexico City, Eugene, Denver, Tempe, Catonsville, Fairfax, Dania Beach, San Francisco and various places on the internet have in common?

Right, they’re all participating in the Ubuntu Global Jam on the weekend of 6-8 February! See the full list of teams that are part of the event here. (Please add yours if you haven’t already.)

What’s great about the event is that there are just two basic aims:

  1. do something with Ubuntu
  2. get together and have fun!

What I also like a lot is that there’s always something new to do. Here are just 3 quick examples of that:

App Development Schools

We have put quite a bit of work into putting training materials together; now you can take them out to your team and start writing Ubuntu apps easily.

Snappy

As one tech news article said “Robots embrace Ubuntu as it invades the internet of things“. Ubuntu’s newest foray, making it possible to bring a stable and secure OS to small devices where you can focus on apps and functionality, is attracting a number of folks on the mailing lists (snappy-devel, snappy-app-devel)  and elsewhere. Check out the mailing lists and the snappy site to find out more and have a play with it.

Unity8 on Desktop

Convergence is happening, and what’s working great on the phone is making its way onto the desktop. You can help make this happen by installing and testing it. Your feedback will be much appreciated.


Read more
Ben Howard

One of the perennial problems in the Cloud is knowing which image is the most current and where to find it. Some Clouds provide a nice GUI console, an API, or some combination. But what has been missing is a "dashboard" showing Ubuntu across multiple Clouds.


https://cloud-images.ubuntu.com/locator
In that light, I am pleased to announce that we have a new beta Cloud Image Finder. This page shows where official Ubuntu images are available. As with all betas, we have some kinks to work out, like gathering up links for our Cloud Partners (so clicking an Image ID launches an image). I envision that in the future this locator page will be the default landing page for our Cloud Image page.



The need for this page became painfully apparent yesterday as I was working through the fallout of the Ghost vulnerability (aka CVE-2015-0235). The Cloud Image team had spent a good amount of time pushing our images to AWS, Azure, GCE and Joyent, and then notifying our partners like Brightbox, DreamCompute, CloudSigma and VMware of new builds. I realized that we needed a single place for our users to just look and see where the builds are available. And so I hacked up the EC2 locator page to display other clouds.

Please note: this new page only shows stable releases. We push a lot of images and did not want to confuse things by showing betas, alphas, dailies or development builds. Rather, this page will only show images that have been put through the complete QA process and are ready for production workloads.

This new locator page is backed by Simple Streams, which is our machine-formatted data service. Simple Streams provides a way of locating images in a uniform way across clouds. Essentially, our new locator page is just a viewer of the Simple Streams data.
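The same data the page renders can be queried directly. For example, with the simplestreams tools installed, something like the following looks up released AWS images (the exact stream name and filter keys are my best guess at the published streams and may differ):

sstream-query https://cloud-images.ubuntu.com/releases/streams/v1/com.ubuntu.cloud:released:aws.sjson release=trusty arch=amd64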

Hopefully our users will find this new page useful. Feedback is always welcome. Please feel free to drop me a line (utlemming @ ubuntu dot com). 

Read more
Ben Howard

A few years ago, when our fine friends on the kernel team introduced the idea of the "hardware enablement" (HWE) kernel, those of us in the Cloud world looked at it as a curiosity. We thought that, by and large, the HWE kernel would not be needed or wanted for virtual Cloud instances.

And we were wrong.

So wrong, in fact, that the HWE kernel has found its way into the Vagrant Cloud Images, VMware's vCHS, and Google Compute Engine as the default kernel for the Certified Images. The main reason for these requests is that virtual hardware moves at a fairly quick pace: unlike traditional hardware, virtual hardware can be fixed and patched at the speed that software can be deployed.

The feedback regarding Azure has been the same: users and Microsoft have consistently asked for the HWE kernel. Microsoft has validated that Ubuntu 14.04 running the HWE kernel (3.16) on Windows Azure passes their validation testing. In our own testing, we have confirmed that the 3.16 kernel works quite well in Azure.

For Azure users, using the 3.16 HWE kernel brings SMB 2.1 copy file support and updates LIS drivers.

Therefore, starting with the latest Windows Azure image [1], all the Ubuntu 14.04 images will track the latest hardware enablement kernel. That means that all the goodness in Ubuntu 14.10's kernel will be the default for 14.04 users launching our official images on Windows Azure.

If you want to install the HWE (lts-utopic) kernel on your existing instance(s), simply run:

sudo apt-get update
sudo apt-get install linux-image-virtual-lts-utopic linux-lts-utopic-cloud-tools-common walinuxagent
sudo reboot


[1] b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20150123-en-us-30GB

Read more
Robin Winslow

In the design team we keep some projects in Launchpad (as canonical-webmonkeys) and some projects in GitHub (as UbuntuDesign), meaning we work in both Bazaar and Git.

The need to synchronise Github to Launchpad

Some of our Github projects also need to be stored in Launchpad, as some of our systems only have access to Launchpad repositories.

Initially we were converting these projects manually at regular intervals, but this quickly became too cumbersome.

The Bazaar synchroniser

To manage this we created a simple web-service project to synchronise Git projects to Bazaar. This script basically automates the techniques described in our previous article to pull down the Github repository, convert it to Bazaar and push it up to Launchpad at a specified location.

It’s a simple Python WSGI app which can be run directly or through a server that understands WSGI like gunicorn.

Setting up the server

Here’s a guide to setting up our bzr-sync project on a server somewhere to sync Github to Launchpad.

System dependencies

Install necessary system dependencies:
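For example, on Ubuntu (the package list is my best guess at what the project needs: git and bzr for the two version control systems, bzr-fastimport for the conversion, and pip and gunicorn to run the app):

sudo apt-get update
sudo apt-get install git bzr bzr-fastimport python-pip gunicorn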

User permissions

First off, you’ll have to make sure you set up a user on whichever server is to run this service which has read access to your Github projects and write access to your Launchpad projects:
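For example (the user name and identities below are illustrative; the key steps are an SSH key that Github accepts and a Launchpad login for bzr):

sudo adduser --system --group bzrsync
sudo -u bzrsync ssh-keygen -t rsa
# Add the public key to your Github and Launchpad accounts, then:
sudo -u bzrsync bzr whoami "Web Team <webteam@example.com>"
sudo -u bzrsync bzr launchpad-login your-lp-id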

Cloning the project

Then you should clone the project and install dependencies. We placed it at /srv/bzr-sync but you can put it anywhere:
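For example (the repository URL is illustrative):

sudo git clone https://github.com/UbuntuDesign/bzr-sync.git /srv/bzr-sync
sudo pip install -r /srv/bzr-sync/requirements.txt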

Preparing gunicorn

We should serve this over HTTPS, so our auth_token will remain secret. This means you’ll need an SSL certificate keyfile and certfile. You should get one from a certificate authority, but for testing you could just generate a self-signed certificate.

Put your certificate files somewhere accessible (like /srv/bzr-sync/certs/), and then test out running your server with gunicorn:
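For example (the WSGI module name app:app is illustrative; gunicorn's --certfile and --keyfile options enable HTTPS):

cd /srv/bzr-sync
gunicorn app:app --bind 0.0.0.0:443 \
    --certfile /srv/bzr-sync/certs/cert.pem \
    --keyfile /srv/bzr-sync/certs/key.pem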

Try out the sync server

You should now be able to synchronise a Github repository with Launchpad by pointing your browser at:

https://{server-domain}/?token={secret-token}&git_url={url-of-github-repository}&bzr_url=lp:{launchpad-branch-location}

You should be able to see the progress of the conversion as command-line output from the above gunicorn command.

Add upstart job

Rather than running the server directly, we can set up an upstart job to manage the process. This way the bzr-sync service will restart if the server restarts.

Here’s an example of an upstart job, which we placed at /etc/init/bzr-sync.conf:
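A minimal sketch of such a job (paths and the WSGI module name app:app are illustrative):

description "bzr-sync: synchronise Github repositories to Launchpad"
start on runlevel [2345]
stop on runlevel [016]
respawn
chdir /srv/bzr-sync
exec gunicorn app:app --bind 0.0.0.0:443 --certfile /srv/bzr-sync/certs/cert.pem --keyfile /srv/bzr-sync/certs/key.pem >> /etc/upstart/bzr-sync.log 2>&1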

You can now start the bzr-sync server as a service:
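For example, with the job file installed at /etc/init/bzr-sync.conf:

sudo service bzr-sync start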

And output will be logged to /etc/upstart/bzr-sync.log.

Setting up Github projects

Now to use this sync server to automatically synchronise your Github projects to Launchpad, you simply need to add a post-commit webhook to ping a URL of the form:

https://{server-domain}/?token={secret-token}&git_url={url-of-github-repository}&bzr_url=lp:{launchpad-branch-location}

Creating a webhook


In your repository settings, select “Webhooks and Services”, then “Add webhook”, and enter the following information:

  • Payload URL: https://{server-domain}/?token={secret-token}&git_url={url-of-github-repository}&bzr_url=lp:{launchpad-branch-location}
  • Content type: “application/json”
  • Secret: -leave blank-
  • Select Just the push event
  • Tick Active
Saving a webhook


NB: Notice the Disable SSL verification button. By default, the hook will only work if your server has a valid certificate. If you are testing with a self-signed one then you’ll need to disable this SSL verification.

Now whenever you commit to your Github repository, Github should ping the URL, and the server should synchronise your repository into Launchpad.

Read more
facundo

Holidays in the South


This summer we returned to Piedra del Águila, on holiday but also to visit my sister and brother-in-law, who have been living there for a few years now.

The trip is long, especially for the kids, but doing it in two legs (that is, over two days, sleeping at a hotel halfway) makes it bearable. It’s not something to do often, though, which is partly why three years had passed since the last visit.

On that occasion we pitched a tent on their land and helped them start building the bedrooms. This time the bedrooms were completely finished and livable, plus the workshop where the printing press runs, plus the garage (which we used as our bedroom), plus a whole lot of comforts (like the clay oven!).

The first rooms of the house that Diana and Gus are building on their own.

We lazed around quite a bit during the holidays... I, for one, took a nap every day (I normally don’t), read a lot; we talked a lot and ate too much. We made good use of the clay oven: we cooked chicken and lamb, always with vegetables, which come out great in the clay oven, even the corn.

Malena playing with the hens (which thought she was going to feed them...).

The clay oven my sister built; there we made lamb, chicken, bread and lots of vegetables.

A very atypical sky: a big storm that barely got us wet for a while (but hit hard ~100 km further north).

We also went out and about quite a bit. We did some short, nearby activities, like climbing up to the eagle that represents the town, spending an afternoon on the reservoir shore, and hiking to the hill next to their house; we spent an afternoon at a very pretty little spot downstream of the Pichi Picún Leufú dam, and we even did a fairly tricky hike to reach a bay we had been told about, including a visit to the remains of an abandoned town.

Perhaps the most notable of all the activities we did in Piedra del Águila was climbing the vertical wall of a rock formation that crowns a hill on the outskirts of town.

We were guided and supervised by Esteban Martinez, who had already gone up and hung the safety ropes. The walk to the top of the hill was not simple (nor was the way down, especially for me, since I carried Malena in my arms almost the whole time), but we reached a small, almost horizontal ledge beside the wall. There we took turns climbing, scaling the rock with the strength of our legs and arms while someone below kept the safety ropes taut in case we fell (and with those ropes we later rappelled down). It really was great, although the first time it’s a bit scary to be holding on to the rock with just hands and feet several meters up...

Halfway up the hill (what we actually climbed is the little vertical wall at the very top).

Almost at the top, Gustavo and Diana watching Esteban prepare the gear.

Almost at the summit; it looks easier and less fun than it really is :).

Handling the safety ropes while Felipe climbed.

We didn’t just stay at their place or take nearby outings, either. On two occasions we took the whole day, leaving early and returning late, to tour somewhere farther away and get to know it.

On one of those days we went to El Chocón, about 150 km north of Piedra del Águila. We went mainly for the Municipal Museum, which exhibits the Giganotosaurus carolinii (so far considered the largest carnivorous dinosaur of all time, bigger even than Tyrannosaurus rex), discovered in that very area by Rubén Carolini at the end of the last century.

All of El Chocón is decked out in the dinosaur theme, and rightly so (every town should make more of its tourist potential; there is always something to show). But the town has more than that: beautiful landscapes on Lake Ramos Mexía, and of course the dam.

The family at the feet of the friendly dinosaur that guards the entrance to El Chocón.

Reconstructed dinosaur, in the museum.

Three ferocious beasts.

Another day we went to Lake Huechulafquen. From Piedra del Águila we headed south on RN237 to the Collón Curá river, and from there took RN234 and RN40 up to Junín de los Andes. We had lunch there and continued on to the lake. That last stretch took us quite a while: not only is it gravel, but there are winding, sloping cliffside roads; nothing impossible, it just takes longer than planned (and also because we stopped for a nap halfway through :p).

Obviously, the landscapes more than make up for all that.

View from where we stopped to drink some mates.

Malena, making a mess of herself playing with the fine local dirt.

Felipe

View of the Lanín volcano from the road.

We started the return trip a couple of hours before nightfall; I wanted to make sure we covered the whole road to Junín de los Andes, and the stretch from there to the junction with RN237, before it was fully dark, for safety and comfort.

The four of us next to the monument representing the town’s name.

Between one thing and another the days went by and we had to head home. It had been a while since we’d taken more than a week of vacation, and we enjoyed it a lot, but it also makes you want to be back home. :)

Read more