Canonical Voices

Stéphane Graber

Introduction

For those who haven’t heard of Kubernetes before, it’s defined by the upstream project as:

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.

It is important to note the “applications” part in there. Kubernetes deploys a set of single application containers and connects them together. Those containers will typically run a single process and so are very different from the full system containers that LXD itself provides.

This blog post will be very similar to the one I published last year on running OpenStack inside a LXD container. Similarly to the OpenStack deployment, we’ll be using conjure-up to set up a number of LXD containers and eventually run the Docker containers that are used by Kubernetes.

Requirements

This post assumes you’ve got a working LXD setup providing containers with network access, and that you have at least 10GB of space for the containers to use and at least 4GB of RAM.

Outside of configuring LXD itself, you will also need to bump some kernel limits with the following commands:

sudo sysctl fs.inotify.max_user_instances=1048576  
sudo sysctl fs.inotify.max_queued_events=1048576  
sudo sysctl fs.inotify.max_user_watches=1048576  
sudo sysctl vm.max_map_count=262144
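
These settings won’t survive a reboot. If you want them to persist, one way (a sketch; the file name under /etc/sysctl.d is an arbitrary choice) is to write them to a sysctl configuration file:

cat <<EOF | sudo tee /etc/sysctl.d/99-lxd-kubernetes.conf
fs.inotify.max_user_instances=1048576
fs.inotify.max_queued_events=1048576
fs.inotify.max_user_watches=1048576
vm.max_map_count=262144
EOF
sudo sysctl --system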

Setting up the container

Similarly to OpenStack, the conjure-up deployed version of Kubernetes expects a lot more privileges and resource access than LXD would typically provide. As a result, we have to create a privileged container, with nesting enabled and with AppArmor disabled.

This means that few of LXD’s security features will remain in effect for this container. Depending on how you feel about this, you may choose to run this on a different machine.

Note that all of this still remains better than instructions that would have you install everything directly on your host machine, if only because it makes it very easy to remove it all in the end.

lxc launch ubuntu:16.04 kubernetes -c security.privileged=true -c security.nesting=true -c linux.kernel_modules=ip_tables,ip6_tables,netlink_diag,nf_nat,overlay -c raw.lxc=lxc.aa_profile=unconfined
lxc config device add kubernetes mem unix-char path=/dev/mem

Then we need to add a couple of PPAs and install conjure-up, the deployment tool we’ll use to get Kubernetes going.

lxc exec kubernetes -- apt-add-repository ppa:conjure-up/next -y
lxc exec kubernetes -- apt-add-repository ppa:juju/stable -y
lxc exec kubernetes -- apt update
lxc exec kubernetes -- apt dist-upgrade -y
lxc exec kubernetes -- apt install conjure-up -y

And the last setup step is to configure LXD networking inside the container:

lxc exec kubernetes -- lxd init

Answer with the default for all questions, except for:

  • Use the “dir” storage backend (“zfs” doesn’t work in a nested container)
  • Do NOT configure IPv6 networking (conjure-up/juju don’t play well with it)

And that’s it for the container configuration itself, now we can deploy Kubernetes!

Deploying Kubernetes with conjure-up

As mentioned earlier, we’ll be using conjure-up to deploy Kubernetes.
This is a nice, user-friendly tool that interfaces with Juju to deploy complex services.

Start it with:

lxc exec kubernetes -- sudo -u ubuntu -i conjure-up
  • Select “Kubernetes Core”
  • Then select “localhost” as the deployment target (uses LXD)
  • And hit “Deploy all remaining applications”

This will now deploy Kubernetes. The whole process can take well over an hour depending on what kind of machine you’re running this on. You’ll see all services getting a container allocated, then getting deployed and finally interconnected.

Once the deployment is done, a few post-install steps will appear. These will import some initial images, set up SSH authentication, configure networking and finally give you the IP address of the dashboard.

Interact with your new Kubernetes

We can ask Juju to deploy a new Kubernetes workload, in this case 5 instances of “microbot”:

ubuntu@kubernetes:~$ juju run-action kubernetes-worker/0 microbot replicas=5
Action queued with id: 1d1e2997-5238-4b86-873c-ad79660db43f

You can then grab the service address from the Juju action output:

ubuntu@kubernetes:~$ juju show-action-output 1d1e2997-5238-4b86-873c-ad79660db43f
results:
 address: microbot.10.97.218.226.xip.io
status: completed
timing:
 completed: 2017-01-13 10:26:14 +0000 UTC
 enqueued: 2017-01-13 10:26:11 +0000 UTC
 started: 2017-01-13 10:26:12 +0000 UTC

Now actually using the Kubernetes tools, we can check the state of our new pods:

ubuntu@kubernetes:~$ ./kubectl get pods
NAME                             READY   STATUS              RESTARTS   AGE
default-http-backend-w9nr3       1/1     Running             0          21m
microbot-1855935831-cn4bs        0/1     ContainerCreating   0          18s
microbot-1855935831-dh70k        0/1     ContainerCreating   0          18s
microbot-1855935831-fqwjp        0/1     ContainerCreating   0          18s
microbot-1855935831-ksmmp        0/1     ContainerCreating   0          18s
microbot-1855935831-mfvst        1/1     Running             0          18s
nginx-ingress-controller-bj5gh   1/1     Running             0          21m

After a little while, you’ll see everything’s running:

ubuntu@kubernetes:~$ ./kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
default-http-backend-w9nr3       1/1     Running   0          23m
microbot-1855935831-cn4bs        1/1     Running   0          2m
microbot-1855935831-dh70k        1/1     Running   0          2m
microbot-1855935831-fqwjp        1/1     Running   0          2m
microbot-1855935831-ksmmp        1/1     Running   0          2m
microbot-1855935831-mfvst        1/1     Running   0          2m
nginx-ingress-controller-bj5gh   1/1     Running   0          23m

At which point, you can hit the service URL with:

ubuntu@kubernetes:~$ curl -s http://microbot.10.97.218.226.xip.io | grep hostname
 <p class="centered">Container hostname: microbot-1855935831-fqwjp</p>

Running this multiple times will show you different container hostnames as you get load-balanced across those 5 new instances.
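
For example (a trivial check, not part of the deployment itself), a small shell loop makes the load balancing visible:

ubuntu@kubernetes:~$ for i in $(seq 5); do curl -s http://microbot.10.97.218.226.xip.io | grep hostname; done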

Conclusion

As with OpenStack, conjure-up combined with LXD makes it very easy to deploy rather complex software in a self-contained way.

This isn’t the kind of setup you’d want to run in a production environment, but it’s great for developers, demos and whoever wants to try those technologies without investing in hardware.

Extra information

The conjure-up website can be found at: http://conjure-up.io
The Juju website can be found at: http://www.ubuntu.com/cloud/juju

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
facundo

Water Park


Between Christmas and New Year's we took a few days of vacation with the family.

This time we went, for the first time, to a water park.

The truth is we had a great time. I was a little apprehensive about whether Malena would enjoy it (Felipe, being older, surely would). Both of them had a blast, and so did Moni and I.

Moni and Male enjoying themselves

The first day we arrived in the late afternoon and it was cloudy and cool, so there was nobody in the water park proper. We didn't go in either; instead we went straight to the hot-spring pools, where we stayed nice and warm :)

Hot-spring pools

But what we enjoyed most was the water park proper, with all its variety of rides for throwing yourself into the water. At first Male stayed on the kids' rides, but after the first day she also went down the big ramp a lot.

The kids' rides

Felu and Male on the big ramp

Felu went down almost everything (except the wildest one, which was nearly free fall), and even rode the big slides a ton of times, in a loop: down, back up, down, back up, down...

Felipe on the ride that spins you around

We also took the opportunity to go out and get to know Concepción del Uruguay. One afternoon some of Moni's relatives even came from Concordia, and we went to the beaches of Banco Pelay, where we got into the river and played in the sand until nightfall, then headed into town to eat some pizzas :)
http://www.turismoentrerios.com/cdeluruguay/pelay.htm

Moni with cousin Sandra and aunt Rosa

Having lunch with the family

The short getaway to the water park proved to be a great way to disconnect. We'll surely do it again.

Read more
deviceguy

Movin' on...

A year has gone by since I started work with Canonical. As it turns out, I must be on my way. Where to? Not really sure at this moment; there seem to be plenty of companies using Qt & QML these days. \o/


That said, I am open to suggestions. LinkedIn

There are plenty of IoT devices using sensors around. Heck, even the Moto Z phone has some great uses for sensor gestures similar to what I wrote for QtSensors while I was at Nokia.

But there is a lack of companies that allow freelance or remote work. For the last few years I have worked remotely, doing work for Jolla and Canonical. Both are fantastic companies to work for, and both really have it together for working remotely.

I am still surprised that only a handful of companies regularly allow remote work. I do not miss the stuffy offices with windows that never open, or the long daily commute, which sometimes meant riding a motorcycle through hail! (I do not suggest this for anyone.)

Of course, I am still the maintainer of QtSensors and QtSystemInfo for the Qt Project, and of the Sensor Framework for Mer, and I am always dreaming up new ways to use sensors. I am still keeping tabs on the QtNetwork bearer classes.

Although I had to send back the Canonical devices, I still have Ubuntu on my Nexus 4. I still have my Jolla phones and tablet.

That said, I still have this blog here, and besides spending my time looking for a new programming gig, I am (always) preparing to release a new album (http://llornkcor.com), and am always willing to work with anyone needing music/audio/soundtrack work.

Read more
kevin gunn

1) Flash the latest ubuntu-core image for the dragonboard and boot it (you’ll want a screen and keyboard at least).

You can find the image here: http://releases.ubuntu.com/ubuntu-core/16/

Make sure you’re on the latest with the following:


ssh$ snap refresh core

 

2) Then install mir-libs, mir-kiosk and ubuntu-app-platform:

ssh$ snap install mir-libs --channel=edge
ssh$ snap install mir-kiosk --channel=edge
ssh$ snap install ubuntu-app-platform

 

 

3) Use the snap built from this branch:

https://code.launchpad.net/~osomon/webbrowser-app/mirkiosk-snap

This particular build:

https://code.launchpad.net/~osomon/+snap/webbrowser-mirkiosk/+build/16501

seemed to work fine; download it, copy it over and install it:


ssh$ snap install webbrowser-app*.snap --devmode --dangerous

 

4) NOTE: because of a bug you have to do the following; hopefully the pull request will get merged soon and this step can be removed:

 

ssh$ snap disconnect webbrowser-app:mir
ssh$ snap disconnect webbrowser-app:platform
ssh$ snap connect webbrowser-app:mir mir-kiosk:mir
ssh$ snap connect webbrowser-app:platform ubuntu-app-platform:platform
ssh$ snap disable webbrowser-app
ssh$ snap enable webbrowser-app

 

5) Now launch and use:


$ webbrowser-app

 

If the web browser crashes, just restart it with the same command. You will also see some console output from the browser related to audio and Qt; it can be ignored.

 

Debugging: if things aren’t working as expected (i.e. you do not see the web browser), try rebooting first, which should auto-launch mir-kiosk, then repeat the connection process and launch the browser again. If that still doesn’t work, inspect all the connections via ssh$ snap interfaces and make sure webbrowser-app:mir is connected to mir-kiosk:mir and webbrowser-app:platform to ubuntu-app-platform:platform, as in step 4. Feel free to ping me or others on freenode at #snappy or #ubuntu-unity or #ubuntu-mir.
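
For example (output format varies between snapd versions), you can narrow the listing down to the relevant connections with:

ssh$ snap interfaces | grep webbrowser-app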

Read more
Colin Ian King

The BPF Compiler Collection (BCC) is a toolkit for building kernel tracing tools that leverage the functionality provided by the Linux extended Berkeley Packet Filters (BPF).

BCC allows one to write BPF programs with front-ends in Python or Lua, with the kernel instrumentation written in C. The instrumentation code is compiled into sandboxed eBPF bytecode and executed in the kernel.

The BCC github project README file provides an excellent overview and description of BCC and the various available BCC tools. Building BCC from scratch can be a bit time-consuming; however, the good news is that the BCC tools are now available as a snap, so BCC can be quickly and easily installed just using:

 sudo snap install --devmode bcc  

There are currently over 50 BCC tools in the snap, so let's have a quick look at a few:

cachetop allows one to view the top page cache hit/miss statistics. To run this use:

 sudo bcc.cachetop  



The funccount tool allows one to count the number of times specific functions get called.  For example, to see how many kernel functions with the name starting with "do_" get called per second one can use:

 sudo bcc.funccount "do_*" -i 1  


To see how to use all the options in this tool, use the -h option:

 sudo bcc.funccount -h  

I've found the funccount tool to be especially useful to check on kernel activity by checking on hits on specific function names.

The slabratetop tool is useful to see the active kernel SLAB/SLUB memory allocation rates:

 sudo bcc.slabratetop  


If you want to see which process is opening specific files, you can snoop on open system calls using the opensnoop tool:

 sudo bcc.opensnoop -T
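
For example (simply filtering the standard output shown above, not a dedicated option of the tool), to watch a specific file being opened by any process you can pipe the output through grep:

 sudo bcc.opensnoop -T | grep /etc/passwd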


Hopefully this will give you a taste of the useful tools that are available in BCC (I have barely scratched the surface in this article).  I recommend installing the snap and giving it a try.

As it stands, BCC provides a useful mechanism to develop BPF tracing tools, and I look forward to regularly updating the BCC snap as more tools are added to BCC. Kudos to Brendan Gregg for BCC!

Read more
Dustin Kirkland


What's yours?

Happy 2017!
:-Dustin

Read more
facundo

Rage with two fingers in a V


Today I came home in the car, listening to a lot of music. This song came on, and I realized it's exactly what I feel about 2016.

[Click here]

Rage when they laugh, satisfied,
having bought their rights
Rage when they turn moralistic
and start hounding the artists

Rage when in broad daylight
they take their hypocrisy out for a stroll
Rage of the fierce kind, mine,
rage that can be recited

For those who take what is ours
with the glove of dissimulation
For the one who pulls the strings
of the universal puppet

For the one who has marked the cards
and always gets the best hand
With the ace of swords he rules us
and with the ace of clubs he deals blow after blow and...

March! One, two...
I cannot watch so much organized lying
without answering,
with a hoarse voice, my rage, my rage

Rage because they kill brazenly
but nothing is ever made clear
Rage because the robber steals
but so does the merchant

Rage because everything is forbidden,
even what I will do anyway
Rage because no bail is paid
when they imprison our hope

Those in charge have this world
rotten and divided in two,
the fault of their urge to conquer
by force or by exploitation

Rage, then, when they want
me to cut my hair for no reason;
better to have free-flowing hair
than liberty plastered down with gel

March! I cannot watch
so much organized disaster
without answering with a hoarse voice
my rage my rage

Rage without rifles and without bombs
Rage with two fingers in a V
Rage that is also hope
March of rage and of faith

Happy 2017.

Read more
UbuntuTouch

In the latest snapd 2.20, support has landed for developing snaps in the so-called classic confinement mode. Classic mode allows application developers to get applications snapped quickly, because we don't have to make many changes to our existing applications. An application in classic mode can see all of the host system's files under "/", just like a normal application today. After our application is installed, however, all of its files live under /snap/foo/current, and its executables live under /snap/bin, just like all our other snap applications.

When we install a classic-mode snap, we need to use the --classic option, and uploading such an application to the Ubuntu Core store also triggers a manual review. A classic snap can see all of the files under /snap/core/current and can also operate on files anywhere on the host. The purpose of this is to let developers quickly publish their applications in snap format, and then gradually apply Ubuntu Core confinement in later development. As things stand, classic-mode applications will not be installable on an all-snap system, such as Ubuntu Core 16, for the foreseeable future.

For a classic-mode application, its "/" directory corresponds to the host system's "/". More information can be found at: http://snapcraft.io/docs/reference/confinement


Installation

Before developing, we install core rather than ubuntu-core on our desktop. We can check with the snap list command:

liuxg@liuxg:~$ snap list
Name          Version  Rev  Developer  Notes
core          16.04.1  714  canonical  -
firefox-snap  0.1      x1              classic
hello         1.0      x1              devmode
hello-world   6.3      27   canonical  -

If ubuntu-core is installed on your system, it is recommended to use reset-state from devtool to restore the system to its initial state (with no snaps installed). In future snapd releases there will no longer be an ubuntu-core snap. We can also use the following method to remove the ubuntu-core snap and install the core snap:

$ sudo apt purge -y snapd
$ sudo apt install snapd
$ sudo snap install core

Also, some developers may not be able to get the latest snapd 2.20 from the stable channel. On an Ubuntu desktop we can open "System Settings"/"Software & Updates"/"Developer Options":


By turning on the switch shown above, we get the latest software released for our Ubuntu desktop system; snapd 2.20 is currently in xenial-proposed.

In today's tutorial, we will walk through an example to explain all this:

https://github.com/liu-xiao-guo/helloworld-classic

In the example above, the snapcraft.yaml file is as follows:

snapcraft.yaml

name: hello
version: "1.0"
summary: The 'hello-world' of snaps
description: |
    This is a simple snap example that includes a few interesting binaries
    to demonstrate snaps and their confinement.
    * hello-world.env  - dump the env of commands run inside app sandbox
    * hello-world.evil - show how snappy sandboxes binaries
    * hello-world.sh   - enter interactive shell that runs in app sandbox
    * hello-world      - simply output text
grade: stable
confinement: classic
type: app  #it can be gadget or framework

apps:
 env:
   command: bin/env
 evil:
   command: bin/evil
 sh:
   command: bin/sh
 hello-world:
   command: bin/echo
 createfile:
   command: bin/createfile
 createfiletohome:
   command: bin/createfiletohome
 listhome:
   command: bin/listhome
 showroot:
   command: bin/showroot

parts:
 hello:
  plugin: dump
  source: .    

From the example above, we can see that the confinement entry is defined as:

confinement: classic

This defines our snap as a classic application, so we must also install it with the --classic option. Careful developers will notice that we haven't defined any plugs in this application, i.e. we aren't using any interfaces. You can compare it with our other project https://github.com/liu-xiao-guo/helloworld-demo.

As said before, we just want to publish our application as a snap as quickly as possible; in classic mode we set aside the security questions for now.

We can package our application and install it with the following command:

$ sudo snap install hello_1.0_amd64.snap --classic --dangerous

The contents of our showroot script are as follows:

#!/bin/bash

cd /
echo "list all of the content in the root:"
ls

echo "show the home content:"
cd home
ls

When we run our showroot application, we see:

liuxg@liuxg:~/snappy/desktop/helloworld-classic$ hello.showroot 
list all of the content in the root:
bin    core  home	     lib	 media	proc  sbin  sys  var
boot   dev   initrd.img      lib64	 mnt	root  snap  tmp  vmlinuz
cdrom  etc   initrd.img.old  lost+found  opt	run   srv   usr  vmlinuz.old
show the home content:
liuxg  root.ini
liuxg@liuxg:~/snappy/desktop/helloworld-classic$ ls /
bin    core  home            lib         media  proc  sbin  sys  var
boot   dev   initrd.img      lib64       mnt    root  snap  tmp  vmlinuz
cdrom  etc   initrd.img.old  lost+found  opt    run   srv   usr  vmlinuz.old

Clearly, it can see the file tree of our entire host system, and the application can in fact operate on the files and directories it sees.
Of course, we can also run the evil script:

#!/bin/sh

set -e
echo "Hello Evil World!"

echo "This example demonstrates the app confinement"
echo "You should see a permission denied error next"

echo "Haha" > /var/tmp/myevil.txt

echo "If you see this line the confinement is not working correctly, please file a bug"
The result of running it is as follows:

liuxg@liuxg:~/snappy/desktop/helloworld-classic$ hello.evil
Hello Evil World!
This example demonstrates the app confinement
You should see a permission denied error next
If you see this line the confinement is not working correctly, please file a bug

Clearly, without using any interfaces, we can operate on any other directory and write whatever data we want: confinement has no effect in classic mode. As developers, all we need to do is quickly package our application as a snap.

Finally, as a quick example, we use classic mode to rapidly package Firefox as a snap:

Firefox snapcraft.yaml

name: firefox-snap
version: '0.1'
summary: "A Firefox snap"
description: "Firefox in a classic confined snap"

grade: devel
confinement: classic

apps:
  firefox-snap:
    command: firefox
    aliases: [firefox]

parts:
  firefox:
    plugin: dump
    source: https://download.mozilla.org/?product=firefox-50.1.0-SSL&os=linux64&lang=en-US
    source-type: tar

Here we directly download the version we need and package it up. Install and run our Firefox application:



The source code of the whole project is at: https://github.com/liu-xiao-guo/firefox-snap






Read more
UbuntuTouch

How to speed up building snap applications

When building and packaging snap applications, we often find that after each small change to our code or snapcraft.yaml, re-running the snapcraft command downloads the required packages from the Ubuntu archive all over again. If a package is large, this can take a very long time. On a desktop we can sometimes work around this with a VPN, but the problem is especially acute when building on an ARM board, where I cannot even run a VPN. So how do we solve this?

Fortunately, our colleague ogra has designed a snap called packageproxy. We can install it with the following command:

$ sudo snap install packageproxy

Once installed, we can see it with snap list:

liu-xiao-guo@localhost:~$ snap list
Name            Version       Rev  Developer  Notes
classic         16.04         17   canonical  devmode
core            16.04.1       716  canonical  -
grovepi-server  1.0           x1              devmode
packageproxy    0.1           3    ogra       -
pi2             16.04-0.17    29   canonical  -
pi2-kernel      4.4.0-1030-3  22   canonical  -

On an ARM board such as a Raspberry Pi, we install the classic snap and enter the classic environment:

$ sudo snap install classic --devmode --edge
$ sudo classic

For the detailed steps, refer to the article "How to install Ubuntu Core on a Raspberry Pi and build in a snap system". Once inside our classic environment, we need to modify the sources.list file under /etc/apt. To be safe, we first back up the original sources.list file with the following command:

(classic)liu-xiao-guo@localhost:/etc/apt$ sudo cp sources.list sources.list.bak

The previous file is now saved as sources.list.bak. If we open the sources.list file, we can see its contents:

sources.list

# See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
# newer versions of the distribution.
deb http://ports.ubuntu.com/ubuntu-ports/ xenial main restricted
# deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial main restricted

## Major bug fix updates produced after the final release of the
## distribution.
deb http://ports.ubuntu.com/ubuntu-ports/ xenial-updates main restricted
# deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial-updates main restricted

## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team. Also, please note that software in universe WILL NOT receive any
## review or updates from the Ubuntu security team.
deb http://ports.ubuntu.com/ubuntu-ports/ xenial universe
# deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial universe
deb http://ports.ubuntu.com/ubuntu-ports/ xenial-updates universe
# deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial-updates universe

## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team, and may not be under a free licence. Please satisfy yourself as to
## your rights to use the software. Also, please note that software in
## multiverse WILL NOT receive any review or updates from the Ubuntu
## security team.
deb http://ports.ubuntu.com/ubuntu-ports/ xenial multiverse
# deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial multiverse
deb http://ports.ubuntu.com/ubuntu-ports/ xenial-updates multiverse
# deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial-updates multiverse

## N.B. software from this repository may not have been tested as
## extensively as that contained in the main release, although it includes
## newer versions of some applications which may provide useful features.
## Also, please note that software in backports WILL NOT receive any review
## or updates from the Ubuntu security team.
deb http://ports.ubuntu.com/ubuntu-ports/ xenial-backports main restricted universe multiverse
# deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial-backports main restricted universe multiverse

## Uncomment the following two lines to add software from Canonical's
## 'partner' repository.
## This software is not part of Ubuntu, but is offered by Canonical and the
## respective vendors as a service to Ubuntu users.
# deb http://archive.canonical.com/ubuntu xenial partner
# deb-src http://archive.canonical.com/ubuntu xenial partner

deb http://ports.ubuntu.com/ubuntu-ports/ xenial-security main restricted
# deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial-security main restricted
deb http://ports.ubuntu.com/ubuntu-ports/ xenial-security universe
# deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial-security universe
deb http://ports.ubuntu.com/ubuntu-ports/ xenial-security multiverse
# deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial-security multiverse

Clearly, in the file above all of the sources point to http://ports.ubuntu.com/ubuntu-ports/. That means every time we rebuild our snap application, packages are downloaded from that address, and if a package is large the build takes far too long. This is obviously not what we want. If we replace http://ports.ubuntu.com/ubuntu-ports/ above with http://localhost:9999/ubuntu-ports/, the whole sources.list file looks like this:

sources.list

# See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
# newer versions of the distribution.
deb http://localhost:9999/ubuntu-ports/ xenial main restricted
# deb-src http://localhost:9999/ubuntu-ports/ xenial main restricted

## Major bug fix updates produced after the final release of the
## distribution.
deb http://localhost:9999/ubuntu-ports/ xenial-updates main restricted
# deb-src http://localhost:9999/ubuntu-ports/ xenial-updates main restricted

## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team. Also, please note that software in universe WILL NOT receive any
## review or updates from the Ubuntu security team.
deb http://localhost:9999/ubuntu-ports/ xenial universe
# deb-src http://localhost:9999/ubuntu-ports/ xenial universe
deb http://localhost:9999/ubuntu-ports/ xenial-updates universe
# deb-src http://localhost:9999/ubuntu-ports/ xenial-updates universe

## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team, and may not be under a free licence. Please satisfy yourself as to
## your rights to use the software. Also, please note that software in
## multiverse WILL NOT receive any review or updates from the Ubuntu
## security team.
deb http://localhost:9999/ubuntu-ports/ xenial multiverse
# deb-src http://localhost:9999/ubuntu-ports/ xenial multiverse
deb http://localhost:9999/ubuntu-ports/ xenial-updates multiverse
# deb-src http://localhost:9999/ubuntu-ports/ xenial-updates multiverse

## N.B. software from this repository may not have been tested as
## extensively as that contained in the main release, although it includes
## newer versions of some applications which may provide useful features.
## Also, please note that software in backports WILL NOT receive any review
## or updates from the Ubuntu security team.
deb http://localhost:9999/ubuntu-ports/ xenial-backports main restricted universe multiverse
# deb-src http://localhost:9999/ubuntu-ports/ xenial-backports main restricted universe multiverse

## Uncomment the following two lines to add software from Canonical's
## 'partner' repository.
## This software is not part of Ubuntu, but is offered by Canonical and the
## respective vendors as a service to Ubuntu users.
# deb http://archive.canonical.com/ubuntu xenial partner
# deb-src http://archive.canonical.com/ubuntu xenial partner

deb http://localhost:9999/ubuntu-ports/ xenial-security main restricted
# deb-src http://localhost:9999/ubuntu-ports/ xenial-security main restricted
deb http://localhost:9999/ubuntu-ports/ xenial-security universe
# deb-src http://localhost:9999/ubuntu-ports/ xenial-security universe
deb http://localhost:9999/ubuntu-ports/ xenial-security multiverse
# deb-src http://localhost:9999/ubuntu-ports/ xenial-security multiverse

In other words, whenever we download our packages again, they are fetched from our local address http://localhost:9999/ubuntu-ports/:
  • If a package has been downloaded before, packageproxy serves it straight from the local cache, so it does not need to be downloaded again
  • If a package has never been downloaded, packageproxy downloads it from the network and saves it locally for later reuse

To make this change to the sources.list file from the command line, we can use the following command:

sudo sed -i 's/http:\/\/ports.ubuntu.com\/ubuntu-ports/http:\/\/localhost:9999\/ubuntu-ports/g' /etc/apt/sources.list

This is obviously a very good approach. The first build may still take some time, but later builds can fetch packages directly from the local cache, which speeds up compilation.
After completing the steps above, we can update the system and install the build environment with the following commands:

$ sudo apt-get update
$ sudo apt install snapcraft git-core build-essential

With that, our build environment is installed. After this configuration, the first build of an application will still be slow if the required packages have never been downloaded before, but the second build will be noticeably faster. Of course, instead of localhost we can also point the address at the IP address of a particular device, so that everyone fetches the packages they need from the same machine. This approach suits poor network environments and is especially handy for hackathons. As a demonstration, http://paste.ubuntu.com/23789982/ shows that after cleaning the project, the build speed improved dramatically.

If the following error appears during use:

(classic)liu-xiao-guo@localhost:~$ sudo apt-get update
Err:1 http://localhost:9999/ubuntu-ports xenial InRelease
  Could not connect to localhost:9999 (127.0.0.1). - connect (111: Connection refused) [IP: 127.0.0.1 9999]
Err:2 http://localhost:9999/ubuntu-ports xenial-updates InRelease
  Unable to connect to localhost:9999: [IP: 127.0.0.1 9999]
Err:3 http://localhost:9999/ubuntu-ports xenial-backports InRelease
  Unable to connect to localhost:9999: [IP: 127.0.0.1 9999]
Err:4 http://localhost:9999/ubuntu-ports xenial-security InRelease
  Unable to connect to localhost:9999: [IP: 127.0.0.1 9999]

This is probably because packageproxy ran into a problem while running. We can fix it by deleting a file in the following location:

liu-xiao-guo@localhost:/var/snap/packageproxy/3$ ls
approx.conf  config.yaml  hosts.allow  hosts.deny  lockfile.lock  var

Deleting the lockfile.lock above solves the problem.
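
That is (using the path shown above; the /3 revision directory may differ on your system):

sudo rm /var/snap/packageproxy/3/lockfile.lock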

This method is also suitable for building and packaging snaps on an Ubuntu desktop; we only need to change "ubuntu-ports" above to "ubuntu". That exercise is left to the developer (see the sketch below).
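
As a sketch (this assumes the default desktop mirror http://archive.ubuntu.com/ubuntu in sources.list; adjust the URL if you use a country mirror), the desktop equivalent of the sed one-liner above would be:

sudo sed -i 's|http://archive.ubuntu.com/ubuntu|http://localhost:9999/ubuntu|g' /etc/apt/sources.list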

If you want to delete all downloaded packages to reclaim storage space:

  • Use the snap remove packageproxy command to remove the application
  • Delete all the cached files with rm -rf /var/snap/packageproxy/3/var/cache/approx/*










Read more
UbuntuTouch

LXD is a container "hypervisor" that provides many new user experiences on top of LXC. In today's tutorial, we show how to use LXD to build our snap applications against different Ubuntu releases from the desktop.


1) Install LXD and the command line tools


We can follow this link to install LXD: https://linuxcontainers.org/lxd/getting-started-cli/. For convenience, we can use the ready-made Ubuntu images:

liuxg@liuxg:~$ lxc launch ubuntu:yakkety
Creating flying-snake
Starting flying-snake

Here we created a container called flying-snake; the name is generated automatically. It is based on yakkety, Ubuntu 16.10.
If you want to choose your own container name, you can use the following command instead:

$ lxc launch ubuntu:yakkety foobar

Here foobar will be the name of the generated container, instead of the automatically generated flying-snake above.

We can list our containers with the following command:

liuxg@liuxg:~$ lxc list
+----------------------+---------+-------------------+------+------------+-----------+
|         NAME         |  STATE  |       IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+----------------------+---------+-------------------+------+------------+-----------+
| flying-snake         | RUNNING | 10.0.1.143 (eth0) |      | PERSISTENT | 0         |
+----------------------+---------+-------------------+------+------------+-----------+
| immortal-feline      | STOPPED |                   |      | PERSISTENT | 0         |
+----------------------+---------+-------------------+------+------------+-----------+
| vivid-x86-armhf      | STOPPED |                   |      | PERSISTENT | 0         |
+----------------------+---------+-------------------+------+------------+-----------+
| xenial-desktop-amd64 | STOPPED |                   |      | PERSISTENT | 0         |
+----------------------+---------+-------------------+------+------------+-----------+

2) Create a user


We can create a user of our own with the following command:

liuxg@liuxg:~$ lxc exec flying-snake -- adduser liuxg
Adding user `liuxg' ...
Adding new group `liuxg' (1001) ...
Adding new user `liuxg' (1001) with group `liuxg' ...
Creating home directory `/home/liuxg' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
Changing the user information for liuxg
Enter the new value, or press ENTER for the default
	Full Name []: liuxg
	Room Number []: 
	Work Phone []: 
	Home Phone []: 
	Other []: 
Is the information correct? [Y/n] y

Note that flying-snake here is the name of the container we just created; developers must use their own container's name. I created a user called liuxg in this container. Now give the user administrator rights:

liuxg@liuxg:~$ lxc exec flying-snake -- adduser liuxg sudo
Adding user `liuxg' to group `sudo' ...
Adding user liuxg to group sudo
Done.

$ lxc exec flying-snake -- visudo

The command above starts an editor; at the end of the file, add:

<username>   ALL=(ALL) NOPASSWD: ALL



Here <username> should be replaced with the user we just created (liuxg in my case); developers need to substitute their own username.

Update the system and install the required tools:

$ lxc exec flying-snake -- apt update -qq
$ lxc exec flying-snake -- apt upgrade -qq
$ lxc exec flying-snake -- apt install -qq -y snapcraft build-essential


3) Log in and build our application


We can log in with the following command:

$ lxc exec flying-snake -- sudo -iu liuxg

Note that liuxg here is the user we created earlier.

liuxg@liuxg:~$ lxc exec flying-snake -- sudo -iu liuxg
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

liuxg@flying-snake:~$ 
liuxg@flying-snake:~$ ls -al
total 20
drwxr-xr-x 2 liuxg liuxg 4096 Jan  4 02:52 .
drwxr-xr-x 4 root  root  4096 Jan  4 02:52 ..
-rw-r--r-- 1 liuxg liuxg  220 Jan  4 02:52 .bash_logout
-rw-r--r-- 1 liuxg liuxg 3771 Jan  4 02:52 .bashrc
-rw-r--r-- 1 liuxg liuxg  655 Jan  4 02:52 .profile
liuxg@flying-snake:~$ mkdir apps
liuxg@flying-snake:~$ cd apps/
liuxg@flying-snake:~/apps$ git clone https://github.com/liu-xiao-guo/alias
Cloning into 'alias'...
remote: Counting objects: 4, done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 4 (delta 0), reused 4 (delta 0), pack-reused 0
Unpacking objects: 100% (4/4), done.
Checking connectivity... done.
liuxg@flying-snake:~/apps$ ls
alias
liuxg@flying-snake:~/apps$ cd alias/
liuxg@flying-snake:~/apps/alias$ ls
hello.sh  snapcraft.yaml
liuxg@flying-snake:~/apps/alias$ snapcraft 
Preparing to pull aliases 
Pulling aliases 
Preparing to build aliases 
Building aliases 
Staging aliases 
Priming aliases 
Snapping 'my-alias' |                                                                
Snapped my-alias_0.1_amd64.snap

We can see that we have packaged our application as a snap in a yakkety (16.10) environment.

We can use the lxc file pull command to copy files from the container to our host, for example:

lxc file pull flying-snake/etc/hosts .
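
As a sketch (the path assumes the build location used in the session above), pulling the freshly built snap out of the container would look like:

lxc file pull flying-snake/home/liuxg/apps/alias/my-alias_0.1_amd64.snap .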
We can use:

$ lxc stop flying-snake

to stop our container.

liuxg@liuxg:~/tmp$ lxc stop flying-snake
liuxg@liuxg:~/tmp$ lxc list
+----------------------+---------+------+------+------------+-----------+
|         NAME         |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+----------------------+---------+------+------+------------+-----------+
| flying-snake         | STOPPED |      |      | PERSISTENT | 0         |
+----------------------+---------+------+------+------------+-----------+
| immortal-feline      | STOPPED |      |      | PERSISTENT | 0         |
+----------------------+---------+------+------+------------+-----------+
| vivid-x86-armhf      | STOPPED |      |      | PERSISTENT | 0         |
+----------------------+---------+------+------+------------+-----------+
| xenial-desktop-amd64 | STOPPED |      |      | PERSISTENT | 0         |
+----------------------+---------+------+------+------------+-----------+

For more detailed operations, see: https://linuxcontainers.org/lxd/getting-started-cli/









Read more
UbuntuTouch

For some snap applications, we would very much like to run a script of ours at install time to do things we need, such as creating a directory. So how can we catch this event? In our earlier article "How to configure our Ubuntu Core application" we showed how to make a snap application configurable; the configure script there is called whenever settings are changed. In fact, it is also called automatically at install time. Let's illustrate this with the following example:

https://github.com/liu-xiao-guo/helloworld-install

In the example above, our configure script is as follows:

configure

#!/bin/sh

echo "This is called during the installation!"
exit 1

This is a very simple script. During installation it returns "1", indicating failure, so the application will not be installed successfully:

liu-xiao-guo@localhost:~/apps/helloworld-install$ sudo snap install *.snap --dangerous
error: cannot perform the following tasks:
- Run configure hook of "hello-install" snap if present (This is called during the installation!)
liu-xiao-guo@localhost:~/apps/helloworld-install$ snap list
Name            Version       Rev  Developer  Notes
classic         16.04         17   canonical  devmode
core            16.04.1       716  canonical  -
grovepi-server  1.0           x1              devmode
packageproxy    0.1           3    ogra       -
pi2             16.04-0.17    29   canonical  -
pi2-kernel      4.4.0-1030-3  22   canonical  -
snapweb         0.21.2        25   canonical  -

Clearly, as shown above, helloworld-install was not installed on our system.
If we change the configure script to:

configure

#!/bin/sh

echo "This is called during the installation!"
exit 0

This script returns "0", indicating a successful installation.

liu-xiao-guo@localhost:~/apps/helloworld-install$ sudo snap install *.snap --dangerous
hello-install 1.0 installed
liu-xiao-guo@localhost:~/apps/helloworld-install$ snap list
Name            Version       Rev  Developer  Notes
classic         16.04         17   canonical  devmode
core            16.04.1       716  canonical  -
grovepi-server  1.0           x1              devmode
hello-install   1.0           x1              -
packageproxy    0.1           3    ogra       -
pi2             16.04-0.17    29   canonical  -
pi2-kernel      4.4.0-1030-3  22   canonical  -
snapweb         0.21.2        25   canonical  -
liu-xiao-guo@localhost:~/apps/helloworld-install$ vi /var/log/syslog
liu-xiao-guo@localhost:~/apps/helloworld-install$ sudo vi /var/log/syslog

We can find this script's output in the system's /var/log/syslog:



Clearly the script ran normally at install time. By running such a hook we can do some initialization for our application, laying the groundwork for it to run (a sketch follows below).
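
As a minimal sketch (the data directory is hypothetical, and $SNAP_COMMON is assumed to point at the snap's writable, revision-independent data directory), such an initialization hook could look like:

#!/bin/sh
# Hypothetical example: prepare a data directory when the snap is installed/configured.
mkdir -p "$SNAP_COMMON/data" || exit 1
# Exit code 0 tells snapd the hook succeeded, so the installation can proceed.
exit 0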


Read more
Colin Ian King

Kernel printk statements

The kernel contains tens of thousands of statements that may print various errors, warnings and debug/information messages to the kernel log. Unsurprisingly, as the kernel grows in size, so does the quantity of these messages. I've been scraping the kernel source for various kernel printk-style statements and macros and scanning these for typos and spelling mistakes; to make this easier I hacked up kernelscan (a quick and dirty parser) that extracts literal strings from the kernel for spell checking.

Using kernelscan, I've gathered some statistics for the number of kernel print statements for various kernel releases:


As one can see, we have over 200,000 messages in the 4.9 kernel(!). Given the kernel growth, this seems to roughly correlate with the kernel source size:



So how many lines of code in the kernel do we have per kernel printk message over time?


..showing that the trend is toward more lines of code per printk statement over time. I didn't differentiate between different types of printk message, so it is hard to see any deeper trends on what kinds of messages are being logged more or less frequently in each release; for example, perhaps fewer debug messages are landing in the kernel nowadays.

I find it quite amazing that the kernel contains quite so many printk messages; it would be useful to see just how many of these are actually in a production kernel. I suspect a large number are for driver debugging and may be conditionally omitted at build time.
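
As a very rough sketch (a crude approximation only: unlike a proper parser such as kernelscan, it misses multi-line statements and picks up some false positives), one can get a ballpark count of printk-style call sites in a kernel source tree with:

 grep -rE '\b(printk|pr_(emerg|alert|crit|err|warn|notice|info|debug)|dev_(err|warn|info|dbg))\s*\(' --include='*.c' . | wc -l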

Read more
facundo

End-of-year gift: Recordium


In these last few weeks I finished polishing a little project I had started during the year. Although some details are still missing, it is already functional and useful.

It's called Recordium. It's a simple little application that helps the away-from-your-computer you remind things to your future at-the-computer you.

Recordium

The idea is that you run Recordium on your computer, and it sits there as a tiny little icon.

Then, at any time, while out on the street, mowing the lawn, standing in line at the bakery, etc., when you remember something you have to do, you send a Telegram text or audio message to your Recordium bot.

When you get back to your computer (where you act on whatever you had remembered), the Recordium icon will be lit up, telling you that you have a new message (or several), and there you can read/listen to what you had remembered earlier.

Could something similar be done using more complex tools? Yes. Or some Google service? Also, but I don't want to feed Google any more myself. Anyway, the most important thing about Recordium is that it served as a toy project to (while achieving functionality I wanted) have something built with Python 3 and PyQt 5.

Read more

2016 Retrospective

This has been a unique year for me, and I wanted to quickly lay out what I’ve accomplished and where I think I’m going for the coming year. This is now officially a tradition at three posts (see: 2014 and 2015).

Revisiting 2016’s Goals

These are the goals I set for myself at the start of the year and how I met or missed them:

  • Spend Money
    • I wanted to become less of a miser and spend more money. Note that this is not for bragging purposes, just that I am naturally very frugal and hesitant to spend money. I think we did a pretty good job, though! For starters, I swapped out my commuter car for a tech-heavy crossover. We stayed in a really cool art-hotel in downtown Cincinnati in the Spring, drove across the country for a big Yellowstone trip in early Summer, stayed at the Indiana Dunes for a few days in the Fall, and took a brief trip to Madison, WI, shortly thereafter. I bought a nice sitting/standing desk and chair for my home office. I paid the entry fee to go to GenCon for a day. Our fridge, dishwasher, and furnace all died one weekend, and it ended with me buying upgraded, modern appliances. I’ve also been keeping the post office busy with plenty of orders off Amazon, and I’ve been trying to Kickstart more games that I think deserve attention. I also found a new hobby in homebrewing which has been a great use of fun-money.
  • Back Into Web
    • At the start of 2016, I thought I really wanted to do more web work. Turns out I’ve done a 180 on this one, and I now work primarily on desktop/mobile with minimal web work, and I wouldn’t have it any other way for the time being.
  • Work With Others
    • In 2015, I worked alone a lot. In 2016, I joined a new company where I work remotely. Although I don’t necessarily see my coworkers every day, I am working on projects with multiple developers and communicating constantly with my teammates (and others) through chat. Working with others again has helped me grow socially and become a better engineer.
  • Become Part of an Open-Source Community
    • I really wanted to better integrate FLOSS into my life and my career, and I believe I’ve hit the jackpot. My job is to build popular open-source operating system Ubuntu, where all the code I write is public and generally GPLv3. My job is to write open-source software, and to interact with the community. I use my own software and report/fix bugs that affect me all the time.
  • Good Vibes
    • I was feeling a bit down at the end of 2015, and I wanted to be a more positive person in 2016. Although there were some depressing things going on worldwide in 2016, I am generally happier with my personal and professional life on a micro-level.

Surprise Victories

  • New Job
    • The big one! After almost four years at SEP, I transitioned to a new role at Canonical. I get to work on Ubuntu, one of my favorite open-source projects, as my full-time job. This also means I get to work remotely, dropping my commute from 30 miles in 2012 to 3 miles in 2014 to 50 feet in 2016. I’ve been having a great time working on software that’s in the public spotlight, working with the community, and traveling every few months to see my coworkers. With this new job, I’ve also had the opportunity to visit Europe for the first time this year.
  • Less Own-Time Work
    • Although I’ve hit pretty hard in the past that developers should do some learning outside of work, this year likely contained the least own-time work I’ve ever done. I’ve been finding joys in non-software hobbies and home maintenance, and working on Ubuntu as my full-time job has made me less needy for doing open-source work in my off-hours. I tend to respond to bugs and Github PRs at any hour, and I follow more technical people on social media than I used to. I think this stems from a satisfaction from the learning I do throughout the day, and the difficulty of separating work-life and home-life when one works at home.
  • FOSDEM
    • Yesterday, I learned that I’ll be giving a talk at FOSDEM. I’m very excited (and nervous) to give my first-ever conference talk in Brussels this February.
  • Homebrewing
    • I picked up homebrewing beer at the start of 2016, and I love it. I started with simple pre-made extract kits, but have worked my way up to creating my own all-grain recipes and labels. Brewing is a fun, tasty hobby giving me some creative, manual labor to break the mold of always doing computer work.
  • Books
  • Spotify
    • Spotify is very good for listening to whatever music I want anytime, especially now that I work at home. If you really like a band, you should still buy their music or merch through their website to make sure they get paid to keep existing.

2017 Goals

  • Local
    • I’d like to make sure I stay involved locally, especially as I continue to work from home. I’ve let my Golang group dwindle over the past few months, and I’d like to see us back up to our numbers at the start of 2016. If possible, I’d also like to attend other meetups and meet more local devs.
  • Linux Greybeard
    • This is a slow process, but I want to get better at using Linux in general. I’ve been surprised at how much I’ve learned about the low-level workings of Ubuntu over the past 9 months. I’m excited to see what I can learn over the next year, especially as I’ll likely move into a different codebase at some point this year.
  • More talking
    • I’m very excited to be giving a talk at FOSDEM this year, but I would enjoy doing such things more regularly. It doesn’t necessarily have to be at conferences, as I could do meetups much more easily. I need to try to get back into blogging more regularly. Additionally, I’ve recently been kicking around ideas for a discussion-based podcast on the worst parts of software development, although that may have already been done to death. Contact me if interested.
  • Transition Web Tooling
    • I would like to switch over my analytics systems to use a personal Piwik instance, and I would love to replace the (hopefully unobtrusive) ads on this site with some kind of tip jar system. I would also like to update this blog to use a Let’s Encrypt certificate, as well as Ollert once I’ve been given full control.
  • Kegging
    • In my homebrewing, I’ve been bottling my beers. This is generally ok, but I think the beer would be consumed faster if I kegged it and could fill growlers for my friends and family. Getting started with kegging is expensive, requiring the purchase of kegs, tanks, parts, and some sort of refrigeration unit. By the end of the year, I intend to have a kegerator-style setup with the ability to stow and distribute from two kegs.
  • Moving
    • My wife is looking into graduate schools for Fall 2017, and has already been accepted by one. I’m currently assuming a big part of my life this Spring/Summer will be finding and adjusting to a new home.
  • Active Activism
    • I’ve complained a lot about our government and the way the world works on social media, at the “water cooler”, and privately to my wife, but it’s become obvious that passive activism isn’t good enough. Signing petitions and pledges are nice gestures, but are more meaningful when backed up by real action. I’d like to do something, though I’m not sure what at the moment. By the end of 2017, I would like to, at minimum, have a plan to donate, join, create, or generally be more involved.

Adieu, 2016

Major changes in my life, career, and the world at large have made 2016 a memorable year for me. I highly encourage you to reflect on the year you’ve had and think about what you can do to make 2017 great. Happy new year!

Read more
Louis

Introduction

For a while now I have been actively maintaining the sosreport Debian package. I am also helping out with making it available on Ubuntu.

I have also had multiple requests to make sosreport more easily usable in a Juju environment. I have finally been able to author a charm for sosreport which renders its usage with Juju simpler.

Theory of operation

As you already know, sosreport is a tool that will collect information about your running environment. In the context of a Juju deployment, what we are after is the running environments of the units providing the services. So in order for the sosreport charm to be useful, it needs to be deployed on an existing unit.

The charm has two actions :

  • collect : Generate the sosreport tarball
  • cleanup : Clean up existing tarballs

You would use the collect action to create the sosreport tarball of the unit where it is being run and cleanup to remove those tarballs once you are done.

Each action has optional parameters attached to it (an example invocation follows the list) :

  • homedir : Home directory where sosreport files will be copied to (common to both collect & cleanup actions)
  • options : Command line options to be passed to sosreport (collect only)
  • minfree : Minimum free disk space required to run sosreport, expressed in percent, megabytes or gigabytes. Valid suffixes are %, M or G (collect only)
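
For instance (a hypothetical invocation; the -o flag selects which sosreport plugins to run), parameters are passed as key=value pairs when running the action :

$ juju run-action sosreport/1 collect minfree=2G options="-o juju,networking"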

Practical example using Juju 2.0

Suppose that you are encountering problems with the mysql service being used by your MediaWiki service (yes, I know, yet one more MediaWiki example). You would have an environment similar to the following (Juju 2.0) :

$ juju status
Model    Controller         Cloud/Region         Version
default  MyLocalController  localhost/localhost  2.0.0

App        Version  Status   Scale  Charm      Store       Rev  OS      Notes
mediawiki           unknown  1      mediawiki  jujucharms  5    ubuntu
mysql               error    1      mysql      jujucharms  55   ubuntu

Unit          Workload  Agent  Machine  Public address  Ports  Message
mediawiki/0*  unknown   idle   1        10.0.4.48
mysql/0*      error     idle   2        10.0.4.140             hook failed: "start"

Machine  State    DNS         Inst id        Series  AZ
1        started  10.0.4.48   juju-53ced1-1  trusty
2        started  10.0.4.140  juju-53ced1-2  trusty

Relation  Provides  Consumes  Type
cluster   mysql     mysql     peer

Here the mysql start hook failed for some reason that we want to investigate. One solution is to ssh to the unit and try to find out, but you may also be asked by a support representative to provide the data for remote analysis. This is where sosreport becomes useful.

Deploy the sosreport charm

The sosreport charm will collect the information of the unit where the mysql service runs. In our example, the service runs on machine 2, so this is where the sosreport charm needs to be deployed :

$ juju deploy cs:~sosreport-charmers/sosreport --to=2

Once the charm is done deploying, you will have the following juju status :

$ juju status
Model    Controller         Cloud/Region         Version
default  MyLocalController  localhost/localhost  2.0.0

App        Version  Status   Scale  Charm      Store       Rev  OS      Notes
mediawiki           unknown  1      mediawiki  jujucharms  5    ubuntu
mysql               error    1      mysql      jujucharms  55   ubuntu
sosreport           active   1      sosreport  jujucharms  1    ubuntu

Unit          Workload  Agent  Machine  Public address  Ports  Message
mediawiki/0*  unknown   idle   1        10.0.4.48
mysql/0*      error     idle   2        10.0.4.140             hook failed: "start"
sosreport/1*  active    idle   2        10.0.4.140             sosreport is installed

Machine  State    DNS         Inst id        Series  AZ
1        started  10.0.4.48   juju-53ced1-1  trusty
2        started  10.0.4.140  juju-53ced1-2  trusty

Relation  Provides  Consumes  Type
cluster   mysql     mysql     peer

Collect the sosreport information

In order to collect the sosreport tarball, you will issue an action to the sosreport service, telling it to collect the data :

$ juju run-action sosreport/1 collect
Action queued with id: 95d405b3-9b78-468b-840f-d24df5751351

To check the progress of the action you can use the show-action-status command :

$ juju show-action-status 95d405b3-9b78-468b-840f-d24df5751351
actions:
- id: 95d405b3-9b78-468b-840f-d24df5751351
 status: running
 unit: sosreport/1

After completion, the action will show as completed :

$ juju show-action-status 95d405b3-9b78-468b-840f-d24df5751351
actions:
- id: 95d405b3-9b78-468b-840f-d24df5751351
 status: completed
 unit: sosreport/1

Using the show-action-output, you can see the result of the collect action :

$ juju show-action-output 95d405b3-9b78-468b-840f-d24df5751351
results:
 outcome: success
 result-map:
 message: sosreport-juju-53ced1-2-20161221163645.tar.xz and sosreport-juju-53ced1-2-20161221163645.tar.xz.md5
 available in /home/ubuntu
status: completed
timing:
 completed: 2016-12-21 16:37:06 +0000 UTC
 enqueued: 2016-12-21 16:36:40 +0000 UTC
 started: 2016-12-21 16:36:45 +0000 UTC

If we look at the mysql/0 unit $HOME directory, we will see that the tarball is indeed present :

$ juju ssh mysql/0 "ls -l"
total 26149
-rw------- 1 root root 26687372 Dec 21 16:36 sosreport-juju-53ced1-2-20161221163645.tar.xz
-rw-r--r-- 1 root root 33 Dec 21 16:37 sosreport-juju-53ced1-2-20161221163645.tar.xz.md5
Connection to 10.0.4.140 closed.

One thing to be aware of is that, as with any environment using sosreport, the owner of the tarball and md5 file is root. This is to protect access to the unit’s configuration data contained in the tarball. In order to copy the files from the mysql/0 unit, you would first need to change their ownership :

$ juju ssh mysql/0 "sudo chown ubuntu:ubuntu sos*"
Connection to 10.0.4.140 closed.

$ juju ssh mysql/0 "ls -l"
total 26149
-rw------- 1 ubuntu ubuntu 26687372 Dec 21 16:36 sosreport-juju-53ced1-2-20161221163645.tar.xz
-rw-r--r-- 1 ubuntu ubuntu 33 Dec 21 16:37 sosreport-juju-53ced1-2-20161221163645.tar.xz.md5
Connection to 10.0.4.140 closed.

The files can be copied off the unit by using juju scp.

Cleanup obsolete sosreport information

To cleanup the tarballs that have been previously created, use the cleanup action of the charm as outlined here :

$ juju run-action sosreport/1 cleanup
Action queued with id: 3df3dcb8-0850-414e-87d5-746a52ef9b53

$ juju show-action-status 3df3dcb8-0850-414e-87d5-746a52ef9b53
actions:
- id: 3df3dcb8-0850-414e-87d5-746a52ef9b53
 status: completed
 unit: sosreport/1

$ juju show-action-output 3df3dcb8-0850-414e-87d5-746a52ef9b53
results:
 outcome: success
 result-map:
 message: Directory /home/ubuntu cleaned up
status: completed
timing:
 completed: 2016-12-21 16:49:35 +0000 UTC
 enqueued: 2016-12-21 16:49:30 +0000 UTC
 started: 2016-12-21 16:49:35 +0000 UTC

Practical example using Juju 1.25

Deploy the sosreport charm

Given the same environment with mysql & MediaWiki service deployed, we need to deploy the sosreport charm to the unit where the mysql service is deployed :

$ juju deploy cs:~sosreport-charmers/sosreport --to=2

Once deployed, we have an environment that looks like this :

$ juju status --format=tabular
[Environment]
UPGRADE-AVAILABLE
1.25.9

[Services]
NAME       STATUS   EXPOSED  CHARM
mediawiki  unknown  false    cs:trusty/mediawiki-5
mysql      unknown  false    cs:trusty/mysql-55
sosreport  active   false    cs:~sosreport-charmers/trusty/sosreport-2

[Units]
ID           WORKLOAD-STATE  AGENT-STATE  VERSION   MACHINE  PORTS     PUBLIC-ADDRESS   MESSAGE
mediawiki/0  unknown         idle         1.25.6.1  1                  192.168.122.246
mysql/0      unknown         idle         1.25.6.1  2        3306/tcp  192.168.122.6
sosreport/0  active          idle         1.25.6.1  2                  192.168.122.6    sosreport is installed

[Machines]
ID  STATE    VERSION   DNS              INS-ID                   SERIES  HARDWARE
0   started  1.25.6.1  localhost        localhost                zesty
1   started  1.25.6.1  192.168.122.246  caribou-local-machine-1  trusty  arch=amd64
2   started  1.25.6.1  192.168.122.6    caribou-local-machine-2  trusty  arch=amd64

Collect the sosreport information

With the previous version of Juju, the syntax for actions is slightly different. To run the collect action we need to issue :

$ juju action do sosreport/0 collect

We then get the status of our action :

$ juju action status 2176fad0-9b9f-4006-88cb-4adbf6ad3da1
actions:
- id: 2176fad0-9b9f-4006-88cb-4adbf6ad3da1
 status: failed
 unit: sosreport/0

And to our surprise, the action has failed ! To try to identify why it has failed, we can fetch the result of our action :

$ juju action fetch 2176fad0-9b9f-4006-88cb-4adbf6ad3da1
message: 'Not enough space in /home/ubuntu (minfree: 5% )'
results:
 outcome: failure
status: failed
timing:
 completed: 2016-12-22 10:32:15 +0100 CET
 enqueued: 2016-12-22 10:32:09 +0100 CET
 started: 2016-12-22 10:32:14 +0100 CET

So there is not enough space in our unit to safely run sosreport. This gives me the opportunity to talk about one of the parameters of the collect action : minfree. But first, we need to look at how much disk space is available.

$ juju ssh sosreport/0 "df -h"
Warning: Permanently added '192.168.122.6' (ECDSA) to the list of known hosts.
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ubuntu--vg-root 222G 200G 11G 95% /

We see that there are about 11GB available. While that is below the 5% mark, we can change the threshold by using the minfree parameter. Here is its description :

 * minfree : Minimum of free diskspace to run sosreport expressed in percent, 
             Megabytes or Gigabytes. Valid suffixes are % M or G 
            (default 5%)

Since we have 11Gb available, let us set minfree to 5G :

$ juju action do sosreport/0 collect minfree=5G
Action queued with id: b741aa7c-537d-4175-8af9-548b1e0e6f7b

We can now fetch the result of our command, waiting at most 100 seconds for the result:

$ juju action fetch b741aa7c-537d-4175-8af9-548b1e0e6f7b --wait=100

results:
  outcome: success
  result-map:
    message: sosreport-caribou-local-machine-1-20161222153903.tar.xz and sosreport-caribou-local-machine-1-20161222153903.tar.xz.md5
      available in /home/ubuntu
status: completed
timing:
  completed: 2016-12-22 15:40:01 +0100 CET
  enqueued: 2016-12-22 15:38:58 +0100 CET
  started: 2016-12-22 15:39:03 +0100 CET

Clean up obsolete sosreport information

As with the previous example, the cleanup of old tarballs is rather simple:

$ juju action do sosreport/0 cleanup
Action queued with id: edf199cd-2a79-4605-8f00-40ec37aa25a9
$ juju action fetch edf199cd-2a79-4605-8f00-40ec37aa25a9 --wait=600
results:
  outcome: success
  result-map:
    message: Directory /home/ubuntu cleaned up
status: completed
timing:
  completed: 2016-12-22 15:47:14 +0100 CET
  enqueued: 2016-12-22 15:47:12 +0100 CET
  started: 2016-12-22 15:47:13 +0100 CET

Conclusion

This charm makes collecting diagnostic information in a Juju environment much simpler. Don’t hesitate to test it, and please report any bugs you may encounter.

Read more
Alan Griffiths

MirAL 1.0

There’s a new MirAL release (1.0.0) available in ‘Zesty Zapus’ (Ubuntu 17.04) and the so-called “stable phone overlay” ppa for ‘Xenial Xerus’ (Ubuntu 16.04LTS). MirAL is a project aimed at simplifying the development of Mir servers and particularly providing a stable ABI and sensible default behaviors.

Surprisingly, given the project’s original goal, the ABI has changed. This allowed us to address a couple of minor issues, and the timing seemed good as downstreams are already faced with Mir-0.25 moving some necessary APIs from libmircommon to the more ABI-stable libmircore.

The changes in 1.0.0 are:

  1. The default movement of child windows can be overridden by the window management policy;
  2. A new “miral-app” script that runs the miral example servers as an application on an existing desktop (see the example after this list);
  3. Bug fix LP: #1646431 “Examples fail to start under Unity8”;
  4. Bug fix LP: #1646735 “[miral-shell --window-manager tiling] windows are not correctly constrained to tiles”; and
  5. A couple of deprecated APIs have been removed.
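
As a quick illustration of item 2, once the miral examples are installed the script should be launchable straight from a terminal on an existing desktop (a minimal sketch; exact options may vary between versions):

$ miral-app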

Read more
Colin Ian King

Another year passes and once more I have another seasonal obfuscated C program. I was short on free time this year, so I could not obfuscate the code as heavily as usual, which is a shame. However, this year I worked a bit harder at animating the output, so hopefully that will make up for the lack of obfuscation.

The source is available on GitHub to eyeball. I've had criticism in previous years that it is hard to figure out the structure of my obfuscated code, so this year I made sure that the if statements were easier to see, and hence the flow of the code easier to understand.

This year I've snapped up all my seasonal obfuscated C programs and put them into the snap store as the christmas-obfuscated-c snap.
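
Assuming snapd is set up on your machine, grabbing them should be as simple as:

$ sudo snap install christmas-obfuscated-c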

Below is a video of the program running; it is all ASCII art and one can re-size the window while it is running.


Unlike previous years, I have the pre-obfuscated version of the code available in the git repository at commit c98376187908b2cf8c4d007445b023db67c68691, so hopefully you can see the original hacky C source.
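
For instance, after cloning the repository, that commit and the pre-obfuscation source can be inspected with:

$ git show c98376187908b2cf8c4d007445b023db67c68691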

Have a great Christmas and a most excellent New Year. 

Read more
Daniel Holbach

Taking a break

It’s a bit strange to write this blog post in the same week that Martin Pitt is announcing his departure from Canonical. I remember many moments of Martin’s post very vividly; he was one of the first people I ran into on my flight to Sydney for Ubuntu Down Under in 2005.

Fast forward to today: 2016 was a year full of change, and my personal life was no exception. Over the last few weeks I realised more and more that I need a long break from everything. I therefore decided to move on from Canonical, take some time off, wander the world, recharge my batteries, come back and surprise you all with what’s next.

I’m very much leaving on good terms, and I imagine I won’t be too far away (I’d miss all you great people who became good friends way too much). Having been with Canonical for 11 years and in the Ubuntu community for 12, it has been an incredibly hard decision to make. Still, it’s necessary now, and it’ll be good to open myself up again to new challenges, new ways of working and new sets of problems.

It was a great privilege to work with you all and be able to add my humble contribution to this crazy undertaking called Ubuntu. I’m extremely grateful for the great moments with you all, the opportunities to learn, your guidance, the friends I made around the world, the laughs, the discussions, the excellent work we did together. This was a very important time of my life.

In the coming weeks I will be without internet. I haven’t quite decided yet which part of the world I’m going to go to, but maybe I’ll post a picture or two somewhere.

Read more

pitti

I’ve had the pleasure of working on Ubuntu for 12½ years now, and during that time used up an entire Latin alphabet of release names! (Well, A and C are still free, but we used H and W twice, so on average… ☺) This has for sure been the most exciting time in my life, with tons of good memories! A few highlights:

  • Getting some spam mail from a South African multi-millionaire about a GREAT OPPORTUNITY
  • Joining #warthogs (my first IRC experience) and collecting my first bounties for “derooting” Debian (i. e. dropping privileges from root daemons and suid binaries)
  • Getting invited to Oxford to meet a bunch of people whose existence I had absolutely zero proof of, and tossing myself into debt to buy a laptop for the occasion
  • Once being there, looking into my fellows’ stern and serious faces and being amazed by their professionalism:
  • The excitement and hype around going public with Warty Warthogs Beta
  • Meeting lots of good folks at many UDSes, with great ideas and lots of enthusiasm, and sometimes “Bags of Death”. Group photo from Ubuntu Down Under:
  • Organizing UDSes without Launchpad or other electronic help:
  • Playing “Wish you were Here” with Bill, Tony, Jono, and the other All Stars
  • Seeing bug #1 getting closed, and watching the transformation of Microsoft from being TEH EVIL of the FOSS world to our business partner
  • Getting to know lots of great places around the world. My favourite: luring a few colleagues for a “short walk through San Francisco” but ruining their feet with a 9 hour hike throughout the city, Golden Gate Park and dipping toes into the Pacific.
  • Seeing Ubuntu grow from that crazy idea into one of the main pillars of the free software world
  • ITZ GTK BUG!
  • Getting really excited when Milbank and the Canonical office appeared in the Harry Potter movie
  • Moving between and getting to know many different teams from the inside (security, desktop, OEM, QA, CI, Foundations, Release, SRU, Tech Board, …) to appreciate and understand the value of different perspectives
  • Breaking burning wood boards, making great and silly videos, and team games in the forest (that was La Mola) at various All Hands

But all good things must come to an end. After tossing and turning this idea for a long time, I will leave Canonical at the end of the year. One major reason for leaving is that after such a long time I am simply in need of a “reboot”: I’ve piled up so many little and large things that I can hardly spend one day developing something new without hopelessly falling behind in responding to pings about fixing low-level stuff, debugging weird things, handholding infrastructure, explaining how things (should) work, doing urgent archive/SRU/maintenance tasks, and whatnot (“it’s related to boot, it probably has systemd in the name, let’s hand it to pitti”). I’ve repeatedly tried to rid myself of some of those, or at least find someone else to share the load with, but it’s too sticky :-/ So I spent the last few weeks tying up some loose ends and handing over some of my main responsibilities.

Today is my last day at work, which I spend mostly on unsubscribing from package bugs, leaving Launchpad teams, and catching up with emails and bugs, i. e. “clean up my office desk”. From tomorrow on I’ll enjoy some longer EOY holidays, before starting my new job in January.

I’ve been offered the chance to work on Cockpit, on the product itself and its ties into the Linux plumbing stack (storaged/udisks, systemd, and the like). So from next year on I’ll change my Hat to become Red instead of orange. I’m curious to see for myself what the other side of the fence looks like!

This won’t be a personal good-bye. I will continue to see a lot of you Ubuntu folks at FOSDEMs, DebConfs, Plumbers, or on IRC, but certainly much less often, and that’s the part that I regret most: many of you have become close friends, and Canonical feels much more like a family than a company. So, thanks to all of you for being on this journey with me, and of course a special and big Thank You to Mark Shuttleworth for coming up with this great Ubuntu vision and making all of this possible!

Read more