Canonical Voices

UbuntuTouch

In today's tutorial, we show how to use the Azure IoT Hub to develop applications on Ubuntu Core. Azure IoT Hub provides a framework for managing IoT devices, and its preconfigured solutions can present the data they report. In this article, we introduce how to connect a device to the remote monitoring preconfigured solution.


1) Provision the remote monitoring preconfigured solution


We can follow Microsoft's official documentation:


to create our preconfigured solution, ending up with a configuration like the following:



Here my solution is named "sensors".







If our IoT device uploads data, we can see it in the "Telemetry history" panel on the right, usually rendered as curves.
While creating the device, we need to record the values shown in the screen below, for later use in our code:



We can also open azure.cn to view all of the resources we have created:









The "Connection string – primary key" shown here will be very useful for our later programming, so take special note of it.


2) Build a snap application in C


In this section, we show how to develop a client in C and package it as a snap application. This application will communicate with the remote monitoring preconfigured solution we created in the previous section. We develop the snap on an Ubuntu 16.04 desktop. If you are not yet familiar with setting up a snap development environment, please refer to my earlier article to set up your own. As described there, we first install the necessary packages:

$ sudo apt-get install cmake gcc g++

Add the AzureIoT repository to your machine:
$ sudo add-apt-repository ppa:aziotsdklinux/ppa-azureiot
$ sudo apt-get update
Install the azure-iot-sdk-c-dev package:
$ sudo apt-get install -y azure-iot-sdk-c-dev
With that, the required packages are installed. For various reasons, some errors appear when compiling our sample, so we must make the following manual change:
/usr/include/azureiot/inc$ sudo mv azure_c_shared_utility ..
With this change, the compiler will no longer fail to find header files during the build below.

Let's first look at a project I have already prepared:

snapcraft.yaml

name: remote-monitor 
version: '0.1' 
summary: This is a remote-monitor snap for azure
description: |
  This is a remote-monitor sample snap for azure

grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  remote-monitor:
    command: bin/sample_app
    plugs: [network]

parts:
  remote:
    plugin: cmake
    source: ./src

This is a cmake project. Since the package name and the application name are the same, we can run the application by simply typing remote-monitor. Before making any changes, open the remote_monitoring.c file and note the following code:

static const char* deviceId = "mydevice";
static const char* deviceKey = "[Device Key]";
static const char* hubName = "sensorsf8f61";
static const char* hubSuffix = "azure-devices.cn";

These values are explained as follows:

static const char* deviceId = "[Device Id]";
static const char* deviceKey = "[Device Key]";
static const char* hubName = "[IoTHub Name]";
static const char* hubSuffix = "[IoTHub Suffix, i.e. azure-devices.net]";

We need to replace these values according to our own account. In a real application, we can modify the following code in remote_monitoring.c:

srand(time(NULL));  /* seed the RNG once, before the loop, not on every iteration */
while (1)
{
	unsigned char* buffer;
	size_t bufferSize;

	int r = rand() % 50;
	int r1 = rand() % 55;
	int r2 = rand() % 50;
	printf("r: %d, r1: %d, r2: %d\n", r, r1, r2);
	thermostat->Temperature = r;
	thermostat->ExternalTemperature = r1;
	thermostat->Humidity = r2;

	(void)printf("Sending sensor value Temperature = %d, Humidity = %d\r\n", thermostat->Temperature, thermostat->Humidity);

	if (SERIALIZE(&buffer, &bufferSize, thermostat->DeviceId, thermostat->Temperature, thermostat->Humidity, thermostat->ExternalTemperature) != CODEFIRST_OK)
	{
		(void)printf("Failed sending sensor value\r\n");
	}
 ...
}
to upload whatever data we need. Here we have simply written some random values.

Note that the "deviceKey" here is the "Device Key" shown in the screenshot in the previous section.
In a terminal, run the following command:
$ snapcraft
This produces the snap package for our project. We can install it with:

liuxg@liuxg:~/snappy/desktop/azure/remote-monitor$ sudo snap install remote-monitor_0.1_amd64.snap --dangerous
[sudo] password for liuxg: 
remote-monitor 0.1 installed

liuxg@liuxg:~$ snap list
Name            Version  Rev  Developer  Notes
azure           0.1      x1              -
core            16.04.1  714  canonical  -
hello           1.0      x1              devmode
hello-world     6.3      27   canonical  -
hello-xiaoguo   1.0      x1              -
remote-monitor  0.1      x2    
Clearly, remote-monitor has been installed successfully. Now run the following in a terminal:
liuxg@liuxg:~$ remote-monitor 
IoTHubClient accepted the message for delivery
r: 30, r1: 37, r2: 4
Sending sensor value Temperature = 30, Humidity = 4
IoTHubClient accepted the message for delivery
r: 45, r1: 23, r2: 35
Sending sensor value Temperature = 45, Humidity = 35
IoTHubClient accepted the message for delivery
r: 16, r1: 39, r2: 25
Sending sensor value Temperature = 16, Humidity = 25
IoTHubClient accepted the message for delivery
r: 16, r1: 33, r2: 14
Sending sensor value Temperature = 16, Humidity = 14
IoTHubClient accepted the message for delivery
r: 20, r1: 29, r2: 32

Our client application is continuously sending data to the Azure IoT Hub. We can view the received data at https://www.azureiotsuite.cn/.



Below we can see how the maximum and minimum values of the device data change.
If you want to run this application on an ARM device such as a Raspberry Pi, please refer to my article "How to install Ubuntu Core for Raspberry Pi and compile in the snap system". The installation is the same as described here; developers are encouraged to try it themselves.


3) Build a snap application in Node.js


In this section, we show how to develop a snap application with Node.js. You can refer to the article "Get started with Azure IoT Hub for Node.js". As introduced there, we are most interested in the third console application, SimulatedDevice.js.

SimulatedDevice.js

#!/usr/bin/env node

var clientFromConnectionString = require('azure-iot-device-amqp').clientFromConnectionString;
var Message = require('azure-iot-device').Message;

var connectionString = 'HostName=sensorsf8f61.azure-devices.cn;DeviceId=mydevice;SharedAccessKey={Device Key}';

var client = clientFromConnectionString(connectionString);

function printResultFor(op) {
  return function printResult(err, res) {
    if (err) console.log(op + ' error: ' + err.toString());
    if (res) console.log(op + ' status: ' + res.constructor.name);
  };
}

var connectCallback = function (err) {
  if (err) {
    console.log('Could not connect: ' + err.amqpError);
  } else {
    console.log('Client connected');

    // Create a message and send it to the IoT Hub every second
    setInterval(function(){
        var temp = 10 + (Math.random() * 4);
        var windSpeed = 10 + (Math.random() * 4);
        var data = JSON.stringify({ deviceId: 'mydevice', temp: temp, windSpeed: windSpeed});
        var message = new Message(data);
        console.log("Sending message: " + message.getData());
        client.sendEvent(message, printResultFor('send'));

    }, 5000);
  }
};

client.open(connectCallback);

Note that in the code above we need to manually modify the connectionString:
var connectionString = 'HostName=sensorsf8f61.azure-devices.cn;DeviceId=mydevice;SharedAccessKey={yourdevicekey}';
As introduced in that article, it is defined as:

var connectionString = 'HostName={youriothostname};DeviceId=myFirstNodeDevice;SharedAccessKey={yourdevicekey}';
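The connection string is simply assembled from the hub host name, the device id, and the device key recorded in section 1. A small sketch of how the pieces fit together (the key is left as a placeholder):

```shell
# Values from section 1 of this tutorial; the key is a placeholder, not a
# real credential.
HUB_HOST="sensorsf8f61.azure-devices.cn"
DEVICE_ID="mydevice"
DEVICE_KEY="{yourdevicekey}"

# Assemble the semicolon-separated connection string.
CONNECTION_STRING="HostName=${HUB_HOST};DeviceId=${DEVICE_ID};SharedAccessKey=${DEVICE_KEY}"
echo "$CONNECTION_STRING"
```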

We need to modify the string above according to the parameters we set up in the first section. You can refer to my project:

snapcraft.yaml

name: azure 
version: '0.1' # just for humans, typically '1.2+git' or '1.3.2'
summary: This is an azure snap app
description: |
  This is an azure client snap to send a message

grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  azure:
    command: bin/send
    plugs: [network]

parts:
  node:
    plugin: nodejs
    source: .

Similarly, we can run the snapcraft command to produce the package, and then install it:

liuxg@liuxg:~/snappy/desktop/azurenode-snap$ snap list
Name            Version  Rev  Developer  Notes
azure           0.1      x2              -
core            16.04.1  714  canonical  -
hello           1.0      x1              devmode
hello-world     6.3      27   canonical  -
hello-xiaoguo   1.0      x1              -
remote-monitor  0.1      x2         

We can run the azure command directly:

liuxg@liuxg:~/snappy/desktop/azurenode-snap$ azure
Client connected
Sending message: {"deviceId":"mydevice","temp":11.826184131205082,"windSpeed":11.893792165443301}
send status: MessageEnqueued
Sending message: {"deviceId":"mydevice","temp":10.594819721765816,"windSpeed":10.54138664342463}
send status: MessageEnqueued
Sending message: {"deviceId":"mydevice","temp":11.27814894542098,"windSpeed":10.962828870862722}
send status: MessageEnqueued
Sending message: {"deviceId":"mydevice","temp":13.068702490068972,"windSpeed":10.28670579008758}
send status: MessageEnqueued
Sending message: {"deviceId":"mydevice","temp":11.723079251125455,"windSpeed":12.173830625601113}
send status: MessageEnqueued
Sending message: {"deviceId":"mydevice","temp":12.595101269893348,"windSpeed":12.120747512206435}
send status: MessageEnqueued
Sending message: {"deviceId":"mydevice","temp":11.431507185101509,"windSpeed":11.76255036983639}
send status: MessageEnqueued
Sending message: {"deviceId":"mydevice","temp":12.488932724110782,"windSpeed":13.200456796213984}
send status: MessageEnqueued

We can view the received data at https://www.azureiotsuite.cn/:



Above we can see the curves for Temp and Wind Speed. Likewise, if you want to run this application on an ARM device such as a Raspberry Pi, please refer to my article "How to install Ubuntu Core for Raspberry Pi and compile in the snap system". Developers are encouraged to try it themselves.


Posted by UbuntuTouch on 2017/1/19 16:59:05 · Original link
Reads: 317 · Comments: 0

Read more
UbuntuTouch

When designing an application, we usually pick a temporary directory to store our files, such as the tmp directory on Linux. So in a snap, which directory should we use for temporary files? The answer is XDG_RUNTIME_DIR, although this also depends on the developer's own choice.
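Outside a snap, XDG_RUNTIME_DIR typically points at /run/user/&lt;uid&gt;; inside a snap named "hello" it becomes /run/user/&lt;uid&gt;/snap.hello, as the output later in this post shows. A sketch of how the confined path is derived (with a fallback so it also works when the variable is unset):

```shell
# Base runtime dir: XDG_RUNTIME_DIR if set, else the conventional per-user path.
BASE="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}"

# Inside a snap, snapd appends a "snap.<name>" component to the base path.
SNAP_NAME="hello"
SNAP_RUNTIME="${BASE%/snap.*}/snap.${SNAP_NAME}"
echo "$SNAP_RUNTIME"
```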


Let's first look at a sample project I have prepared:

https://github.com/liu-xiao-guo/helloworld-fifo

Its snapcraft.yaml file is as follows:


name: hello
version: "1.0"
summary: The 'hello-world' of snaps
description: |
    This is a simple snap example that includes a few interesting binaries
    to demonstrate snaps and their confinement.
    * hello-world.env  - dump the env of commands run inside app sandbox
    * hello-world.evil - show how snappy sandboxes binaries
    * hello-world.sh   - enter interactive shell that runs in app sandbox
    * hello-world      - simply output text
grade: stable
confinement: strict
type: app  #it can be gadget or framework
icon: icon.png

apps:
 fifo:
   command: bin/fifo
 env:
   command: bin/env
 evil:
   command: bin/evil
 sh:
   command: bin/sh
 hello-world:
   command: bin/echo
 createfile:
   command: bin/createfile
 createfiletohome:
   command: bin/createfiletohome
 writetocommon:
   command: bin/writetocommon

parts:
 hello:
  plugin: dump
  source: .

Here we define an application called fifo. Its script is as follows:

#!/bin/bash

echo "Going to make a directory at: $XDG_RUNTIME_DIR"
mkdir -p $XDG_RUNTIME_DIR

echo "Create a file at the location..."
cd $XDG_RUNTIME_DIR
pwd
touch thisfile

if [ $? == 0 ]; then
	echo "The file is successfully created!"
else
	echo "The file is not successfully created!"
fi

It first creates a directory, then creates a file inside it. The output is as follows:

liuxg@liuxg:~$ hello.fifo 
Going to make a directory at: /run/user/1000/snap.hello
Create a file at the location...
/run/user/1000/snap.hello
The file is successfully created!

Clearly, this application runs without any permission problems. The location is fully readable and writable, and can be used by our application for FIFO operations.
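To make the FIFO remark concrete, here is a standalone sketch (not part of the snap above) that creates an actual named pipe in the runtime directory and passes one line through it; a unique subdirectory keeps it rerunnable, and we fall back to /tmp when XDG_RUNTIME_DIR is unset:

```shell
# Unique scratch dir under the runtime dir (or /tmp as a fallback).
DEMO_DIR=$(mktemp -d "${XDG_RUNTIME_DIR:-/tmp}/fifo-demo.XXXXXX")

# Create the named pipe.
mkfifo "$DEMO_DIR/myfifo"

# Writer runs in the background; it blocks until a reader opens the pipe.
echo "ping" > "$DEMO_DIR/myfifo" &

# Read one line back from the pipe.
read REPLY_LINE < "$DEMO_DIR/myfifo"
echo "$REPLY_LINE"

# Clean up.
rm -r "$DEMO_DIR"
```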

We can also run the env application in the snap to display all of the environment variables:

liuxg@liuxg:~$ hello.env | grep XDG_RUNTIME_DIR
XDG_RUNTIME_DIR=/run/user/1000/snap.hello


Of course, we can also use the /tmp directory as the temporary file store. This directory is unique to each snap application, i.e. each application has its own private tmp directory, but we can still access it via the /tmp path. On a desktop machine, this location can be found under /tmp; the directory looks something like /tmp/snap.1000_snap.hello.fifo_5BpMiB/tmp.

We can verify this with the following code:

fifo

#!/bin/bash

echo "Going to make a directory at: $XDG_RUNTIME_DIR"
mkdir -p $XDG_RUNTIME_DIR

echo "Create a file at the location..."
cd $XDG_RUNTIME_DIR
pwd
touch thisfile

if [ $? == 0 ]; then
	echo "The file is successfully created!"
else
	echo "The file is not successfully created!"
fi

cd /tmp
pwd
echo "Haha" > test.txt

if [ $? == 0 ]; then
	echo "The test.txt file is successfully created!"
else
	echo "The test.txt file is not successfully created!"
fi


Posted by UbuntuTouch on 2017/2/4 11:37:04 · Original link
Reads: 293 · Comments: 0

Read more
UbuntuTouch

Based on the project by developer 程路: https://github.com/dawndiy/electronic-wechat-snap, I made some small modifications and finally packaged electronic-wechat as a snap application.


1) Option 1


In this option, we download a stable, pre-built release directly and package it. The source of this project:

snapcraft.yaml


name: electronic-wechat
version: '1.4.0'
summary: A better WeChat on macOS and Linux. Built with Electron.
description: |
  Electronic WeChat is an unofficial WeChat client. A better WeChat on
  macOS and Linux. Built with Electron.
grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  electronic-wechat:
    command: desktop-launch $SNAP/wechat.wrapper
    plugs:
      - unity7
      - opengl
      - network
      - pulseaudio
      - home
      - browser-support
      - gsettings
      - x11

parts:
  electronic-wechat:
    plugin: dump
    source: https://github.com/geeeeeeeeek/electronic-wechat/releases/download/v1.4.0/linux-x64.tar.gz
    stage-packages:
      - libnss3
      - fontconfig-config
      - gnome-themes-standard
      - fonts-wqy-microhei
      - libasound2-data
      - fcitx-frontend-gtk2
      - overlay-scrollbar-gtk2
      - libatk-adaptor
      - libcanberra-gtk-module
    filesets:
      no-easy-install-files:
        - -usr/sbin/update-icon-caches
        - -README.md
    stage:
      - $no-easy-install-files
    prime:
      - $no-easy-install-files

  wechat-copy:
    plugin: dump
    source: .
    filesets:
      wechat.wrapper: wechat.wrapper
    after: 
      - electronic-wechat
      - desktop-gtk2

Here we download the pre-built stable release directly from https://github.com/geeeeeeeeek/electronic-wechat/releases/download/v1.4.0/linux-x64.tar.gz and package it. The dump plugin helps us with the installation.

2) Option 2


We can also build from the latest source code and package that. The source of this project is at:


snapcraft.yaml


name: electronic-wechat
version: '1.4.0'
summary: A better WeChat on macOS and Linux. Built with Electron.
description: |
  Electronic WeChat is an unofficial WeChat client. A better WeChat on
  macOS and Linux. Built with Electron.
grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  electronic-wechat:
    command: desktop-launch $SNAP/wechat.wrapper
    plugs:
      - unity7
      - opengl
      - network
      - pulseaudio
      - home
      - browser-support
      - gsettings
      - x11

parts:
  electronic-wechat:
    plugin: nodejs
    source-type: git
    source: https://github.com/geeeeeeeeek/electronic-wechat/
    source-branch: production
    npm-run:
      - build:linux
    install: cp -r dist $SNAPCRAFT_PART_INSTALL
    stage-packages:
      - libnss3
      - fontconfig-config
      - gnome-themes-standard
      - fonts-wqy-microhei
      - libasound2-data
      - fcitx-frontend-gtk2
      - overlay-scrollbar-gtk2
      - libatk-adaptor
      - libcanberra-gtk-module
    filesets:
      no-easy-install-files:
        - -usr/sbin/update-icon-caches
        - -README.md
    stage:
      - $no-easy-install-files
    prime:
      - $no-easy-install-files

  wechat-copy:
    plugin: dump
    source: .
    filesets:
      wechat.wrapper: wechat.wrapper
    after: 
      - electronic-wechat
      - desktop-gtk2

The latest code can be found at https://github.com/geeeeeeeeek/electronic-wechat/. We use the nodejs plugin to do the packaging. Here, we use snapcraft scriptlets to override the install step of the nodejs plugin:

   install: cp -r dist $SNAPCRAFT_PART_INSTALL

The npm-run entry above can even be removed and implemented with a snapcraft scriptlet instead:

build: npm run build:linux

In the project root, run the following command:

$ snapcraft

It will generate the .snap file we need, which we can then install on our system:

$ sudo snap install electronic-wechat_1.4.0_amd64.snap --dangerous

liuxg@liuxg:~$ snap list
Name                 Version  Rev  Developer  Notes
core                 16.04.1  888  canonical  -
electronic-wechat    1.4.0    x1              -
hello-world          6.3      27   canonical  -
hello-xiaoguo        1.0      x1              -
snappy-debug         0.28     26   canonical  -
ubuntu-app-platform  1        22   canonical  -

We can see that electronic-wechat has been installed successfully on my machine. Run the application:








Posted by UbuntuTouch on 2017/2/3 16:12:02 · Original link
Reads: 302 · Comments: 0

Read more
UbuntuTouch

In the earlier article "How to package a qmake Ubuntu phone application as a snap application", we saw how to package a qmake project as a snap. In today's tutorial, we use Qt Creator to create a project and finally package the application as a snap, getting a feel for snapcraft scriptlets along the way.


1) Create a Qt hello world project


First, we open Qt Creator:







This creates the simplest possible hello world application.


2) Create the snapcraft.yaml file


In the project root, run the following command:
$ snapcraft init
The command above generates a directory called snap in the current directory (as of snapcraft 2.26; earlier versions did not create this snap directory).

liuxg@liuxg:~/snappy/desktop/qtapp$ tree -L 3
.
├── main.cpp
├── mainwindow.cpp
├── mainwindow.h
├── mainwindow.ui
├── qtapp.pro
├── qtapp.pro.user
├── README.md
└── snap
    └── snapcraft.yaml

The full file layout is shown above. We now edit the snapcraft.yaml file:

snapcraft.yaml

name: qthello 
version: '0.1' 
summary: a demo for qt hello app
description: |
  This is a qt app demo

grade: stable 
confinement: strict 

apps:
  qthello:
    command: desktop-launch $SNAP/opt/myapp/qtapp
    plugs: [home, unity7, x11]

parts:
  project:
    plugin: qmake
    source: .
    qt-version: qt5
    project-files: [qtapp.pro]
    install: |
      install -d $SNAPCRAFT_PART_INSTALL/opt/myapp
      install qtapp $SNAPCRAFT_PART_INSTALL/opt/myapp/qtapp

  integration:
    plugin: nil
    stage-packages:
     - libc6
     - libstdc++6
     - libc-bin
    after: [desktop-qt5]

Here, the part we must call out is:

    install: |
      install -d $SNAPCRAFT_PART_INSTALL/opt/myapp
      install qtapp $SNAPCRAFT_PART_INSTALL/opt/myapp/qtapp

Since the original qtapp.pro file contains no rules describing how to install the qtapp binary, we use the install scriptlet above to install the application. According to the description in Scriptlets:

“install”

The install scriptlet is triggered after the build step of a plugin.

This script is executed automatically after the build step: it first creates a directory called myapp, then installs the qtapp binary from the build directory into it. This produces the final snap package.
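What the scriptlet does can be sketched standalone with a dummy binary: `install -d` creates the target directory, and `install` copies the file with execute permission. (The paths here are demo stand-ins; in snapcraft, $SNAPCRAFT_PART_INSTALL points at the part's install area.)

```shell
# Stand-in for the part's install area.
SNAPCRAFT_PART_INSTALL=$(mktemp -d)

# A dummy "built" binary standing in for qtapp.
printf '#!/bin/sh\necho qtapp\n' > qtapp-demo

# Mirror the two lines of the install scriptlet.
install -d "$SNAPCRAFT_PART_INSTALL/opt/myapp"
install qtapp-demo "$SNAPCRAFT_PART_INSTALL/opt/myapp/qtapp"

# The installed copy is executable.
OUT=$("$SNAPCRAFT_PART_INSTALL/opt/myapp/qtapp")
echo "$OUT"

# Clean up.
rm -r "$SNAPCRAFT_PART_INSTALL" qtapp-demo
```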

We install the qthello application and run it:






In this snap, all of the Qt library dependencies are bundled into the package, so the final snap is quite large. If you want to reduce the size of this Qt application, refer to the article "Using the platform interface provided by ubuntu-app-platform to reduce the size of a Qt application".





Posted by UbuntuTouch on 2017/2/3 14:25:12 · Original link
Reads: 209 · Comments: 0

Read more
Leo Arias

Call for testing: MySQL

I promised that more interesting things were going to be available soon for testing in Ubuntu. There's plenty coming, but today here is one of the greatest:

$ sudo snap install mysql --channel=8.0/beta

screenshot of mysql snap running

Lars Tangvald and other people at MySQL have been working on this snap for some time, and now they are ready to give it to the community for crowd testing. If you have some minutes, please give them a hand.

We have a testing guide to help you get started.

Remember that this should run in trusty, xenial, yakkety, zesty and in all flavours of Ubuntu. It would be great to get a diverse pool of platforms and test it everywhere.

Here we are introducing a new concept: tracks. Notice that we are using --channel=8.0/beta, instead of only --beta as we used to do before. That's because mysql has two different major versions currently active. In order to try the other one:

$ sudo snap install mysql --channel=5.7/beta
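The channel argument above decomposes as "&lt;track&gt;/&lt;risk&gt;"; when no track is given, snapd assumes the "latest" track. A small sketch of that naming, using plain string splitting:

```shell
CHANNEL="8.0/beta"

# Split "<track>/<risk>"; a bare risk level implies the "latest" track.
case "$CHANNEL" in
  */*) TRACK="${CHANNEL%%/*}"; RISK="${CHANNEL#*/}" ;;
  *)   TRACK="latest";         RISK="$CHANNEL" ;;
esac
echo "track=$TRACK risk=$RISK"
```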

Please report back your results. Any kind of feedback will be highly appreciated, and if you have doubts or need a hand to get started, I'm hanging around in Rocket Chat.

Read more
Stéphane Graber

This is the twelfth and last blog post in this series about LXD 2.0.

LXD logo

Introduction

This is finally it! The last blog post in this series of 12 that started almost a year ago.

If you followed the series from the beginning, you should have been using LXD for quite a bit of time now and be pretty familiar with its day to day operation and capabilities.

But what if something goes wrong? What can you do to track down the problem yourself? And if you can’t, what information should you record so that upstream can track down the problem?

And what if you want to fix issues yourself or help improve LXD by implementing the features you need? How do you build, test and contribute to the LXD code base?

Debugging LXD & filing bug reports

LXD log files

/var/log/lxd/lxd.log

This is the main LXD log file. To avoid filling up your disk very quickly, only log messages marked as INFO, WARNING or ERROR are recorded there by default. You can change that behavior by passing "--debug" to the LXD daemon.

/var/log/lxd/CONTAINER/lxc.conf

Whenever you start a container, this file is updated with the configuration that’s passed to LXC.
This shows exactly how the container will be configured, including all its devices, bind-mounts, …

/var/log/lxd/CONTAINER/forkexec.log

This file will contain errors coming from LXC when failing to execute a command.
It’s extremely rare for anything to end up in there as LXD usually handles errors much before that.

/var/log/lxd/CONTAINER/forkstart.log

This file will contain errors coming from LXC when starting the container.
It’s extremely rare for anything to end up in there as LXD usually handles errors much before that.

CRIU logs (for live migration)

If you are using CRIU for container live migration or live snapshotting there are additional log files recorded every time a CRIU dump is generated or a dump is restored.

Those logs can also be found in /var/log/lxd/CONTAINER/ and are timestamped so that you can find whichever matches your most recent attempt. They will contain a detailed record of everything that’s dumped and restored by CRIU and are far better for understanding a failure than the typical migration/snapshot error message.

LXD debug messages

As mentioned above, you can switch the daemon to doing debug logging with the --debug option.
An alternative to that is to connect to the daemon’s event interface which will show you all log entries, regardless of the configured log level (even works remotely).

An example for “lxc init ubuntu:16.04 xen” would be:
lxd.log:

INFO[02-24|18:14:09] Starting container action=start created=2017-02-24T23:11:45+0000 ephemeral=false name=xen stateful=false used=1970-01-01T00:00:00+0000
INFO[02-24|18:14:10] Started container action=start created=2017-02-24T23:11:45+0000 ephemeral=false name=xen stateful=false used=1970-01-01T00:00:00+0000

lxc monitor --type=logging:

metadata:
  context: {}
  level: dbug
  message: 'New events listener: 9b725741-ffe7-4bfc-8d3e-fe620fc6e00a'
timestamp: 2017-02-24T18:14:01.025989062-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.341283344-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StorageCoreInit
timestamp: 2017-02-24T18:14:09.341536477-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/containers/xen
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.347709394-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: PUT
    url: /1.0/containers/xen/state
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.357046302-05:00
type: logging


metadata:
  context: {}
  level: dbug
  message: 'New task operation: 2e2cf904-c4c4-4693-881f-57897d602ad3'
timestamp: 2017-02-24T18:14:09.358387853-05:00
type: logging


metadata:
  context: {}
  level: dbug
  message: 'Started task operation: 2e2cf904-c4c4-4693-881f-57897d602ad3'
timestamp: 2017-02-24T18:14:09.358578599-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/operations/2e2cf904-c4c4-4693-881f-57897d602ad3/wait
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.366213106-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StoragePoolInit
timestamp: 2017-02-24T18:14:09.369636451-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StoragePoolCheck
timestamp: 2017-02-24T18:14:09.369771164-05:00
type: logging


metadata:
  context:
    container: xen
    driver: storage/zfs
  level: dbug
  message: ContainerMount
timestamp: 2017-02-24T18:14:09.424696767-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
    name: xen
  level: dbug
  message: ContainerUmount
timestamp: 2017-02-24T18:14:09.432723719-05:00
type: logging


metadata:
  context:
    container: xen
    driver: storage/zfs
  level: dbug
  message: ContainerMount
timestamp: 2017-02-24T18:14:09.721067917-05:00
type: logging


metadata:
  context:
    action: start
    created: 2017-02-24 23:11:45 +0000 UTC
    ephemeral: "false"
    name: xen
    stateful: "false"
    used: 1970-01-01 00:00:00 +0000 UTC
  level: info
  message: Starting container
timestamp: 2017-02-24T18:14:09.749808518-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.792551375-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StorageCoreInit
timestamp: 2017-02-24T18:14:09.792961032-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /internal/containers/23/onstart
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.800803501-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StoragePoolInit
timestamp: 2017-02-24T18:14:09.803190248-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StoragePoolCheck
timestamp: 2017-02-24T18:14:09.803251188-05:00
type: logging


metadata:
  context:
    container: xen
    driver: storage/zfs
  level: dbug
  message: ContainerMount
timestamp: 2017-02-24T18:14:09.803306055-05:00
type: logging


metadata:
  context: {}
  level: dbug
  message: 'Scheduler: container xen started: re-balancing'
timestamp: 2017-02-24T18:14:09.965080432-05:00
type: logging


metadata:
  context:
    action: start
    created: 2017-02-24 23:11:45 +0000 UTC
    ephemeral: "false"
    name: xen
    stateful: "false"
    used: 1970-01-01 00:00:00 +0000 UTC
  level: info
  message: Started container
timestamp: 2017-02-24T18:14:10.162965059-05:00
type: logging


metadata:
  context: {}
  level: dbug
  message: 'Success for task operation: 2e2cf904-c4c4-4693-881f-57897d602ad3'
timestamp: 2017-02-24T18:14:10.163072893-05:00
type: logging

The format from "lxc monitor" is a bit different from what you'd get in a log file, where each entry is condensed into a single line, but more importantly you see all those "level: dbug" entries.
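Since the monitor stream is structured, it is easy to post-process with standard tools. A sketch that pulls just the "message" fields out of a saved stream (the heredoc stands in for real `lxc monitor --type=logging` output, using two entries from the transcript above):

```shell
# Extract the value of every "message:" field from YAML-ish monitor output.
MESSAGES=$(awk '/^  message:/ { sub(/^  message: /, ""); print }' <<'EOF'
metadata:
  context: {}
  level: dbug
  message: 'New events listener: 9b725741-ffe7-4bfc-8d3e-fe620fc6e00a'
timestamp: 2017-02-24T18:14:01.025989062-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StorageCoreInit
timestamp: 2017-02-24T18:14:09.341536477-05:00
type: logging
EOF
)
echo "$MESSAGES"
```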

Where to report bugs

LXD bugs

The best place to report LXD bugs is upstream at https://github.com/lxc/lxd/issues.
Make sure to fill in everything in the bug reporting template as that information saves us a lot of back and forth to reproduce your environment.

Ubuntu bugs

If you find a problem with the Ubuntu package itself (failing to install, upgrade or remove), or run into issues with the LXD init scripts, the best place to report such bugs is on Launchpad.

On an Ubuntu system, you can do so with: ubuntu-bug lxd
This will automatically include a number of log files and package information for us to look at.

CRIU bugs

Bugs that are related to CRIU which you can spot by the usually pretty visible CRIU error output should be reported on Launchpad with: ubuntu-bug criu

Do note that the use of CRIU through LXD is considered to be a beta feature and unless you are willing to pay for support through a support contract with Canonical, it may take a while before we get to look at your bug report.

Contributing to LXD

LXD is written in Go and hosted on Github.
We welcome external contributions of any size. There is no CLA or similar legal agreement to sign to contribute to LXD, just the usual Developer Certificate of Origin (Signed-off-by: line).

We have a number of potential features listed on our issue tracker that can make good starting points for new contributors. It’s usually best to first file an issue before starting to work on code, just so everyone knows that you’re doing that work and so we can give some early feedback.

Building LXD from source

Upstream maintains up to date instructions here: https://github.com/lxc/lxd#building-from-source

You’ll want to fork the upstream repository on Github and then push your changes to your branch. We recommend rebasing on upstream LXD daily as we do tend to merge changes pretty regularly.

Running the testsuite

LXD maintains two sets of tests. Unit tests and integration tests. You can run all of them with:

sudo -E make check

To run the unit tests only, use:

sudo -E go test ./...

To run the integration tests, use:

cd test
sudo -E ./main.sh

The latter supports quite a number of environment variables to test various storage backends, disable network tests, use a ramdisk or just tweak log output. Some of those are:

  • LXD_BACKEND: One of “btrfs”, “dir”, “lvm” or “zfs” (defaults to “dir”)
    Lets you run the whole testsuite with any of the LXD storage drivers.
  • LXD_CONCURRENT: “true” or “false” (defaults to “false”)
    This enables a few extra concurrency tests.
  • LXD_DEBUG: “true” or “false” (defaults to “false”)
    This will log all shell commands and run all LXD commands in debug mode.
  • LXD_INSPECT: “true” or “false” (defaults to “false”)
    This will cause the testsuite to hang on failure so you can inspect the environment.
  • LXD_LOGS: A directory to dump all LXD log files into (defaults to “”)
    The “logs” directory of all spawned LXD daemons will be copied over to this path.
  • LXD_OFFLINE: “true” or “false” (defaults to “false”)
    Disables any test which relies on outside network connectivity.
  • LXD_TEST_IMAGE: path to a LXD image in the unified format (defaults to “”)
    Lets you use a custom test image rather than the default minimal busybox image.
  • LXD_TMPFS: “true” or “false” (defaults to “false”)
    Runs the whole testsuite within a “tmpfs” mount, this can use quite a bit of memory but makes the testsuite significantly faster.
  • LXD_VERBOSE: “true” or “false” (defaults to “false”)
    A less extreme version of LXD_DEBUG. Shell commands are still logged but --debug isn’t passed to the LXC commands and the LXD daemon only runs with --verbose.
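For instance, a run on the dir backend inside a tmpfs with the extra concurrency tests could be launched as below; the command is only composed and printed here, not executed, since main.sh lives in the LXD source tree:

```shell
# Compose an integration-test invocation using some of the toggles above.
CMD="LXD_BACKEND=dir LXD_TMPFS=true LXD_CONCURRENT=true sudo -E ./main.sh"
echo "$CMD"
```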

The testsuite will alert you to any missing dependency before it actually runs. A test run on a reasonably fast machine can be done in under 10 minutes.

Sending your branch

Before sending a pull request, you’ll want to confirm that:

  • Your branch has been rebased on the upstream branch
  • All your commits messages include the “Signed-off-by: First Last <email>” line
  • You’ve removed any temporary debugging code you may have used
  • You’ve squashed related commits together to keep your branch easily reviewable
  • The unit and integration tests all pass

Once that’s all done, open a pull request on Github. Our Jenkins will validate that the commits are all signed-off, a test build on macOS and Windows will automatically be performed and if things look good, we’ll trigger a full Jenkins test run that will test your branch on all storage backends, 32bit and 64bit and all the Go versions we care about.

This typically takes less than an hour to happen, assuming one of us is around to trigger Jenkins.

Once all the tests are done and we’re happy with the code itself, your branch will be merged into master and your code will be in the next LXD feature release. If the changes are suitable for the LXD stable-2.0 branch, we’ll backport them for you.

Conclusion

I hope this series of blog post has been helpful in understanding what LXD is and what it can do!

This series’ scope was limited to the LTS version of LXD (2.0.x) but we also do monthly feature releases for those who want the latest features. You can find a few other blog posts covering such features listed in the original LXD 2.0 series post.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Dustin Kirkland

Introducing the Canonical Livepatch Service
Howdy!

Ubuntu 16.04 LTS’s 4.4 Linux kernel includes an important new security capability in Ubuntu -- the ability to modify the running Linux kernel code, without rebooting, through a mechanism called kernel livepatch.

Today, Canonical has publicly launched the Canonical Livepatch Service -- an authenticated, encrypted, signed stream of Linux livepatches that apply to the 64-bit Intel/AMD architecture of the Ubuntu 16.04 LTS (Xenial) Linux 4.4 kernel, addressing the highest and most critical security vulnerabilities, without requiring a reboot in order to take effect.  This is particularly amazing for Container hosts -- Docker, LXD, etc. -- as all of the containers share the same kernel, and thus all instances benefit.



I’ve tried to answer below some questions that you might have. As you have others, you’re welcome
to add them to the comments below or on Twitter with hashtag #Livepatch.

Retrieve your token from ubuntu.com/livepatch

Q: How do I enable the Canonical Livepatch Service?

A: Three easy steps, on a fully up-to-date 64-bit Ubuntu 16.04 LTS system.
  1. Go to https://ubuntu.com/livepatch and retrieve your livepatch token
  2. Install the canonical-livepatch snap
      $ sudo snap install canonical-livepatch
  3. Enable the service with your token
      $ sudo canonical-livepatch enable [TOKEN]
And you’re done! You can check the status at any time using:

    $ canonical-livepatch status --verbose

      Q: What are the system requirements?

      A: The Canonical Livepatch Service is available for the generic and low latency flavors of the 64-bit Intel/AMD (aka, x86_64, amd64) builds of the Ubuntu 16.04 LTS (Xenial) kernel, which is a Linux 4.4 kernel. Canonical livepatches work on Ubuntu 16.04 LTS Servers and Desktops, on physical machines, virtual machines, and in the cloud. The safety, security, and stability of the service depend firmly on unmodified Ubuntu kernels and network access to the Canonical Livepatch Service (https://livepatch.canonical.com:443).  You also will need to apt update/upgrade to the latest version of snapd (at least 2.15).

      Q: What about other architectures?

      A: The upstream Linux livepatch functionality is currently limited to the 64-bit x86 architecture, at this time. IBM is working on support for POWER8 and s390x (LinuxOne mainframe), and there’s also active upstream development on ARM64, so we do plan to support these eventually. The livepatch plumbing for 32-bit ARM and 32-bit x86 are not under upstream development at this time.

      Q: What about other flavors?

      A: We are providing the Canonical Livepatch Service for the generic and low latency (telco) flavors of the Linux kernel at this time.

      Q: What about other releases of Ubuntu?

      A: The Canonical Livepatch Service is provided for Ubuntu 16.04 LTS’s Linux 4.4 kernel. Older releases of Ubuntu will not work, because they’re missing the Linux kernel support. Interim releases of Ubuntu (e.g. Ubuntu 16.10) are targeted at developers and early adopters, rather than Long Term Support users or systems that require maximum uptime.  We will consider providing livepatches for the HWE kernels in 2017.

      Q: What about derivatives of Ubuntu?

      A: Canonical livepatches are fully supported on the 64-bit Ubuntu 16.04 LTS Desktop, Cloud, and Server operating systems. On other Ubuntu derivatives, your mileage may vary! These are not part of our automated continuous integration quality assurance testing framework for Canonical Livepatches. Canonical Livepatch safety, security, and stability will firmly depend on unmodified Ubuntu generic kernels and network access to the Canonical Livepatch Service.

      Q: How does Canonical test livepatches?

      A: Every livepatch is rigorously tested in Canonical's in-house CI/CD (Continuous Integration / Continuous Delivery) quality assurance system, which tests hundreds of combinations of livepatches, kernels, hardware, physical machines, and virtual machines.  Once a livepatch passes CI/CD and regression tests, it's rolled out on a canary testing basis, first to a tiny percentage of the Ubuntu Community users of the Canonical Livepatch Service. Based on the success of that microscopic rollout, a moderate rollout follows.  And assuming those also succeed, the livepatch is delivered to all free Ubuntu Community and paid Ubuntu Advantage users of the service.  Systemic failures are automatically detected and raised for inspection by Canonical engineers.  Ubuntu Community users of the Canonical Livepatch Service who want to eliminate the small chance of being randomly chosen as a canary should enroll in the Ubuntu Advantage program (starting at $12/month).

      Q: What kinds of updates will be provided by the Canonical Livepatch Service?

      A: The Canonical Livepatch Service is intended to address high and critical severity Linux kernel security vulnerabilities, as identified by Ubuntu Security Notices and the CVE database. Note that there are some limitations to the kernel livepatch technology -- some Linux kernel code paths cannot be safely patched while running. We will do our best to supply Canonical Livepatches for high and critical vulnerabilities in a timely fashion whenever possible. There may be occasions when the traditional kernel upgrade and reboot might still be necessary. We’ll communicate that clearly through the usual mechanisms -- USNs, Landscape, Desktop Notifications, Byobu, /etc/motd, etc.

      Q: What about non-security bug fixes, stability, performance, or hardware enablement updates?

      A: Canonical will continue to provide Linux kernel updates addressing bugs, stability issues, performance problems, and hardware compatibility on our usual cadence -- about every 3 weeks. These updates can be easily applied using ‘sudo apt update; sudo apt upgrade -y’, using the Desktop “Software Updates” application, or Landscape systems management. These standard (non-security) updates will still require a reboot, as they always have.

      Q: Can I rollback a Canonical Livepatch?

      A: Currently rolling-back/removing an already inserted livepatch module is disabled in Linux 4.4. This is because we need a way to determine if we are currently executing inside a patched function before safely removing it. We can, however, safely apply new livepatches on top of each other and even repatch functions over and over.

      Q: What about low and medium severity CVEs?

      A: We’re currently focusing our Canonical Livepatch development and testing resources on high and critical security vulnerabilities, as determined by the Ubuntu Security Team.  We'll livepatch other CVEs opportunistically.

      Q: Why are Canonical Livepatches provided as a subscription service?

      A: The Canonical Livepatch Service provides a secure, encrypted, authenticated connection, to ensure that only properly signed livepatch kernel modules -- and most importantly, the right modules -- are delivered directly to your system, with extremely high quality testing wrapped around it.

      Q: But I don’t want to buy UA support!

      A: You don’t have to! Canonical is providing the Canonical Livepatch Service to community users of Ubuntu, at no charge for up to 3 machines (desktop, server, virtual machines, or cloud instances). A randomly chosen subset of the free users of Canonical Livepatches will receive their Canonical Livepatches slightly earlier than the rest of the free users or UA users, as a lightweight canary testing mechanism, benefiting all Canonical Livepatch users (free and UA). Once those canary livepatches apply safely, all Canonical Livepatch users will receive their live updates.

      Q: But I don’t have an Ubuntu SSO account!

      A: An Ubuntu SSO account is free, and provides services similar to those Google, Microsoft, and Apple provide for Android/Windows/Mac devices, respectively. You can create your Ubuntu SSO account here.

      Q: But I don’t want to log in to ubuntu.com!

      A: You don’t have to! Canonical Livepatch is absolutely not required to maintain the security of any Ubuntu desktop or server! You may continue to freely and anonymously ‘sudo apt update; sudo apt upgrade; sudo reboot’ as often as you like, receive all of the same updates, and simply reboot after kernel updates, as you always have with Ubuntu.

      Q: But I don't have Internet access to livepatch.canonical.com:443!

      A: You should think of the Canonical Livepatch Service much like you think of Netflix, Pandora, or Dropbox.  It's an Internet streaming service for security hotfixes for your kernel.  You have access to the stream of bits when you can connect to the service over the Internet.  On the flip side, your machines are already thoroughly secured, since they're so heavily firewalled off from the rest of the world!

      Q: Where’s the source code?

      A: The source code of livepatch modules can be found here.  The source code of the canonical-livepatch client is part of Canonical's Landscape system management product and is commercial software.

      Q: What about Ubuntu Core?

      A: Canonical Livepatches for Ubuntu Core are on the roadmap, and may be available in late 2016, for 64-bit Intel/AMD architectures. Canonical Livepatches for ARM-based IoT devices depend on upstream support for livepatches.

      Q: How does this compare to Oracle Ksplice, RHEL Live Patching and SUSE Live Patching?

      A: While the concepts are largely the same, the technical implementations and the commercial terms are very different:

      • Oracle Ksplice uses its own technology, which is not in upstream Linux.
      • RHEL and SUSE currently use their own homegrown kpatch/kgraft implementations, respectively.
      • Canonical Livepatching uses the upstream Linux Kernel Live Patching technology.
      • Ksplice is free, but unsupported, for Ubuntu Desktops, and only available for Oracle Linux and RHEL servers with an Oracle Linux Premier Support license ($2299/node/year).
      • It’s a little unclear how to subscribe to RHEL Kernel Live Patching, but it appears that you need to first be a RHEL customer, and then enroll in the SIG (Special Interests Group) through your TAM (Technical Account Manager), which requires Red Hat Enterprise Linux Server Premium Subscription at $1299/node/year.  (I'm happy to be corrected and update this post)
      • SUSE Live Patching is available as an add-on to SUSE Linux Enterprise Server 12 Priority Support subscription at $1,499/node/year, but does come with a free music video.
      • Canonical Livepatching is available for every Ubuntu Advantage customer, starting at our entry level UA Essential for $150/node/year, and available for free to community users of Ubuntu.

      Q: What happens if I run into problems/bugs with Canonical Livepatches?

      A: Ubuntu Advantage customers will file a support request at support.canonical.com where it will be serviced according to their UA service level agreement (Essential, Standard, or Advanced). Ubuntu community users will file a bug report on Launchpad and we'll service it on a best effort basis.

      Q: Why does canonical-livepatch client/server have a proprietary license?

      A: The canonical-livepatch client is part of the Landscape family of tools available to Canonical support customers. We are enabling free access to the Canonical Livepatch Service for Ubuntu community users as a mark of our appreciation for the broader Ubuntu community, and in exchange for occasional, automatic canary testing.

      Q: How do I build my own livepatches?

      A: It’s certainly possible for you to build your own Linux kernel live patches, but it requires considerable skill, time, and computing power to produce, and even more effort to comprehensively test. Rest assured that this is the real value of using the Canonical Livepatch Service! That said, Chris Arges blogged a howto for the curious a while back:

      http://chrisarges.net/2015/09/21/livepatch-on-ubuntu.html

      Q: How do I get notifications of which CVEs are livepatched and which are not?

      A: You can, at any time, query the status of the canonical-livepatch daemon using: ‘canonical-livepatch status --verbose’. This command will show any livepatches successfully applied, any outstanding/unapplied livepatches, and any error conditions. Moreover, you can monitor the Ubuntu Security Notices RSS feed and the ubuntu-security-announce mailing list.

      Q: Isn't livepatching just a big ole rootkit?

      A: Canonical Livepatches inject kernel modules to replace sections of binary code in the running kernel. This requires the CAP_SYS_MODULE capability, which is also required to modprobe any module into the Linux kernel. If you already have that capability (root does, by default, on Ubuntu), then you already have the ability to arbitrarily modify the kernel, with or without Canonical Livepatches. If you’re an Ubuntu sysadmin and you want to disable module loading (and thereby also disable Canonical Livepatches), simply ‘echo 1 | sudo tee /proc/sys/kernel/modules_disabled’.
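      The switch mentioned above can be inspected safely before deciding to flip it. A minimal sketch (the paths are standard Linux procfs; the write is shown commented out because it is one-way until reboot):

      ```shell
      # Read the kernel's module-loading switch: 0 means module loading is
      # allowed, 1 means it has been disabled until the next reboot.
      cat /proc/sys/kernel/modules_disabled

      # The one-way switch itself (left commented out: it cannot be undone
      # without a reboot, and it also disables Canonical Livepatches):
      # echo 1 | sudo tee /proc/sys/kernel/modules_disabled
      ```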

      Keep the uptime!
      :-Dustin

      Read more
      Alan Griffiths

      miral-workspaces

      “Workspaces” have arrived on MirAL trunk (lp:miral).

      We won’t be releasing 1.3 with this feature just yet (as we want some experience with this internally first). But if you build from source there’s an example to play with (bin/miral-app).

      As always, bug reports and other suggestions are welcome.

      Note that the miral-shell doesn’t have transitions and other effects like fully featured desktop environments.

      Read more
      facundo

      Summer movie round-up


      Many, many movies watched. I may not get back into the rhythm of watching more; it's hard for me to find that couple of hours when the kids are calm and I'm not too tired :p

      • Alice Through the Looking Glass: +0. Fun, a flash, but not much more than a collection of interesting moments.
      • All Is Lost: -0. The survival of someone with a string of bad luck; watch it only if you're into "being alone and surviving however you can".
      • Captain America: Civil War: +0. The typical fight between superheroes, but it didn't feel heavy; as a bonus it has an interesting theme to think about: government control over weapons.
      • Clouds of Sils Maria: -0. Although it has many interesting conversations, the story itself has no rhythm and goes nowhere.
      • Danny Collins: +0. Nice story, not entirely what you expect, moving, well put together.
      • El Ardor: +0. Good story, good setting, and I think it portrays well a reality we know very little about.
      • Ex Machina: -0. I didn't like it, but I'm not sure why. Did it lack suspense? Too flat? What it proposes about artificial intelligence is fine, though (I would have liked more depth, but well, it's a movie for the masses, not a documentary).
      • Fantastic Four: -0. A different take on the classic, but bleh.
      • Home Sweet Hell: -0. It has some very funny parts, but the story never quite comes together.
      • La Vénus à la fourrure: +1. The dynamic between two people, the line between reality and fiction. I loved it.
      • Laggies: +0. A bit slow, but a good story, good development; I liked how it shows the evolution of the main character's decision.
      • Match: +0. Nice story, good performances. Powerful.
      • Nina: +1. My deepest respect for Zoe Saldaña. Marvelous. Dazzling. I'd love to know what a Nina Simone fan thinks of this movie.
      • Pan: -0. A different, fairly refreshed version of the classic; it never grabbed me.
      • Pixels: +0. Fun and light; I liked it even with Adam Sandler in it. It's not a big deal, mind you, but it's mostly enjoyable for the old video games...
      • Predestination: +1. Very good story; you don't understand what it's about until it's hooked you, and by then you've already fallen into the (good) trap.
      • Stealing Beauty: -0. A nice story, wonderful photography, but it lacks "consistency", it's very ethereal, I don't know. And slow.
      • The November Man: -0. A wannabe action-and-spies movie, not much.
      • The Right Kind of Wrong: +0. Just a romantic comedy, light, but fun.
      • Time Lapse: +1. The story isn't very deep, but it handles temporality (or the jumps in it...) very well.
      • Under the Skin: -1. There's a story in there somewhere, but the movie is EXTREMELY slow :(.
      • VANish: -0. Brutal, violent, and raw. But nothing more.
      • Vice: -0. With hints of an interesting theme; they could have explored the conceptual side of the robots, but the movie goes in another direction.


      A ton to watch! And yet I'm not finding a good place to learn about trailers as they come out. For now I'm using this YouTube channel, but it doesn't have everything. IMDb was also suggested to me, and although it has some things the other doesn't, it has very little and doesn't seem to be very well organized.

      • Amateur (2016; Thriller) Martin (Esteban Lamothe) is a lonely television director, who becomes obsessed with his neighbor, Isabel (Jazmin Stuart), when he finds an amateur porn video in which she participates. But Isabel is the wife of Battaglia (Alejandro Awada), the owner of the television station where Martin works. As a strange love encounter takes place between Martin and Isabel, he discovers a secret that puts them both in danger. [D: Sebastian Perillo; A: Alejandro Awada, Esteban Lamothe, Jazmín Stuart]
      • Blade Runner 2049 (2017; Sci-Fi) Thirty years after the events of the first film, a new blade runner, LAPD Officer K (Ryan Gosling), unearths a long-buried secret that has the potential to plunge what's left of society into chaos. K's discovery leads him on a quest to find Rick Deckard (Harrison Ford), a former LAPD blade runner who has been missing for 30 years. [D: Denis Villeneuve; A: Ryan Gosling, Ana de Armas, Jared Leto]
      • Colossal (2016; Action, Sci-Fi, Thriller) A woman discovers that severe catastrophic events are somehow connected to the mental breakdown from which she's suffering. [D: Nacho Vigalondo; A: Dan Stevens, Anne Hathaway, Jason Sudeikis]
      • DxM (2015; Action, Sci-Fi, Thriller) A group of brilliant young students discover the greatest scientific breakthrough of all time: a wireless neural network, connected via a quantum computer, capable of linking the minds of each and every one of us. They realise that quantum theory can be used to transfer motor-skills from one brain to another, a first shareware for human motor-skills. They freely spread this technology, believing it to be a first step towards a new equality and intellectual freedom. But they soon discover that they themselves are part of a much greater and more sinister experiment as dark forces emerge that threaten to subvert this technology into a means of mass-control. MindGamers takes the mind-bender thriller to the next level with an immersive narrative and breath-taking action. [D: Andrew Goth; A: Dominique Tipper, Sam Neill, Tom Payne]
      • Elle (2016; Comedy, Drama, Thriller) Michèle seems indestructible. Head of a successful video game company, she brings the same ruthless attitude to her love life as to business. Being attacked in her home by an unknown assailant changes Michèle's life forever. When she resolutely tracks the man down, they are both drawn into a curious and thrilling game-a game that may, at any moment, spiral out of control. [D: Paul Verhoeven; A: Isabelle Huppert, Laurent Lafitte, Anne Consigny]
      • Frank & Lola (2016; Crime, Drama, Mystery, Romance, Thriller) A psychosexual noir love story, set in Las Vegas and Paris, about love, obsession, sex, betrayal, revenge and, ultimately, the search for redemption. [D: Matthew Ross; A: Imogen Poots, Michael Shannon, Michael Nyqvist]
      • Ghost in the Shell (2017; Action, Drama, Sci-Fi, Thriller) Based on the internationally acclaimed sci-fi manga series, "Ghost in the Shell" follows the Major, a special ops, one-of-a-kind human cyborg hybrid, who leads the elite task force Section 9. Devoted to stopping the most dangerous criminals and extremists, Section 9 is faced with an enemy whose singular goal is to wipe out Hanka Robotic's advancements in cyber technology. [D: Rupert Sanders; A: Scarlett Johansson, Michael Pitt, Michael Wincott]
      • Guardians of the Galaxy Vol. 2 (2017; Action, Sci-Fi) Set to the backdrop of 'Awesome Mixtape #2,' Marvel's Guardians of the Galaxy Vol. 2 continues the team's adventures as they traverse the outer reaches of the cosmos. The Guardians must fight to keep their newfound family together as they unravel the mysteries of Peter Quill's true parentage. Old foes become new allies and fan-favorite characters from the classic comics will come to our heroes' aid as the Marvel cinematic universe continues to expand. [D: James Gunn; A: Chris Sullivan, Pom Klementieff, Chris Pratt]
      • Kiki, el amor se hace (2016; Comedy) Through five stories, the movie addresses sex and love: Paco and Ana are a couple looking to reignite the passion of their long-unsatisfying sex life; Jose Luis tries to recover the affections of his wife Paloma, confined to a wheelchair after an accident that limited her mobility; Mª Candelaria and Antonio are a couple trying by every means to become parents, but she never reaches orgasm when making love with him; Álex tries to satisfy Natalia's fantasies, while she starts to doubt whether he will ever propose to her; and finally, Sandra is a single woman in a permanent search for a man to fall in love with. They all love, fear, live and explore their diverse sexual paraphilias and the different sides of sexuality, trying to find the road to happiness. [D: Paco León; A: Natalia de Molina, Álex García, Jacobo Sánchez]
      • Life (2017; Horror, Sci-Fi, Thriller) Six astronauts aboard the space station study a sample collected from Mars that could provide evidence for extraterrestrial life on the Red Planet. The crew determines that the sample contains a large, single-celled organism - the first example of life beyond Earth. But..things aren't always what they seem. As the crew begins to conduct research, and their methods end up having unintended consequences, the life form proves more intelligent than anyone ever expected. [D: Daniel Espinosa; A: Rebecca Ferguson, Jake Gyllenhaal, Ryan Reynolds]
      • Little Murder (2011; Crime, Drama, Thriller) In post-Katrina New Orleans, a disgraced detective encounters the ghost of a murdered woman who wants to help him identify her killer. [D: Predrag Antonijevic; A: Josh Lucas, Terrence Howard, Lake Bell]
      • Logan (2017; Action, Drama, Sci-Fi) In the near future, a weary Logan cares for an ailing Professor X in a hide out on the Mexican border. But Logan's attempts to hide from the world and his legacy are up-ended when a young mutant arrives, being pursued by dark forces. [D: James Mangold; A: Doris Morgado, Hugh Jackman, Dafne Keen]
      • Passengers (2016; Adventure, Drama, Romance, Sci-Fi) The spaceship, Starship Avalon, in its 120-year voyage to a distant colony planet known as the "Homestead Colony" and transporting 5,258 people has a malfunction in one of its sleep chambers. As a result one hibernation pod opens prematurely and the one person that awakes, Jim Preston (Chris Pratt) is stranded on the spaceship, still 90 years from his destination. [D: Morten Tyldum; A: Jennifer Lawrence, Chris Pratt, Michael Sheen]
      • Personal Shopper (2016; Drama, Mystery, Thriller) Revolves around a ghost story that takes place in the fashion underworld of Paris. [D: Olivier Assayas; A: Kristen Stewart, Lars Eidinger, Sigrid Bouaziz]
      • Pirates of the Caribbean: Dead Men Tell No Tales (2017; Action, Adventure, Comedy, Fantasy) Captain Jack Sparrow finds the winds of ill-fortune blowing even more strongly when deadly ghost pirates led by his old nemesis, the terrifying Captain Salazar, escape from the Devil's Triangle, determined to kill every pirate at sea...including him. Captain Jack's only hope of survival lies in seeking out the legendary Trident of Poseidon, a powerful artifact that bestows upon its possessor total control over the seas. [D: Joachim Rønning, Espen Sandberg; A: Kaya Scodelario, Johnny Depp, Javier Bardem]
      • Spider-Man: Homecoming (2017; Action, Adventure, Sci-Fi) A young Peter Parker/Spider-Man, who made his sensational debut in Captain America: Civil War, begins to navigate his newfound identity as the web-slinging superhero in Spider-Man: Homecoming. Thrilled by his experience with the Avengers, Peter returns home, where he lives with his Aunt May, under the watchful eye of his new mentor Tony Stark. Peter tries to fall back into his normal daily routine - distracted by thoughts of proving himself to be more than just your friendly neighborhood Spider-Man - but when the Vulture emerges as a new villain, everything that Peter holds most important will be threatened. [D: Jon Watts; A: Robert Downey Jr., Tom Holland, Angourie Rice]
      • T2 Trainspotting (2017; Comedy, Drama) First there was an opportunity......then there was a betrayal. Twenty years have gone by. Much has changed but just as much remains the same. Mark Renton (Ewan McGregor) returns to the only place he can ever call home. They are waiting for him: Spud (Ewen Bremner), Sick Boy (Jonny Lee Miller), and Begbie (Robert Carlyle). Other old friends are waiting too: sorrow, loss, joy, vengeance, hatred, friendship, love, longing, fear, regret, diamorphine, self-destruction and mortal danger, they are all lined up to welcome him, ready to join the dance. [D: Danny Boyle; A: Ewan McGregor, Logan Gillies, Ben Skelton]
      • The Discovery (2017; Romance, Sci-Fi) Writer-director Charlie McDowell returns to Sundance this year with a thriller about a scientist (played by Robert Redford) who uncovers scientific proof that there is indeed an afterlife. His son is portrayed by Jason Segel, who's not too sure about his father's "discovery", and Rooney Mara plays a mystery woman who has her own reasons for wanting to find out more about the afterlife. [D: Charlie McDowell; A: Rooney Mara, Riley Keough, Robert Redford]
      • The Whole Truth (2016; Drama, Thriller) Defense attorney Richard Ramsay takes on a personal case when he swears to his widowed friend, Loretta Lassiter, that he will keep her son Mike out of prison. Charged with murdering his father, Mike initially confesses to the crime. But as the trial proceeds, chilling evidence about the kind of man that Boone Lassiter really was comes to light. While Ramsay uses the evidence to get his client acquitted, his new colleague Janelle tries to dig deeper - and begins to realize that the whole truth is something she alone can uncover. [D: Courtney Hunt; A: Keanu Reeves, Renée Zellweger, Gugu Mbatha-Raw]
      • The Comedian (2016; Comedy) A look at the life of an aging insult comic named Jack Burke. [D: Taylor Hackford; A: Robert De Niro, Leslie Mann, Harvey Keitel]
      • The Mummy (2017; Action, Adventure, Fantasy, Horror) Though safely entombed in a crypt deep beneath the unforgiving desert, an ancient princess whose destiny was unjustly taken from her is awakened in our current day, bringing with her malevolence grown over millennia, and terrors that defy human comprehension. [D: Alex Kurtzman; A: Tom Cruise, Sofia Boutella, Russell Crowe]
      • Valerian and the City of a Thousand Planets (2017; Action, Adventure, Sci-Fi) Rooted in the classic graphic novel series, Valerian and Laureline- visionary writer/director Luc Besson advances this iconic source material into a contemporary, unique and epic science fiction saga. Valerian (Dane DeHaan) and Laureline (Cara Delevingne) are special operatives for the government of the human territories charged with maintaining order throughout the universe. Valerian has more in mind than a professional relationship with his partner- blatantly chasing after her with propositions of romance. But his extensive history with women, and her traditional values, drive Laureline to continuously rebuff him. Under directive from their Commander (Clive Owen), Valerian and Laureline embark on a mission to the breathtaking intergalactic city of Alpha, an ever-expanding metropolis comprised of thousands of different species from all four corners of the universe. Alpha's seventeen million inhabitants have converged over time- uniting their talents, technology and resources for the betterment of all. Unfortunately, not everyone on Alpha shares in these same objectives; in fact, unseen forces are at work, placing our race in great danger. [D: Luc Besson; A: Dane DeHaan, Cara Delevingne, Ethan Hawke]
      • Vampyres (2015; Horror) Faithful to the sexy, twisted 1974 cult classic by Joseph Larraz, Vampyres is an English-language remake pulsating with raw eroticism, wicked sado-masochism and bloody, creative gore. Victor Matellano (Wax (2014, Zarpazos! A Journey through Spanish Horror, 2013) directs this tale set in a stately English manor inhabited by two older female vampires and with their only cohabitant being a man imprisoned in the basement. Their lives and lifestyle are upended when a trio of campers come upon their lair and seek to uncover their dark secrets, a decision that has sexual and blood-curdling consequences. [D: Víctor Matellano; A: Marta Flich, Almudena León, Alina Nastase]
      • Zero Days (2016; Documentary) Documentary detailing claims of American/Israeli jointly developed malware Stuxnet being deployed not only to destroy Iranian enrichment centrifuges but also threaten attacks against Iranian civilian infrastructure. Addresses the obvious potential blowback of this possibly being deployed against the US by Iran in retaliation. [D: Alex Gibney; A: David Sanger, Emad Kiyaei, Eric Chien]
      • Collateral Beauty (2016; Drama, Romance) When a successful New York advertising executive suffers a great tragedy, he retreats from life. While his concerned friends try desperately to reconnect with him, he seeks answers from the universe by writing letters to Love, Time and Death. But it's not until his notes bring unexpected personal responses that he begins to understand how these constants interlock in a life fully lived, and how even the deepest loss can reveal moments of meaning and beauty [D: David Frankel; A: Will Smith, Edward Norton, Kate Winslet]
      • Passage to Mars (2016; Documentary, Adventure) The journals of a true NASA Arctic expedition unveil the adventure of a six-man crew aboard an experimental vehicle designed to prepare the first human exploration of Mars. A voyage of fears and survival, hopes and dreams, through the beauties and the deadly dangers of two worlds: the High Arctic and Mars, a planet that might hide the secret of our origins. [D: Jean-Christophe Jeauffre; A: Zachary Quinto, Charlotte Rampling, Pascal Lee]


      Finally, the count of pending movies by date:

      (Apr-2011)    4
      (Aug-2011)   11   4
      (Jan-2012)   17  11   3
      (Jul-2012)   15  14  11
      (Nov-2012)   11  11  11   6
      (Feb-2013)   15  14  14   8   2
      (Jun-2013)   16  15  15  15  11   2
      (Sep-2013)   18  18  18  17  16   8
      (Dec-2013)   14  14  12  12  12  12   4
      (Apr-2014)        9   9   8   8   8   3
      (Jul-2014)           10  10  10  10  10   5   1
      (Nov-2014)               24  22  22  22  22   7
      (Feb-2015)                   13  13  13  13  10
      (Jun-2015)                       16  16  15  13  11
      (Dec-2015)                           21  19  19  18
      (May-2016)                               26  25  23
      (Sep-2016)                                   19  19
      (Feb-2017)                                       26
      Total:      121 110 103 100  94  91  89 100  94  97

      Read more
      Alan Griffiths

      MirAL 1.2

      There’s a new MirAL release (1.2.0) available in ‘Zesty Zapus’ (Ubuntu 17.04) and the so-called “stable phone overlay” ppa for ‘Xenial Xerus’ (Ubuntu 16.04LTS). MirAL is a project aimed at simplifying the development of Mir servers and particularly providing a stable ABI and sensible default behaviors.

      Unsurprisingly, given the project’s original goal, the ABI is unchanged.

      Since my last update the integration of libmiral into QtMir has progressed and libmiral has been used in the latest updates to Unity8.

      The changes in 1.2.0 are:

      A new libmirclientcpp-dev package

      This is a “C++ wrapper for libmirclient” and has been split from libmiral-dev.

      Currently it comprises RAII wrappers for some Mir client library types: MirConnection, MirWindowSpec, MirWindow and MirWindowId. In addition, the WindowSpec wrapper provides named constructors and function chaining to enable code like the following:

      auto const window = mir::client::WindowSpec::
          for_normal_window(connection, 50, 50, mir_pixel_format_argb_8888)
          .set_buffer_usage(mir_buffer_usage_software)
          .set_name(a_window.c_str())
          .create_window();
      

      Refresh the “Building and Using MirAL” doc

      This has been rewritten (and renamed) to reflect the presence of MirAL in the Ubuntu archives and make installation (rather than “build it yourself”) the default approach.

      Bug fixes

      • [libmiral] Chrome-less shell hint does not work any more (LP: #1658117)
      • “$ miral-app -kiosk” fails with “Unknown command line options:
        --desktop_file_hint=miral-shell.desktop” (LP: #1660933)
      • [libmiral] Fix focus and movement rules for Input Method and Satellite
        windows. (LP: #1660691)
      • [libmirclientcpp-dev] WindowSpec::set_state() wrapper for mir_window_spec_set_state()
        (LP: #1661256)

      Read more

      Snappy Libertine

      Libertine is a software suite for running X11 apps in non-X11 environments and installing deb-based applications on a system without dpkg. Snappy is a package management system to confine applications from one another. Wouldn’t it be cool to run libertine as a snap?

      Yes. Yes it would.

      snapd

      The first thing to install is snapd itself. You can find installation instructions for many Linux distros at snapcraft.io, but here’s the simple command if you’re on a debian-based operating system:

      $ sudo apt install snapd
      

      Ubuntu users may be surprised to find that snapd is already installed on their systems. snapd is the daemon for handling all things snappy: installing, removing, handling interface connections, etc.

      lxd

      We use lxd as our container backend for libertine in the snap. lxd is essentially a layer on top of lxc to give a better user experience. Fortunately for us, lxd has a snap all ready to go. Unfortunately, the snap version of lxd is incompatible with the deb-based version, so you’ll need to completely remove that before continuing. Skip this step if you never installed lxd:

      $ sudo apt remove --purge lxd lxd-client
      $ sudo zpool destroy lxd                 # if you use zfs
      $ sudo ip link set lxdbr0 down           # take down the bridge (lxdbr0 is the default)
      $ sudo brctl delbr lxdbr0                # delete the bridge
      

      For installing, in-depth instructions can be found in this blog post by one of the lxd devs. In short, we’re going to create a new group called lxd, add ourselves to it, and then add our own user ID and group ID to map to root within the container.

      $ sudo groupadd --system lxd                      # Create the group on your system
      $ sudo usermod -G lxd -a $USER                    # Add the current user
      $ newgrp lxd                                      # update current session with new group
      $ echo root:`id --user ${USER}`:1 >> /etc/subuid  # Setup subuid to map correctly
      $ echo root:`id --group ${USER}`:1 >> /etc/subgid # Setup subgid to map correctly
      $ sudo snap install lxd                           # actually install the snap!
      
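The two echo lines above append id-map entries of the form "root:<id>:1" to /etc/subuid and /etc/subgid, mapping a single host uid/gid (your own) to root inside the container. A minimal sketch of what those entries look like, printed instead of written (no files are touched):

```shell
# Print the subuid/subgid entries the commands above would append,
# without modifying /etc/subuid or /etc/subgid.
uid=$(id --user)
gid=$(id --group)
echo "subuid entry: root:${uid}:1"
echo "subgid entry: root:${gid}:1"
```

The trailing ":1" means only a single id is mapped, which is all that's needed to map container root back to your own user.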

      We also need to initialize lxd manually. For me, the defaults all work great. The important pieces here are setting up a new network bridge and a new filestore for lxd to use. You can optionally use zfs if you have it installed (zfsutils-linux should do it on Ubuntu). Generally, I just hit “return” as fast as the questions show up and everything turns out alright. If anything goes wrong, you may need to manually delete zpools, network bridges, or reinstall the lxd snap. No warranties here.

      $ sudo lxd init
      Do you want to configure a new storage pool (yes/no) [default=yes]?
      Name of the new storage pool [default=default]:
      Name of the storage backend to use (dir or zfs) [default=zfs]:
      Create a new ZFS pool (yes/no) [default=yes]?
      Would you like to use an existing block device (yes/no) [default=no]?
      Would you like LXD to be available over the network (yes/no) [default=no]?
      Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
      Would you like to create a new network bridge (yes/no) [default=yes]?
      What should the new bridge be called [default=lxdbr0]?
      What IPv4 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]?
      

      You should now be able to run lxd.lxc list without errors. It may warn you about running lxd init, but don’t worry about that if your initialization succeeded.

      libertine

      Now we’re onto the easy part. libertine is only available from edge channels in the app store, but we’re fairly close to having a version that we could push into more stable channels. For the latest and greatest libertine:

      $ sudo snap install --edge libertine
      $ sudo snap connect libertine:libertined-client libertine:libertined
      

      If we want libertine to work fully, we need to jump through a couple of hoops. For starters, dbus-activation is not fully functional at this time for snaps. Lucky for us, we can fake this by either running the d-bus service manually (/snap/bin/libertined), or by adding this file at /usr/share/dbus-1/services/com.canonical.libertine.Service.service:

      /usr/share/dbus-1/services/com.canonical.libertine.Service.service
      [D-BUS Service]
      Name=com.canonical.libertine.Service
      Exec=/snap/bin/libertine.libertined --cache-output
      

      Personally, I always create the file, which will allow libertined to start automatically on the session bus whenever a user calls it. Hopefully d-bus activation will be fixed sooner rather than later, but this works fine for now.

      Another issue is that existing deb-based libertine binaries may conflict with the snap binaries. We can fix this by adjusting PATH in our .bashrc file:

      $HOME/.bashrc
      # ...
      export PATH=/snap/bin:$PATH
      

      This will give higher priority to snap binaries (which should be the default, IMO). One more thing to fix before running full-force is to add an environment variable to /etc/environment such that the correct libertine binary is picked up in Unity 8:

      /etc/environment
      # ...
      UBUNTU_APP_LAUNCH_LIBERTINE_LAUNCH=/snap/bin/libertine-launch
      

      OK! Now we’re finally ready to start creating containers and installing packages:

      $ libertine-container-manager create -i my-container
      # ... (this could take a few minutes)
      $ libertine-container-manager install-package -i my-container -p xterm
      # ... (and any other packages you may want)
      

      If you want to launch your apps in Unity 7 (why not?):

      $ libertine-launch -i my-container xterm
      # ... (lots of output, hopefully an open window!)
      

      When running Unity 8, your apps should show up in the app drawer with all the other applications. This will all depend on libertined running, so make sure that it runs at startup!

      I’ve been making a lot of improvements on the snap lately, especially as the ecosystem continues to mature. One day we plan for a much smoother experience, but this current setup will let us work out some of the kinks and find issues. If you want to switch back to the deb-based libertine, you can just install it through apt and remove the change to /etc/environment.

      Read more
      Leo Arias

      There is a huge announcement coming: snaps now run in Ubuntu 14.04 Trusty Tahr.

      Take a moment to note how big this is. Ubuntu 14.04 is a long-term release that will be supported until 2019. Ubuntu 16.04 is also a long-term release that will be supported until 2021. We have many many many users in both releases, some of which will stay there until we drop the support. Before this snappy new world, all those users were stuck with the versions of all their programs released in 2014 or 2016, getting only updates for security and critical issues. Just try to remember how your favorite program looked 5 years ago; maybe it didn't even exist. We were used to choosing between stability and cool new features.

      Well, a new world is possible. With snaps you can have a stable base system with frequent updates for every program, without the risk of breaking your machine. And now if you are a Trusty user, you can just start taking advantage of all this. If you are a developer, you have to prepare only one release and it will just work in all the supported Ubuntu releases.

      Awesome, right? The Ubuntu devs have been doing a great job. snapd has already landed in the Trusty archive, and we have been running many manual and automated tests on it. So we would like now to invite the community to test it, explore weird paths, try to break it. We will appreciate it very much, and all of those Trusty users out there will love it when they receive loads of new high-quality free software on their oldie machines.

      So, how to get started?

      If you are already running Trusty, you will just have to install snapd:

      $ sudo apt update && sudo apt install snapd
      

      Reboot your system after that in case you had a kernel update pending, and to get the paths for the new snap binaries set up.

      If you are running a different Ubuntu release, you can install Ubuntu in a virtual machine. Just make sure that you install the image from http://releases.ubuntu.com/14.04/ubuntu-14.04.5-desktop-amd64.iso.

      Once you have Trusty with snapd ready, try a few commands:

      $ snap list
      $ sudo snap install hello-world
      $ hello-world
      $ snap find something
      

      screenshot of snaps running in Trusty

      Keep searching for snaps until you find one that's interesting. Install it, try it, and let us know how it goes.

      If you find something wrong, please report a bug with the trusty tag. If you are new to the Ubuntu community or get lost on the way, come and join us in Rocket Chat.

      And after a good session of testing, sit down, relax, and get ohmygiraffe. With love from popey:

      $ sudo snap install ohmygiraffe
      $ ohmygiraffe
      

      screenshot of ohmygiraffe

      Read more
      facundo

      Vacation in Neuquén


      In January the family and I took a couple of weeks off and went on vacation to Neuquén. As always, we made the trip in two days, but the novelty was that we didn't go alone: we traveled as a "two-car caravan", us in one car and my mom and Diana in the other.

      Our home base, as on other occasions, was the house that Diana and Gus are building for themselves in Piedra del Águila. We stayed there several days, and did a bit of everything.

      Stopping for lunch on the road, not a single tree!

      Obviously, one highlight was the food :p. It's a classic by now: the mud oven Di built is a winner. In it we made pork leg with vegetables, pork ribs and bondiola, also with vegetables (we always threw in four or five ears of corn, husks and all, and left them for an hour or so!), homemade pizzas, everything.

      To work off the food (?) we went out quite a bit. Some outings were just to relax, like a short afternoon walk to the lakeshore (we got in the water, which was nice), or a day on the banks of the Limay River, just below the Pichí Picún Leufú dam, where we also had lunch. Even the dogs had a good time, Mafalda (as best she could with the stones, she's very old now) and Fidel. We had fun throwing stones with Gus, Felu and even Male! And of course: we rested, slept, waded in the water, etc.

      We also took a hike through the hills of Piedra del Águila, climbing quite a bit, walking along the summits, dodging thistles and assorted prickly things, and coming down very carefully. Male held up really well. Felu ran around like crazy. It was great, even with the enormous wind at the top (it made you lose your balance!).

      At the top of the mountain

      As for indoor activities, the standout was the several games of tute cabrero we played. Even Felipe learned to play, and he almost won one!! I was lucky, I won a couple, and the last one we played I won all by myself, because I swept the round when only three of us were left and we were all on the verge of going out.

      We also snooped around a lot and got in the way at the print shop, where Gus tries to work normally while we're visiting. The kids amuse themselves ring-binding bits of paper, I'm fascinated by the machines' automatisms, Moni sorts and collates invoices, etc. Poor Gus.

      The kids also helped a bit in the vegetable garden, picking some homegrown strawberries (they were amazingly tasty). And of course Felu, Male and Gus ended up in a water fight with the sprinkler...

      Having lunch by the Limay

      Some pehuén trees near an oddly shaped mountain, on the way to Villa Pehuenia

      One day we took off and went to El Chocón with my mom.

      We visited the town's museum again, since the kids are growing up and get different things out of it. And truth be told, you always learn something new with each visit too.

      Watch out, it'll eat you

      Lunch was a complication. We went to the campground restaurant (we had gone two years earlier as well and it was good), and found out they had craft beer: great! But we saw the menu was very limited. We decided to stay anyway, but when it came time to order they only had steak sandwiches ($250!!), ravioli, and a couple of other little things. So we drank our beers and juices and left.

      We found another restaurant that looked super fancy, but we drove in anyway: at the door, where the opening hours should be, it said: "we open when we arrive, we close when we leave". Ok, I felt like leaving them a little note saying "I'm off to leave my money somewhere else".

      In the end we stopped at a grocery store, bought fixings for little sandwiches, and had lunch under some small trees :)

      With Felu visiting the Eagle statue, in Piedra del Águila

      We also took a longer trip, this time with Diana and Gus. We went as far as Villa Pehuenia, where we spent the night and barely went out. We visited the lake and drank some mates there, and ate well at a nice little place.

      At the lake in Villa Pehuenia

      Early the next day we left for Chile. We had a tremendous wait to cross: three hours on the Argentine side until we finished all the paperwork. On the Chilean side we sorted everything out in an hour (counting the fact that I had to go back to the Argentine offices to have them correct a number).

      We stayed only a couple of days, just enough to see a few places and figure out whether it warrants a longer stay. We rented a nice cabin in Villarrica, away from the center. The city center is very pretty, and we walked around it quite a bit (there's a huuuuuuuge semi-artisanal market where we bought nice little things for the house), went out to eat, bought things, etc. It was fairly crowded.

      Having lunch in Temuco

      The Villarrica volcano

      One day we went to Temuco, a considerably bigger city about 80 km away. We walked around its center for a while too, bought a couple of things, had a delicious lunch (at Vicuña Mackenna 530: very good soups, one mushroom and one shrimp, and a spectacular eggplant lasagna, plus a green salad), and visited a Mapuche museum.

      Next to the Mapuche museum, on the same grounds but in the open air, there was a medieval fair: people teaching sword fencing, telling stories, selling all kinds of medieval-style things (clothes, weapons, books, you name it).

      Felipe in a square in Temuco

      Felipe was blown away when he entered the fair and saw a girl with elf-style ears :), though we also got hooked on the fencing class, and on another spot where a "forest gnome" was telling a story full of riddles.

      Coming back to Argentina, on the Chilean side they gave us trouble because a stamp was missing (something about the car) in our entry papers. It was missing for us, for Gus and Diana, and for another person further back in the line. Apparently they botched it or forgot it when we crossed two days earlier. Anyway, we protested a bit and that was that, they gave the ok (?). We were bracing for a 3- or 4-hour line on the Argentine side, like two days before when we made the trip in the other direction, but there was nobody! It seems that being Sunday morning we got lucky: we sorted everything out in half an hour and headed for Aluminé.

      Moni and Male in Aluminé

      In Aluminé we had two rooms reserved at a hostel that turned out to be terrific (Diana and Gus already knew it). The rooms were nice and the breakfast homemade, but the best part was the grounds and the barbecue pits, plus a fully communal quincho (with an indoor grill, fridge, oven, burners, microwave and lots of tables).

      The day after we arrived we went rafting, which turned out to be quite an experience!  Felu even paddled a bit, Male rode in the middle and got a little scared when we hit the rapids; still, halfway through the trip the two of them got into the river, along with me, Diana and Gus. Mind you, the water was very cold; luckily the guide (who was great, and kept telling us things about the river and the region's nature) lent a shirt to Malena and another (his own!) to Felipe, so they wouldn't get cold while wet.

      Attacking the rapids

      Felu, experienced paddler

      After the rafting itself we stayed to enjoy the early evening by the river, and then headed back, since I had some chickens to grill.

      The next day we started the trip back to Piedra del Águila, but along the way we took a small detour through Lanín National Park (though the volcano could hardly be seen because it was very cloudy), and then we also went to see some cave paintings of which almost nothing remained after vandalism by stupid humans.

      We spent a full day in Piedra, and the following day we set out for Buenos Aires, where we arrived after an overnight stop in Catriló.

      At the top, looking for the cave paintings

      The little darlings at the lake in Villa Pehuenia

      A terrific vacation. Lots of photos here.

      Read more
      Stéphane Graber

      LXD logo

      LXD on other operating systems?

      While LXD and especially its API have been designed in a mostly OS-agnostic way, the only OS supported for the daemon right now is Linux (and a rather recent Linux at that).

      However since all the communications between the client and daemon happen over a REST API, there is no reason why our default client wouldn’t work on other operating systems.

      And it does. We in fact gate changes to the client on having it build and pass unit tests on Linux, Windows and MacOS.

      This means that you can run one or more LXD daemons on Linux systems on your network and then interact with those remotely from any Linux, Windows or MacOS machine.

      Setting up your LXD daemon

      We’ll be connecting to the LXD daemon over the network, so you’ll need to make sure it’s listening and has a password configured so that new clients can add themselves to the trust store.

      This can be done with:

      lxc config set core.https_address "[::]:8443"
      lxc config set core.trust_password "my-password"

      In my case, that remote LXD can be reached at “djanet.maas.mtl.stgraber.net”; you’ll want to replace that with your LXD server’s FQDN or IP in the commands used below.

      Windows client

      Pre-built native binaries

      Our Windows CI service builds a tarball for every commit. You can grab the latest one here:
      https://ci.appveyor.com/project/lxc/lxd/branch/master/artifacts

      Then unpack the archive and open a command prompt in the directory where you unpacked the lxc.exe binary.

      Build from source

      Alternatively, you can build it from source, by first installing Go using the latest MSI based installer from https://golang.org/dl/ and then Git from https://git-scm.com/downloads.

      And then in a command prompt, run:

      git config --global http.https://gopkg.in.followRedirects true
      go get -v -x github.com/lxc/lxd/lxc

      Use Ubuntu on Windows (“bash”)

      For this, you need to use Windows 10 and have the Windows subsystem for Linux enabled.
      With that done, start an Ubuntu shell by launching “bash”. And you’re done.
      The LXD client is installed by default in the Ubuntu 16.04 image.

      Interact with the remote server

      Regardless of which method you picked, you’ve now got access to the “lxc” command and can add your remote server.

      Using the native build does have a few restrictions to do with Windows terminal escape codes, breaking things like the arrow keys and password hiding. The Ubuntu on Windows way uses the Linux version of the LXD client and so doesn’t suffer from those limitations.
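Registering the remote works the same way on every platform; a sketch of the client configuration, using the server name from this article (replace it with your own, and note the alias “djanet” is just a label I picked):

```shell
# Add the remote LXD server under an alias; you'll be prompted to
# accept its certificate and enter the trust password set earlier.
lxc remote add djanet djanet.maas.mtl.stgraber.net

# Then target that remote explicitly in any command:
lxc list djanet:
lxc launch ubuntu:16.04 djanet:test-container
```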

      MacOS client

      Even though we do have MacOS CI through Travis, it doesn’t host artifacts for us, so we don’t have prebuilt binaries for people to download.

      Build from source

      Similarly to the Windows instructions, you can build the LXD client from source, by first installing Go using the latest DMG based installer from https://golang.org/dl/ and then Git from https://git-scm.com/downloads.

      Once that’s done, open a new Terminal window and run:

      export GOPATH=~/go
      go get -v -x github.com/lxc/lxd/lxc
      sudo ln -s ~/go/bin/lxc /usr/local/bin/

      At which point you can use the “lxc” command.

      Conclusion

      The LXD client can be built on all the main operating systems and on just about every architecture; this makes it very easy for anyone to interact with existing LXD servers, whether they’re themselves using a Linux machine or not.

      Thanks to our pretty strict backward compatibility rules, the version of the client doesn’t really matter. Older clients can talk to newer servers and newer clients can talk to older servers. Obviously in both cases some features will not be available, but normal container workflow operations will work fine.

      Extra information

      The main LXD website is at: https://linuxcontainers.org/lxd
      Development happens on Github at: https://github.com/lxc/lxd
      Mailing-list support happens on: https://lists.linuxcontainers.org
      IRC support happens in: #lxcontainers on irc.freenode.net
      Try LXD online: https://linuxcontainers.org/lxd/try-it

      Read more
      Leo Arias

      After a little break, on the first Friday of February we resumed the Ubuntu Testing Days.

      This session was pretty interesting, because after setting some of the bases last year we are now ready to dig deep into the most important projects that will define the future of Ubuntu.

      We talked about Ubuntu Core, a snap package that is the base of the operating system. Because it is a snap, it gets the same benefits as all the other snaps: automatic updates, rollbacks in case of error during installation, read-only mount of the code, isolation from other snaps, multiple channels on the store for different levels of stability, etc.

      The features, philosophy and future of Core were presented by Michael Vogt and Zygmunt Krynicki, and then Federico Giménez did a great demo of how to create an image and test it in QEMU.

      Click the image below to watch the full session.

      Alt text

      There are plenty of resources in the Ubuntu websites related to Ubuntu Core.

      To get started, we recommend following this guide to run the operating system in a virtual machine.

      After that, and if you are feeling brave and want to help Michael, Zygmunt and Federico, you can download the candidate image instead, from http://cdimage.ubuntu.com/ubuntu-core/16/candidate/pending/ubuntu-core-16-amd64.img.xz. This is the image that's being currently tested, so if you find something wrong or weird, please report a bug in Launchpad.

      Finally, if you want to learn more about the snaps that compose the image and take a peek at the things that we'll cover in the following testing days, you can follow the tutorial to create your own Core image.

      On this session we were also accompanied by Robert Wolff who works on 96boards at Linaro. He has an awesome show every Thursday called Open Hours. At 96boards they are building open Linux boards for prototyping and embedded computing. Anybody can jump into the Open Hours to learn more about this cool work.

      The great news that Robert brought is that both Open Hours and Ubuntu Testing Days will be focused on Ubuntu Core this month. He will be our guest again next Friday, February 10th, where he will be talking about the DragonBoard 410c. Also my good friend Oliver Grawert will be with us, and he will talk about the work he has been doing to enable Ubuntu in this board.

      Great topics ahead, and a full new world of possibilities now that we are mixing free software with open hardware and affordable prototyping tools. Remember, every Friday in http://ubuntuonair.com/, don't miss it.

      Read more
      Stéphane Graber

      LXD logo

      What’s Ubuntu Core?

      Ubuntu Core is a version of Ubuntu that’s fully transactional and entirely based on snap packages.

      Most of the system is read-only. All installed applications come from snap packages and all updates are done using transactions. Meaning that should anything go wrong at any point during a package or system update, the system will be able to revert to the previous state and report the failure.

      The current release of Ubuntu Core is called series 16 and was released in November 2016.

      Note that on Ubuntu Core systems, only snap packages using confinement can be installed (no “classic” snaps) and that a good number of snaps will not fully work in this environment or will require some manual intervention (creating user and groups, …). Ubuntu Core gets improved on a weekly basis as new releases of snapd and the “core” snap are put out.

      Requirements

      As far as LXD is concerned, Ubuntu Core is just another Linux distribution. That being said, snapd does require unprivileged FUSE mounts and AppArmor namespacing and stacking, so you will need the following:

      • An up to date Ubuntu system using the official Ubuntu kernel
      • An up to date version of LXD

      Creating an Ubuntu Core container

      The Ubuntu Core images are currently published on the community image server.
      You can launch a new container with:

      stgraber@dakara:~$ lxc launch images:ubuntu-core/16 ubuntu-core
      Creating ubuntu-core
      Starting ubuntu-core

      The container will take a few seconds to start, first executing a first-stage loader that determines which read-only image to use and sets up the writable layers. You don’t want to interrupt the container at that stage, and “lxc exec” will likely just fail as pretty much nothing is available at that point.

      Seconds later, “lxc list” will show the container IP address, indicating that it’s booted into Ubuntu Core:

      stgraber@dakara:~$ lxc list
      +-------------+---------+----------------------+----------------------------------------------+------------+-----------+
      |     NAME    |  STATE  |          IPV4        |                      IPV6                    |    TYPE    | SNAPSHOTS |
      +-------------+---------+----------------------+----------------------------------------------+------------+-----------+
      | ubuntu-core | RUNNING | 10.90.151.104 (eth0) | 2001:470:b368:b2b5:216:3eff:fee1:296f (eth0) | PERSISTENT | 0         |
      +-------------+---------+----------------------+----------------------------------------------+------------+-----------+

      You can then interact with that container the same way you would any other:

      stgraber@dakara:~$ lxc exec ubuntu-core bash
      root@ubuntu-core:~# snap list
      Name       Version     Rev  Developer  Notes
      core       16.04.1     394  canonical  -
      pc         16.04-0.8   9    canonical  -
      pc-kernel  4.4.0-45-4  37   canonical  -
      root@ubuntu-core:~#

      Updating the container

      If you’ve been tracking the development of Ubuntu Core, you’ll know that those versions above are pretty old. That’s because the disk images that are used as the source for the Ubuntu Core LXD images are only refreshed every few months. Ubuntu Core systems will automatically update once a day and then automatically reboot to boot onto the new version (and revert if this fails).

      If you want to immediately force an update, you can do it with:

      stgraber@dakara:~$ lxc exec ubuntu-core bash
      root@ubuntu-core:~# snap refresh
      pc-kernel (stable) 4.4.0-53-1 from 'canonical' upgraded
      core (stable) 16.04.1 from 'canonical' upgraded
      root@ubuntu-core:~# snap version
      snap 2.17
      snapd 2.17
      series 16
      root@ubuntu-core:~#

      And then reboot the system and check the snapd version again:

      root@ubuntu-core:~# reboot
      root@ubuntu-core:~# 
      
      stgraber@dakara:~$ lxc exec ubuntu-core bash
      root@ubuntu-core:~# snap version
      snap 2.21
      snapd 2.21
      series 16
      root@ubuntu-core:~#

      You can get a history of all snapd interactions with:

      stgraber@dakara:~$ lxc exec ubuntu-core snap changes
      ID  Status  Spawn                 Ready                 Summary
      1   Done    2017-01-31T05:14:38Z  2017-01-31T05:14:44Z  Initialize system state
      2   Done    2017-01-31T05:14:40Z  2017-01-31T05:14:45Z  Initialize device
      3   Done    2017-01-31T05:21:30Z  2017-01-31T05:22:45Z  Refresh all snaps in the system

      Installing some snaps

      Let’s start with the simplest snaps of all, the good old Hello World:

      stgraber@dakara:~$ lxc exec ubuntu-core bash
      root@ubuntu-core:~# snap install hello-world
      hello-world 6.3 from 'canonical' installed
      root@ubuntu-core:~# hello-world
      Hello World!

      And then move on to something a bit more useful:

      stgraber@dakara:~$ lxc exec ubuntu-core bash
      root@ubuntu-core:~# snap install nextcloud
      nextcloud 11.0.1snap2 from 'nextcloud' installed

      Then hit your container over HTTP and you’ll get to your newly deployed Nextcloud instance.

      If you feel like testing the latest LXD straight from git, you can do so with:

      stgraber@dakara:~$ lxc config set ubuntu-core security.nesting true
      stgraber@dakara:~$ lxc exec ubuntu-core bash
      root@ubuntu-core:~# snap install lxd --edge
      lxd (edge) git-c6006fb from 'canonical' installed
      root@ubuntu-core:~# lxd init
      Name of the storage backend to use (dir or zfs) [default=dir]: 
      
      We detected that you are running inside an unprivileged container.
      This means that unless you manually configured your host otherwise,
      you will not have enough uid and gid to allocate to your containers.
      
      LXD can re-use your container's own allocation to avoid the problem.
      Doing so makes your nested containers slightly less safe as they could
      in theory attack their parent container and gain more privileges than
      they otherwise would.
      
      Would you like to have your containers share their parent's allocation (yes/no) [default=yes]? 
      Would you like LXD to be available over the network (yes/no) [default=no]? 
      Would you like stale cached images to be updated automatically (yes/no) [default=yes]? 
      Would you like to create a new network bridge (yes/no) [default=yes]? 
      What should the new bridge be called [default=lxdbr0]? 
      What IPv4 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 
      What IPv6 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 
      LXD has been successfully configured.

      And because container inception never gets old, let’s run Ubuntu Core 16 inside Ubuntu Core 16:

      root@ubuntu-core:~# lxc launch images:ubuntu-core/16 nested-core
      Creating nested-core
      Starting nested-core 
      root@ubuntu-core:~# lxc list
      +-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
      |    NAME     |  STATE  |         IPV4        |                       IPV6                    |    TYPE    | SNAPSHOTS |
      +-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
      | nested-core | RUNNING | 10.71.135.21 (eth0) | fd42:2861:5aad:3842:216:3eff:feaf:e6bd (eth0) | PERSISTENT | 0         |
      +-------------+---------+---------------------+-----------------------------------------------+------------+-----------+

      Conclusion

      If you ever wanted to try Ubuntu Core, this is a great way to do it. It’s also a great tool for snap authors to make sure their snap is fully self-contained and will work in all environments.

      Ubuntu Core is a great fit for environments where you want to ensure that your system is always up to date and is entirely reproducible. This does come with a number of constraints that may or may not work for you.

      And lastly, a word of warning. Those images are considered as good enough for testing, but aren’t officially supported at this point. We are working towards getting fully supported Ubuntu Core LXD images on the official Ubuntu image server in the near future.

      Extra information

      The main LXD website is at: https://linuxcontainers.org/lxd
      Development happens on Github at: https://github.com/lxc/lxd
      Mailing-list support happens on: https://lists.linuxcontainers.org
      IRC support happens in: #lxcontainers on irc.freenode.net
      Try LXD online: https://linuxcontainers.org/lxd/try-it
