Canonical Voices

abeato

In the conclusions to my last post, “Modifying System Call Arguments With ptrace”, I mentioned that one of the main drawbacks of the approach explained there for modifying system call arguments was that there is a process switch for each system call performed by the tracee. I also suggested a possible way to overcome that issue by using ptrace jointly with seccomp, with the latter making sure the tracer gets only the system calls we are interested in. In this post I develop this idea further and show how it can be achieved.

For this, I have created a small example that can be found on GitHub, alongside the example used in the previous post. The main idea is to use seccomp with a Berkeley Packet Filter (BPF) that specifies the conditions under which the tracer gets interrupted.

Now we will go through the source code, with emphasis on the parts that differ from the original example. Skipping the include directives and the forward declarations, we get to main():

int main(int argc, char **argv)
{
    pid_t pid;
    int status;

    if (argc < 2) {
        fprintf(stderr, "Usage: %s <prog> <arg1> ... <argN>\n", argv[0]);
        return 1;
    }

    if ((pid = fork()) == 0) {
        /* If open syscall, trace */
        struct sock_filter filter[] = {
            BPF_STMT(BPF_LD+BPF_W+BPF_ABS, offsetof(struct seccomp_data, nr)),
            BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_open, 0, 1),
            BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_TRACE),
            BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
        };
        struct sock_fprog prog = {
            .filter = filter,
            .len = (unsigned short) (sizeof(filter)/sizeof(filter[0])),
        };
        ptrace(PTRACE_TRACEME, 0, 0, 0);
        /* To avoid the need for CAP_SYS_ADMIN */
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) == -1) {
            perror("prctl(PR_SET_NO_NEW_PRIVS)");
            return 1;
        }
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog) == -1) {
            perror("when setting seccomp filter");
            return 1;
        }
        kill(getpid(), SIGSTOP);
        return execvp(argv[1], argv + 1);
    } else {
        waitpid(pid, &status, 0);
        ptrace(PTRACE_SETOPTIONS, pid, 0, PTRACE_O_TRACESECCOMP);
        process_signals(pid);
        return 0;
    }
}

The main change here compared to the original code is the set-up of a BPF in the tracee, right after the call to fork(). BPFs have an intimidating syntax at first glance, but once you grasp the basic concepts behind them they are actually quite easy to read. BPFs are defined as a sort of virtual machine (VM) with one data register or accumulator, one index register, and an implicit program counter (PC). Its “assembly” instructions are defined as a structure with this format:

struct sock_filter {    /* Filter block */
    __u16 code;         /* Actual filter code */
    __u8  jt;           /* Jump true */
    __u8  jf;           /* Jump false */
    __u32 k;            /* Generic multiuse field */
};

There are codes (opcodes) for loading into the accumulator, jumping, and so on. jt and jf are increments to the program counter that are used by jump instructions, while k is an auxiliary value whose meaning depends on the opcode.

BPFs have an addressable data space, which in the networking case is a packet datagram, and in the seccomp case is the following structure:

struct seccomp_data {
    int   nr;                   /* System call number */
    __u32 arch;                 /* AUDIT_ARCH_* value
                                   (see <linux/audit.h>) */
    __u64 instruction_pointer;  /* CPU instruction pointer */
    __u64 args[6];              /* Up to 6 system call arguments */
};

So basically what BPFs do in seccomp is to operate on this data, and return a value that tells the kernel what to do next: allow the process to perform the call (SECCOMP_RET_ALLOW), kill it (SECCOMP_RET_KILL), or other options as specified in the seccomp man page.
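For example (a sketch of my own, not part of the example program, and assuming <errno.h> and <linux/seccomp.h> are included), a filter can make a syscall fail with a given errno instead of killing the process, by encoding the error in the SECCOMP_RET_DATA bits of the return value:

/* Make the syscall fail with EPERM instead of running it */
BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ERRNO | (EPERM & SECCOMP_RET_DATA)),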

As can be seen, struct seccomp_data contains more than enough information for our purposes: we can filter based on the system call number and on the arguments.
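To illustrate argument filtering (again just a sketch, not part of the example in the repository, and assuming <fcntl.h> is also included for O_WRONLY), a filter that traces open calls only when the O_WRONLY flag is set could look like this. Note that args[] entries are 64-bit, so on little-endian x86-64 we load the low 32 bits of the flags argument:

/* Sketch: trace open() only when O_WRONLY is in the flags argument */
struct sock_filter filter[] = {
    /* accumulator = syscall number */
    BPF_STMT(BPF_LD+BPF_W+BPF_ABS, offsetof(struct seccomp_data, nr)),
    /* not open? jump 3 instructions ahead, to ALLOW */
    BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_open, 0, 3),
    /* accumulator = low 32 bits of args[1] (the flags) */
    BPF_STMT(BPF_LD+BPF_W+BPF_ABS, offsetof(struct seccomp_data, args[1])),
    /* O_WRONLY bit set? fall through to TRACE, otherwise skip to ALLOW */
    BPF_JUMP(BPF_JMP+BPF_JSET+BPF_K, O_WRONLY, 0, 1),
    BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_TRACE),
    BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
};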

With all this information we can now look at the filter definition. BPF filters are defined as an array of sock_filter structures, where each entry is a BPF instruction. In our case we have:

BPF_STMT(BPF_LD+BPF_W+BPF_ABS, offsetof(struct seccomp_data, nr)),
BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_open, 0, 1),
BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_TRACE),
BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),

BPF_STMT and BPF_JUMP are a couple of simple macros that fill in the sock_filter structure. They differ in their arguments: BPF_JUMP also takes jump offsets. The first argument is in both cases the “opcode”, which is built from macros that act as mnemonics: for instance, the first one loads into the accumulator (BPF_LD) a word (BPF_W) using absolute addressing (BPF_ABS). More about this can be read here, for instance.
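BPF_STMT and BPF_JUMP themselves hold no magic; in <linux/filter.h> they are little more than initializers for struct sock_filter:

#define BPF_STMT(code, k) { (unsigned short)(code), 0, 0, k }
#define BPF_JUMP(code, k, jt, jf) { (unsigned short)(code), jt, jf, k }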

Analysing the filter in more detail: the first instruction asks the VM to load the syscall number, nr, into the accumulator. The second compares it to the number for the open syscall: if they are equal the PC advances by jt=0 positions, so the third instruction (the one returning SECCOMP_RET_TRACE) runs next; otherwise the PC advances by jf=1, skipping to the fourth instruction (the offsets are counted from the instruction that follows the jump). So for an open syscall we return SECCOMP_RET_TRACE, which will invoke the tracer, and otherwise we return SECCOMP_RET_ALLOW, which lets the tracee run the syscall without further impediment.

Moving forward, the first call to prctl() sets PR_SET_NO_NEW_PRIVS, which prevents the process and its children from gaining privileges they do not already have. This is needed so that the following prctl() call, which installs the seccomp filter with the PR_SET_SECCOMP option, succeeds even when we are not root. After that, we call execvp() as in the ptrace-only example.

Switching to what the parent does, we see that the changes are very few. In main(), we set the PTRACE_O_TRACESECCOMP option, which makes the tracee stop when a filter returns SECCOMP_RET_TRACE and signals the event to the tracer. The other change in this function is that we no longer need to set PTRACE_O_TRACESYSGOOD, as we are now being interrupted by seccomp events, not by system-call stops.

Moving now to the next function,

static void process_signals(pid_t child)
{
    const char *file_to_redirect = "ONE.txt";
    const char *file_to_avoid = "TWO.txt";

    while(1) {
        char orig_file[PATH_MAX];

        /* Wait for open syscall start */
        if (wait_for_open(child) != 0) break;

        /* Find out file and re-direct if it is the target */

        read_file(child, orig_file);
        printf("[Opening %s]\n", orig_file);

        if (strcmp(file_to_avoid, orig_file) == 0)
            redirect_file(child, file_to_redirect);
    }
}

we see that we now invoke wait_for_open() only once per open call. Unlike when tracing each syscall, where the tracer was interrupted both before and after the syscall's execution, seccomp interrupts us only before the call is processed. We also add a trace message here for demonstration purposes.

After that, we have

static int wait_for_open(pid_t child)
{
    int status;

    while (1) {
        ptrace(PTRACE_CONT, child, 0, 0);
        waitpid(child, &status, 0);
        printf("[waitpid status: 0x%08x]\n", status);
        /* Is it our filter for the open syscall? */
        if (status >> 8 == (SIGTRAP | (PTRACE_EVENT_SECCOMP << 8)) &&
            ptrace(PTRACE_PEEKUSER, child,
                   sizeof(long)*ORIG_RAX, 0) == __NR_open)
            return 0;
        if (WIFEXITED(status))
            return 1;
    }
}

Here we use PTRACE_CONT instead of PTRACE_SYSCALL: we let the tracee run freely and, because we have set the PTRACE_O_TRACESECCOMP option, we get interrupted every time there is a match in the BPF. The other change here, besides a trace message, is how we check whether we have received the event we are interested in, as the status word is obviously different; the details can be found in ptrace’s man page. Note also that we could actually skip the test for __NR_open, as the BPF will interrupt us only for open syscalls.
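As an aside, at a PTRACE_EVENT_SECCOMP stop the tracer can also retrieve the SECCOMP_RET_DATA portion of the filter's return value (its lower 16 bits) with PTRACE_GETEVENTMSG, something we will come back to at the end of the post. A minimal sketch, not used in the example since our filter always returns zero in those bits:

unsigned long msg;

/* The event message at a seccomp stop is the filter's SECCOMP_RET_DATA */
if (ptrace(PTRACE_GETEVENTMSG, child, 0, &msg) == 0)
    printf("[filter data: 0x%04lx]\n", msg);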

The rest of the code, which is the part that actually changes the argument to the open syscall, is exactly the same. Now, let’s check that this works as advertised:

$ git clone https://github.com/alfonsosanchezbeato/ptrace-redirect.git
$ cd ptrace-redirect/
$ cat ONE.txt 
This is ONE.txt
$ cat TWO.txt 
This is TWO.txt
$ gcc redir_filter.c -o redir_filter
$ ./redir_filter cat TWO.txt 
[waitpid status: 0x0000057f]
[waitpid status: 0x0007057f]
[Opening /etc/ld.so.cache]
[waitpid status: 0x0007057f]
[Opening /lib/x86_64-linux-gnu/libc.so.6]
[waitpid status: 0x0007057f]
[Opening /usr/lib/locale/locale-archive]
[waitpid status: 0x0007057f]
[Opening TWO.txt]
This is ONE.txt
[waitpid status: 0x00000000]

It does indeed! Note that the traces show the tracer being interrupted only by open syscalls (besides an initial trap and the child’s exit). If we added the same traces to the ptrace-only program we would see many more interruptions.

Finally, a word of caution regarding syscall numbers: in this post and in the previous one we are assuming an x86-64 architecture, so the programs would need to be adapted to run on other archs. There is also an important catch here: we are implicitly assuming that the child process run by the execvp() call is also x86-64, as we are filtering on the syscall number for that arch. This means the example will not work if the child program is compiled for i386. To make it work properly in that case too, we must check the architecture in the BPF, by looking at the arch field of seccomp_data, and compare against the appropriate syscall number for each case. We would also need to check the arch before looking at the tracee registers; see an example of how to do this here (alternatively, we could make the BPF return this data in the SECCOMP_RET_DATA bits of its return value, which the tracer can retrieve via PTRACE_GETEVENTMSG). Needless to say, for arm64/32 we would have similar issues.
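For reference, a sketch of an arch-checking filter follows. It assumes <linux/audit.h> is included and takes the simple way out, killing processes of any other architecture; a complete implementation would instead compare against the right syscall number for each supported arch:

/* Sketch: kill anything that is not x86-64, then filter as before */
struct sock_filter filter[] = {
    BPF_STMT(BPF_LD+BPF_W+BPF_ABS, offsetof(struct seccomp_data, arch)),
    /* x86-64? skip the KILL and carry on with the syscall check */
    BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, AUDIT_ARCH_X86_64, 1, 0),
    BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_KILL),
    BPF_STMT(BPF_LD+BPF_W+BPF_ABS, offsetof(struct seccomp_data, nr)),
    BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_open, 0, 1),
    BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_TRACE),
    BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
};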

Read more
Barry McGee

One of the most complex aspects of managing continuous development on a large codebase is ensuring that it remains stable.

This problem is particularly acute when building out front end architecture using HTML & CSS due to the inherently global nature of CSS.

How many times have you shipped a CSS change to one small part of a website only to find you’ve inadvertently broken a page element on a different page entirely?

This problem usually arises because all your CSS loads via one external file added to each page of your website. If you don’t namespace or isolate your styles correctly, changes to your CSS may have unintended consequences.

Structuring your CSS using the BEM convention or similar can help prevent such clashes. However, in a fast moving team where multiple developers are working on a large codebase daily, relying on code convention alone is often not enough to stop visual regression bugs from creeping in.

Ideally, you or a team member should check each page of your site, in turn, to make sure nothing has broken, right? While that’s a solid QA approach, it doesn’t scale very well. As your site grows, it becomes far too time-consuming to check each page, especially considering you may also need to check each page at multiple breakpoints.

That’s where automated Visual Regression Testing (VRT) tools can seriously lighten your workload. A VRT tool will typically run through your site and capture a baseline snapshot of all your pages to use as a benchmark.

After you then make some changes, you run the process again and the VRT tool will compare the latest capture of your pages with the baseline capture and highlight the differences. It’s at this stage where you’ll be alerted to any unintended consequences.

The concept of VRT has been around for a few years but up until now, most solutions have involved setting up your process locally, typically involving quite a few moving parts. When trying to get a project team to integrate VRT as part of their workflow using one of these solutions, we always ran into trouble as it was so difficult to keep individual developer setups consistent – inevitably, I’d spend longer debugging VRT setup than I would visual diffs.

I then stumbled upon Percy.io, which offers VRT software as a service. I was immediately interested in how we might utilise it for Vanilla Framework, our constantly evolving CSS framework.

I immediately signed up for a trial and was quickly impressed by its GitHub integration and ease of use. Percy is unobtrusive: it only comes into play when a feature progresses to the pull request stage. It runs as part of the Travis CI build and reports back if it has found any visual diffs to review. You can also configure Percy to test across defined breakpoints.


Percy’s Github integration is a big win

The person reviewing the PR can then click through to the project dashboard on percy.io and review the highlighted diffs. If the changes are expected, based on what has been outlined in the PR, they can be approved.


Comparing different pages for visual differences

When the feature merges, these changes then become the baseline. If unexpected changes are highlighted, the reviewer can flag them to the developer for amendment.

As we make multiple changes a day to our Vanilla codebase while aiming for a weekly release, having VRT as part of our continuous integration has afforded us extra confidence that our releases do not contain missed bugs and regressions.

Read more
Anthony Dillon

The Vanilla team needed to solve two issues which have been paining the development of Vanilla Framework for some time.

Firstly, we needed to improve our workflow for testing and QAing components on our local machines. Until now, we had been using npm link to connect our local branches of Vanilla to our local website branch, then reviewing the examples on the components page of the documentation. This added a lot of overhead to reviewing Vanilla.

Secondly, since we actually build the docs.vanillaframework.io site using the documentation theme (vanilla-docs-theme), the Vanilla pattern examples we ended up reviewing were no longer styled purely by Vanilla Framework, but were extended by the theme.

The documentation of the matrix pattern in Vanilla

The solution

To solve both these issues, we decided to decouple the examples from the documentation. This change allowed us to move the coded examples of the patterns into a separate “examples” directory of the codebase and remove the hard-coded examples from the documentation.

As the examples were a part of the Vanilla Framework code we simply linked each example page with the Vanilla built from the same branch. This means all examples are only styled by Vanilla and nothing else.

Another benefit of this change is that we now have an easy way to find an example of a pattern when reviewing or QAing a pull request. Previously we had to do the npm link dance; now we simply check out the branch and run the internal Jekyll site to build Vanilla, giving us a directory of pattern pages.

Examples in the docs

So we were happy with these changes: we had solved the issues at hand and were ready to head off for a celebratory coffee. But we couldn’t leave the documentation without examples and code snippets.

To solve this issue, we used an embedding approach like Codepen’s.

Example of a Codepen embed

We set about creating a small JavaScript project that finds links with a specific class on the page, grabs the href attribute from each, and replaces the link with an iframe of that URL. This gave us a nicely progressively enhanced experience:

An example of progressive enhancement: on the left, with JavaScript enabled; on the right, with JavaScript disabled.

We were still lacking the code snippets, so we made the script also pull the HTML source of the linked page, then display the contents of its body in a code block appended after the iframe.

The wrap-up

And that was it. The solution gives us:

  • A single place for example code
  • Examples only displayed using Vanilla
  • A local testing environment
  • Documentation examples that are automatically up to date

We named this mini project example-js. Please feel free to fork it, use it or file any issues you may find.

Read more
Will Moggridge

Introducing tutorials.ubuntu.com

The web team has been hard at work on our new Ubuntu Tutorials website and we are proud to share our work with the community. Our first set of tutorials is based around snap usage and building snaps with snapcraft. We will continue working on our catalogue to broaden it to a variety of subjects.

Ubuntu Tutorials is part of a bigger project to improve our documentation across our other projects. Our goals are to improve the discoverability and the ease of use for our documentation. Having followed Ubuntu and been part of the community for many years, I am excited to be involved with this project. I hope we can keep moving forward with this work and give back to the community.

Polymer and our source code

The website is built using Google’s Polymer framework with their Codelabs web components. Polymer has been a great and enjoyable experience and really made web components so much more exciting. I am already looking at where I can use these technologies in the rest of our projects. We recently had a hack day and had the opportunity to explore putting Vanilla Framework in web components. I am happy with our initial work on Vanilla web components, and we are looking forward to continuing to explore and develop them.

The Ubuntu Tutorials website source code is available for you to dive into, at the Ubuntu Tutorials GitHub repository.
A big thank you to Didier Roche, whose work was the foundation for this.

Our next steps

Looking to the future, we are already thinking about and preparing improvements for the site. We have been really happy with the feedback we are getting on the GitHub issues page. A number of the issues have been requests for tutorials on certain topics. This is really useful and interesting to us, as it shows us which areas to focus on.

I am interested in simplifying our process for creating and contributing to Ubuntu Tutorials, not only for us but also to empower you. One strong area for this is adding the ability to write tutorials in Markdown. This will increase visibility for all and remove some overhead for us, while also making it simpler for people to contribute to our catalogue. We are currently looking into this and hope to have a solution soon.

Read more
facundo

Alcohol


He dropped four ice cubes into the glass, hesitated a few moments, and fished one out with his fingers, tossing it back into the ice bucket. About the amount of whisky he had no doubts: he filled the glass almost to the brim.

Without leaving the side of the shabby little bar against the living-room wall, he took the first long swig, and only then did he move to the window.

I didn’t know whether to look at him or at her, as she pulled her negligee closed more than necessary, clutching it tightly, tense, outlining the near absence of curves on her too-thin body.

“You filthy drunk!” she shouted at him, almost in desperation.

He ignored her and kept looking out the window. From where I sat on the couch I couldn’t see his face, but I guessed his gaze was lost. He wasn’t really looking out the window, I supposed; rather, he was using it as an excuse not to have to look at anything else.

She, her voice still hoarse from crying but much calmer, told him:

“Alcohol, that darkness where cowards go to hide from themselves.”

He turned around, surprise drawn on his face, partly because she wasn’t one for that kind of high-flown philosophical pronouncement, but partly (and every time I remember that day I am more certain of it) because it finally touched some chord inside him.

He left the half-finished glass resting on the window frame, opened the door, and we never saw him again.

Read more
Leo Arias

Here at Ubuntu we are working hard on the future of free software distribution. We want developers to release their software to any Linux distro in a way that's safe, simple and flexible. You can read more about this at snapcraft.io.

This work is extremely fun because we have to work constantly with a wild variety of free software projects to make sure that the tools we write are usable and that the workflow we are proposing makes sense to developers and gives them a lot of value in return. Today I want to talk about one of those projects: IPFS.

IPFS is the permanent and decentralized web. How cool is that? You get a peer-to-peer distributed file system where you store and retrieve files. They have a nice demo on their website, and you can give it a try on Ubuntu Trusty, Xenial or later by running:

$ sudo snap install ipfs

screenshot of the IPFS peers

So, here's one of the problems we are trying to solve. We have millions of users on the Trusty version of Ubuntu, released during 2014. We also have millions of users on the Xenial version, released during 2016. Those two versions are stable now and, following the Ubuntu policies, they will get only security updates for 5 years. That means it's very hard, almost impossible, for a young project like IPFS to get into the Ubuntu archives for those releases. There will be no simple way for all those users to enjoy IPFS; they would have to use a Personal Package Archive or install the software from a tarball. Both methods are complex and carry high security risks, and both require the users to put more trust in the developers than they should ever put in anybody.

We are closing the Zesty release cycle, which will go out in April, so it's too late there too. IPFS could make a deb, put it into Debian, wait for it to sync to Ubuntu, and then it would likely be ready for the October release. Aside from the fact that we would have to wait until October, there are a few other problems. First, making a deb is not simple. It's not too hard either, but it takes quite some time to learn to do right. Second, I mentioned that IPFS is young: they are on version 0.4.6. So it's very unlikely that they will want to support this early version for as long as Debian and Ubuntu require. And they are not only young, they are also fast. They add new features and bug fixes every day and make new releases almost every week, so they need a feedback loop that's just as fast. A 6-month release cycle is way too slow. That works nicely for some kinds of free software projects, but not for one like IPFS.

They have been kind enough to let me play with their project and use it as a test subject to verify our end-to-end workflow. My passion is testing, so I have been focusing on continuous delivery to get happy early adopters and constant feedback about the most recent changes in the project.

I started by making a snapcraft.yaml file that contains all the metadata required for the snap package. The file is pretty simple and to make the first version it took me just a couple of minutes, true story. Since then I've been slowly improving and updating it with small changes. If you are interested in doing the same for your project, you can read the tutorial to create a snap.

I built and tested this snap locally on my machines. It worked nicely, so I pushed it to the edge channel of the Ubuntu Store. There the snap is not visible in user searches; only the people who know about the snap are able to install it. I told a couple of my friends to give it a try, and they came back telling me how cool IPFS was. A great choice for my first test subject, no doubt.

At this point, following the pace of the project by manually building and pushing new versions to the store was too demanding; they move too fast. So I started working on continuous delivery, translating everything I did manually into scripts and hooking them into Travis CI. After a few days it got pretty fancy; take a look at the GitHub repo of the IPFS snap if you are curious. Every day, a new version is packaged from the latest state of the master branch of IPFS and pushed to the edge channel, so we have a constant flow of new releases for hardcore early adopters. After they install IPFS from the edge channel once, the package will be automatically updated on their machines every day, so they don't have to do anything else, just use IPFS as they normally would.

Now, with this constant stream of updates, my two friends and I were not enough to validate all the new features. We could never be sure whether the project was stable enough to be pushed to the stable channel and made available to the millions and millions of Ubuntu users out there.

Luckily, the Ubuntu community is huge, and they are very nice people. It was time to use the wisdom of the crowd. I invited the bravest of them to keep the snap installed from edge, and I defined a simple pipeline that leads to the stable release using the four channels available in the Ubuntu Store:

  • When a revision is tagged in the IPFS master repo, it is automatically pushed to edge channel from travis, just as with any other revision.
  • Travis notifies me about this revision.
  • I install this tagged revision from edge, and run a super quick test to make sure that the IPFS server starts.
  • If it starts, I push the snap to the beta channel.
  • With a couple of my friends, we run a suite of smoke tests.
  • If everything goes well, I push the snap to the candidate channel.
  • I notify the community of Ubuntu testers about a new version in the candidate channel. This is where the magic of crowd testing happens.
  • The Ubuntu testers run the smoke tests in all their machines, which gives us the confidence we need because we are confirming that the new version works on different platforms, distros, distro releases, countries, network topologies, you name it.
  • This candidate release is left for some time in this channel, to let the community run thorough exploratory tests, trying to find weird usage combinations that could break the software.
  • If the tag was for a final upstream release, the community also runs update tests to make sure that the users with the stable snap installed will get this new version without issues.
  • After all the problems found by the community have been resolved or at least acknowledged and triaged as not blockers, I move the snap from candidate to the stable channel.
  • All the users following the stable channel will automatically get a very well tested version, thanks to the community who contributed with the testing and accepted a higher level of risk.
  • And we start again, the never-ending cycle of making free software :)

Now, let's go back to the discussion about trust. Debian and Ubuntu, and most of the other distros, rely on maintainers and distro developers to package and review every change to the software that they put in their archives. That is a lot of work, and it slows down the feedback loop a lot, as we have seen. Here we have automated most of the tasks of a distro maintainer, and new revisions can be delivered directly to the users without any reviews. So the users are trusting their upstream developers directly, without intermediaries, but it's very different from the unsafe methods that existed before. The code of a snap is installed read-only and very well constrained, with access only to its own safe space. Any other access needs to be declared by the snap, and the user is always in control of which accesses are permitted to the application.

This way upstream developers can go faster without exposing their users to unnecessary risks. All they need is a simple snapcraft.yaml file and a continuous delivery pipeline of their own, on their own timeline.

By removing the distro as the intermediary between the developers and their users, we are also opening up a new world of possibilities for the Ubuntu community. Now they can collaborate constantly and directly with upstream developers, closing this quick feedback loop. In the future we will tell our children about the good old days, when we had to report a bug in Ubuntu, which would be copied to Debian, then sent upstream to the developers, and after 6 months the fix would arrive. It was fun, and it led us to where we are today, but I will not miss it at all.

Finally, what's next for IPFS? After this experiment we got more than 200 unique testers and almost 300 test installs. I now have great confidence in this workflow: new revisions were delivered on time, existing Ubuntu testers became new IPFS contributors, and I can now safely recommend that IPFS users install the stable snap. But there's still plenty of work ahead. There are still manual steps in the pipeline that can be scripted, the smoke tests can be automated to leave more free time for exploratory testing, and we can also release to the armhf and arm64 architectures to get IPFS into the IoT world. And, well, of course the developers are not stopping; they keep releasing interesting new features. As I said, plenty of opportunities for us as distro contributors.

screenshot of the IPFS snap stats

I'd like to thank everybody who tested the IPFS snap, especially the following people for their help and feedback:

  • freekvh
  • urcminister
  • Carla Sella
  • casept
  • Colin Law
  • ventrical
  • cariboo
  • howefield

<3

If you want to release your project to the Ubuntu store, take a look at the snapcraft docs, the Ubuntu tutorials, and come talk to us in Rocket Chat.

Read more
Stéphane Graber

LXD logo

The LXD demo server

The LXD demo server is the service behind https://linuxcontainers.org/lxd/try-it.
We use it to showcase LXD by leading visitors through an interactive tour of LXD’s features.

Rather than use some JavaScript simulation of LXD and its client tool, we give our visitors a real root shell in a LXD container with nesting enabled. This environment uses all of LXD’s resource limits as well as a very strict firewall to prevent abuse and offer everyone a great experience.

This is done using lxd-demo-server which can be found at: https://github.com/lxc/lxd-demo-server
The lxd-demo-server is a daemon that offers a public REST API for use from a web browser.
It supports:

  • Creating containers from an existing container or from a LXD image
  • Choosing what command to execute in the containers on connection
  • Letting you choose specific profiles to apply to the containers
  • An API to record user feedback
  • An API to fetch usage statistics for reporting
  • A number of resource restrictions:
    • CPU
    • Disk quota (if using btrfs or zfs as the LXD storage backend)
    • Processes
    • Memory
    • Number of sessions per IP
    • Time limit for the session
    • Total number of concurrent sessions
  • Requiring the user to read and agree to terms of service
  • Recording all sessions in a sqlite3 database
  • A maintenance mode

All of it is configured through a simple yaml configuration file.

Setting up your own

The LXD demo server is now available as a snap package and interacts with the snap version of LXD. To install it on your own system, all you need to do is:

Make sure you don’t have the deb version of LXD installed

ubuntu@djanet:~$ sudo apt remove --purge lxd lxd-client
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following packages will be REMOVED:
 lxd* lxd-client*
0 upgraded, 0 newly installed, 2 to remove and 0 not upgraded.
After this operation, 25.3 MB disk space will be freed.
Do you want to continue? [Y/n] 
(Reading database ... 59776 files and directories currently installed.)
Removing lxd (2.0.9-0ubuntu1~16.04.2) ...
Warning: Stopping lxd.service, but it can still be activated by:
 lxd.socket
Purging configuration files for lxd (2.0.9-0ubuntu1~16.04.2) ...
Removing lxd-client (2.0.9-0ubuntu1~16.04.2) ...
Processing triggers for man-db (2.7.5-1) ...

Install the LXD snap

ubuntu@djanet:~$ sudo snap install lxd
lxd 2.8 from 'canonical' installed

Then configure LXD

ubuntu@djanet:~$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: 
Create a new ZFS pool (yes/no) [default=yes]? 
Name of the new ZFS pool [default=lxd]: 
Would you like to use an existing block device (yes/no) [default=no]? 
Size in GB of the new loop device (1GB minimum) [default=43]: 
Would you like LXD to be available over the network (yes/no) [default=no]? 
Would you like stale cached images to be updated automatically (yes/no) [default=yes]? 
Would you like to create a new network bridge (yes/no) [default=yes]? 
What should the new bridge be called [default=lxdbr0]? 
What IPv4 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 
What IPv6 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 
LXD has been successfully configured.

And finally install lxd-demo-server itself

ubuntu@djanet:~$ sudo snap install lxd-demo-server
lxd-demo-server git from 'stgraber' installed
ubuntu@djanet:~$ sudo snap connect lxd-demo-server:lxd lxd:lxd

At that point, you can hit http://127.0.0.1:8080 and will be greeted by the demo server’s welcome page.

To change the configuration, use:

ubuntu@djanet:~$ sudo lxd-demo-server.configure

And that’s it, you have your own instance of the demo server.

Security

As mentioned at the beginning, the demo server comes with a number of options to prevent users from using all the available resources themselves and bringing the whole thing down.

Those should be tweaked for your particular needs, and you should also update the total number of concurrent sessions so that you don’t end up over-committing resources.

On the network side of things, the demo server itself doesn’t do any kind of firewalling or similar network restrictions. If you plan on offering sessions to anyone online, you should make sure that the network which LXD is using is severely restricted and that the host this is running on is also placed in a very restricted part of your network.

Containers handed to strangers should never use “security.privileged”, as that would be a straight route to root privileges on the host. You should also stay away from bind-mounting any part of the host’s filesystem into those containers.

I would also very strongly recommend setting up very frequent security updates on your host, with kernel live patching or at least automatic reboots when a new kernel is installed. This should prevent a new kernel security issue from being immediately exploitable in your environment.

Conclusion

The LXD demo server was initially written as a quick hack to expose a LXD instance to the Internet, so we could let people try LXD online and also offer the upstream team a reliable environment in which we could have people attempt to reproduce their bugs.

It’s since grown a bit with new features contributed by users and with improvements we’ve made to the original experience on our website.

We’ve now served over 36000 sessions to over 26000 unique visitors. This has been a great tool for people to try and experience LXD, and I hope it will be similarly useful to other projects.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Dustin Kirkland

Mobile World Congress is simply one of the biggest trade shows in the entire world.

It's also, perhaps, the best place in the world to see how encompassing the Ubuntu ecosystem actually is.

Canonical and our partners demonstrated Ubuntu running on dozens of devices -- from robots, to augmented reality headsets, digital signs, vending machines, IoT Gateways, cell tower base stations, phones, tablets, servers, from super computers to tiny, battery powered embedded controllers.

But that was only a tiny fraction of the Ubuntu running at MWC!

We saw Ubuntu at the heart of demos from Dell, AMD, Intel, IBM, Deutsche Telekom, DJI, and hundreds of other booths, running autonomous drones, national telephone networks, self driving cars, smart safety helmets, inflight entertainment systems, and so, so, so much more.

Among the thousands of customers, prospects, fans, competitors, students, and industry executives, we even received a visit from (the somewhat controversial?) King of Spain!

It was an incredible week, with no fewer than 12 hours per day, on our feet, telling the Ubuntu story.
And what a story it is... I hope you enjoy.

Cheers,
Dustin

Read more
Dustin Kirkland



Yesterday, I delivered a talk to a lively audience at ContainerWorld in Santa Clara, California.

If I measured "the most interesting slides" by counting "the number of people who took a picture of the slide", then by far "the most interesting slides" are slides 8-11, which pose an answer to the question:
"Should I run my PaaS on top of my IaaS, or my IaaS on top of my PaaS?"
In the Ubuntu world, that answer is super easy -- however you like!  At Canonical, we're happy to support:
  1. Kubernetes running on top of Ubuntu OpenStack
  2. OpenStack running on top of Canonical Kubernetes
  3. Kubernetes running along side OpenStack
In all cases, the underlying substrate is perfectly consistent:
  • you've got 1 to N physical or virtual machines
  • which are dynamically provisioned by MAAS or your cloud provider
  • running a stable, minimal, secure Ubuntu server image
  • carved up into fast, efficient, independently addressable LXD machine containers
With that as your base, we'll easily conjure-up a Kubernetes, an OpenStack, or both.  And once you have a Kubernetes or OpenStack, we'll gladly conjure-up one inside the other.


As always, I'm happy to share my slides with you here.  You're welcome to download the PDF, or flip through the embedded slides below.



Cheers,
Dustin

Read more
Alan Griffiths

MirAL 1.3

There’s a new MirAL release (1.3.0) available in ‘Zesty Zapus’ (Ubuntu 17.04) and the so-called “stable phone overlay” ppa for ‘Xenial Xerus’ (Ubuntu 16.04LTS). MirAL is a project aimed at simplifying the development of Mir servers and particularly providing a stable ABI and sensible default behaviors.

Unsurprisingly, given the project’s original goal, the ABI is unchanged.

The changes in 1.3.0 are:

Support for “workspaces”

This is part of enabling “workspaces” for the Unity8 desktop. MirAL doesn’t provide fancy transitions and spreads, but you can see some basic workspace switching in the miral-shell example program:

$ apt install miral-examples
$ miral-app

There are four workspaces (corresponding to F1-F4) and you can switch using Meta-Alt-[F1|F2|F3|F4], or switch taking the active application to the new workspace using Meta-Ctrl-[F1|F2|F3|F4].

Support for “previous window in application”

You can now use Alt-Shift-` to switch to the previous window in an application.

miral-shell adds a background

miral-shell now uses its background for a handy guide to the available keyboard shortcuts.

Bug fixes

Two bug fixes related to shutdown problems: one deals with a possible race in libmiral code, the other works around a bug in Mir.

  • [libmiral] Join internal client threads before server shutdown (LP: #1668651)
  • [miral-shell] Workaround for crash on exit (LP: #1667645)

Read more
Anthony Dillon

Hack day 2

This week, the web team managed to get away for our second hack day. These hack days give us an opportunity to scratch our own itches and work on things we find interesting.

We wrote about our first hack day in August last year.

Getting started

We began by outlining the day and reviewing ideas that had been suggested in a Google Doc throughout the previous week by everyone on the team. We each voted by marking the ideas we would be interested in working on. Then we chose the most popular ones and assigned groups of 2 or 3 people to each.

The groups broke up and turned their idea into a formal project with a list of the tasks required to produce an MVP. Below is a list of the ideas and outcomes from each team.

Performance audit of the current websites

Team: Rich, Andrea, Robin

The team discovered a tool called Lighthouse by Google Chrome which analyses a web page and returns a full audit of the dependent assets and accessibility issues.

The team spent some time trying to create a service using Lighthouse to produce an API, then realised that the Google Chrome team had done this work already. The service is called Moonlight, a SaaS for testing the performance of a page.

As Moonlight takes a single webpage endpoint to test, we need a way to test pages recursively. The team created a profiling script to gather the referenced endpoints of a site.

Canonical web team dashboard

Team: Luke, Ant, Yaili

The goal of this project was to motivate the team by making key areas to improve visible at a glance. The metrics we wanted to capture were:

  • Whether the site is up or down
  • Live visitor counts
  • Monthly unique visitors
  • Open issues on the project
  • Open PRs on the project
  • Information about the last commit to the site’s code base
  • PageSpeed Insights test results

We gathered a set of sites we would like to collect these metrics on:

  • www.ubuntu.com
  • www.canonical.com
  • maas.io
  • jujucharms.com
  • landscape.canonical.com
  • design.ubuntu.com
  • design.canonical.com
  • insights.ubuntu.com
  • developer.ubuntu.com
  • community.ubuntu.com
  • summit.ubuntu.com

The team used the MERN stack (MongoDB, Express.js, React and Node.js) and modified its sample project to create an interface that changes depending on the state of the data. For example, the up-or-down card would display all sites as up, but once one went down the card would change to an error state and only display the information about the site that is down. By designing for emotion in this way, we can intelligently utilise the limited space available in a dashboard.

The team also used a few plugins to gather some data:

  • ping-monitor to ping our sites to check if they’re up, down or broken
  • node-http-ping to get response times for the same set of Canonical sites

Storing the data in MongoDB to keep historical records, and using the /api endpoint to return the response time and status for each site, the team managed to produce a simple dashboard showing the availability of our list of sites.

Ubuntu.com dev tools

Team: Graham, Karl


As a team, we have been using gulp scripts to lint and test our code locally and in our CI environments for some time. But we had never got around to applying these checks to our flagship website, www.ubuntu.com.

The plan here was to implement gulp scripts to lint Sass and JavaScript, and also to look into further options like spell-checking, auto-prefixing and HTML validation.

The team added Sass linting, borrowing the linting tasks from our styling framework, vanilla-framework. This produced a long list of lint issues, which the team tracked and quickly fixed to get a passing CI run.

Adding JavaScript linting (jsHint)

The team also implemented JavaScript linting using jsHint on the current JavaScript within the site’s code base. This produced a number of JavaScript lint errors, which were fixed, ignoring the plugin code.

Finally, the team added the new linting steps to the Travis configuration, so the linting is checked on each pull request.

Vanilla web components prototype

Team: Barry, Will, Robin

The goal of this project was to enable Vanilla on a variety of platforms, allowing people to use Vanilla in modern web apps.

The team created a base repository using Polymer’s tools and started creating web components for Vanilla.

They discovered that the styling needs tweaking to be compatible with web components, possibly just by building a shared styles import which is included in each web component.

The team started by importing vanilla-framework from NPM, then built modular scss files containing only the relevant parts of Vanilla, and finally imported the modular style file in each web component.

Inside the repository there is a vanilla.html which imports all of the components. Components can also be included individually as needed.

This work includes a demo system with API documentation. The demo system displays each component and the markup used to create it, and can be viewed by running `polymer serve` and visiting the served site.

This work can be used to build solid web components for use in Polymer, and we can also use it to jumpstart React components.

HTTP/2 on vanillaframework.io

In the midst of all this work, Robin found time to tackle the task of serving our styling framework’s website over HTTP/2. It’s currently a proof of concept, but can be considered the start of a work item to roll out.

Demo site

Conclusion

Again, this was a successful hack day, with everyone busy working on things that interest them. Although there were fewer completed outcomes this time, we did set up a number of good projects which are ready to be continued.
Read more
Robin Winslow

We’ve been making an effort to secure all our websites with HTTPS. While some Canonical sites have enforced HTTPS for a while (e.g.: landscape.canonical.com, jujucharms.com, launchpad.net), it’s been missing from our other sites until now.

Why HTTPS?

The HTTPS movement has been building for years to help secure internet users against black-hat hackers and spies. The movement became more urgent after Edward Snowden revealed significant efforts by government agencies to spy on the world population.

The EFF have helped create two projects: Let’s Encrypt, which massively simplifies the free installation of HTTPS certificates; and HTTPS Everywhere, a browser plugin to help you use HTTPS whenever it’s available. The advent of HTTP/2 has helped negate performance concerns when moving to HTTPS.

Google have also made efforts to encourage websites to enable HTTPS: first announcing in 2014 that they would consider HTTPS support in their search ranking algorithm; and last year, that Google Chrome would start visually warning users of “insecure” (non-HTTPS) websites.

Our sites

We made https://www.ubuntu.com HTTPS-only in October of last year, and have since done so on 10 more sites:

We hope to enable HTTPS on our other sites in the coming months.

Although enabling HTTPS can be relatively simple there were a number of specific challenges we had to overcome for some of our websites. I hope to write more about these in a follow-up post.

Read more
Leo Arias

Last Sunday we went to the Poás Volcano to make free maps.

This was the second geek outing of the JaquerEspéis. From the first one we learned that we had to wait until summer, because it's not possible to make maps during a storm. And the day was perfect: it wasn't just sunny, the crater was totally clear, so we could add a new spot of Costa Rica to the virtual tour.

In addition, this time we arrived much better prepared, with multiple phones running Mapillary, OsmAnd and OSMTracker, a 360° camera, a Garmin GPS, a drone, and even a notebook and two biologists.

The procession of the MapperSpace

Here's how it works. Everybody with GPS activated on their phone waits until it finds their location. Then each person uses the application of their preference to collect data: pictures, audio, video, text notes, traces, annotations in the notebook...

Later, at home, we upload, publish and share all the collected data. This is useful for improving the free maps of OpenStreetMap. We add everything from really simple things, like the location of a trash bin, to really important things, like how accessible the place is for a person in a wheelchair, together with the location of all the accesses or the places that lack them. Each person improves the map a little, in the region they know or have passed through. With more than 3 million users, OpenStreetMap is the best map of the world there is; and it has particular importance in regions like ours, with little economic appeal for the megacorporations that make and sell closed maps while harvesting private data from their users.

Because the maps we make are free, what comes next has no limits. There are groups working on reconstructing 3D models from the pictures, on identifying and interpreting signs, on applications to calculate the optimal route to reach any place using any combination of means of transportation, on applications to assist decision making when designing the future of a city, and many other things. All of this is based on shared knowledge and community.

The image above is the virtual tour in Mapillary. As we recorded it with the 360° camera, you can click and drag with the mouse to see all the angles. You can also click the play button above to follow the path we took, or click any of the green dots on the map to follow your own path.

Thank you very much to everybody who joined us, especially Denisse and Charles for being our guides and for filling the trip with interesting information about the flora, fauna, geology and historical importance of El Poás.

Members of the MapperSpace (MaperEspeis)

(More pictures and videos here)

The next MapperSpace will be on March 12th.

Read more
Leo Arias

Call for testing: MySQL

I promised that more interesting things were going to be available soon for testing in Ubuntu. There's plenty coming, but today here is one of the greatest:

$ sudo snap install mysql --channel=8.0/beta

screenshot of mysql snap running

Lars Tangvald and other people at MySQL have been working on this snap for some time, and now they are ready to hand it to the community for crowd testing. If you have a few minutes, please give them a hand.

We have a testing guide to help you get started.

Remember that this should run on trusty, xenial, yakkety, zesty and all flavours of Ubuntu. It would be great to get a diverse pool of platforms and test it everywhere.

Here we are introducing a new concept: tracks. Notice that we are using --channel=8.0/beta instead of just --beta, as we used to do before. That's because mysql has two different major versions currently active. In order to try the other one:

$ sudo snap install mysql --channel=5.7/beta

Please report back your results. Any kind of feedback will be highly appreciated, and if you have doubts or need a hand to get started, I'm hanging around in Rocket Chat.

Read more
Stéphane Graber

This is the twelfth and last blog post in this series about LXD 2.0.

LXD logo

Introduction

This is finally it! The last blog post in this series of 12 that started almost a year ago.

If you followed the series from the beginning, you should have been using LXD for quite a bit of time now and be pretty familiar with its day to day operation and capabilities.

But what if something goes wrong? What can you do to track down the problem yourself? And if you can’t, what information should you record so that upstream can track down the problem?

And what if you want to fix issues yourself or help improve LXD by implementing the features you need? How do you build, test and contribute to the LXD code base?

Debugging LXD & filing bug reports

LXD log files

/var/log/lxd/lxd.log

This is the main LXD log file. To avoid filling up your disk very quickly, only log messages marked as INFO, WARNING or ERROR are recorded there by default. You can change that behavior by passing “--debug” to the LXD daemon.

/var/log/lxd/CONTAINER/lxc.conf

Whenever you start a container, this file is updated with the configuration that’s passed to LXC.
This shows exactly how the container will be configured, including all its devices, bind-mounts, …

/var/log/lxd/CONTAINER/forkexec.log

This file will contain errors coming from LXC when failing to execute a command.
It’s extremely rare for anything to end up in there as LXD usually handles errors much before that.

/var/log/lxd/CONTAINER/forkstart.log

This file will contain errors coming from LXC when starting the container.
It’s extremely rare for anything to end up in there, as LXD usually handles errors well before that stage.

CRIU logs (for live migration)

If you are using CRIU for container live migration or live snapshotting there are additional log files recorded every time a CRIU dump is generated or a dump is restored.

Those logs can also be found in /var/log/lxd/CONTAINER/ and are timestamped so that you can find whichever matches your most recent attempt. They will contain a detailed record of everything that’s dumped and restored by CRIU and are far better for understanding a failure than the typical migration/snapshot error message.

LXD debug messages

As mentioned above, you can switch the daemon to debug logging with the --debug option.
An alternative to that is to connect to the daemon’s event interface, which will show you all log entries regardless of the configured log level (this even works remotely).

An example for “lxc init ubuntu:16.04 xen” would be:
lxd.log:

INFO[02-24|18:14:09] Starting container action=start created=2017-02-24T23:11:45+0000 ephemeral=false name=xen stateful=false used=1970-01-01T00:00:00+0000
INFO[02-24|18:14:10] Started container action=start created=2017-02-24T23:11:45+0000 ephemeral=false name=xen stateful=false used=1970-01-01T00:00:00+0000

lxc monitor --type=logging:

metadata:
  context: {}
  level: dbug
  message: 'New events listener: 9b725741-ffe7-4bfc-8d3e-fe620fc6e00a'
timestamp: 2017-02-24T18:14:01.025989062-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.341283344-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StorageCoreInit
timestamp: 2017-02-24T18:14:09.341536477-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/containers/xen
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.347709394-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: PUT
    url: /1.0/containers/xen/state
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.357046302-05:00
type: logging


metadata:
  context: {}
  level: dbug
  message: 'New task operation: 2e2cf904-c4c4-4693-881f-57897d602ad3'
timestamp: 2017-02-24T18:14:09.358387853-05:00
type: logging


metadata:
  context: {}
  level: dbug
  message: 'Started task operation: 2e2cf904-c4c4-4693-881f-57897d602ad3'
timestamp: 2017-02-24T18:14:09.358578599-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/operations/2e2cf904-c4c4-4693-881f-57897d602ad3/wait
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.366213106-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StoragePoolInit
timestamp: 2017-02-24T18:14:09.369636451-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StoragePoolCheck
timestamp: 2017-02-24T18:14:09.369771164-05:00
type: logging


metadata:
  context:
    container: xen
    driver: storage/zfs
  level: dbug
  message: ContainerMount
timestamp: 2017-02-24T18:14:09.424696767-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
    name: xen
  level: dbug
  message: ContainerUmount
timestamp: 2017-02-24T18:14:09.432723719-05:00
type: logging


metadata:
  context:
    container: xen
    driver: storage/zfs
  level: dbug
  message: ContainerMount
timestamp: 2017-02-24T18:14:09.721067917-05:00
type: logging


metadata:
  context:
    action: start
    created: 2017-02-24 23:11:45 +0000 UTC
    ephemeral: "false"
    name: xen
    stateful: "false"
    used: 1970-01-01 00:00:00 +0000 UTC
  level: info
  message: Starting container
timestamp: 2017-02-24T18:14:09.749808518-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.792551375-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StorageCoreInit
timestamp: 2017-02-24T18:14:09.792961032-05:00
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /internal/containers/23/onstart
  level: dbug
  message: handling
timestamp: 2017-02-24T18:14:09.800803501-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StoragePoolInit
timestamp: 2017-02-24T18:14:09.803190248-05:00
type: logging


metadata:
  context:
    driver: storage/zfs
  level: dbug
  message: StoragePoolCheck
timestamp: 2017-02-24T18:14:09.803251188-05:00
type: logging


metadata:
  context:
    container: xen
    driver: storage/zfs
  level: dbug
  message: ContainerMount
timestamp: 2017-02-24T18:14:09.803306055-05:00
type: logging


metadata:
  context: {}
  level: dbug
  message: 'Scheduler: container xen started: re-balancing'
timestamp: 2017-02-24T18:14:09.965080432-05:00
type: logging


metadata:
  context:
    action: start
    created: 2017-02-24 23:11:45 +0000 UTC
    ephemeral: "false"
    name: xen
    stateful: "false"
    used: 1970-01-01 00:00:00 +0000 UTC
  level: info
  message: Started container
timestamp: 2017-02-24T18:14:10.162965059-05:00
type: logging


metadata:
  context: {}
  level: dbug
  message: 'Success for task operation: 2e2cf904-c4c4-4693-881f-57897d602ad3'
timestamp: 2017-02-24T18:14:10.163072893-05:00
type: logging

The format from “lxc monitor” is a bit different from what you’d get in a log file, where each entry is condensed into a single line. More importantly, you also see all those “level: dbug” entries.
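Since “lxc monitor” talks to the LXD API rather than reading a local file, the same command also works against a remote LXD server, assuming you’ve added one with “lxc remote add” (the remote name “my-remote” below is just a placeholder):

lxc monitor my-remote: --type=logging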

Where to report bugs

LXD bugs

The best place to report LXD bugs is upstream at https://github.com/lxc/lxd/issues.
Make sure to fill in everything in the bug reporting template as that information saves us a lot of back and forth to reproduce your environment.

Ubuntu bugs

If you find a problem with the Ubuntu package itself, such as a failure to install, upgrade or remove, or if you run into issues with the LXD init scripts, the best place to report such bugs is on Launchpad.

On an Ubuntu system, you can do so with: ubuntu-bug lxd
This will automatically include a number of log files and package information for us to look at.

CRIU bugs

Bugs related to CRIU, which you can usually spot by the pretty visible CRIU error output, should be reported on Launchpad with: ubuntu-bug criu

Do note that the use of CRIU through LXD is considered to be a beta feature and unless you are willing to pay for support through a support contract with Canonical, it may take a while before we get to look at your bug report.

Contributing to LXD

LXD is written in Go and hosted on Github.
We welcome external contributions of any size. There is no CLA or similar legal agreement to sign to contribute to LXD, just the usual Developer Certificate of Origin (Signed-off-by: line).
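In practice, the Signed-off-by line just needs to appear in each commit message; git can add it for you based on your configured user.name and user.email (this is standard git behavior, nothing LXD-specific):

git commit -s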

We have a number of potential features listed on our issue tracker that can make good starting points for new contributors. It’s usually best to first file an issue before starting to work on code, just so everyone knows that you’re doing that work and so we can give some early feedback.

Building LXD from source

Upstream maintains up to date instructions here: https://github.com/lxc/lxd#building-from-source

You’ll want to fork the upstream repository on Github and then push your changes to your branch. We recommend rebasing on upstream LXD daily as we do tend to merge changes pretty regularly.
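A typical setup for that, assuming you’ve forked lxc/lxd on Github and cloned your fork (the remote name “upstream” is just the usual convention, not anything LXD mandates):

git remote add upstream https://github.com/lxc/lxd
git fetch upstream
git rebase upstream/master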

Running the testsuite

LXD maintains two sets of tests: unit tests and integration tests. You can run all of them with:

sudo -E make check

To run the unit tests only, use:

sudo -E go test ./...

To run the integration tests, use:

cd test
sudo -E ./main.sh

The latter supports quite a number of environment variables to test various storage backends, disable network tests, use a ramdisk or just tweak log output (see the combined example after the list). Some of those are:

  • LXD_BACKEND: One of “btrfs”, “dir”, “lvm” or “zfs” (defaults to “dir”)
    Lets you run the whole testsuite with any of the LXD storage drivers.
  • LXD_CONCURRENT: “true” or “false” (defaults to “false”)
    This enables a few extra concurrency tests.
  • LXD_DEBUG: “true” or “false” (defaults to “false”)
    This will log all shell commands and run all LXD commands in debug mode.
  • LXD_INSPECT: “true” or “false” (defaults to “false”)
    This will cause the testsuite to hang on failure so you can inspect the environment.
  • LXD_LOGS: A directory to dump all LXD log files into (defaults to “”)
    The “logs” directory of all spawned LXD daemons will be copied over to this path.
  • LXD_OFFLINE: “true” or “false” (defaults to “false”)
    Disables any test which relies on outside network connectivity.
  • LXD_TEST_IMAGE: path to an LXD image in the unified format (defaults to “”)
    Lets you use a custom test image rather than the default minimal busybox image.
  • LXD_TMPFS: “true” or “false” (defaults to “false”)
    Runs the whole testsuite within a “tmpfs” mount; this can use quite a bit of memory but makes the testsuite significantly faster.
  • LXD_VERBOSE: “true” or “false” (defaults to “false”)
    A less extreme version of LXD_DEBUG. Shell commands are still logged but --debug isn’t passed to the LXC commands and the LXD daemon only runs with --verbose.
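As a combined example, a run against the ZFS backend inside a tmpfs, keeping the logs around for later inspection, might look something like this (the /tmp/lxd-logs path is just an example):

cd test
sudo -E LXD_BACKEND=zfs LXD_TMPFS=true LXD_LOGS=/tmp/lxd-logs ./main.sh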

The testsuite will alert you to any missing dependency before it actually runs. A test run on a reasonably fast machine can be done in under 10 minutes.

Sending your branch

Before sending a pull request, you’ll want to confirm that:

  • Your branch has been rebased on the upstream branch
  • All your commit messages include the “Signed-off-by: First Last <email>” line
  • You’ve removed any temporary debugging code you may have used
  • You’ve squashed related commits together to keep your branch easily reviewable
  • The unit and integration tests all pass

Once that’s all done, open a pull request on Github. Our Jenkins will validate that the commits are all signed-off, a test build on macOS and Windows will automatically be performed, and if things look good, we’ll trigger a full Jenkins test run that will test your branch on all storage backends, 32-bit and 64-bit, and all the Go versions we care about.

This typically takes less than an hour to happen, assuming one of us is around to trigger Jenkins.

Once all the tests are done and we’re happy with the code itself, your branch will be merged into master and your code will be in the next LXD feature release. If the changes are suitable for the LXD stable-2.0 branch, we’ll backport them for you.

Conclusion

I hope this series of blog posts has been helpful in understanding what LXD is and what it can do!

This series’ scope was limited to the LTS version of LXD (2.0.x) but we also do monthly feature releases for those who want the latest features. You can find a few other blog posts covering such features listed in the original LXD 2.0 series post.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Dustin Kirkland

Introducing the Canonical Livepatch Service
Howdy!

Ubuntu 16.04 LTS’s 4.4 Linux kernel includes an important new security capability in Ubuntu -- the ability to modify the running Linux kernel code, without rebooting, through a mechanism called kernel livepatch.

Today, Canonical has publicly launched the Canonical Livepatch Service -- an authenticated, encrypted, signed stream of Linux livepatches that apply to the 64-bit Intel/AMD architecture of the Ubuntu 16.04 LTS (Xenial) Linux 4.4 kernel, addressing the highest and most critical security vulnerabilities, without requiring a reboot in order to take effect.  This is particularly amazing for Container hosts -- Docker, LXD, etc. -- as all of the containers share the same kernel, and thus all instances benefit.



I’ve tried to answer below some questions that you might have. If you have others, you’re welcome
to add them in the comments below or on Twitter with the hashtag #Livepatch.

Retrieve your token from ubuntu.com/livepatch

Q: How do I enable the Canonical Livepatch Service?

A: Three easy steps, on a fully up-to-date 64-bit Ubuntu 16.04 LTS system.
  1. Go to https://ubuntu.com/livepatch and retrieve your livepatch token
  2. Install the canonical-livepatch snap
     $ sudo snap install canonical-livepatch
  3. Enable the service with your token
     $ sudo canonical-livepatch enable [TOKEN]

And you’re done! You can check the status at any time using:

$ canonical-livepatch status --verbose

Q: What are the system requirements?

A: The Canonical Livepatch Service is available for the generic and low latency flavors of the 64-bit Intel/AMD (aka, x86_64, amd64) builds of the Ubuntu 16.04 LTS (Xenial) kernel, which is a Linux 4.4 kernel. Canonical livepatches work on Ubuntu 16.04 LTS Servers and Desktops, on physical machines, virtual machines, and in the cloud. The safety, security, and stability firmly depend on unmodified Ubuntu kernels and network access to the Canonical Livepatch Service (https://livepatch.canonical.com:443). You will also need to apt update/upgrade to the latest version of snapd (at least 2.15).

Q: What about other architectures?

A: The upstream Linux livepatch functionality is currently limited to the 64-bit x86 architecture. IBM is working on support for POWER8 and s390x (LinuxONE mainframe), and there’s also active upstream development on ARM64, so we do plan to support these eventually. The livepatch plumbing for 32-bit ARM and 32-bit x86 is not under upstream development at this time.

Q: What about other flavors?

A: We are providing the Canonical Livepatch Service for the generic and low latency (telco) flavors of the Linux kernel at this time.

Q: What about other releases of Ubuntu?

A: The Canonical Livepatch Service is provided for Ubuntu 16.04 LTS’s Linux 4.4 kernel. Older releases of Ubuntu will not work, because they’re missing the Linux kernel support. Interim releases of Ubuntu (e.g. Ubuntu 16.10) are targeted at developers and early adopters, rather than Long Term Support users or systems that require maximum uptime. We will consider providing livepatches for the HWE kernels in 2017.

Q: What about derivatives of Ubuntu?

A: Canonical livepatches are fully supported on the 64-bit Ubuntu 16.04 LTS Desktop, Cloud, and Server operating systems. On other Ubuntu derivatives, your mileage may vary! These are not part of our automated continuous integration quality assurance testing framework for Canonical Livepatches. Canonical Livepatch safety, security, and stability will firmly depend on unmodified Ubuntu generic kernels and network access to the Canonical Livepatch Service.

Q: How does Canonical test livepatches?

A: Every livepatch is rigorously tested in Canonical's in-house CI/CD (Continuous Integration / Continuous Delivery) quality assurance system, which tests hundreds of combinations of livepatches, kernels, hardware, physical machines, and virtual machines. Once a livepatch passes CI/CD and regression tests, it's rolled out on a canary testing basis, first to a tiny percentage of the Ubuntu Community users of the Canonical Livepatch Service. Based on the success of that microscopic rollout, a moderate rollout follows. And assuming those also succeed, the livepatch is delivered to all free Ubuntu Community and paid Ubuntu Advantage users of the service. Systemic failures are automatically detected and raised for inspection by Canonical engineers. Ubuntu Community users of the Canonical Livepatch Service who want to eliminate the small chance of being randomly chosen as a canary should enroll in the Ubuntu Advantage program (starting at $12/month).

Q: What kinds of updates will be provided by the Canonical Livepatch Service?

A: The Canonical Livepatch Service is intended to address high and critical severity Linux kernel security vulnerabilities, as identified by Ubuntu Security Notices and the CVE database. Note that there are some limitations to the kernel livepatch technology -- some Linux kernel code paths cannot be safely patched while running. We will do our best to supply Canonical Livepatches for high and critical vulnerabilities in a timely fashion whenever possible. There may be occasions when the traditional kernel upgrade and reboot might still be necessary. We’ll communicate that clearly through the usual mechanisms -- USNs, Landscape, Desktop Notifications, Byobu, /etc/motd, etc.

Q: What about non-security bug fixes, stability, performance, or hardware enablement updates?

A: Canonical will continue to provide Linux kernel updates addressing bugs, stability issues, performance problems, and hardware compatibility on our usual cadence -- about every 3 weeks. These updates can be easily applied using ‘sudo apt update; sudo apt upgrade -y’, using the Desktop “Software Updates” application, or Landscape systems management. These standard (non-security) updates will still require a reboot, as they always have.

Q: Can I roll back a Canonical Livepatch?

A: Currently, rolling back/removing an already inserted livepatch module is disabled in Linux 4.4. This is because we need a way to determine if we are currently executing inside a patched function before safely removing it. We can, however, safely apply new livepatches on top of each other and even repatch functions over and over.

Q: What about low and medium severity CVEs?

A: We’re currently focusing our Canonical Livepatch development and testing resources on high and critical security vulnerabilities, as determined by the Ubuntu Security Team. We'll livepatch other CVEs opportunistically.

Q: Why are Canonical Livepatches provided as a subscription service?

A: The Canonical Livepatch Service provides a secure, encrypted, authenticated connection, to ensure that only properly signed livepatch kernel modules -- and most importantly, the right modules -- are delivered directly to your system, with extremely high quality testing wrapped around it.

Q: But I don’t want to buy UA support!

A: You don’t have to! Canonical is providing the Canonical Livepatch Service to community users of Ubuntu, at no charge for up to 3 machines (desktop, server, virtual machines, or cloud instances). A randomly chosen subset of the free users of Canonical Livepatches will receive their Canonical Livepatches slightly earlier than the rest of the free users or UA users, as a lightweight canary testing mechanism, benefiting all Canonical Livepatch users (free and UA). Once those canary livepatches apply safely, all Canonical Livepatch users will receive their live updates.

Q: But I don’t have an Ubuntu SSO account!

A: An Ubuntu SSO account is free, and provides services similar to Google, Microsoft, and Apple for Android/Windows/Mac devices, respectively. You can create your Ubuntu SSO account at login.ubuntu.com.

Q: But I don’t want to login to ubuntu.com!

A: You don’t have to! Canonical Livepatch is absolutely not required to maintain the security of any Ubuntu desktop or server! You may continue to freely and anonymously ‘sudo apt update; sudo apt upgrade; sudo reboot’ as often as you like, and receive all of the same updates, and simply reboot after kernel updates, as you always have with Ubuntu.

Q: But I don't have Internet access to livepatch.canonical.com:443!

A: You should think of the Canonical Livepatch Service much like you think of Netflix, Pandora, or Dropbox. It's an Internet streaming service for security hotfixes for your kernel. You have access to the stream of bits when you can connect to the service over the Internet. On the flip side, your machines are already thoroughly secured, since they're so heavily firewalled off from the rest of the world!

Q: Where’s the source code?

A: The source code of livepatch modules can be found here. The source code of the canonical-livepatch client is part of Canonical's Landscape system management product and is commercial software.

Q: What about Ubuntu Core?

A: Canonical Livepatches for Ubuntu Core are on the roadmap, and may be available in late 2016, for 64-bit Intel/AMD architectures. Canonical Livepatches for ARM-based IoT devices depend on upstream support for livepatches.

Q: How does this compare to Oracle Ksplice, RHEL Live Patching and SUSE Live Patching?

A: While the concepts are largely the same, the technical implementations and the commercial terms are very different:

  • Oracle Ksplice uses its own technology which is not in upstream Linux.
  • RHEL and SUSE currently use their own homegrown kpatch/kgraft implementations, respectively.
  • Canonical Livepatching uses the upstream Linux Kernel Live Patching technology.
  • Ksplice is free, but unsupported, for Ubuntu Desktops, and only available for Oracle Linux and RHEL servers with an Oracle Linux Premier Support license ($2299/node/year).
  • It’s a little unclear how to subscribe to RHEL Kernel Live Patching, but it appears that you need to first be a RHEL customer, and then enroll in the SIG (Special Interest Group) through your TAM (Technical Account Manager), which requires a Red Hat Enterprise Linux Server Premium Subscription at $1299/node/year. (I'm happy to be corrected and update this post.)
  • SUSE Live Patching is available as an add-on to the SUSE Linux Enterprise Server 12 Priority Support subscription at $1,499/node/year, but does come with a free music video.
  • Canonical Livepatching is available for every Ubuntu Advantage customer, starting at our entry level UA Essential for $150/node/year, and available for free to community users of Ubuntu.

Q: What happens if I run into problems/bugs with Canonical Livepatches?

A: Ubuntu Advantage customers will file a support request at support.canonical.com where it will be serviced according to their UA service level agreement (Essential, Standard, or Advanced). Ubuntu community users will file a bug report on Launchpad and we'll service it on a best effort basis.

Q: Why does canonical-livepatch client/server have a proprietary license?

A: The canonical-livepatch client is part of the Landscape family of tools available to Canonical support customers. We are enabling free access to the Canonical Livepatch Service for Ubuntu community users as a mark of our appreciation for the broader Ubuntu community, and in exchange for occasional, automatic canary testing.

Q: How do I build my own livepatches?

A: It’s certainly possible for you to build your own Linux kernel live patches, but it requires considerable skill, time, and computing power to produce, and even more effort to comprehensively test. Rest assured that this is the real value of using the Canonical Livepatch Service! That said, Chris Arges blogged a howto for the curious a while back:

http://chrisarges.net/2015/09/21/livepatch-on-ubuntu.html

Q: How do I get notifications of which CVEs are livepatched and which are not?

A: You can, at any time, query the status of the canonical-livepatch daemon using: ‘canonical-livepatch status --verbose’. This command will show any livepatches successfully applied, any outstanding/unapplied livepatches, and any error conditions. Moreover, you can monitor the Ubuntu Security Notices RSS feed and the ubuntu-security-announce mailing list.

Q: Isn't livepatching just a big ole rootkit?

A: Canonical Livepatches inject kernel modules to replace sections of binary code in the running kernel. This requires the CAP_SYS_MODULE capability, which is required to modprobe any module into the Linux kernel. If you already have that capability (root does, by default, on Ubuntu), then you already have the ability to arbitrarily modify the kernel, with or without Canonical Livepatches. If you’re an Ubuntu sysadmin and you want to disable module loading (and thereby also disable Canonical Livepatches), simply ‘echo 1 | sudo tee /proc/sys/kernel/modules_disabled’.

Keep the uptime!
:-Dustin

Read more
Alan Griffiths

miral-workspaces

“Workspaces” have arrived on MirAL trunk (lp:miral).

We won’t be releasing 1.3 with this feature just yet (as we want some experience with this internally first). But if you build from source there’s an example to play with (bin/miral-app).
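Getting there from scratch looks roughly like this (a sketch assuming the lp:miral branch, the project’s CMake build, and that the Mir development packages are already installed; the miral-app path is relative to the build directory):

bzr branch lp:miral
cd miral
mkdir build && cd build
cmake ..
make
bin/miral-app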

As always, bug reports and other suggestions are welcome.

Note that the miral-shell doesn’t have transitions and other effects like fully featured desktop environments.

Read more
facundo

Summer movie recap


Many, many movies watched. Still, I may not get into a rhythm of watching more; I'm struggling to find that couple of hours when the kids are quiet and I'm not too tired :p

  • Alice Through the Looking Glass: +0. Fun, a quick ride, but not much more than a collection of interesting moments.
  • All Is Lost: -0. The survival of someone hit by a streak of bad luck; watch it only if you're interested in the whole "alone and surviving however you can" thing.
  • Captain America: Civil War: +0. The typical fight between superheroes, but it didn't drag for me; as a bonus it has an interesting theme to think about, government control over weapons.
  • Clouds of Sils Maria: -0. Although it has many interesting conversations, the story itself has no rhythm and goes nowhere.
  • Danny Collins: +0. Nice story, what happens is not entirely predictable, moving, well put together.
  • El Ardor: +0. Good story, good setting, and I think it does a fine job of portraying a reality we know very little about.
  • Ex Machina: -0. I didn't like it, but I'm not sure why. Did it lack suspense? Too flat? What it proposes about artificial intelligence is fine, though (I would have liked more depth, but then again, it's a movie for the masses, not a documentary).
  • Fantastic Four: -0. A different take on the classic, but bleh.
  • Home Sweet Hell: -0. It has some very funny parts, but the story never quite comes together.
  • La Vénus à la fourrure: +1. The dynamic between two people, the line between reality and fiction. I loved it.
  • Laggies: +0. A bit slow, but a good story, well developed; I liked how it shows the evolution of the main character's decision.
  • Match: +0. Nice story, good performances. Powerful.
  • Nina: +1. My deepest respect for Zoe Saldaña. Marvelous. Dazzling. I'd love to know what a Nina Simone fan thinks of this movie.
  • Pan: -0. A different, fairly refreshed take on the classic; it never quite drew me in.
  • Pixels: +0. Fun, easy viewing; I liked it even with Adam Sandler in it. It's no big deal, mind you, but it's mostly enjoyable for the old video games...
  • Predestination: +1. Very good story; you don't quite get what it's about until it has hooked you, and by then you've already fallen into the (good) trap.
  • Stealing Beauty: -0. A nice story, wonderful photography, but it lacks "consistency", it's very ethereal, I don't know. And slow.
  • The November Man: -0. A wannabe action-and-spies movie, not much to it.
  • The Right Kind of Wrong: +0. Just a romantic comedy, easy viewing, but fun.
  • Time Lapse: +1. The story isn't very deep, but it handles time (or the jumps in it...) very well.
  • Under the Skin: -1. There's a story in there somewhere, but the movie is EXTREMELY slow :(.
  • VANish: -0. Rough, violent, and raw. But nothing more.
  • Vice: -0. With hints of an interesting theme, where they could have explored the conceptual side of robots, but the movie goes in a different direction.


A huge pile to watch! And that's even though I haven't found a good place to learn about trailers as they come out. For now I'm using this YouTube channel, but it doesn't have everything. IMDb was suggested to me too, but although it has some things the other doesn't, it has very little and doesn't seem to be particularly well organized.

  • Amateur (2016; Thriller) Martin (Esteban Lamothe) is a lonely television director, who becomes obsessed with his neighbor, Isabel (Jazmin Stuart), when he finds an amateur porn video in which she participates. But Isabel is the wife of Battaglia (Alejandro Awada), the owner of the television station where Martin works. As a strange love encounter takes place between Martin and Isabel, he discovers a secret that puts them both in danger. [D: Sebastian Perillo; A: Alejandro Awada, Esteban Lamothe, Jazmín Stuart]
  • Blade Runner 2049 (2017; Sci-Fi) Thirty years after the events of the first film, a new blade runner, LAPD Officer K (Ryan Gosling), unearths a long-buried secret that has the potential to plunge what's left of society into chaos. K's discovery leads him on a quest to find Rick Deckard (Harrison Ford), a former LAPD blade runner who has been missing for 30 years. [D: Denis Villeneuve; A: Ryan Gosling, Ana de Armas, Jared Leto]
  • Colossal (2016; Action, Sci-Fi, Thriller) A woman discovers that severe catastrophic events are somehow connected to the mental breakdown from which she's suffering. [D: Nacho Vigalondo; A: Dan Stevens, Anne Hathaway, Jason Sudeikis]
  • DxM (2015; Action, Sci-Fi, Thriller) A group of brilliant young students discover the greatest scientific breakthrough of all time: a wireless neural network, connected via a quantum computer, capable of linking the minds of each and every one of us. They realise that quantum theory can be used to transfer motor-skills from one brain to another, a first shareware for human motor-skills. They freely spread this technology, believing it to be a first step towards a new equality and intellectual freedom. But they soon discover that they themselves are part of a much greater and more sinister experiment as dark forces emerge that threaten to subvert this technology into a means of mass-control. MindGamers takes the mind-bender thriller to the next level with an immersive narrative and breath-taking action. [D: Andrew Goth; A: Dominique Tipper, Sam Neill, Tom Payne]
  • Elle (2016; Comedy, Drama, Thriller) Michèle seems indestructible. Head of a successful video game company, she brings the same ruthless attitude to her love life as to business. Being attacked in her home by an unknown assailant changes Michèle's life forever. When she resolutely tracks the man down, they are both drawn into a curious and thrilling game - a game that may, at any moment, spiral out of control. [D: Paul Verhoeven; A: Isabelle Huppert, Laurent Lafitte, Anne Consigny]
  • Frank & Lola (2016; Crime, Drama, Mystery, Romance, Thriller) A psychosexual noir love story, set in Las Vegas and Paris, about love, obsession, sex, betrayal, revenge and, ultimately, the search for redemption. [D: Matthew Ross; A: Imogen Poots, Michael Shannon, Michael Nyqvist]
  • Ghost in the Shell (2017; Action, Drama, Sci-Fi, Thriller) Based on the internationally acclaimed sci-fi manga series, "Ghost in the Shell" follows the Major, a special ops, one-of-a-kind human cyborg hybrid, who leads the elite task force Section 9. Devoted to stopping the most dangerous criminals and extremists, Section 9 is faced with an enemy whose singular goal is to wipe out Hanka Robotics' advancements in cyber technology. [D: Rupert Sanders; A: Scarlett Johansson, Michael Pitt, Michael Wincott]
  • Guardians of the Galaxy Vol. 2 (2017; Action, Sci-Fi) Set to the backdrop of 'Awesome Mixtape #2,' Marvel's Guardians of the Galaxy Vol. 2 continues the team's adventures as they traverse the outer reaches of the cosmos. The Guardians must fight to keep their newfound family together as they unravel the mysteries of Peter Quill's true parentage. Old foes become new allies and fan-favorite characters from the classic comics will come to our heroes' aid as the Marvel cinematic universe continues to expand. [D: James Gunn; A: Chris Sullivan, Pom Klementieff, Chris Pratt]
  • Kiki, el amor se hace (2016; Comedy) Through five stories, the movie addresses sex and love: Paco and Ana are a married couple looking to reignite the passion of their long-unsatisfying sexual relations; Jose Luis tries to win back the affections of his wife Paloma, confined to a wheelchair after an accident that limited her mobility; Mª Candelaria and Antonio are a couple trying by every means to become parents, but she has the problem of never reaching orgasm when making love with him; Álex tries to satisfy Natalia's fantasies, while she starts to doubt whether he will ever propose to her; and finally, Sandra is a single woman in a permanent search for a man to fall in love with. They all love, fear, live and explore their diverse sexual paraphilias and the different sides of sexuality, trying to find the road to happiness. [D: Paco León; A: Natalia de Molina, Álex García, Jacobo Sánchez]
  • Life (2017; Horror, Sci-Fi, Thriller) Six astronauts aboard the space station study a sample collected from Mars that could provide evidence for extraterrestrial life on the Red Planet. The crew determines that the sample contains a large, single-celled organism - the first example of life beyond Earth. But... things aren't always what they seem. As the crew begins to conduct research, and their methods end up having unintended consequences, the life form proves more intelligent than anyone ever expected. [D: Daniel Espinosa; A: Rebecca Ferguson, Jake Gyllenhaal, Ryan Reynolds]
  • Little Murder (2011; Crime, Drama, Thriller) In post-Katrina New Orleans, a disgraced detective encounters the ghost of a murdered woman who wants to help him identify her killer. [D: Predrag Antonijevic; A: Josh Lucas, Terrence Howard, Lake Bell]
  • Logan (2017; Action, Drama, Sci-Fi) In the near future, a weary Logan cares for an ailing Professor X in a hideout on the Mexican border. But Logan's attempts to hide from the world and his legacy are up-ended when a young mutant arrives, being pursued by dark forces. [D: James Mangold; A: Doris Morgado, Hugh Jackman, Dafne Keen]
  • Passengers (2016; Adventure, Drama, Romance, Sci-Fi) The spaceship Starship Avalon, on its 120-year voyage to a distant colony planet known as the "Homestead Colony" and transporting 5,258 people, has a malfunction in one of its sleep chambers. As a result one hibernation pod opens prematurely and the one person who awakes, Jim Preston (Chris Pratt), is stranded on the spaceship, still 90 years from his destination. [D: Morten Tyldum; A: Jennifer Lawrence, Chris Pratt, Michael Sheen]
  • Personal Shopper (2016; Drama, Mystery, Thriller) Revolves around a ghost story that takes place in the fashion underworld of Paris. [D: Olivier Assayas; A: Kristen Stewart, Lars Eidinger, Sigrid Bouaziz]
  • Pirates of the Caribbean: Dead Men Tell No Tales (2017; Action, Adventure, Comedy, Fantasy) Captain Jack Sparrow finds the winds of ill-fortune blowing even more strongly when deadly ghost pirates led by his old nemesis, the terrifying Captain Salazar, escape from the Devil's Triangle, determined to kill every pirate at sea... including him. Captain Jack's only hope of survival lies in seeking out the legendary Trident of Poseidon, a powerful artifact that bestows upon its possessor total control over the seas. [D: Joachim Rønning, Espen Sandberg; A: Kaya Scodelario, Johnny Depp, Javier Bardem]
  • Spider-Man: Homecoming (2017; Action, Adventure, Sci-Fi) A young Peter Parker/Spider-Man, who made his sensational debut in Captain America: Civil War, begins to navigate his newfound identity as the web-slinging superhero in Spider-Man: Homecoming. Thrilled by his experience with the Avengers, Peter returns home, where he lives with his Aunt May under the watchful eye of his new mentor Tony Stark. Peter tries to fall back into his normal daily routine - distracted by thoughts of proving himself to be more than just your friendly neighborhood Spider-Man - but when the Vulture emerges as a new villain, everything that Peter holds most important will be threatened. [D: Jon Watts; A: Robert Downey Jr., Tom Holland, Angourie Rice]
  • T2 Trainspotting (2017; Comedy, Drama) First there was an opportunity... then there was a betrayal. Twenty years have gone by. Much has changed but just as much remains the same. Mark Renton (Ewan McGregor) returns to the only place he can ever call home. They are waiting for him: Spud (Ewen Bremner), Sick Boy (Jonny Lee Miller), and Begbie (Robert Carlyle). Other old friends are waiting too: sorrow, loss, joy, vengeance, hatred, friendship, love, longing, fear, regret, diamorphine, self-destruction and mortal danger, they are all lined up to welcome him, ready to join the dance. [D: Danny Boyle; A: Ewan McGregor, Logan Gillies, Ben Skelton]
  • The Discovery (2017; Romance, Sci-Fi) Writer-director Charlie McDowell returns to Sundance this year with a thriller about a scientist (played by Robert Redford) who uncovers scientific proof that there is indeed an afterlife. His son is portrayed by Jason Segel, who's not too sure about his father's "discovery", and Rooney Mara plays a mystery woman who has her own reasons for wanting to find out more about the afterlife. [D: Charlie McDowell; A: Rooney Mara, Riley Keough, Robert Redford]
  • The Whole Truth (2016; Drama, Thriller) Defense attorney Richard Ramsay takes on a personal case when he swears to his widowed friend, Loretta Lassiter, that he will keep her son Mike out of prison. Charged with murdering his father, Mike initially confesses to the crime. But as the trial proceeds, chilling evidence about the kind of man that Boone Lassiter really was comes to light. While Ramsay uses the evidence to get his client acquitted, his new colleague Janelle tries to dig deeper - and begins to realize that the whole truth is something she alone can uncover. [D: Courtney Hunt; A: Keanu Reeves, Renée Zellweger, Gugu Mbatha-Raw]
  • The Comedian (2016; Comedy) A look at the life of an aging insult comic named Jack Burke. [D: Taylor Hackford; A: Robert De Niro, Leslie Mann, Harvey Keitel]
  • The Mummy (2017; Action, Adventure, Fantasy, Horror) Though safely entombed in a crypt deep beneath the unforgiving desert, an ancient princess whose destiny was unjustly taken from her is awakened in our current day, bringing with her malevolence grown over millennia, and terrors that defy human comprehension. [D: Alex Kurtzman; A: Tom Cruise, Sofia Boutella, Russell Crowe]
  • Valerian and the City of a Thousand Planets (2017; Action, Adventure, Sci-Fi) Rooted in the classic graphic novel series "Valerian and Laureline", visionary writer/director Luc Besson advances this iconic source material into a contemporary, unique and epic science fiction saga. Valerian (Dane DeHaan) and Laureline (Cara Delevingne) are special operatives for the government of the human territories charged with maintaining order throughout the universe. Valerian has more in mind than a professional relationship with his partner, blatantly chasing after her with propositions of romance. But his extensive history with women, and her traditional values, drive Laureline to continuously rebuff him. Under directive from their Commander (Clive Owen), Valerian and Laureline embark on a mission to the breathtaking intergalactic city of Alpha, an ever-expanding metropolis comprised of thousands of different species from all four corners of the universe. Alpha's seventeen million inhabitants have converged over time, uniting their talents, technology and resources for the betterment of all. Unfortunately, not everyone on Alpha shares in these same objectives; in fact, unseen forces are at work, placing our race in great danger. [D: Luc Besson; A: Dane DeHaan, Cara Delevingne, Ethan Hawke]
  • Vampyres (2015; Horror) Faithful to the sexy, twisted 1974 cult classic by Joseph Larraz, Vampyres is an English-language remake pulsating with raw eroticism, wicked sado-masochism and bloody, creative gore. Victor Matellano (Wax, 2014; Zarpazos! A Journey through Spanish Horror, 2013) directs this tale set in a stately English manor inhabited by two older female vampires, their only cohabitant a man imprisoned in the basement. Their lives and lifestyle are upended when a trio of campers come upon their lair and seek to uncover their dark secrets, a decision that has sexual and blood-curdling consequences. [D: Víctor Matellano; A: Marta Flich, Almudena León, Alina Nastase]
  • Zero Days (2016; Documentary) Documentary detailing claims of American/Israeli jointly developed malware Stuxnet being deployed not only to destroy Iranian enrichment centrifuges but also threaten attacks against Iranian civilian infrastructure. Addresses the obvious potential blowback of this possibly being deployed against the US by Iran in retaliation. [D: Alex Gibney; A: David Sanger, Emad Kiyaei, Eric Chien]
  • Collateral Beauty (2016; Drama, Romance) When a successful New York advertising executive suffers a great tragedy, he retreats from life. While his concerned friends try desperately to reconnect with him, he seeks answers from the universe by writing letters to Love, Time and Death. But it's not until his notes bring unexpected personal responses that he begins to understand how these constants interlock in a life fully lived, and how even the deepest loss can reveal moments of meaning and beauty. [D: David Frankel; A: Will Smith, Edward Norton, Kate Winslet]
  • Passage to Mars (2016; Documentary, Adventure) The journals of a true NASA Arctic expedition unveil the adventure of a six-man crew aboard an experimental vehicle designed to prepare the first human exploration of Mars. A voyage of fears and survival, hopes and dreams, through the beauties and the deadly dangers of two worlds: the High Arctic and Mars, a planet that might hide the secret of our origins. [D: Jean-Christophe Jeauffre; A: Zachary Quinto, Charlotte Rampling, Pascal Lee]


Finally, the count of pending movies by date:

(Apr-2011)    4
(Aug-2011)   11   4
(Jan-2012)   17  11   3
(Jul-2012)   15  14  11
(Nov-2012)   11  11  11   6
(Feb-2013)   15  14  14   8   2
(Jun-2013)   16  15  15  15  11   2
(Sep-2013)   18  18  18  17  16   8
(Dec-2013)   14  14  12  12  12  12   4
(Apr-2014)        9   9   8   8   8   3
(Jul-2014)           10  10  10  10  10   5   1
(Nov-2014)               24  22  22  22  22   7
(Feb-2015)                   13  13  13  13  10
(Jun-2015)                       16  16  15  13  11
(Dec-2015)                           21  19  19  18
(May-2016)                               26  25  23
(Sep-2016)                                   19  19
(Feb-2017)                                       26
Total:      121 110 103 100  94  91  89 100  94  97

Read more
Alan Griffiths

MirAL 1.2

There’s a new MirAL release (1.2.0) available in ‘Zesty Zapus’ (Ubuntu 17.04) and the so-called “stable phone overlay” ppa for ‘Xenial Xerus’ (Ubuntu 16.04 LTS). MirAL is a project aimed at simplifying the development of Mir servers and particularly providing a stable ABI and sensible default behaviors.

Unsurprisingly, given the project’s original goal, the ABI is unchanged.

Since my last update, the integration of libmiral into QtMir has progressed and libmiral has been used in the latest updates to Unity8.

The changes in 1.2.0 are:

A new libmirclientcpp-dev package

This is a “C++ wrapper for libmirclient” and has been split from libmiral-dev.

Currently it comprises RAII wrappers for some Mir client library types: MirConnection, MirWindowSpec, MirWindow and MirWindowId. In addition, the WindowSpec wrapper provides named constructors and function chaining to enable code like the following:

auto const window = mir::client::WindowSpec::
    for_normal_window(connection, 50, 50, mir_pixel_format_argb_8888)
    .set_buffer_usage(mir_buffer_usage_software)
    .set_name(a_window.c_str())
    .create_window();

Refresh the “Building and Using MirAL” doc

This has been rewritten (and renamed) to reflect the presence of MirAL in the Ubuntu archives and make installation (rather than “build it yourself”) the default approach.

Bug fixes

  • [libmiral] Chrome-less shell hint does not work any more (LP: #1658117)
  • “$ miral-app -kiosk” fails with “Unknown command line options:
    --desktop_file_hint=miral-shell.desktop” (LP: #1660933)
  • [libmiral] Fix focus and movement rules for Input Method and Satellite
    windows. (LP: #1660691)
  • [libmirclientcpp-dev] WindowSpec::set_state() wrapper for mir_window_spec_set_state()
    (LP: #1661256)

Read more