Canonical Voices

Alan Griffiths

MirAL 1.3.1

There’s a bugfix MirAL release (1.3.1) available in ‘Zesty Zapus’ (Ubuntu 17.04) and the so-called “stable phone overlay” ppa for ‘Xenial Xerus’ (Ubuntu 16.04LTS). MirAL is a project aimed at simplifying the development of Mir servers and particularly providing a stable ABI and sensible default behaviors.

Unsurprisingly, given the project’s original goal, the ABI is unchanged.

The bugfixes in 1.3.1 are:

In libmiral a focus management fix:

When a dialog is hidden ensure that the active window focus goes to the parent. (LP: #1671072)

In the miral-shell example, two crashes fixed:

If a surface is deleted before its decoration is painted, miral-shell can crash or hang on exit (LP: #1673038)

If the specified “titlebar” font doesn’t exist the server crashes (LP: #1671028)

In addition a misspelling of “management” has been corrected:

SetWindowManagmentPolicy => SetWindowManagementPolicy

Read more
Dustin Kirkland

Canonical announced the Ubuntu 12.04 LTS (Precise Pangolin) release almost 5 years ago, on April 26, 2012. As with all LTS releases, Canonical has provided ongoing security patches and bug fixes for a period of 5 years. The Ubuntu 12.04 LTS (Long Term Support) period will end on Friday, April 28, 2017.

Following the end-of-life of Ubuntu 12.04 LTS, Canonical is offering Ubuntu 12.04 ESM (Extended Security Maintenance), which provides important security fixes for the kernel and the most essential user space packages in Ubuntu 12.04.  These updates are delivered in a secure, private archive exclusively available to Ubuntu Advantage customers on a per-node basis.

All Ubuntu 12.04 LTS users are encouraged to upgrade to Ubuntu 14.04 LTS or Ubuntu 16.04 LTS. But for those who cannot upgrade immediately, Ubuntu 12.04 ESM updates will help ensure the on-going security and integrity of Ubuntu 12.04 systems.

Users interested in Ubuntu 12.04 ESM updates can purchase Ubuntu Advantage. Credentials for the private archive will be available by the end-of-life date for Ubuntu 12.04 LTS (April 28, 2017).

Questions?  Post in the comments below and join us for a live webinar, "HOWTO: Ensure the Ongoing Security Compliance of your Ubuntu 12.04 Systems", on Wednesday, March 22nd at 4pm GMT / 12pm EDT / 9am PDT.  Here, we'll discuss Ubuntu 12.04 ESM and perform a few live upgrades of Ubuntu 12.04 LTS systems.


Read more
Alan Griffiths

Mir and Zesty

Mir is continuing to make progress towards a 1.0 release and, meanwhile, Zesty Zapus (Ubuntu 17.04) is continuing to make progress towards final freeze.

Currently the version of Mir in Zesty is 0.26.1 and we’re not planning any major changes for the 17.04 series. We’re probably going to make a bugfix release (0.26.2). The other possibility is that work on supporting hybrid graphics is completed in time for adequate testing for 17.04. In the latter case we’ll be releasing Mir 0.27 to get that shipped.

For this and other reasons it isn’t yet clear whether there will be a 0.27 release before we move to 1.0.

The significance of a 1.0 release is that it will be the time we break the mirclient ABI and delete a lot of deprecated APIs, which will have a significant effect on downstream projects. We’ve tried to prepare by marking the deprecations in 0.26 and updating downstream projects accordingly. But while this preparation means that most downstream projects “only need recompiling”, this is something we want to do at the start of a release cycle, not at the end.

The argument for a 0.27 release is that there is functionality we want to release and that this can be done without the disruption of an ABI break. So even if we don’t release 0.27 for 17.04 we may well do so once 17.10 is “open” in order to make this work available for Unity8 developers to use.

Either way, sometime early in the 17.10 cycle we’re going to release Mir 1.0. This will clear the way for Mir support in Mesa and Vulkan.

Read more
Alan Griffiths

Choosing a backend

I got drawn into a discussion today and swiftly realized there is no right answer. But there should be!

The question is deceptively simple: Which order should graphics toolkits probe for backends?

My contention is that the answer is: “it depends”.

Suppose that I’m running a traditional X11 based desktop and am testing with a new technology (obviously Mir, but the same applies to Wayland) running as a window on top of it. (I.e. Mir-on-X or Wayland-on-X)

In this case I want any new application to *default* to connecting to the main X11 desktop – I don’t want my test session to “capture” any applications launched normally.

Now suppose I’m running a new technology desktop that provides an X11 socket as a backup (Xmir/Xwayland). In this case I want any new application to *default* to connecting to the main Mir/Wayland desktop – only if the toolkit doesn’t support Mir/Wayland should it connect to the X11 socket.

Now GDK, for example, provides for this with GDK_BACKEND=mir,wayland,x11 or GDK_BACKEND=x11,mir,wayland (as needed). But that is only one toolkit: OTTOMH Qt has QT_QPA_PLATFORM and SDL has SDL_VIDEODRIVER. (I’m sure there are others.)

What is needed is a standard environment variable that all toolkits (and other graphics libs) can use to prioritize backends. One of my colleagues suggested XDG_TOOLKIT_BACKEND (working much the way that GDK_BACKEND does).
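For illustration, a launcher script along these lines could bridge the gap until toolkits read such a variable natively (XDG_TOOLKIT_BACKEND is only the proposal, and the backend names each toolkit accepts differ, so treat this as a sketch):

#!/bin/sh
# Hypothetical shim: map the proposed XDG_TOOLKIT_BACKEND onto
# today's per-toolkit variables before launching an application.
: "${XDG_TOOLKIT_BACKEND:=mir,wayland,x11}"
# GDK accepts a comma-separated priority list.
export GDK_BACKEND="$XDG_TOOLKIT_BACKEND"
# Qt and SDL take a single backend name, so use the first entry.
export QT_QPA_PLATFORM="${XDG_TOOLKIT_BACKEND%%,*}"
export SDL_VIDEODRIVER="${XDG_TOOLKIT_BACKEND%%,*}"
exec "$@"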

That only helps if all the toolkits take notice. Is it worth pursuing?

Read more

In the conclusions to my last post, “Modifying System Call Arguments With ptrace”, I mentioned that one of the main drawbacks of the explained approach for modifying system call arguments was that there is a process switch for each system call performed by the tracee. I also suggested a possible approach to overcome that issue by using ptrace jointly with seccomp, with the latter making sure the tracer gets only the system calls we are interested in. In this post I develop this idea further and show how this can be achieved.

For this, I have created a little example that can be found on GitHub, alongside the example used in the previous post. The main idea is to use seccomp with a Berkeley Packet Filter (BPF) that specifies the conditions under which the tracer gets interrupted.

Now we will go through the source code, with emphasis on the parts that differ from the original example. Skipping the include directives and the forward declarations, we get to main():

int main(int argc, char **argv)
{
    pid_t pid;
    int status;

    if (argc < 2) {
        fprintf(stderr, "Usage: %s <prog> <arg1> ... <argN>\n", argv[0]);
        return 1;
    }

    if ((pid = fork()) == 0) {
        /* If open syscall, trace */
        struct sock_filter filter[] = {
            BPF_STMT(BPF_LD+BPF_W+BPF_ABS, offsetof(struct seccomp_data, nr)),
            BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_open, 0, 1),
            BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_TRACE),
            BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
        };
        struct sock_fprog prog = {
            .filter = filter,
            .len = (unsigned short) (sizeof(filter)/sizeof(filter[0])),
        };
        ptrace(PTRACE_TRACEME, 0, 0, 0);
        /* To avoid the need for CAP_SYS_ADMIN */
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) == -1) {
            return 1;
        }
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog) == -1) {
            perror("when setting seccomp filter");
            return 1;
        }
        kill(getpid(), SIGSTOP);
        return execvp(argv[1], argv + 1);
    } else {
        /* Wait for the child's SIGSTOP, then ask to be told about seccomp events */
        waitpid(pid, &status, 0);
        ptrace(PTRACE_SETOPTIONS, pid, 0, PTRACE_O_TRACESECCOMP);
        process_signals(pid);
        return 0;
    }
}
The main change here, compared to the original code, is the set-up of a BPF in the tracee, right after the call to fork(). BPFs have an intimidating syntax at first glance, but once you grasp the basic concepts behind them they are actually quite easy to read. BPFs are defined as a sort of virtual machine (VM) which has one data register or accumulator, one index register, and an implicit program counter (PC). Its “assembly” instructions are defined as a structure with this format:

struct sock_filter {
    u_short code;
    u_char  jt;
    u_char  jf;
    u_long  k;
};
There are codes (opcodes) for loading into the accumulator, jumping, and so on. jt and jf are increments to the program counter that are used in jump instructions, while k is an auxiliary value whose usage depends on the code number.

BPFs have an addressable data space: in the networking case it contains the packet datagram, while for seccomp it contains the following structure:

struct seccomp_data {
    int   nr;                   /* System call number */
    __u32 arch;                 /* AUDIT_ARCH_* value
                                   (see <linux/audit.h>) */
    __u64 instruction_pointer;  /* CPU instruction pointer */
    __u64 args[6];              /* Up to 6 system call arguments */
};
So basically what BPFs do in seccomp is to operate on this data, and return a value that tells the kernel what to do next: allow the process to perform the call (SECCOMP_RET_ALLOW), kill it (SECCOMP_RET_KILL), or other options as specified in the seccomp man page.

As can be seen, struct seccomp_data contains more than enough information for our purposes: we can filter based on the system call number and on the arguments.

With all this information we can now look at the filter definition. BPF filters are defined as an array of sock_filter structures, where each entry is a BPF instruction. In our case we have

struct sock_filter filter[] = {
    BPF_STMT(BPF_LD+BPF_W+BPF_ABS, offsetof(struct seccomp_data, nr)),
    BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_open, 0, 1),
    BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_TRACE),
    BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
};

BPF_STMT and BPF_JUMP are a couple of simple macros that fill the sock_filter structure. They differ in their arguments, which include jump offsets in BPF_JUMP’s case. The first argument is in both cases the “opcode”, which is built with macros that act as mnemonic aids: for instance, the first one loads into the accumulator (BPF_LD) a word (BPF_W) using absolute addressing (BPF_ABS). More about this can be read here, for instance.

Analysing the filter in more detail: the first instruction asks the VM to load the syscall number, nr, into the accumulator. The second compares it to the number for the open syscall and asks the VM not to modify the counter if they are equal (PC+0), so the third instruction runs; otherwise it jumps to PC+1, which is the 4th instruction (when executing this instruction the PC already points to the 3rd instruction). So if this is an open syscall we return SECCOMP_RET_TRACE, which will invoke the tracer; otherwise we return SECCOMP_RET_ALLOW, which lets the tracee run the syscall without further impediment.

Moving forward, the first call to prctl sets PR_SET_NO_NEW_PRIVS, which prevents child processes from gaining more privileges than those of the parent. This is needed so that the following call to prctl, which sets the seccomp filter using the PR_SET_SECCOMP option, succeeds even when not run as root. After that, we call execvp() as in the ptrace-only example.

Switching to what the parent does, we see that the changes are very few. In main(), we set the PTRACE_O_TRACESECCOMP option, which makes the tracee stop when a filter returns SECCOMP_RET_TRACE and signals the event to the tracer. The other change in this function is that we no longer need to set PTRACE_O_TRACESYSGOOD, as we are now interrupted by seccomp, not by system call tracing.

Moving now to the next function,

static void process_signals(pid_t child)
{
    const char *file_to_redirect = "ONE.txt";
    const char *file_to_avoid = "TWO.txt";

    while (1) {
        char orig_file[PATH_MAX];

        /* Wait for open syscall start */
        if (wait_for_open(child) != 0) break;

        /* Find out file and re-direct if it is the target */
        read_file(child, orig_file);
        printf("[Opening %s]\n", orig_file);

        if (strcmp(file_to_avoid, orig_file) == 0)
            redirect_file(child, file_to_redirect);
    }
}

We see here that we now invoke wait_for_open() only once per open call. Unlike when tracing each syscall, which interrupted the tracer both before and after the execution of the syscall, seccomp interrupts us only before the call is processed. We also add a trace message here for demonstration purposes.

After that, we have

static int wait_for_open(pid_t child)
{
    int status;

    while (1) {
        ptrace(PTRACE_CONT, child, 0, 0);
        waitpid(child, &status, 0);
        printf("[waitpid status: 0x%08x]\n", status);
        /* Is it our filter for the open syscall? */
        if (status >> 8 == (SIGTRAP | (PTRACE_EVENT_SECCOMP << 8)) &&
            ptrace(PTRACE_PEEKUSER, child,
                   sizeof(long)*ORIG_RAX, 0) == __NR_open)
            return 0;
        if (WIFEXITED(status))
            return 1;
    }
}

Here we use PTRACE_CONT instead of PTRACE_SYSCALL. Because we have set the PTRACE_O_TRACESECCOMP option, we get interrupted every time there is a match in the BPF, and we let the tracee run until that happens. The other change here, besides a trace message, is how we check whether we have received the event we are interested in, as the status word is obviously different. The details can be seen in ptrace’s man page. Note also that we could actually skip the test for __NR_open, as the BPF will interrupt us only for open syscalls.

The rest of the code, which is the part that actually changes the argument to the open syscall, is exactly the same. Now, let’s check that this works as advertised:

$ git clone
$ cd ptrace-redirect/
$ cat ONE.txt 
This is ONE.txt
$ cat TWO.txt 
This is TWO.txt
$ gcc redir_filter.c -o redir_filter
$ ./redir_filter cat TWO.txt 
[waitpid status: 0x0000057f]
[waitpid status: 0x0007057f]
[Opening /etc/]
[waitpid status: 0x0007057f]
[Opening /lib/x86_64-linux-gnu/]
[waitpid status: 0x0007057f]
[Opening /usr/lib/locale/locale-archive]
[waitpid status: 0x0007057f]
[Opening TWO.txt]
This is ONE.txt
[waitpid status: 0x00000000]

It does indeed! Note that the traces show the tracer being interrupted only by the open syscall (besides an initial trap and when the child exits). If we added the same traces to the ptrace-only program we would see many more interruptions.

Finally, a word of caution regarding syscall numbers: in this post and in the previous one we are assuming an x86-64 architecture, so the programs would need to be adapted for different archs. There is also an important catch here: we are implicitly assuming that the child process run by the execvp() call is also x86-64, as we are filtering using the syscall number for that arch. This means the example will not work if the child program is compiled for i386. To make it work properly in that case too, we must check the architecture in the BPF, by looking at arch in seccomp_data, and use the appropriate syscall number in each case. We would also need to check the arch before looking at the tracee’s registers; see an example of how to do this here (alternatively, we could make the BPF return this data in the SECCOMP_RET_DATA bits of its return value, which the tracer can retrieve via PTRACE_GETEVENTMSG). Needless to say, for arm64/32 we would have similar issues.
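As a rough sketch of that idea, an arch-aware filter could look like this (untested; AUDIT_ARCH_X86_64 comes from <linux/audit.h>, and a complete version would jump to a second, i386-specific syscall check instead of simply allowing other architectures):

struct sock_filter filter[] = {
    /* Load the architecture field */
    BPF_STMT(BPF_LD+BPF_W+BPF_ABS, offsetof(struct seccomp_data, arch)),
    /* If not x86-64, skip the syscall check and allow the call */
    BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, AUDIT_ARCH_X86_64, 0, 3),
    /* Load the syscall number */
    BPF_STMT(BPF_LD+BPF_W+BPF_ABS, offsetof(struct seccomp_data, nr)),
    /* For x86-64's open, fall through to TRACE; otherwise allow */
    BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_open, 0, 1),
    BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_TRACE),
    BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
};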

Read more
Barry McGee

One of the most complex aspects of managing continuous development on a large codebase is ensuring that it remains stable.

This problem is particularly acute when building out front end architecture using HTML & CSS due to the inherently global nature of CSS.

How many times have you shipped a CSS change to one small part of a website only to find you’ve inadvertently broken a page element on a different page entirely?

This problem usually arises because all your CSS is loaded via one external file that is added to each page of your website. If you don’t namespace or isolate your styles correctly, changes to your CSS may have unintended consequences.

Structuring your CSS using the BEM convention or similar can help prevent such clashes. However, in a fast moving team where multiple developers are working on a large codebase daily, relying on code convention alone is often not enough to stop visual regression bugs from creeping in.

Ideally, you or a team member should check each page of your site, in turn, to make sure nothing has broken, right? While that’s a solid QA approach, it doesn’t scale very well. As your site grows, it becomes ever more time consuming to check each page, especially when you consider that you may also need to check each page at multiple breakpoints.

That’s where automated Visual Regression Testing (VRT) tools can seriously lighten your workload. A VRT tool will typically run through your site and capture a baseline snapshot of all your pages to use as a benchmark.

After you then make some changes, you run the process again and the VRT tool will compare the latest capture of your pages with the baseline capture and highlight the differences. It’s at this stage where you’ll be alerted to any unintended consequences.

The concept of VRT has been around for a few years but up until now, most solutions have involved setting up your process locally, typically involving quite a few moving parts. When trying to get a project team to integrate VRT as part of their workflow using one of these solutions, we always ran into trouble as it was so difficult to keep individual developer setups consistent – inevitably, I’d spend longer debugging VRT setup than I would visual diffs.

I then stumbled upon Percy, which offers VRT software as a service. I was immediately interested in how we might utilise it for Vanilla Framework, our constantly evolving CSS framework.

I immediately signed up for a trial and was quickly impressed with their GitHub integration and ease of use. Percy is unobtrusive, and it’s only when a feature progresses to the Pull Request stage does Percy come into play. It will run as part of the Travis CI build and then report back if it has found any visual diffs for review. You can also configure Percy to test across defined breakpoints.

Percy’s Github integration is a big win

The person reviewing the PR can then click through to the project dashboard and review the highlighted diffs. If the changes are expected, based on what has been outlined in the PR, then the changes can be approved.

Comparing different pages for visual differences

When the feature merges, these changes then become the baseline. If unexpected changes are highlighted, the reviewer can then highlight this to the developer for amendment.

As we make multiple changes a day to our Vanilla codebase while aiming for a weekly release, having VRT as part of our continuous integration has afforded us extra confidence that our releases do not contain missed bugs and regressions.


Read more
Anthony Dillon

The Vanilla team needed to solve two issues which have been paining the development of Vanilla Framework for some time.

Firstly we needed to improve our workflow for testing and QAing components on our local machines. Up until now, we have been using npm link on our local branches of Vanilla with our local website branch, then reviewing the examples in the components page of the documentation. This caused a lot of extra overhead to reviewing Vanilla.

Secondly, since we actually build the site using the documentation theme (vanilla-docs-theme), the Vanilla pattern examples we ended up reviewing were no longer styled purely by Vanilla Framework, but were extended by the theme.

The documentation of the matrix pattern in Vanilla

The solution

To solve both these issues, we decided to decouple the examples from the documentation. This change allowed us to move the coded examples of the patterns into a separate “examples” directory of the codebase and remove the hard-coded examples from the documentation.

As the examples were a part of the Vanilla Framework code we simply linked each example page with the Vanilla built from the same branch. This means all examples are only styled by Vanilla and nothing else.

Another benefit of this change is that we now have an easy way to find an example of a pattern when reviewing or QAing a pull request. Whereas previously we had to do the npm link dance, now we simply check out the branch and run the internal Jekyll site to build Vanilla, giving us a directory of pattern pages.

Examples in the docs

So we were happy with these changes: we had solved the issues at hand and were ready to head off and have a celebratory coffee. But we couldn’t leave the documentation without examples and code snippets.

To solve this issue, we used an embedding paradigm like Codepen’s.

Example of a Codepen embed

We set about creating a small JavaScript project that finds links with a specific class on the page, grabs the href attribute from each, and replaces the link with an iframe of the linked page. This gave us a nice progressively enhanced experience:

An example of progressive enhancement – on the left is an example with JavaScript enabled, on the right with JavaScript disabled.

We were still lacking the code snippets, so we made the script also pull the HTML source of the linked page into the example, then display the contents of the body in a code block appended after the iframe.

The wrap-up

And that was it. The solution gives us:

  • A single place for example code
  • Examples only displayed using Vanilla
  • A local testing environment
  • Documentation examples that are automatically up to date

We named this mini project example-js. Please feel free to fork it, use it or file any issues you may find.

Read more
Will Moggridge


The web team has been hard at work on our new Ubuntu Tutorials website and we are proud to share our work with the community. Our first set of tutorials is based around snap usage and building snaps with snapcraft. We will continue to work on our catalogue to broaden it to a variety of subjects.

Ubuntu Tutorials is part of a bigger project to improve our documentation across our other projects. Our goals are to improve the discoverability and the ease of use for our documentation. Having followed Ubuntu and been part of the community for many years, I am excited to be involved with this project. I hope we can keep moving forward with this work and give back to the community.

Polymer and our source code

The website is built using Google’s Polymer framework with their Codelabs web components. Polymer has been a great and enjoyable experience and really made web components so much more exciting. I am already looking to see where I can use these technologies in the rest of our projects. We recently had a hack day and had the opportunity to explore putting Vanilla Framework in web components. I am happy with our initial work on Vanilla web components, and we are looking forward to continuing to explore and develop them.

The Ubuntu Tutorials website source code is available for you to dive into, at the Ubuntu Tutorials GitHub repository.
A big thank you to Didier Roche, whose work was the foundation for this.

Our next steps

Looking to the future, we are already thinking about and preparing improvements for the site. We have been really happy with the feedback we are getting on the GitHub issues page. A number of the issues have been requests for tutorials on certain topics. This is really useful and interesting to us, as it shows us which areas to focus on.

I am interested in simplifying our process for creating and contributing to Ubuntu Tutorials, not only for us but also to empower you. One strong step in this direction is adding functionality to write tutorials in Markdown. This will increase visibility for all and remove some overhead for us, while also making it simpler for people to contribute to our catalogue. We are currently looking into this and hope to have a solution soon.

Read more


He dropped four ice cubes into the glass, hesitated a moment, and fished one out with his fingers, tossing it back into the ice bucket. He did not hesitate over the whisky: he filled the glass almost to the brim.

Without straying from the shabby little bar against the living-room wall, he took the first long swig, and then, yes, he went over to the window.

I didn’t know whether to look at him or at her, as she pulled her robe tight around herself, clutching it, tense, outlining the near absence of curves of her too-thin body.

- You filthy drunk! -she shouted at him, almost in desperation.

He ignored her and kept looking out the window. From where I sat on the couch I couldn’t see his face, but I guessed his gaze was lost. He wasn’t looking out the window, I supposed; rather, he was using it as an excuse not to have to look at anything else.

She, her voice still hoarse from crying but much calmer, said:

- Alcohol, that darkness where cowards go to hide from themselves.

He turned around, surprise drawn on his face, partly because she wasn’t one for that kind of high-flown philosophical pronouncement, but partly -and every time I remember that day I am more certain of it- because it had finally touched some chord inside him.

He left the half-finished glass resting on the window frame, opened the door, and we never saw him again.

Read more
Leo Arias

Here at Ubuntu we are working hard on the future of free software distribution. We want developers to release their software to any Linux distro in a way that's safe, simple and flexible. You can read more about this work online.

This work is extremely fun because we have to work constantly with a wild variety of free software projects to make sure that the tools we write are usable and that the workflow we are proposing makes sense to developers and gives them a lot of value in return. Today I want to talk about one of those projects: IPFS.

IPFS is the permanent and decentralized web. How cool is that? You get a peer-to-peer distributed file system where you store and retrieve files. They have a nice demo on their website, and you can give it a try on Ubuntu Trusty, Xenial or later by running:

$ sudo snap install ipfs

screenshot of the IPFS peers

So, here's one of the problems we are trying to solve. We have millions of users on the Trusty version of Ubuntu, released during 2014. We also have millions of users on the Xenial version, released during 2016. Those two versions are stable now and, following the Ubuntu policies, they will get only security updates for 5 years. That means it's very hard, almost impossible, for a young project like IPFS to get into the Ubuntu archives for those releases. There would be no simple way for all those users to enjoy IPFS; they would have to use a Personal Package Archive or install the software from a tarball. Both methods are complex, carry high security risks, and require users to put a lot of trust in the developers, more than they should ever trust anybody.

We are closing the Zesty release cycle, which will go out in April, so it's too late there too. IPFS could make a deb, put it into Debian, wait for it to sync to Ubuntu, and then it would likely be ready for the October release. Aside from the fact that we would have to wait until October, there are a few other problems. First, making a deb is not simple. It's not too hard either, but it requires quite some time to learn to do it right. Second, I mentioned that IPFS is young: they are on version 0.4.6. So it's very unlikely that they would want to support this early version for as long as Debian and Ubuntu require. And they are not only young, they are also fast. They add new features and bug fixes every day and make new releases almost every week, so they need a feedback loop that's just as fast. A 6-month release cycle is way too slow. That works nicely for some kinds of free software projects, but not for one like IPFS.

They have been kind enough to let me play with their project and use it as a test subject to verify our end-to-end workflow. My passion is testing, so I have been focusing on continuous delivery to get happy early adopters and constant feedback about the most recent changes in the project.

I started by making a snapcraft.yaml file that contains all the metadata required for the snap package. The file is pretty simple and to make the first version it took me just a couple of minutes, true story. Since then I've been slowly improving and updating it with small changes. If you are interested in doing the same for your project, you can read the tutorial to create a snap.
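To give a flavour, a minimal snapcraft.yaml for a Go project like IPFS could look roughly like the following sketch (this is an illustration only, not the project's actual file; the source location and plugs are assumptions):

name: ipfs
version: 0.4.6
summary: Global, versioned, peer-to-peer filesystem
description: IPFS is a peer-to-peer distributed file system where you store and retrieve files.
confinement: strict

apps:
  ipfs:
    command: ipfs
    plugs: [home, network, network-bind]

parts:
  ipfs:
    plugin: go
    go-importpath: github.com/ipfs/go-ipfs
    source: https://github.com/ipfs/go-ipfs.git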

I built and tested this snap locally on my machines. It worked nicely, so I pushed it to the edge channel of the Ubuntu Store. Here, the snap is not visible on user searches, only the people who know about the snap will be able to install it. I told a couple of my friends to give it a try, and they came back telling me how cool IPFS was. Great choice for my first test subject, no doubt.

At this point, following the pace of the project by manually building and pushing new versions to the store was too demanding, they go too fast. So, I started working on continuous delivery by translating everything I did manually into scripts and hooking them to travis-ci. After a few days, it got pretty fancy, take a look at the github repo of the IPFS snap if you are curious. Every day, a new version is packaged from the latest state of the master branch of IPFS and it is pushed to the edge channel, so we have a constant flow of new releases for hardcore early adopters. After they install IPFS from the edge channel once, the package will be automatically updated in their machines every day, so they don't have to do anything else, just use IPFS as they normally would.

Now, with this constant stream of updates, my two friends and I were not enough to validate all the new features. We could never be sure whether the project was stable enough to be pushed to the stable channel and made available to the millions and millions of Ubuntu users out there.

Luckily, the Ubuntu community is huge, and they are very nice people. It was time to use the wisdom of the crowds. I invited the bravest of them to keep the snap installed from edge, and I defined a simple pipeline that leads to the stable release using the four available channels in the Ubuntu store:

  • When a revision is tagged in the IPFS master repo, it is automatically pushed to edge channel from travis, just as with any other revision.
  • Travis notifies me about this revision.
  • I install this tagged revision from edge, and run a super quick test to make sure that the IPFS server starts.
  • If it starts, I push the snap to the beta channel.
  • With a couple of my friends, we run a suite of smoke tests.
  • If everything goes well, I push the snap to the candidate channel.
  • I notify the community of Ubuntu testers about a new version in the candidate channel. This is where the magic of crowd testing happens.
  • The Ubuntu testers run the smoke tests in all their machines, which gives us the confidence we need because we are confirming that the new version works on different platforms, distros, distro releases, countries, network topologies, you name it.
  • This candidate release is left for some time in this channel, to let the community run thorough exploratory tests, trying to find weird usage combinations that could break the software.
  • If the tag was for a final upstream release, the community also runs update tests to make sure that the users with the stable snap installed will get this new version without issues.
  • After all the problems found by the community have been resolved or at least acknowledged and triaged as not blockers, I move the snap from candidate to the stable channel.
  • All the users following the stable channel will automatically get a very well tested version, thanks to the community who contributed with the testing and accepted a higher level of risk.
  • And we start again, the never-ending cycle of making free software :)
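For a tester, following along only requires picking a channel; for example (a sketch of the relevant snap commands):

$ sudo snap install ipfs --edge
$ sudo snap refresh ipfs --channel=candidate   # later, to help validate a candidate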

Now, let's go back to the discussion about trust. Debian and Ubuntu, and most of the other distros, rely on maintainers and distro developers to package and review every change to the software that they put in their archives. That is a lot of work, and it slows down the feedback loop a lot, as we have seen. Here we have automated most of the tasks of a distro maintainer, and new revisions can be delivered directly to the users without any reviews. So the users are trusting their upstream developers directly, without intermediaries, but it's very different from the previously existing and unsafe methods. The code of snaps is installed read-only and very well constrained, with access only to its own safe space. Any other access needs to be declared by the snap, and the user is always in control of which access is permitted to the application.

This way upstream developers can go faster but without exposing their users to unnecessary risks. And they just need a simple snapcraft.yaml file and to define their own continuous delivery pipeline, on their own timeline.

By removing the distro as the intermediary between developers and their users, we are also opening up a new world full of possibilities for the Ubuntu community. Now they can collaborate constantly and directly with upstream developers, closing this quick feedback loop. In the future we will tell our children about the good old days when we had to report a bug in Ubuntu, which would be copied to Debian, then sent upstream to the developers, and after 6 months the fix would arrive. It was fun, and it led us to where we are today, but I will not miss it at all.

Finally, what's next for IPFS? After this experiment we got more than 200 unique testers and almost 300 test installs. I now have great confidence in this workflow: new revisions were delivered on time, existing Ubuntu testers became new IPFS contributors, and I can now safely recommend that IPFS users install the stable snap. But there's still plenty of work ahead. There are still manual steps in the pipeline that can be scripted, the smoke tests can be automated to leave more free time for exploratory testing, we can also release to the armhf and arm64 architectures to get IPFS into the IoT world, and, well, of course the developers are not stopping; they keep releasing interesting new features. As I said, plenty of opportunities for us as distro contributors.

screenshot of the IPFS snap stats

I'd like to thank everybody who tested the IPFS snap, especially the following people for their help and feedback:

  • freekvh
  • urcminister
  • Carla Sella
  • casept
  • Colin Law
  • ventrical
  • cariboo
  • howefield


If you want to release your project to the Ubuntu store, take a look at the snapcraft docs, the Ubuntu tutorials, and come talk to us in Rocket Chat.

Read more
Stéphane Graber

LXD logo

The LXD demo server

The LXD demo server is the service behind our online LXD demo.
We use it to showcase LXD by leading visitors through an interactive tour of LXD’s features.

Rather than use some JavaScript simulation of LXD and its client tool, we give our visitors a real root shell using a LXD container with nesting enabled. This environment uses all of LXD’s resource limits as well as a very strict firewall to prevent abuse and offer everyone a great experience.

This is done using lxd-demo-server, which can be found on GitHub.
The lxd-demo-server is a daemon that offers a public REST API for use from a web browser.
It supports:

  • Creating containers from an existing container or from a LXD image
  • Choosing what command to execute in the containers on connection
  • Letting you choose specific profiles to apply to the containers
  • An API to record user feedback
  • An API to fetch usage statistics for reporting
  • A number of resource restrictions:
    • CPU
    • Disk quota (if using btrfs or zfs as the LXD storage backend)
    • Processes
    • Memory
    • Number of sessions per IP
    • Time limit for the session
    • Total number of concurrent sessions
  • Requiring the user to read and agree to terms of service
  • Recording all sessions in a sqlite3 database
  • A maintenance mode

All of it is configured through a simple yaml configuration file.

Setting up your own

The LXD demo server is now available as a snap package and interacts with the snap version of LXD. To install it on your own system, all you need to do is:

Make sure you don’t have the deb version of LXD installed

ubuntu@djanet:~$ sudo apt remove --purge lxd lxd-client
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following packages will be REMOVED:
 lxd* lxd-client*
0 upgraded, 0 newly installed, 2 to remove and 0 not upgraded.
After this operation, 25.3 MB disk space will be freed.
Do you want to continue? [Y/n] 
(Reading database ... 59776 files and directories currently installed.)
Removing lxd (2.0.9-0ubuntu1~16.04.2) ...
Warning: Stopping lxd.service, but it can still be activated by:
Purging configuration files for lxd (2.0.9-0ubuntu1~16.04.2) ...
Removing lxd-client (2.0.9-0ubuntu1~16.04.2) ...
Processing triggers for man-db (2.7.5-1) ...

Install the LXD snap

ubuntu@djanet:~$ sudo snap install lxd
lxd 2.8 from 'canonical' installed

Then configure LXD

ubuntu@djanet:~$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: 
Create a new ZFS pool (yes/no) [default=yes]? 
Name of the new ZFS pool [default=lxd]: 
Would you like to use an existing block device (yes/no) [default=no]? 
Size in GB of the new loop device (1GB minimum) [default=43]: 
Would you like LXD to be available over the network (yes/no) [default=no]? 
Would you like stale cached images to be updated automatically (yes/no) [default=yes]? 
Would you like to create a new network bridge (yes/no) [default=yes]? 
What should the new bridge be called [default=lxdbr0]? 
What IPv4 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 
What IPv6 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 
LXD has been successfully configured.

And finally install lxd-demo-server itself

ubuntu@djanet:~$ sudo snap install lxd-demo-server
lxd-demo-server git from 'stgraber' installed
ubuntu@djanet:~$ sudo snap connect lxd-demo-server:lxd lxd:lxd

At that point, you can point a browser at your server and you will be greeted with this:

To change the configuration, use:

ubuntu@djanet:~$ sudo lxd-demo-server.configure

And that’s it, you have your own instance of the demo server.


As mentioned at the beginning, the demo server comes with a number of options to prevent users from consuming all the available resources and bringing the whole thing down.

Those should be tweaked for your particular needs; you should also update the total number of concurrent sessions so that you don’t end up over-committing resources.

On the network side of things, the demo server itself doesn’t do any kind of firewalling or similar network restrictions. If you plan on offering sessions to anyone online, you should make sure that the network which LXD is using is severely restricted and that the host this is running on is also placed in a very restricted part of your network.

Containers handed to strangers should never be using “security.privileged” as that’d be a straight route to getting root privileges on the host. You should also stay away from bind-mounting any part of the host’s filesystem into those containers.

I would also very strongly recommend setting up very frequent security updates on your host, along with kernel live patching or at least automatic reboots when a new kernel is installed. This should prevent a new kernel security issue from being immediately exploitable in your environment.


The LXD demo server was initially written as a quick hack to expose a LXD instance to the Internet so we could let people try LXD online and also offer the upstream team a reliable environment in which we could have people attempt to reproduce their bugs.

It’s since grown a bit with new features contributed by users and with improvements we’ve made to the original experience on our website.

We’ve now served over 36000 sessions to over 26000 unique visitors. This has been a great tool for people to try and experience LXD, and I hope it will be similarly useful to other projects.

Extra information

The main LXD website is at:
Development happens on Github at:
Mailing-list support happens on:
IRC support happens in: #lxcontainers on
Try LXD online:

Read more
Dustin Kirkland

Mobile World Congress is simply one of the biggest trade shows in the entire world.

It's also, perhaps, the best place in the world to see how encompassing the Ubuntu ecosystem actually is.

Canonical and our partners demonstrated Ubuntu running on dozens of devices -- from robots, to augmented reality headsets, digital signs, vending machines, IoT Gateways, cell tower base stations, phones, tablets, servers, from super computers to tiny, battery powered embedded controllers.

But that was only a tiny fraction of the Ubuntu running at MWC!

We saw Ubuntu at the heart of demos from Dell, AMD, Intel, IBM, Deutsche Telekom, DJI, and hundreds of other booths, running autonomous drones, national telephone networks, self driving cars, smart safety helmets, inflight entertainment systems, and so, so, so much more.

Among the thousands of customers, prospects, fans, competitors, students, and industry executives, we even received a visit from (the somewhat controversial?) King of Spain!

It was an incredible week, with no fewer than 12 hours per day, on our feet, telling the Ubuntu story.
And what a story it is... I hope you enjoy.


Read more
Dustin Kirkland

Yesterday, I delivered a talk to a lively audience at ContainerWorld in Santa Clara, California.

If I measured "the most interesting slides" by counting "the number of people who took a picture of the slide", then by far the most interesting slides were slides 8-11, which pose and answer the question:
"Should I run my PaaS on top of my IaaS, or my IaaS on top of my PaaS?"
In the Ubuntu world, that answer is super easy -- however you like!  At Canonical, we're happy to support:
  1. Kubernetes running on top of Ubuntu OpenStack
  2. OpenStack running on top of Canonical Kubernetes
  3. Kubernetes running along side OpenStack
In all cases, the underlying substrate is perfectly consistent:
  • you've got 1 to N physical or virtual machines
  • which are dynamically provisioned by MAAS or your cloud provider
  • running a stable, minimal, secure Ubuntu server image
  • carved up into fast, efficient, independently addressable LXD machine containers
With that as your base, we can easily conjure up a Kubernetes, an OpenStack, or both.  And once you have a Kubernetes or OpenStack, we'll gladly conjure up one inside the other.
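For instance, with conjure-up (a sketch; spell names can vary by version):

$ sudo snap install conjure-up --classic
$ conjure-up kubernetes
$ conjure-up openstack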

As always, I'm happy to share my slides with you here.  You're welcome to download the PDF, or flip through the embedded slides below.


Read more
Alan Griffiths

MirAL 1.3

There’s a new MirAL release (1.3.0) available in ‘Zesty Zapus’ (Ubuntu 17.04) and the so-called “stable phone overlay” ppa for ‘Xenial Xerus’ (Ubuntu 16.04LTS). MirAL is a project aimed at simplifying the development of Mir servers and particularly providing a stable ABI and sensible default behaviors.

Unsurprisingly, given the project’s original goal, the ABI is unchanged.

The changes in 1.3.0 are:

Support for “workspaces”

This is part of enabling “workspaces” for the Unity8 desktop. MirAL doesn’t provide fancy transitions and spreads, but you can see some basic workspace switching in the miral-shell example program:

$ apt install miral-examples
$ miral-app

There are four workspaces (corresponding to F1-F4) and you can switch using Meta-Alt-[F1|F2|F3|F4], or switch taking the active application to the new workspace using Meta-Ctrl-[F1|F2|F3|F4].

Support for “previous window in application”

You can now use Alt-Shift-` to switch to the previous window in an application.

miral-shell adds a background

miral-shell now uses its background for a handy guide to the available keyboard shortcuts.

Bug fixes

Two bug fixes related to shutdown problems: one deals with a possible race in libmiral code, the other works around a bug in Mir.

  • [libmiral] Join internal client threads before server shutdown (LP: #1668651)
  • [miral-shell] Workaround for crash on exit (LP: #1667645)

Read more
Anthony Dillon

Hack day 2

This week, the web team managed to get away for our second hack day. These hack days give us an opportunity to scratch our own itches and work on things we find interesting.

We wrote about our first hack day in August last year.

Getting started

We began by outlining the day and reviewing ideas that had been suggested in a Google Doc throughout the previous week by everyone on the team. We each voted by marking the ideas we would be interested in working on. Then we chose the most-voted ideas and assigned groups of 2 or 3 people to each.

The groups broke up and turned their idea into a formal project with a list of the tasks required to produce an MVP. Below is a list of the ideas and outcomes from each team.

Performance audit of the current websites

Team: Rich, Andrea, Robin

The team discovered a tool called Lighthouse by Google Chrome which analyses a web page and returns a full audit of the dependent assets and accessibility issues.

The team spent some time trying to create a service using Lighthouse to produce an API, then realised that the Google Chrome team had done this work already. The service is called Moonlight, a SaaS for testing the performance of a page.

As Moonlight takes a single webpage endpoint to test, we needed a way to test pages recursively. The team created a profiling script to gather the referenced endpoints of a site.

Canonical web team dashboard

Team: Luke, Ant, Yaili

The goal of this project was to motivate the team to improve key areas at a glance. The metrics we wanted to capture were:

  • Whether the site is up or down
  • Live visitor counts
  • Monthly unique visitors
  • Open issues on the project
  • Open PRs on the project
  • Information about the last commit to the site’s code base
  • PageSpeed Insights test results

We gathered a set of sites we would like to collect these metrics on:


The team used the MERN stack (MongoDB, Express.js, React and Node.js) and modified its sample project to create an interface that could change depending on the state of the data. For example, the up-or-down card would normally display all sites as up, but once one went down the card would change to an error state and only display information about the site that is down. By designing for emotion in this way, we can intelligently utilise the limited space available in a dashboard.

The team also used a few plugins to gather some data:

  • ping-monitor to ping our sites to check if they’re up, down or broken
  • node-http-ping to get response times for the same set of Canonical sites

Storing the data in MongoDB to keep historical data and using the /api endpoint to return the response time and status for each site, the team managed to produce a simplified dashboard showing the availability of our list of sites.

Ubuntu.com dev tools

Team: Graham, Karl

As a team, we have been using gulp scripts to lint and test our code locally and in our CI environments for some time. But we had never got around to applying these checks to our flagship website, ubuntu.com.

The plan here was to implement gulp scripts to lint Sass and JavaScript, and also to look into further options like spell-checking, auto-prefixing and HTML validation.

The team added Sass linting, borrowing the linting tasks from our styling framework, vanilla-framework. This produced a long list of lint issues; the team tracked the lint errors and quickly fixed them to get a passing CI run.

Adding JavaScript linting (jsHint)

The team also implemented JavaScript linting using jsHint on the current JavaScript within the site’s code base. This produced a number of JavaScript lint errors, which were fixed, ignoring the plugin code.

Finally, the team added the new linting steps to the Travis configuration, so the linting is tested on each pull request.

Vanilla web components prototype

Team: Barry, Will, Robin

The goal was to enable Vanilla on a variety of platforms, which would allow people to use Vanilla in modern web apps.

The team created a base repository using Polymer’s tools and started creating web components for Vanilla.

They discovered that the styling needs tweaking to be compatible with web components, possibly just by building a shared styles import that is included in each web component.

The team started by importing vanilla-framework from NPM, then built modular Sass files containing only the relevant parts of Vanilla, and finally imported the modular style file into each web component.

Inside the repository there is a vanilla.html which imports all of the components. Components can also be included individually as needed.

This work includes a demo system, with API documentation. The demo system displays each component and the markup used to create it, and is accessed by running `polymer serve` and visiting the site.

This work can be used to build solid web components for use in Polymer, and we can also use it to jumpstart React components.

HTTP/2

In the midst of all this work, Robin found time to tackle the task of serving our styling framework’s website over HTTP/2. It’s currently a proof of concept, but it can now be considered the start of a work item to roll out.

Demo site


Again, this was a successful hack day, with everyone busy working on things that interest them. Although there were fewer completed outcomes this time, we did set up a number of good projects which are ready to be continued.

Read more
Robin Winslow

We’ve been making an effort to secure all our websites with HTTPS. While some Canonical sites have enforced HTTPS for a while, it’s been missing from our other sites until now.


The HTTPS movement has been building for years to help secure internet users against black-hat hackers and spies. The movement became more urgent after Edward Snowden revealed significant efforts by government agencies to spy on the world population.

The EFF have helped create two projects: Let’s Encrypt, which massively simplifies the free installation of HTTPS certificates; and HTTPS Everywhere, a browser plugin to help you use HTTPS whenever it’s available. The advent of HTTP/2 has helped negate performance concerns when moving to HTTPS.

Google have also made efforts to encourage websites to enable HTTPS: first announcing in 2014 that they would consider HTTPS support in their search ranking algorithm; and, last year, that Google Chrome would start visually warning users of “insecure” (non-HTTPS) websites.

Our sites

We made our first site HTTPS-only in October of last year, and have since done so on 10 more sites:

We hope to enable HTTPS on our other sites in the coming months.

Although enabling HTTPS can be relatively simple, there were a number of specific challenges we had to overcome for some of our websites. I hope to write more about these in a follow-up post.

Read more
Leo Arias

Last Sunday we went to the Poás Volcano to make free maps.

This is the second geek outing of the JaquerEspéis. From the first one we learned that we had to wait until summer, because it’s not possible to make maps during a storm. And the day was perfect: it wasn’t just sunny, the crater was totally clear, and thus we could add a new spot in Costa Rica to the virtual tour.

In addition, this time we arrived much better prepared, with multiple phones running Mapillary, OsmAnd and OSMTracker, a 360° camera, a Garmin GPS, a drone, and even a notebook and two biologists.

The procession of the MapperSpace

Here’s how it works. Everybody with GPS enabled on their phone waits until it finds their location. Then each person uses the application of their preference to collect data: pictures, audio, video, text notes, traces, annotations in the notebook...

Later, at home, we upload, publish and share all the collected data. This is useful for improving the free maps of OpenStreetMap. We add everything from really simple things, like the location of a trash bin, to really important things, like how accessible a place is for a person in a wheelchair, together with the location of all the accesses or the places that lack them. Each person improves the map a little, in the region they know or passed through. With more than 3 million users, OpenStreetMap is the best map of the world that exists; and it has a particular importance in regions like ours, without a lot of economic potential for the megacorporations that make and sell closed maps while stealing private data from their users.

Because the maps we make are free, what comes next has no limits. There are groups working on the reconstruction of 3D models from the pictures, on the identification and interpretation of signs, on applications to calculate the optimal route to reach any place using any combination of means of transportation, on applications to assist decision making during the design of the future of a city, and many other things. All of this is based on shared knowledge and community.

The image above is the virtual tour in Mapillary. As we recorded it with the 360° camera, you can click and drag with the mouse to see all the angles. You can also click the play button above to follow the path we took, or click any of the green dots on the map to follow your own path.

Thank you very much to everybody who joined us, especially to Denisse and Charles for being our guides and for filling the trip with interesting information about the flora, fauna, geology and historic importance of El Poás.

Members of the MapperSpace

(More pictures and videos here)

The next MapperSpace will be on March 12th.

Read more

The latest snapd 2.20 adds support for snaps in so-called classic mode. Classic mode lets application developers deliver the applications they need quickly, because they don't have to change much in their existing applications. An application in classic mode can see all of the host's files under "/", just like a normal application today. After installation, though, all of its files live under /snap/foo/current, and its executables are placed under /snap/bin, just like every other snap application.

When installing a classic-mode snap we need to use the --classic option. Uploading such an application to the Ubuntu Core store also requires manual review. A classic snap can see all of the files under /snap/core/current, and can also operate on files anywhere on the host. The purpose of this is to let developers publish their applications in the snap format quickly, and then gradually adopt Ubuntu Core confinement in later development. For the foreseeable future, classic-mode applications cannot be installed on all-snap systems such as Ubuntu Core 16.



Before we start developing, we install core rather than ubuntu-core on our desktop. We can check with the snap list command:

liuxg@liuxg:~$ snap list
Name          Version  Rev  Developer  Notes
core          16.04.1  714  canonical  -
firefox-snap  0.1      x1              classic
hello         1.0      x1              devmode
hello-world   6.3      27   canonical  -

If your system has ubuntu-core installed, we recommend using reset-state from devtool to restore the system to its initial state (with no snaps installed). In future snapd releases there will no longer be an ubuntu-core snap. We can also remove the ubuntu-core snap and install the core snap like this:

$ sudo apt purge -y snapd
$ sudo apt install snapd
$ sudo snap install core

Also, for developers who cannot get the latest snapd 2.20 from the stable channel: on an Ubuntu desktop we can open "System Settings" / "Software & Updates" / "Developer Options":

Turning on the switch shown above gives us all the latest pre-release software for our Ubuntu desktop; snapd 2.20 is currently in xenial-proposed.
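The same can be done from the command line once the proposed pocket is enabled in the software sources (a sketch, assuming an Ubuntu 16.04 system with xenial-proposed configured):

$ sudo apt update
$ sudo apt install -t xenial-proposed snapd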




The snapcraft.yaml of our hello example is as follows (the app names match the scripts under bin/):

name: hello
version: "1.0"
summary: The 'hello-world' of snaps
description: |
    This is a simple snap example that includes a few interesting binaries
    to demonstrate snaps and their confinement.
    * hello-world.env  - dump the env of commands run inside app sandbox
    * hello-world.evil - show how snappy sandboxes binaries
    * hello-world.sh   - enter interactive shell that runs in app sandbox
    * hello-world      - simply output text
grade: stable
confinement: classic
type: app  #it can be gadget or framework

apps:
 env:
   command: bin/env
 evil:
   command: bin/evil
 sh:
   command: bin/sh
 hello-world:
   command: bin/echo
 createfile:
   command: bin/createfile
 createfiletohome:
   command: bin/createfiletohome
 listhome:
   command: bin/listhome
 showroot:
   command: bin/showroot

parts:
 hello:
  plugin: dump
  source: .


The key setting in the file above is:

confinement: classic




We build the snap with snapcraft and install it using the --classic and --dangerous flags:

$ sudo snap install hello_1.0_amd64.snap --classic --dangerous



The bin/showroot script is:

#!/bin/bash
cd /
echo "list all of the content in the root:"
ls
echo "show the home content:"
cd home
ls

liuxg@liuxg:~/snappy/desktop/helloworld-classic$ hello.showroot 
list all of the content in the root:
bin    core  home	     lib	 media	proc  sbin  sys  var
boot   dev   initrd.img      lib64	 mnt	root  snap  tmp  vmlinuz
cdrom  etc   initrd.img.old  lost+found  opt	run   srv   usr  vmlinuz.old
show the home content:
liuxg  root.ini
liuxg@liuxg:~/snappy/desktop/helloworld-classic$ ls /
bin    core  home            lib         media  proc  sbin  sys  var
boot   dev   initrd.img      lib64       mnt    root  snap  tmp  vmlinuz
cdrom  etc   initrd.img.old  lost+found  opt    run   srv   usr  vmlinuz.old



The bin/evil script tries to write outside the snap's own space; because classic snaps are not confined, the write succeeds and the final line is printed:

#!/bin/sh
set -e
echo "Hello Evil World!"

echo "This example demonstrates the app confinement"
echo "You should see a permission denied error next"

echo "Haha" > /var/tmp/myevil.txt

echo "If you see this line the confinement is not working correctly, please file a bug"

liuxg@liuxg:~/snappy/desktop/helloworld-classic$ hello.evil
Hello Evil World!
This example demonstrates the app confinement
You should see a permission denied error next
If you see this line the confinement is not working correctly, please file a bug



Firefox snapcraft.yaml

name: firefox-snap
version: '0.1'
summary: "A Firefox snap"
description: "Firefox in a classic confined snap"

grade: devel
confinement: classic

apps:
 firefox:
   command: firefox
   aliases: [firefox]

parts:
 firefox:
  plugin: dump
  source-type: tar


By UbuntuTouch, posted 2017/1/6 13:48

Read more

We know that in one snap package we can define any number of apps. For desktop applications, how do we give each app its own icon and .desktop file? In today's article we'll show how to do this. Note in particular that this new feature is only available in snapcraft 2.25+.



liuxg@liuxg:~/snappy/desktop/helloworld-desktop$ tree -L 3
├── bin
│   ├── createfile
│   ├── createfiletohome
│   ├── echo
│   ├── env
│   ├── evil
│   ├── sh
│   └── writetocommon
├── echo.desktop
├── setup
│   └── gui
│       ├── echo.png
│       ├── helloworld.desktop
│       └── helloworld.png
└── snapcraft.yaml



[Desktop Entry]
GenericName=Hello world
Comment=A hello world Ubuntu Desktop




The snapcraft.yaml is as follows:

name: hello-xiaoguo
version: "1.0"
summary: The 'hello-world' of snaps
description: |
    This is a simple snap example that includes a few interesting binaries
    to demonstrate snaps and their confinement.
    * hello-world.env  - dump the env of commands run inside app sandbox
    * hello-world.evil - show how snappy sandboxes binaries
    * hello-world.sh   - enter interactive shell that runs in app sandbox
    * hello-world      - simply output text
grade: stable
confinement: strict
type: app  #it can be gadget or framework

apps:
 env:
   command: bin/env
 evil:
   command: bin/evil
 sh:
   command: bin/sh
 echo:
   command: bin/echo
   desktop: usr/share/applications/echo.desktop
 createfile:
   command: bin/createfile
 createfiletohome:
   command: bin/createfiletohome
 writetocommon:
   command: bin/writetocommon

plugs:
    home:
        interface: home

parts:
 hello:
  plugin: dump
  source: .
  organize:
    echo.desktop: usr/share/applications/echo.desktop


The key part for the desktop integration is the definition of the echo app:

 echo:
   command: bin/echo
   desktop: usr/share/applications/echo.desktop



And the echo.desktop file itself contains something like this:

[Desktop Entry]
Name=Echo
GenericName=Hello world
Comment=A hello world Ubuntu Desktop
Exec=hello-xiaoguo.echo
Icon=${SNAP}/meta/gui/echo.png
Type=Application

Here it specifies its own executable and its own icon. We package our application and install it. In the Ubuntu Desktop dash we can see:

运行"Hello World"应用显示:


By UbuntuTouch, posted 2017/1/23 10:43

Read more