Canonical Voices

Dustin Kirkland

If you haven't heard about last week's Dirty COW vulnerability, I hope all of your Linux systems are automatically patching themselves...


Why?  Because every single Linux-based phone, router, modem, tablet, desktop, PC, server, virtual machine, and absolutely everything in between -- including all versions of Ubuntu since 2007 -- was vulnerable to this face-palming critical security vulnerability.

Any non-root local user of a vulnerable system can easily exploit the vulnerability and become the root user in a matter of a few seconds.  Watch...


Coincidentally, just before the vulnerability was published, we released the Canonical Livepatch Service for Ubuntu 16.04 LTS.  The thousands of users who enabled canonical-livepatch on their Ubuntu 16.04 LTS systems within those first few hours received and applied the fix for Dirty COW, automatically, in the background, and without rebooting!

If you haven't already enabled the Canonical Livepatch Service on your Ubuntu 16.04 LTS systems, you should really consider doing so, with 3 easy steps:
  1. Go to https://ubuntu.com/livepatch and retrieve your livepatch token
  2. Install the canonical-livepatch snap
    $ sudo snap install canonical-livepatch 
  3. Enable the service with your token
    $ sudo canonical-livepatch enable [TOKEN]
And you’re done! You can check the status at any time using:

$ canonical-livepatch status --verbose

Let's retry that same vulnerability, on the same system, but this time, having been livepatched...


Aha!  Thwarted!

So that's the Ubuntu 16.04 LTS kernel space...  What about userspace?  Most of the other recent, branded vulnerabilities (Heartbleed, ShellShock, CRIME, BEAST) have been critical vulnerabilities in userspace packages.

As of Ubuntu 16.04 LTS, the unattended-upgrades package is now part of the default package set, so you should already have it installed on your Ubuntu desktops and servers.  If you don't already have it installed, you can install it with:

$ sudo apt install unattended-upgrades

And moreover, as of Ubuntu 16.04 LTS, the unattended-upgrades package automatically downloads and installs important security updates once per day, automatically patching critical security vulnerabilities and keeping your Ubuntu systems safe by default.  Older versions of Ubuntu (or Ubuntu systems that upgraded to 16.04) might need to enable this behavior using:

$ sudo dpkg-reconfigure unattended-upgrades
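
If you want to check what's enabled, the daily behavior is typically controlled by the APT periodic settings in /etc/apt/apt.conf.d/20auto-upgrades. On a system with automatic security updates turned on, that file should contain something like this (a typical example rather than the only valid configuration):

$ cat /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";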


With that combination enabled -- (1) automatic livepatches to your kernel, plus (2) automatic application of security package updates -- Ubuntu 16.04 LTS is the most secure Linux distribution to date.  Period.

Mooooo,
:-Dustin

Read more
Stéphane Graber

LXD logo

Introduction

When LXD 2.0 shipped with Ubuntu 16.04, LXD networking was pretty simple. You could either use that “lxdbr0” bridge that “lxd init” would have you configure, provide your own or just use an existing physical interface for your containers.

While this certainly worked, it was a bit confusing because most of that bridge configuration happened outside of LXD in the Ubuntu packaging. Those scripts could only support a single bridge and none of this was exposed over the API, making remote configuration a bit of a pain.

That was all until LXD 2.3 when LXD finally grew its own network management API and command line tools to match. This post is an attempt at an overview of those new capabilities.

Basic networking

Right out of the box, LXD 2.3 comes with no network defined at all. “lxd init” will offer to set one up for you and attach it to all new containers by default, but let’s do it by hand to see what’s going on under the hood.

To create a new network with a random IPv4 and IPv6 subnet and NAT enabled, just run:

stgraber@castiana:~$ lxc network create testbr0
Network testbr0 created

You can then look at its config with:

stgraber@castiana:~$ lxc network show testbr0
name: testbr0
config:
 ipv4.address: 10.150.19.1/24
 ipv4.nat: "true"
 ipv6.address: fd42:474b:622d:259d::1/64
 ipv6.nat: "true"
managed: true
type: bridge
usedby: []

If you don’t want those auto-configured subnets, you can go with:

stgraber@castiana:~$ lxc network create testbr0 ipv6.address=none ipv4.address=10.0.3.1/24 ipv4.nat=true
Network testbr0 created

Which will result in:

stgraber@castiana:~$ lxc network show testbr0
name: testbr0
config:
 ipv4.address: 10.0.3.1/24
 ipv4.nat: "true"
 ipv6.address: none
managed: true
type: bridge
usedby: []

Having a network created and running won’t do you much good if your containers aren’t using it.
To have your newly created network attached to all containers, you can simply do:

stgraber@castiana:~$ lxc network attach-profile testbr0 default eth0

To attach a network to a single existing container, you can do:

stgraber@castiana:~$ lxc network attach testbr0 my-container eth0

Now, let's say you have openvswitch installed on that machine and want to convert that bridge to an OVS bridge; just change the driver property:

stgraber@castiana:~$ lxc network set testbr0 bridge.driver openvswitch

If you want to do a bunch of changes all at once, “lxc network edit” will let you edit the network configuration interactively in your text editor.
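
For example, the following (a minimal illustration; the editor that opens depends on your environment) drops you into your default text editor with the same configuration you saw with “lxc network show”:

stgraber@castiana:~$ lxc network edit testbr0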

Static leases and port security

One of the nice things about having LXD manage the DHCP server for you is that it makes managing DHCP leases much simpler. All you need is a container-specific nic device and the right property set.

root@yak:~# lxc init ubuntu:16.04 c1
Creating c1
root@yak:~# lxc network attach testbr0 c1 eth0
root@yak:~# lxc config device set c1 eth0 ipv4.address 10.0.3.123
root@yak:~# lxc start c1
root@yak:~# lxc list c1
+------+---------+-------------------+------+------------+-----------+
| NAME |  STATE  |        IPV4       | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+-------------------+------+------------+-----------+
|  c1  | RUNNING | 10.0.3.123 (eth0) |      | PERSISTENT | 0         |
+------+---------+-------------------+------+------------+-----------+

And the same goes for IPv6, but with the “ipv6.address” property instead.
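
For example, assuming your bridge has an IPv6 subnet configured (the address below is purely illustrative and must fall within that subnet), the equivalent command would be:

root@yak:~# lxc config device set c1 eth0 ipv6.address fd42:474b:622d:259d::123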

Similarly, if you want to prevent your container from ever changing its MAC address or forwarding traffic for any other MAC address (as can happen with nesting), you can enable port security with:

root@yak:~# lxc config device set c1 eth0 security.mac_filtering true

DNS

LXD runs a DNS server on the bridge. On top of letting you set the DNS domain for the bridge (“dns.domain” network property), it also supports 3 different operating modes (“dns.mode”):

  • “managed” will have one DNS record per container, matching its name and known IP addresses. The container cannot alter this record through DHCP.
  • “dynamic” allows the containers to self-register in the DNS through DHCP. So whatever hostname the container sends during the DHCP negotiation ends up in DNS.
  • “none” is for a simple recursive DNS server without any kind of local DNS records.

The default mode is “managed”, which is typically the safest and most convenient as it provides DNS records for containers but doesn’t let them spoof each other’s records by sending fake hostnames over DHCP.
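
Both properties are set like any other network key. For example (the domain name here is just an illustration):

stgraber@castiana:~$ lxc network set testbr0 dns.domain lxd.example.net
stgraber@castiana:~$ lxc network set testbr0 dns.mode managed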

Using tunnels

On top of all that, LXD also supports connecting to other hosts using GRE or VXLAN tunnels.

An LXD network can have any number of tunnels attached to it, making it easy to create networks spanning multiple hosts. This is mostly useful for development, test and demo uses, with production environments usually preferring VLANs for that kind of segmentation.

So, say you want a basic “testbr0” network running with IPv4 and IPv6 on host “edfu” and want to spawn containers using it on host “djanet”. The easiest way to do that is by using a multicast VXLAN tunnel. This type of tunnel only works when both hosts are on the same physical segment.

root@edfu:~# lxc network create testbr0 tunnel.lan.protocol=vxlan
Network testbr0 created
root@edfu:~# lxc network attach-profile testbr0 default eth0

This defines a “testbr0” bridge on host “edfu” and sets up a multicast VXLAN tunnel on it for other hosts to join. In this setup, “edfu” will be the one acting as a router for that network, providing DHCP, DNS, etc.; the other hosts will just forward traffic over the tunnel.

root@djanet:~# lxc network create testbr0 ipv4.address=none ipv6.address=none tunnel.lan.protocol=vxlan
Network testbr0 created
root@djanet:~# lxc network attach-profile testbr0 default eth0

Now you can start containers on either host and see them get IPs from the same address pool and communicate directly with each other through the tunnel.

As mentioned earlier, this uses multicast, which usually won’t do you much good when crossing routers. For those cases, you can use VXLAN in unicast mode or a good old GRE tunnel.

To join another host using GRE, first configure the main host with:

root@edfu:~# lxc network set testbr0 tunnel.nuturo.protocol gre
root@edfu:~# lxc network set testbr0 tunnel.nuturo.local 172.17.16.2
root@edfu:~# lxc network set testbr0 tunnel.nuturo.remote 172.17.16.9

And then the “client” host with:

root@nuturo:~# lxc network create testbr0 ipv4.address=none ipv6.address=none tunnel.edfu.protocol=gre tunnel.edfu.local=172.17.16.9 tunnel.edfu.remote=172.17.16.2
Network testbr0 created
root@nuturo:~# lxc network attach-profile testbr0 default eth0

If you’d rather use vxlan, just do:

root@edfu:~# lxc network set testbr0 tunnel.nuturo.id 10
root@edfu:~# lxc network set testbr0 tunnel.nuturo.protocol vxlan

And:

root@nuturo:~# lxc network set testbr0 tunnel.edfu.id 10
root@nuturo:~# lxc network set testbr0 tunnel.edfu.protocol vxlan

The tunnel id is required here to avoid conflicting with the already configured multicast vxlan tunnel.
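
If you want to confirm that the tunnel settings took effect, you can always inspect the network again on either host:

root@edfu:~# lxc network show testbr0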

And that’s how easy it is to set up cross-host networking with recent LXD!

Conclusion

LXD now makes it very easy to define anything from a simple single-host network to a very complex cross-host network for thousands of containers. It also makes it very simple to define a new network just for a few containers or add a second device to a container, connecting it to a separate private network.

While this post goes through most of the different features we support, there are quite a few more knobs that can be used to fine tune the LXD network experience.
A full list can be found here: https://github.com/lxc/lxd/blob/master/doc/configuration.md

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Stéphane Graber

This is the eleventh blog post in this series about LXD 2.0.

LXD logo

Introduction

First of all, sorry for the delay. It took quite a long time before I finally managed to get all of this going. My first attempts used devstack, which ran into a number of issues that had to be resolved. Yet even after all that, I still wasn’t able to get networking going properly.

I finally gave up on devstack and tried “conjure-up” to deploy a full Ubuntu OpenStack using Juju in a pretty user friendly way. And it finally worked!

So below is how to run a full OpenStack, using LXD containers instead of VMs and running all of this inside a LXD container (nesting!).

Requirements

This post assumes you’ve got a working LXD setup providing containers with network access, and that you have a pretty beefy CPU, around 50GB of space for the container to use and at least 16GB of RAM.

Remember, we’re running a full OpenStack here, this thing isn’t exactly light!

Setting up the container

OpenStack is made of a lot of different components doing a lot of different things. Some require additional privileges, so to make our lives easier we’ll use a privileged container.

We’ll configure that container to support nesting, pre-load all the required kernel modules and allow it access to /dev/mem (as is apparently needed).

Please note that this means that most of the security benefits of LXD containers are effectively disabled for that container. However, the containers that will be spawned by OpenStack itself will be unprivileged and use all the normal LXD security features.

lxc launch ubuntu:16.04 openstack -c security.privileged=true -c security.nesting=true -c "linux.kernel_modules=iptable_nat, ip6table_nat, ebtables, openvswitch"
lxc config device add openstack mem unix-char path=/dev/mem

There is a small bug in LXD where it attempts to load kernel modules that have already been loaded on the host. This has been fixed in LXD 2.5 and will be fixed in LXD 2.0.6, but until then it can be worked around with:

lxc exec openstack -- ln -s /bin/true /usr/local/bin/modprobe

Then we need to add a couple of PPAs and install conjure-up, the deployment tool we’ll use to get OpenStack going.

lxc exec openstack -- apt-add-repository ppa:conjure-up/next -y
lxc exec openstack -- apt-add-repository ppa:juju/stable -y
lxc exec openstack -- apt update
lxc exec openstack -- apt dist-upgrade -y
lxc exec openstack -- apt install conjure-up -y

And the last setup step is to configure LXD networking inside the container.
Answer with the default for all questions, except for:

  • Use the “dir” storage backend (“zfs” doesn’t work in a nested container)
  • Do NOT configure IPv6 networking (conjure-up/juju don’t play well with it)
lxc exec openstack -- lxd init

And that’s it for the container configuration itself, now we can deploy OpenStack!

Deploying OpenStack with conjure-up

As mentioned earlier, we’ll be using conjure-up to deploy OpenStack.
This is a nice, user-friendly tool that interfaces with Juju to deploy complex services.

Start it with:

lxc exec openstack -- sudo -u ubuntu -i conjure-up
  • Select “OpenStack with NovaLXD”
  • Then select “localhost” as the deployment target (uses LXD)
  • And hit “Deploy all remaining applications”

This will now deploy OpenStack. The whole process can take well over an hour depending on what kind of machine you’re running this on. You’ll see all services getting a container allocated, then getting deployed and finally interconnected.

Conjure-Up deploying OpenStack

Once the deployment is done, a few post-install steps will appear. These will import some initial images, set up SSH authentication, configure networking and finally give you the IP address of the dashboard.

Access the dashboard and spawn a container

The dashboard runs inside a container, so you can’t just hit it from your web browser.
The easiest way around this is to set up a NAT rule with:

lxc exec openstack -- iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to <IP>

Where “<ip>” is the dashboard IP address conjure-up gave you at the end of the installation.

You can now grab the IP address of the “openstack” container (from “lxc info openstack”) and point your web browser to: http://<container ip>/horizon
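
If you just want the addresses, one quick way to filter them out of that output (assuming the container’s network device is named eth0, as in a default setup) is:

lxc info openstack | grep eth0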

This can take a few minutes to load the first time around. Once the login screen is loaded, enter the default login and password (admin/openstack) and you’ll be greeted by the OpenStack dashboard!

oslxd-dashboard

You can now head to the “Project” tab on the left and the “Instances” page. To start a new instance using nova-lxd, click on “Launch instance”, select what image you want, network, … and your instance will get spawned.

Once it’s running, you can assign it a floating IP which will let you reach your instance from within your “openstack” container.

Conclusion

OpenStack is a pretty complex piece of software; it’s also not something you really want to run at home or on a single server. But it’s certainly interesting to be able to do it anyway, keeping everything contained to a single container on your machine.

Conjure-Up is a great tool to deploy such complex software, using Juju behind the scenes to drive the deployment, using LXD containers for every individual service and finally for the instances themselves.

It’s also one of the very few cases where multiple levels of container nesting actually make sense!

Extra information

The conjure-up website can be found at: http://conjure-up.io
The Juju website can be found at: http://www.ubuntu.com/cloud/juju

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Alan Griffiths

MirAL-0.3

There’s a new MirAL release (0.3.0) available in ‘Zesty Zapus’ (Ubuntu 17.04). MirAL is a project aimed at simplifying the development of Mir servers and particularly providing a stable ABI and sensible default behaviors.

Unsurprisingly, given the project’s original goal, the ABI is unchanged. The changes in 0.3.0 fall into three categories:

  1. bugfixes;
  2. enabling keymap in newer Mir versions; and,
  3. additional features for shell developers to use.

Bugfixes

#1631958 Crash after closing Qt dialog

#1632325, #1633052 tooltips positioned wrong with mir-0.24

#1625849 [Xmir] Alt+` switch between different X11-apps not just windows.

Added miral-xrun as a better way to use Xmir

(unnumbered) miral-shell splash screen should be fullscreen.

(unnumbered) deduplicate logging of WindowSpecification::top_left

Enabling Keyboard Map in newer Mir versions

A new class miral::Keymap allows the keyboard map to be specified either programmatically or (by default) on the command line. Being in the UK I can get a familiar keyboard layout like this:

miral-shell --keymap gb

The class is also provided on Mir versions prior to 0.24.1, but does nothing.

Additional Features For Shell Developers To Use

#1629349 Shell wants way to associate initially requested window creation state with the window later created.

Shell code can now set a userdata property on the WindowSpecification in place_new_surface() and this is transferred to the WindowInfo.

Added miral/version.h to permit compile-time feature detection. If you want to detect different versions of MirAL at compile time you can, for example, write:

#include <miral/version.h>
#if MIRAL_VERSION >= MIR_VERSION_NUMBER(0, 3, 0)
#include <miral/keymap.h>
#endif

A convenient overload of WindowManagerTools::modify_window() that doesn’t require the WindowInfo has also been added.

Read more
Joseph Williams

Working to make Juju more accessible

In the middle of July the Juju team got together to work towards making Juju more accessible. For now the aim was to reach Level AA compliance, with the intention of reaching AAA in the future.

We started by reading through the W3C accessibility guidelines and distilling each principle into sentences that made sense to us as a team and documenting this into a spreadsheet.

We then created separate columns for how this would affect the main areas across Juju as a product: namely, static pages on jujucharms.com, the GUI and the inspector element within the GUI.

 

 

image02
GUI live on jujucharms.com

image04
Inspector within the GUI

image03
Example of static page content from the homepage

image00
The Juju team working through the accessibility guidelines

 

 

Tackling this as a team meant that we were all on the same page as to which areas of the Juju GUI were affected by not being AA compliant and how we could work to improve it.

We also discussed the amount of design effort needed for each of the areas that aren’t AA compliant and how long we thought it would take to make improvements.

You can have a look at the spreadsheet we created to help us track the changes that we need to make to Juju to make it more accessible:

 

 

image01

Spreadsheet created to track changes and improvements needed to be done

 

 

This workflow has helped us manage and scope the tasks ahead and clear up uncertainties that we had about which tasks are done and which requirements need to be met to achieve the level of accessibility we are aiming for.

 

 

Read more
Grazina Borosko

Yakkety Yak 16.10 is released and you can now download the new wallpaper by clicking here. It’s the latest in the set for the Ubuntu 2016 releases, following Xenial Xerus. You can read about our wallpaper visual design process here.

Ubuntu 16.10 Yakkety Yak

yakkety_yak_wallpaper_4096x2304

Ubuntu 16.10 Yakkety Yak (light version)

yakkety_yak_wallpaper_4096x2304_grey_version

Read more
facundo

Ni una menos


Today is a historic day. Women all over the country, and in other Latin American countries, are taking to the streets to fight for their right to be treated as human beings.

I felt like writing something here, but the truth is that if you want to know more about this (much better written and handled than I could manage) you can look at the Twitter accounts of Luciana Peker, Paula, Caro, or so many other people who are involved in this far more than I am.

But then I came across this post by V, which reproduces something so good that I wanted to share it here (it appears to be anonymous; I couldn't find out who to credit...).


And why not "Ni uno menos" ("not one man less")?

Because we men have the privilege of walking the streets calmly, without fear of being catcalled with obscene words and repulsive expressions. We are spared the disgust of being groped on public transport or having someone masturbate at us from a van.

Because nobody criticises the way we dress, or comments on how short our shorts are, or accuses us of "going around teasing" if our boxers show.

Because it never crosses our minds that we could go out dancing and end up raped because someone slipped something into our drinks, nor do we have to fend off dozens of creeps all night long who think they own us and that we must obey and be submissive.

Because, apparently, society thinks garbage bags don't suit us as well as they suit them.

Because when we're kids nobody gives us brooms, baby dolls or toy kitchens so we can "start practising".

Because we have the privilege of having mum cook for us, our sisters wash the dishes and dad invite us onto the sofa to watch the match in comfort.

Because our friends don't have to let us know they got home safely; we simply take it for granted.

Because we have the privilege of not being criticised for sleeping with as many people as we want (in fact, the more there are, the cooler we are).

Because they are the hysterical ones.

Because we are smarter and even get paid more for doing the same job.

Because if I get promoted at work it's because of my ability, not because I slept with someone.

Because if we don't want to be dads we just walk away, wash our hands of it, and that's that. They want to abort because they're murderers who won't take responsibility for what's theirs, which is to be mothers above all else. Because they didn't take precautions, and that part isn't our problem.

Because I'm a real man and I mock trans women, use them and kill them to reaffirm my masculinity.

Because if I like men, nobody says it's because I haven't yet been with the right woman.

Because I know more about politics and know how to handle myself better in that world. Because if she makes it to Congress it's because the quota had to be filled or... guess what? Yes: she slept with someone.

Because I don't fetch the price they do on the prostitution market, and I'm not afraid of being kidnapped and ending up in a brothel doing things with my body that I don't want to do. Because I go to the brothel and I'm a champ, while being a whore is a disgrace.

Because if I mess up, a bouquet of flowers and some chocolates on Women's Day turn me into a chivalrous gentleman, a real man.

Simply: because you have no idea what it is to be them in a world as unequal as this one.

Let's make it very clear: we still can't talk about "not one man less" because we are full of privileges that we should question a thousand and one times before talking about exaggerated man-hating "feminazis" or about "equalism". The day we start to think about a new masculinity, stop raising heteronormative, patriarchal little machos and have the debate this subject deserves, the day they stop being killed and humiliated, then we will be able to talk differently.

Machismo attacks all of us in general, but it kills them in particular.

Don't be complicit.

Enough of macho violence.

Read more
Alan Griffiths

adding wallpaper to egmde

Before we begin

My previous egmde post described how to build and install MirAL. If you’re using Ubuntu 16.10 (Yakkety) then that is no longer necessary as MirAL is in the archive. All you need is:

$ sudo apt install libmiral-dev

Otherwise, you can follow the instructions there.

The egmde source for this article is available at the same place:

$ git clone https://github.com/AlanGriffiths/egmde.git
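
If you want to build it right away, the project follows a conventional CMake workflow. A quick sketch (assuming libmiral-dev and the usual build tools such as cmake and g++ are already installed):

$ cd egmde
$ mkdir build && cd build
$ cmake ..
$ make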

The example code

In this article we add some simple, Ubuntu orange wallpaper. The previous egmde.cpp file is largely unchanged; we just add a couple of headers and update the main program, which looked like this:

int main(int argc, char const* argv[])
{
    miral::MirRunner runner{argc, argv};

    return runner.run_with(
        {
            set_window_managment_policy<ExampleWindowManagerPolicy>()
        });
}

Now it is:

int main(int argc, char const* argv[])
{
    miral::MirRunner runner{argc, argv};
    Wallpaper wallpaper;
    runner.add_stop_callback([&] { wallpaper.stop(); });
    return runner.run_with(
        {
            miral::StartupInternalClient{"wallpaper", std::ref(wallpaper)},
            set_window_managment_policy<ExampleWindowManagerPolicy>()
        });
}

The Wallpaper class is what we’ll be implementing here; StartupInternalClient starts it as an in-process Mir client, and the verbose lambda incantations work around the limitations of the current MirAL implementation.

The Wallpaper class uses a simple “Worker” class to pass work off to a separate thread. I’ll only show the header here as the methods are self-explanatory:

class Worker
{
public:
    ~Worker();
    void start_work();
    void enqueue_work(std::function<void()> const& functor);
    void stop_work();

};

The Wallpaper class

class Wallpaper : Worker
{
public:
    // These operators are the protocol for an "Internal Client"
    void operator()(miral::toolkit::Connection c) { start(c); }
    void operator()(std::weak_ptr<mir::scene::Session> const&){ }

    void start(miral::toolkit::Connection connection);
    void stop();
private:
    std::mutex mutable mutex;
    miral::toolkit::Connection connection;
    miral::toolkit::Surface surface;
    void create_surface();
};

The start and stop methods are fairly self-explanatory:

void Wallpaper::start(miral::toolkit::Connection connection)
{
    {
        std::lock_guard<decltype(mutex)> lock{mutex};
        this->connection = connection;
    }
    enqueue_work([this]{ create_surface(); });
    start_work();
}
void Wallpaper::stop()
{
    {
        std::lock_guard<decltype(mutex)> lock{mutex};
        surface.reset();
        connection.reset();
    }
    stop_work();
}

Most of the work happens in the create_surface() method that creates a surface of a type that will never get focus (and therefore will never be raised above anything else):

void Wallpaper::create_surface()
{
    std::lock_guard<decltype(mutex)> lock{mutex};
    auto const spec = SurfaceSpec::for_normal_surface(
        connection, 100, 100, mir_pixel_format_xrgb_8888)
        .set_buffer_usage(mir_buffer_usage_software)
        .set_type(mir_surface_type_gloss)
        .set_name("wallpaper");

    mir_surface_spec_set_fullscreen_on_output(spec, 0);

    surface = spec.create_surface();
    uint8_t pattern[4] = { 0x14, 0x48, 0xDD, 0xFF };

    MirGraphicsRegion graphics_region;
    MirBufferStream* buffer_stream = mir_surface_get_buffer_stream(surface);
    mir_buffer_stream_get_graphics_region(buffer_stream, &graphics_region);

    render_pattern(&graphics_region, pattern);
    mir_buffer_stream_swap_buffers_sync(buffer_stream);
}

This is unsophisticated, but the point is that the client API is available to do whatever rendering we like.

Now, when we run egmde we no longer get a boring black rectangle. Now we get an Ubuntu orange one.

Read more
Dustin Kirkland

Introducing the Canonical Livepatch Service
Howdy!

Ubuntu 16.04 LTS’s 4.4 Linux kernel includes an important new security capability in Ubuntu -- the ability to modify the running Linux kernel code, without rebooting, through a mechanism called kernel livepatch.

Today, Canonical has publicly launched the Canonical Livepatch Service -- an authenticated, encrypted, signed stream of Linux livepatches that apply to the 64-bit Intel/AMD architecture of the Ubuntu 16.04 LTS (Xenial) Linux 4.4 kernel, addressing the highest and most critical security vulnerabilities, without requiring a reboot in order to take effect.  This is particularly amazing for Container hosts -- Docker, LXD, etc. -- as all of the containers share the same kernel, and thus all instances benefit.



I’ve tried to answer below some questions that you might have. If you have others, you’re welcome to add them in the comments below or on Twitter with the hashtag #Livepatch.

Retrieve your token from ubuntu.com/livepatch

Q: How do I enable the Canonical Livepatch Service?

A: Three easy steps, on a fully up-to-date 64-bit Ubuntu 16.04 LTS system.
  1. Go to https://ubuntu.com/livepatch and retrieve your livepatch token
  2. Install the canonical-livepatch snap
    $ sudo snap install canonical-livepatch
  3. Enable the service with your token
    $ sudo canonical-livepatch enable [TOKEN]
And you’re done! You can check the status at any time using:

$ canonical-livepatch status --verbose

      Q: What are the system requirements?

      A: The Canonical Livepatch Service is available for the generic and low latency flavors of the 64-bit Intel/AMD (aka, x86_64, amd64) builds of the Ubuntu 16.04 LTS (Xenial) kernel, which is a Linux 4.4 kernel. Canonical livepatches work on Ubuntu 16.04 LTS Servers and Desktops, on physical machines, virtual machines, and in the cloud. The safety, security, and stability firmly depend on unmodified Ubuntu kernels and network access to the Canonical Livepatch Service (https://livepatch.canonical.com:443).  You will also need to apt update/upgrade to the latest version of snapd (at least 2.15).
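
      If you’re not sure which snapd version you have, one quick way to check and, if needed, update it is (the exact output will vary between releases):

      $ apt policy snapd
      $ sudo apt update && sudo apt install snapd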

      Q: What about other architectures?

      A: The upstream Linux livepatch functionality is currently limited to the 64-bit x86 architecture, at this time. IBM is working on support for POWER8 and s390x (LinuxOne mainframe), and there’s also active upstream development on ARM64, so we do plan to support these eventually. The livepatch plumbing for 32-bit ARM and 32-bit x86 are not under upstream development at this time.

      Q: What about other flavors?

      A: We are providing the Canonical Livepatch Service for the generic and low latency (telco) flavors of the Linux kernel at this time.

      Q: What about other releases of Ubuntu?

      A: The Canonical Livepatch Service is provided for Ubuntu 16.04 LTS’s Linux 4.4 kernel. Older releases of Ubuntu will not work, because they’re missing the Linux kernel support. Interim releases of Ubuntu (e.g. Ubuntu 16.10) are targeted at developers and early adopters, rather than Long Term Support users or systems that require maximum uptime.  We will consider providing livepatches for the HWE kernels in 2017.

      Q: What about derivatives of Ubuntu?

      A: Canonical livepatches are fully supported on the 64-bit Ubuntu 16.04 LTS Desktop, Cloud, and Server operating systems. On other Ubuntu derivatives, your mileage may vary! These are not part of our automated continuous integration quality assurance testing framework for Canonical Livepatches. Canonical Livepatch safety, security, and stability will firmly depend on unmodified Ubuntu generic kernels and network access to the Canonical Livepatch Service.

      Q: How does Canonical test livepatches?

      A: Every livepatch is rigorously tested in Canonical's in-house CI/CD (Continuous Integration / Continuous Delivery) quality assurance system, which tests hundreds of combinations of livepatches, kernels, hardware, physical machines, and virtual machines.  Once a livepatch passes CI/CD and regression tests, it's rolled out on a canary testing basis, first to a tiny percentage of the Ubuntu Community users of the Canonical Livepatch Service. Based on the success of that microscopic rollout, a moderate rollout follows.  And assuming those also succeed, the livepatch is delivered to all free Ubuntu Community and paid Ubuntu Advantage users of the service.  Systemic failures are automatically detected and raised for inspection by Canonical engineers.  Ubuntu Community users of the Canonical Livepatch Service who want to eliminate the small chance of being randomly chosen as a canary should enroll in the Ubuntu Advantage program (starting at $12/month).

      Q: What kinds of updates will be provided by the Canonical Livepatch Service?

      A: The Canonical Livepatch Service is intended to address high and critical severity Linux kernel security vulnerabilities, as identified by Ubuntu Security Notices and the CVE database. Note that there are some limitations to the kernel livepatch technology -- some Linux kernel code paths cannot be safely patched while running. We will do our best to supply Canonical Livepatches for high and critical vulnerabilities in a timely fashion whenever possible. There may be occasions when the traditional kernel upgrade and reboot might still be necessary. We’ll communicate that clearly through the usual mechanisms -- USNs, Landscape, Desktop Notifications, Byobu, /etc/motd, etc.

      Q: What about non-security bug fixes, stability, performance, or hardware enablement updates?

      A: Canonical will continue to provide Linux kernel updates addressing bugs, stability issues, performance problems, and hardware compatibility on our usual cadence -- about every 3 weeks. These updates can be easily applied using ‘sudo apt update; sudo apt upgrade -y’, using the Desktop “Software Updates” application, or Landscape systems management. These standard (non-security) updates will still require a reboot, as they always have.

      Q: Can I rollback a Canonical Livepatch?

      A: Currently rolling-back/removing an already inserted livepatch module is disabled in Linux 4.4. This is because we need a way to determine if we are currently executing inside a patched function before safely removing it. We can, however, safely apply new livepatches on top of each other and even repatch functions over and over.

      Q: What about low and medium severity CVEs?

      A: We’re currently focusing our Canonical Livepatch development and testing resources on high and critical security vulnerabilities, as determined by the Ubuntu Security Team.  We'll livepatch other CVEs opportunistically.

      Q: Why are Canonical Livepatches provided as a subscription service?

      A: The Canonical Livepatch Service provides a secure, encrypted, authenticated connection, to ensure that only properly signed livepatch kernel modules -- and most importantly, the right modules -- are delivered directly to your system, with extremely high quality testing wrapped around it.

      Q: But I don’t want to buy UA support!

      A: You don’t have to! Canonical is providing the Canonical Livepatch Service to community users of Ubuntu, at no charge for up to 3 machines (desktop, server, virtual machines, or cloud instances). A randomly chosen subset of the free users of Canonical Livepatches will receive their Canonical Livepatches slightly earlier than the rest of the free users or UA users, as a lightweight canary testing mechanism, benefiting all Canonical Livepatch users (free and UA). Once those canary livepatches apply safely, all Canonical Livepatch users will receive their live updates.

      Q: But I don’t have an Ubuntu SSO account!

      A: An Ubuntu SSO account is free, and provides services similar to Google, Microsoft, and Apple for Android/Windows/Mac devices, respectively. You can create your Ubuntu SSO account here.

      Q: But I don’t want to log in to ubuntu.com!

      A: You don’t have to! Canonical Livepatch is absolutely not required to maintain the security of any Ubuntu desktop or server! You may continue to freely and anonymously ‘sudo apt update; sudo apt upgrade; sudo reboot’ as often as you like, and receive all of the same updates, and simply reboot after kernel updates, as you always have with Ubuntu.

      Q: But I don't have Internet access to livepatch.canonical.com:443!

      A: You should think of the Canonical Livepatch Service much like you think of Netflix, Pandora, or Dropbox.  It's an Internet streaming service for security hotfixes for your kernel.  You have access to the stream of bits when you can connect to the service over the Internet.  On the flip side, your machines are already thoroughly secured, since they're so heavily firewalled off from the rest of the world!

      Q: Where’s the source code?

      A: The source code of livepatch modules can be found here.  The source code of the canonical-livepatch client is part of Canonical's Landscape system management product and is commercial software.

      Q: What about Ubuntu Core?

      A: Canonical Livepatches for Ubuntu Core are on the roadmap, and may be available in late 2016, for 64-bit Intel/AMD architectures. Canonical Livepatches for ARM-based IoT devices depend on upstream support for livepatches.

      Q: How does this compare to Oracle Ksplice, RHEL Live Patching and SUSE Live Patching?

      A: While the concepts are largely the same, the technical implementations and the commercial terms are very different:

      • Oracle Ksplice uses its own technology which is not in upstream Linux.
      • RHEL and SUSE currently use their own homegrown kpatch/kgraft implementations, respectively.
      • Canonical Livepatching uses the upstream Linux Kernel Live Patching technology.
      • Ksplice is free, but unsupported, for Ubuntu Desktops, and only available for Oracle Linux and RHEL servers with an Oracle Linux Premier Support license ($2299/node/year).
      • It’s a little unclear how to subscribe to RHEL Kernel Live Patching, but it appears that you need to first be a RHEL customer, and then enroll in the SIG (Special Interests Group) through your TAM (Technical Account Manager), which requires Red Hat Enterprise Linux Server Premium Subscription at $1299/node/year.  (I'm happy to be corrected and update this post)
      • SUSE Live Patching is available as an add-on to SUSE Linux Enterprise Server 12 Priority Support subscription at $1,499/node/year, but does come with a free music video.
      • Canonical Livepatching is available for every Ubuntu Advantage customer, starting at our entry level UA Essential for $150/node/year, and available for free to community users of Ubuntu.

      Q: What happens if I run into problems/bugs with Canonical Livepatches?

      A: Ubuntu Advantage customers will file a support request at support.canonical.com where it will be serviced according to their UA service level agreement (Essential, Standard, or Advanced). Ubuntu community users will file a bug report on Launchpad and we'll service it on a best effort basis.

      Q: Why does canonical-livepatch client/server have a proprietary license?

      A: The canonical-livepatch client is part of the Landscape family of tools available to Canonical support customers. We are enabling free access to the Canonical Livepatch Service for Ubuntu community users as a mark of our appreciation for the broader Ubuntu community, and in exchange for occasional, automatic canary testing.

      Q: How do I build my own livepatches?

      A: It’s certainly possible for you to build your own Linux kernel live patches, but it requires considerable skill, time, and computing power to produce, and even more effort to comprehensively test. Rest assured that this is the real value of using the Canonical Livepatch Service! That said, Chris Arges blogged a howto for the curious a while back:

      http://chrisarges.net/2015/09/21/livepatch-on-ubuntu.html

      Q: How do I get notifications of which CVEs are livepatched and which are not?

      A: You can, at any time, query the status of the canonical-livepatch daemon using: ‘canonical-livepatch status --verbose’. This command will show any livepatches successfully applied, any outstanding/unapplied livepatches, and any error conditions. Moreover, you can monitor the Ubuntu Security Notices RSS feed and the ubuntu-security-announce mailing list.

      Q: Isn't livepatching just a big ole rootkit?

      A: Canonical Livepatches inject kernel modules to replace sections of binary code in the running kernel. This requires the CAP_SYS_MODULE capability. This is required to modprobe any module into the Linux kernel. If you already have that capability (root does, by default, on Ubuntu), then you already have the ability to arbitrarily modify the kernel, with or without Canonical Livepatches. If you’re an Ubuntu sysadmin and you want to disable module loading (and thereby also disable Canonical Livepatches), simply ‘echo 1 | sudo tee /proc/sys/kernel/modules_disabled’.

      Keep the uptime!
      :-Dustin

      Read more
      Stéphane Graber

      LXD logo

      What are snaps?

      Snaps were introduced a little while back as a cross-distro package format allowing upstreams to easily generate and distribute packages of their application in a very consistent way, with support for transactional upgrade and rollback as well as confinement through AppArmor and Seccomp profiles.

      It’s a packaging format that’s designed to be upstream friendly. Snaps effectively shift the packaging and maintenance burden from the Linux distribution to the upstream, making the upstream responsible for updating their packages and taking action when a security issue affects any of the code in their package.

      The upside being that upstream is now in complete control of what’s in the package and can distribute a build of the software that matches their test environment and do so within minutes of the upstream release.

      Why distribute LXD as a snap?

      We’ve always cared about making LXD available to everyone. It’s available for a number of Linux distributions already, with a few more actively working on packaging it.

      For Ubuntu, we have it in the archive itself, push frequent stable updates, maintain official backports in the archive and also maintain a number of PPAs to make our releases available to all Ubuntu users.

      Doing all that is a lot of work and it makes tracking down bugs that much harder as we have to care about a whole lot of different setups and combination of package versions.

      Over the next few months, we hope to move away from PPAs and some of our backports in favor of using our snap package. This will allow a much shorter turnaround time for new releases and give us more control on the runtime environment of LXD, making our lives easier when dealing with bugs.

      How to get the LXD snap?

      These instructions have only been tested on a fully up-to-date Ubuntu 16.04 LTS or Ubuntu 16.10 system with snapd installed. Please use a system that doesn’t already have LXD containers, as the LXD snap will not be able to take over existing containers.

      LXD snap example

      1. Make sure you don’t have a packaged version of LXD installed on your system.
        sudo apt remove --purge lxd lxd-client
      2. Create the “lxd” group and add yourself to it.
        sudo groupadd --system lxd
        sudo usermod -G lxd -a <username>
      3. Install LXD itself
        sudo snap install lxd

      This will get the current version of LXD from the “stable” channel.
      If your user wasn’t already part of the “lxd” group, you may now need to run:

      newgrp lxd

      Once installed, you can set it up and spawn your first container with:

      1. Configure the LXD daemon
        sudo lxd init
      2. Launch your first container
        lxd.lxc launch ubuntu:16.04 xenial

      Channels and updates

      The Ubuntu Snap store offers 4 different release “channels” for snaps:

      • stable
      • candidate
      • beta
      • edge

      For LXD, we currently use “stable”, “candidate” and “edge”.

      • “stable” contains the latest stable release of LXD.
      • “candidate” is a testing area for “stable”.
        We’ll push new releases there a couple of days before releasing to “stable”.
      • “edge” is the current state of our development tree.
        This channel is entirely automated with uploads triggered after the upstream CI confirms that the development tree looks good.

      You can switch between channels by using the “snap refresh” command:

      snap refresh lxd --edge

      This will cause your system to install the current version of LXD from the “edge” channel.

      Be careful when hopping channels though as LXD may break when moving back to an earlier version (going from edge to stable), especially when database schema changes occurred in between.

      Snaps automatically update, either on schedule (typically once a day) or through push notifications from the store. On top of that, you can force an update by running “snap refresh lxd”.
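
      For example, to see which revision you currently have installed and then force an immediate refresh, something like this should do (the exact output format depends on your snapd version):

      snap list lxd
      snap refresh lxd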

      Known limitations

      Those are all pretty major usability issues and will likely be showstoppers for a lot of people.
      We’re actively working with the Snappy team to get those issues addressed as soon as possible and will keep maintaining all our existing packages until such time as those are resolved.

      Extra information

      More information on snap packages can be found at: http://snapcraft.io
      Bug reports for the LXD snap: https://github.com/lxc/lxd-pkg-ubuntu/issues

      The main LXD website is at: https://linuxcontainers.org/lxd
      Development happens on Github at: https://github.com/lxc/lxd
      Mailing-list support happens on: https://lists.linuxcontainers.org
      IRC support happens in: #lxcontainers on irc.freenode.net
      Try LXD online: https://linuxcontainers.org/lxd/try-it

      PS: I have not forgotten about the remaining two posts in the LXD 2.0 series, the next post has been on hold for a while due to some issues with OpenStack/devstack.

      Read more
      Tom Macfarlane

      Ubuntu Core

      Recently the brand team designed new logos for Core and Ubuntu Core, both of which will replace the existing Snappy logo and bring consistency across all Ubuntu Core branding, online and in print.

       

      db_core_logo-aw

       

      Guidelines for use

      Core

      Use the Core logo when the Ubuntu logo or the word Ubuntu appears within the same field of vision. For example: web pages, exhibition stands, brochure text.

      Ubuntu Core

      Use the Ubuntu Core logo in stand alone circumstances where there is no existing or supporting Ubuntu branding or any mention of Ubuntu within text. For example: third-party websites or print collateral, social media sites, roll-up banners.

      The Ubuntu Core logo is also used for third-party branding.

      The design process

      Extensive design exploration was undertaken considering: logotype arrangement, font weight, roundel designs – exploring the ‘core’ idea, concentric circles and the letter ‘C’ – and how all the elements came together as a logo.

      Logotype

      Options for how the logotype/wordmark is presented:

      • Following the design style set when creating the Ubuntu brandmark
      • Core in a lighter weight, reduced space between Ubuntu and Core
      • Ubuntu in the lighter weight, emphasis on Core
      • Core on its own

       

      db_core_logotype

       

      Roundels

      Core, circles and the letter ‘C’

       


      Design exploration using concentric circles of varying line numbers, spacing and line weights. Some options incorporating the Circle of Friends as an underlying grid to determine specific angles.

      Circle of Friends

       

      Design exploration using the Circle of Friends – in its entirety and stripped down.

      Lock-up

       

      db_core_lock-up

      How the logotype and roundel design sit together.

      Artwork

      Full sets of Core and Ubuntu Core logo artwork are now available at design.ubuntu.com/downloads.

      Read more
      Inayaili de León Persson

      A week in Vancouver with the Landscape team

      Earlier this month Peter and I headed to Vancouver to participate in a week-long Landscape sprint.

      The main goals of the sprint were to review the work that had been done in the past 6 months, and plan for the following cycle.

      IRL

      Landscape is a totally distributed team, so having regular face-to-face time throughout the year is important in order to maintain team spirit and a sense of connection.

      It is also important for us, from the design team, to meet in person the people that we have to work with every day, and that ultimately will implement the designs we create.

      I thought it was interesting to hear the Landscape team discuss candidly how the previous cycle went, what went well and what could have been improved, and how every team member’s opinion was heard and taken into consideration for the following cycle.

       

      Landscape team discussing the previous cycle

       

      User interviews

      Peter and I took some time aside to interview some of the developers in 1-2-1 sessions, so they could talk us through what they thought could be improved in Landscape, and what worked well. As we talked to them, I wrote down key ideas on post-it notes and Peter wrote down more thorough notes on his laptop. At the end of the interviews, we collated the findings into a Trello board, to identify patterns and try to prioritise design improvements for the next cycle.

      The city

      But the week was not all work!

      Every day we went out for lunch (unlike most sprints which provide the usual hotel food). This allowed us to explore a little bit of the city and its great culinary offerings. It was a great way to get to know the Landscape team a little bit better outside of work.

       

      Lots of great food in Vancouver

       

      Vancouver also has really great coffee places, and, even though I’m more of a tea person, I made sure to go to a few of them during the week.

       

      Nice Vancouver coffee

       

      I took a few days off after the sprint, so had some time to explore Vancouver with my family. We even saw a TV show being filmed in one of our favourite coffee shops!

       

      Exploring Vancouver

       

      This was my first time in Canada, and I really enjoyed it: we had a great sprint and it was good to have some time to explore the city. Maybe I’ll be back some day!

      Read more
      Alan Griffiths

      There are a few “gotchas” in running X11 applications (via Xmir) on Mir servers so I’m sharing a short script to make it easier.

      The following script will work with the example servers from the “mir-demos” package, the miral-shell (from “miral-examples”) and my own egmde project. (With Unity8 there’s a little more to it but as there is existing “magic” in place for launching X11 applications I won’t bother to discuss it further.)

      The principal issue is that each Xmir session is seen as a single application by the Mir server, so we need to create an Xmir server for each application for everything to make sense. And that means each application needs a separate port to connect to its Xmir server.

      For this to work you need to have a Mir server running, and have Xmir installed.
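
      If you don’t have them yet, they are an apt install away. For example (the “mir-demos” and “miral-examples” packages are the ones mentioned above; “xmir” is the package name I’d expect for Xmir on Ubuntu 16.10, so adjust if yours differs):

      $ sudo apt install mir-demos miral-examples xmir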

      Here’s the script:

      $ cat ~/bin/Xmir-run
      #!/bin/bash
      port=0
      while [ -e "/tmp/.X11-unix/X${port}" ]; do
          let port+=1
      done
      
      Xmir -rootless :${port} & pid=$!
      DISPLAY=:${port} $*
      kill ${pid}

      The first part of this script finds an available port to run Xmir on.

      The next part starts an Xmir server in “rootless” mode and remembers the pid.

      Then we run the command passed to the script until it exits.

      Finally, we kill the Xmir server.

      Simple!
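
      For example, assuming you saved the script as ~/bin/Xmir-run and have an X11 application such as xterm installed, you can run it under the Mir server with:

      $ chmod +x ~/bin/Xmir-run
      $ ~/bin/Xmir-run xterm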

      Read more
      Dustin Kirkland

      My wife, Kimberly, and I watch Saturday Night Live religiously.  As in, we probably haven't missed a single episode since we started dating more than 12 years ago.  And in fact, we both watched our fair share of SNL before we had even met, going back to our teenage years.

      We were catching up on SNL's 42nd season premiere late this past Sunday night, after putting the kids to bed, when I was excited to see a hilarious sketch/parody of Mr. Robot.

      If SNL is my oldest TV favorite, Mr. Robot is certainly my newest!  Just wrapping its 2nd season, it's a brilliantly written, flawlessly acted, impeccably set techno drama series on USA.  I'm completely smitten, and the story seems to be just getting started!

      Okay, so Kim and I are watching a hilarious sketch where Leslie Jones asks Elliot to track down the person who recently hacked her social media accounts.  And, as always, I take note of what's going on in the background on the computer screen.  It's just something I do.  I love to try and spot the app, the OS, the version, identify the Linux kernel oops, etc., of anything on any computer screen on TV.

      At about the 1:32 mark of the SNL/Mr.Robot skit, there was something unmistakable on the left computer, just over actor Pete Davidson's right shoulder.  Merely a fraction of a second, and I recognized it instantly!  A dark terminal, split into a dozen sections.  A light grey border, with a thicker grey highlighting one split.  The green drip of text from The Matrix in one of the splits. A flashing, bouncing yellow audio wave in another.  An instant rearrangement of all of those windows each second.

      It was Byobu and Hollywood!  I knew it.  Kim didn't believe me at first, until I proved it ;-)

      A couple of years ago, after seeing a 007 film in the theater, I created a bit of silliness -- a joke of a program that could turn any Linux terminal into a James Bond caliber hacker screen.  The result is a package called hollywood, which any Ubuntu user can install and run by simply typing:

      $ sudo apt install hollywood
      $ hollywood

      And a few months ago, Hollywood found its way into an NBC News piece that took itself perhaps a little too seriously, as it drummed up a bit of fear around "Ransomware".

      But, far more appropriately, I'm absolutely delighted to see another NBC program -- Saturday Night Live -- using Hollywood exactly as intended -- for parody!

      Enjoy a few screenshots below...








      Cheers!
      :-Dustin

      Read more
      Alan Griffiths

      MirAL-0.2

      There’s a new MirAL release (0.2.0) available in Yakkety. MirAL is a project aimed at simplifying the development of Mir servers and particularly providing a stable ABI and sensible default behaviors.

      Unsurprisingly, given the project’s original goal, the ABI is unchanged. The changes in 0.2.0 fall into four categories:

      1. additional features for shell developers to use;
      2. enabling features available in newer Mir versions;
      3. improvements to the example server; and,
      4. bugfixes.

      Additional Features for Shell Developers to use

      There is a new “--window-management-trace” option for debugging window management. This provides detailed logging of the calls to and from the window management policy.

      There is a new miral::CursorTheme class to load cursor themes.

      Features available in newer Mir versions

      Pointer confinement is now available (where the underlying Mir version >= 0.24).
      Enhanced “surface placement” handling: the results of “surface placement” requests are notified to clients (where supported by the underlying Mir version >= 0.25).

      Improvements to the Example Server

      Improved resizing of size-constrained windows in miral-shell example.

      Bugfixes

      #1621917 tests fail when built against lp:mir

      #1626659 Dialogs with parents should be modal i.e. they receive focus when the parent is clicked

      #1628482 Deadlock in default window manager when Ctrl+Alt+Backspace with a client connected

      #1590099 Need to support pointer confinement in Mir and toolkits using Mir

      #1616818 User can drag menus (and other inappropriate) surfaces

      #1628033 Starting qtcreator on miral-shell gives an orphan “titlebar”

      #1628594 advise_focus_lost() isn’t called when the active window is minimised/hidden

      #1628630 miral should not change surface geometry because it got maximized

      #1628981 TitlebarWindowManager: sometime the initial display of titlebars doesn’t happen

      #1625853 Mouse cursor disappears (or just never changes) when entering the windows of Qt apps

      Read more
      Inayaili de León Persson

      September’s reading list

      Here are the best links shared by the design team during the month of September:

      1. Empty States
      2. It’s ok to say what’s ok
      3. Sully – 208 Seconds Experience
      4. Google Allo
      5. Tech Giants Team Up to Fix Typography’s Biggest Problem
      6. Redesigning Chrome desktop
      7. Clarity Conference videos

      Thank you to Alejandra, Jamie, Joe and me for the links this month!

      Read more
      UbuntuTouch

We can use the GeocodeModel in the Ubuntu SDK to perform geocoding. For example, given a place name, GeocodeModel can look up detailed information about that location, such as its longitude and latitude, street address, and so on. In today's article, we use a simple example to show how to look up a place.

As described in our API documentation:

          GeocodeModel {
              id: geocodeModel
              plugin: plugin
              autoUpdate: false
      
              onStatusChanged: {
                  mymodel.clear()
                  console.log("onStatusChanged")
                  if ( status == GeocodeModel.Ready ) {
                      var count = geocodeModel.count
                      console.log("count: " + geocodeModel.count)
                      for ( var i = 0; i < count; i ++ ) {
                          var location = geocodeModel.get(i);
                          mymodel.append( {"location": location})
                      }
                  }
              }
      
              onLocationsChanged: {
            console.log("onLocationsChanged")
              }
      
              Component.onCompleted: {
                  query = "中国 北京 朝阳 望京"
                  update()
              }
          }
In the code above, once our GeocodeModel has been loaded, we send a query request:

                  query = "中国 北京 朝阳 望京"
                  update()
When the results of this request come back, our onStatusChanged handler gives us what we need, and we can display the results in a list. The complete code is:

      Main.qml


      import QtQuick 2.4
      import Ubuntu.Components 1.3
      import QtLocation 5.3
      import QtPositioning 5.2
      
      MainView {
          // objectName for functional testing purposes (autopilot-qt5)
          objectName: "mainView"
      
          // Note! applicationName needs to match the "name" field of the click manifest
          applicationName: "geocodemodel.liu-xiao-guo"
      
          width: units.gu(60)
          height: units.gu(85)
      
          Plugin {
              id: plugin
              name: "osm"
          }
      
          ListModel {
              id: mymodel
          }
      
          PositionSource {
              id: me
              active: true
              updateInterval: 1000
              preferredPositioningMethods: PositionSource.AllPositioningMethods
              onPositionChanged: {
                  console.log("lat: " + position.coordinate.latitude + " longitude: " +
                              position.coordinate.longitude);
                  console.log(position.coordinate)
                  console.log("mapzoom level: " + map.zoomLevel)
                  map.coordinate = position.coordinate
              }
      
              onSourceErrorChanged: {
                  console.log("Source error: " + sourceError);
              }
          }
      
          GeocodeModel {
              id: geocodeModel
              plugin: plugin
              autoUpdate: false
      
              onStatusChanged: {
                  mymodel.clear()
                  console.log("onStatusChanged")
                  if ( status == GeocodeModel.Ready ) {
                      var count = geocodeModel.count
                      console.log("count: " + geocodeModel.count)
                      for ( var i = 0; i < count; i ++ ) {
                          var location = geocodeModel.get(i);
                          mymodel.append( {"location": location})
                      }
                  }
              }
      
              onLocationsChanged: {
            console.log("onLocationsChanged")
              }
      
              Component.onCompleted: {
                  query = "中国 北京 朝阳 望京"
                  update()
              }
          }
      
          Page {
              id: page
              header: standardHeader
      
              PageHeader {
                  id: standardHeader
                  visible: page.header === standardHeader
                  title: "Geocoding"
                  trailingActionBar.actions: [
                      Action {
                          iconName: "edit"
                          text: "Edit"
                          onTriggered: page.header = editHeader
                      }
                  ]
              }
      
              PageHeader {
                  id: editHeader
                  visible: page.header === editHeader
                  leadingActionBar.actions: [
                      Action {
                          iconName: "back"
                          text: "Back"
                          onTriggered: {
                              page.header = standardHeader
                          }
                      }
                  ]
                  contents: TextField {
                      id: input
                      anchors {
                          left: parent.left
                          right: parent.right
                          verticalCenter: parent.verticalCenter
                      }
                      placeholderText: "input words .."
                      text: "中国 北京 朝阳 望京"
      
                      onAccepted: {
                          geocodeModel.query = text
                          geocodeModel.update()
                      }
                  }
              }
      
              Item  {
                  anchors {
                      left: parent.left
                      right: parent.right
                      bottom: parent.bottom
                      top: page.header.bottom
                  }
      
                  Column {
                      anchors.fill: parent
      
                      ListView {
                          id: listview
                          clip: true
                          width: parent.width
                          height: parent.height/3
                          opacity: 0.5
                          model: mymodel
                          delegate: Item {
                              id: delegate
                              width: listview.width
                              height: layout.childrenRect.height + units.gu(0.5)
      
                              Column {
                                  id: layout
                                  width: parent.width
      
                                  Text {
                                      width: parent.width
                                      text: location.address.text
                                      wrapMode: Text.WordWrap
                                  }
      
                                  Text {
                                      text: "(" + location.coordinate.longitude + ", " +
                                            location.coordinate.latitude + ")"
                                  }
      
                                  Rectangle {
                                      width: parent.width
                                      height: units.gu(0.1)
                                      color: "green"
                                  }
                              }
      
                              MouseArea {
                                  anchors.fill: parent
                                  onClicked: {
                                      console.log("it is clicked")
                                      map.coordinate = location.coordinate
                                      // We do not need the position info any more
                                      me.active = false
                                  }
                              }
                          }
                      }
      
                      Map {
                          id: map
                          width: parent.width
                          height: parent.height*2/3
                          property var coordinate
      
                          plugin : Plugin {
                              name: "osm"
                          }
      
                          zoomLevel: 14
                          center: coordinate
      
                          MapCircle {
                              center: map.coordinate
                              radius: units.gu(3)
                              color: "red"
                          }
      
                          Component.onCompleted: {
                              zoomLevel = 14
                          }
                      }
                  }
              }
      
              Component.onCompleted: {
                  console.log("geocodeModel limit: " + geocodeModel.limit)
              }
          }
      }
      

Running our application:


The full source code of the project: https://github.com/liu-xiao-guo/geocodemodel
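As a side note (my own addition, not part of the original example), QtLocation's GeocodeModel can also perform reverse geocoding: if you assign a coordinate to its query property instead of a string, it returns the addresses near that point. A minimal sketch, assuming the same Plugin { id: plugin; name: "osm" } and the QtPositioning import already used in Main.qml:

    GeocodeModel {
        id: reverseGeocodeModel
        plugin: plugin
        autoUpdate: false

        onStatusChanged: {
            if (status == GeocodeModel.Ready && count > 0) {
                // The first result is the best match for the queried coordinate
                console.log("address: " + get(0).address.text)
            }
        }

        Component.onCompleted: {
            // Assigning a coordinate (an arbitrary example here) instead of a
            // string makes GeocodeModel perform a reverse lookup
            query = QtPositioning.coordinate(39.99, 116.47)
            update()
        }
    }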

Author: UbuntuTouch  Published: 2016/6/7 13:54:14

      Read more
      UbuntuTouch

In today's exercise, we work on a design where tapping an item in a ListView expands that item. This is very useful for some designs: for example, we may not want to open another page, yet we still want to show more information about the current item. We can use the Expandable component provided by the Ubuntu SDK. A picture of this design:

When each item does not have much detail to show, we can use this approach to present the content of every item. The code is:


      Main.qml


      import QtQuick 2.4
      import Ubuntu.Components 1.3
      import Ubuntu.Components.ListItems 1.3 as ListItem
      
      MainView {
          // objectName for functional testing purposes (autopilot-qt5)
          objectName: "mainView"
      
          // Note! applicationName needs to match the "name" field of the click manifest
          applicationName: "expandable.liu-xiao-guo"
      
          width: units.gu(60)
          height: units.gu(85)
      
          ListModel {
              id: listmodel
              ListElement { name: "image1.jpg" }
              ListElement { name: "image2.jpg" }
              ListElement { name: "image3.jpg" }
              ListElement { name: "image4.jpg" }
              ListElement { name: "image5.jpg" }
              ListElement { name: "image6.jpg" }
              ListElement { name: "image7.jpg" }
              ListElement { name: "image8.jpg" }
              ListElement { name: "image9.jpg" }
              ListElement { name: "image10.jpg" }
              ListElement { name: "image11.jpg" }
          }
      
          Page {
              header: PageHeader {
                  id: pageHeader
                  title: i18n.tr("expandable")
              }
      
              Item {
                  anchors {
                      left: parent.left
                      right: parent.right
                      bottom: parent.bottom
                      top: pageHeader.bottom
                  }
      
                  UbuntuListView {
                      id: listview
                      anchors.fill: parent
                      height: units.gu(24)
                      model: listmodel
                      delegate: ListItem.Expandable {
                          id: exp
                          expandedHeight: units.gu(15)
                          expanded: listview.currentIndex == index
      
                          Row {
                              id: top
                              height: collapsedHeight
                              spacing: units.gu(2)
                              Image {
                                  height: parent.height
                                  width: height
                                  source: "images/" + name
                              }
      
                              Label {
                                  text: "This is the text on the right"
                              }
                          }
      
                          Label {
                              anchors.top: top.bottom
                              anchors.topMargin: units.gu(0.5)
                              text: "This is the detail"
                          }
      
                          onClicked: {
      //                        expanded = true;
                              listview.currentIndex = index
                          }
                      }
                  }
              }
      
          }
      }
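One small variation (my own addition, not part of the original example): since expanded is bound to listview.currentIndex == index, tapping an already-expanded item can collapse it again by resetting currentIndex. A hedged sketch of the onClicked handler:

    onClicked: {
        // Tapping the expanded item again collapses it;
        // tapping a different item expands that one instead.
        listview.currentIndex = (listview.currentIndex === index) ? -1 : index
    }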
      



Author: UbuntuTouch  Published: 2016/6/15 17:31:13

      Read more
      UbuntuTouch

The Ubuntu SDK provides a "Web App" template that makes it very easy to turn a web page into an Ubuntu application. You can refer to my article "How to use the online web app generator to create an installable package" to create a Web App. In today's article, we look at how to implement a full-screen Web App.


Below we take Baidu Maps as an example. When we create an application with our Ubuntu SDK:

      baidumap.desktop

      [Desktop Entry]
      Name=百度地图
      Comment=webapp for baidumap
      Type=Application
      Icon=baidumap.png
      Exec=webapp-container --enable-back-forward --store-session-cookies --webappUrlPatterns=https?://map.baidu.com/* http://map.baidu.com %u
      Terminal=false
      X-Ubuntu-Touch=true
      

Careful developers will notice the "--enable-back-forward" option above. It gives our Web App a title/header area at the very top, as shown in the picture below:



In the picture above, we can see a "百度地图" (Baidu Maps) title bar. It lets us go back to previous pages.

If we remove the "--enable-back-forward" option above, that is:

      Exec=webapp-container --store-session-cookies --webappUrlPatterns=https?://map.baidu.com/* http://map.baidu.com %u
      

then the result of running it will be:



We can see that our header/title area is gone. The benefit is that we get more display area. Of course, we can still see the indicator area.

For some pages, such as games, having a full-screen user interface is a paramount requirement, so we can go one step further and make the following change: we add a "--fullscreen" option.

      [Desktop Entry]
      Name=百度地图
      Comment=webapp for baidumap
      Type=Application
      Icon=baidumap.png
Exec=webapp-container --fullscreen --store-session-cookies --webappUrlPatterns=https?://map.baidu.com/* http://map.baidu.com %u
      Terminal=false
      X-Ubuntu-Touch=true
      

Re-run our project:



We can see a full-screen application, without even a trace of the indicator.
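If you just want to preview the effect of these options without rebuilding the click package, you should (as far as I understand) also be able to launch webapp-container directly from a terminal with a URL, for example:

$ webapp-container --fullscreen http://map.baidu.com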

More information about Web App development can be found on the website.



Author: UbuntuTouch  Published: 2016/6/23 9:30:22

      Read more