Canonical Voices


Cemil Azizoglu

Hi, I’ve been wanting to have a blog for a while now. I am not sure if I’ll have the time to post on a regular basis, but I’ll try.

First things first: My name is Cemil (pronounced JEH-mil), a.k.a. ‘camako’ on IRC. I work as a developer and am the team lead on the Mir project.

Recently, I’ve been working on Mir 1.0 tasks, a new Mesa EGL platform backend for Mir, and a Vulkan Mir WSI driver for Mesa, among other things.

Here’s something pretty for you to look at for now:

https://plus.google.com/113725654283519068012/posts/8jmrQnpJxMc

-Cemil

Michael Hall

Java is a well established language for developing web applications, in no small part because of its industry-standard framework for building them: Servlets and JSP. Another important part of this standard is the Web Archive, or WAR, file format, which defines how to provide a web application’s executables and how they should be run in a way that is independent of the application server that will be running them.

WAR files make life easier for developers by separating the web application from the web server. Unfortunately this doesn’t actually make it easier to deploy a webapp; it only shifts some of the burden off of the developers and on to the user, who still needs to set up and configure an application server to host it. One popular option is Apache’s Tomcat webapp server, which is both lightweight and packs enough features to support the needs of most webapps.

And here is where Snaps come in. By combining both the application and the server into a single, installable package you get the best of both, and with a little help from Snapcraft you don’t have to do any extra work.

Snapcraft supports a modular build configuration by having multiple “parts”, each of which provides some aspect of your complete runtime environment in a way that is configurable and reusable. This is extended by a feature called “remote parts”, which are pre-defined parts you can easily pull into your snap by name. It’s this combination of reusable and remote parts that is going to make snapping up Java web applications incredibly easy.
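If you’re curious which remote parts exist, snapcraft at the time of writing ships commands for browsing them; a quick sketch, assuming a snapcraft 2.x-era CLI:

$ snapcraft update          # refresh the local cache of remote parts
$ snapcraft search tomcat   # search the cache for parts by name
$ snapcraft define tomcat   # print the "tomcat" part's definition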

The remote part we are going to use is the “tomcat” part, which will build the Tomcat application server from upstream source and bundle it in your snap ready to go. All that you, as the web developer, need to provide is your .war file. Below is a simple snapcraft.yaml that will bundle Tomcat’s “sample” war file into a self-contained snap package.

name: tomcat-sample
version: '0.1'
summary: Sample webapp using tomcat part
description: |
 This is a basic webapp snap using the remote Tomcat part

grade: stable
confinement: strict

parts:
  my-part:
    plugin: dump
    source: .
    organize:
      sample.war: ./webapps/sample.war
    after: [tomcat]

apps:
  tomcat:
    command: tomcat-launch
    daemon: simple
    plugs: [network-bind]

Let’s go through the important bits one at a time, starting with the part named “my-part”. This uses the simple “dump” plugin, which is just going to copy everything in its source (the current directory in this case) into the resulting snap. Here we have just the sample.war file, which we are going to move into a “webapps” directory, because that is where the Tomcat part is going to look for war files.

Now for the magic: by specifying that “my-part” should come after the “tomcat” part (using after: [tomcat]), which isn’t defined elsewhere in the snapcraft.yaml, we trigger Snapcraft to look for a remote part by that same name, which conveniently exists for us to use. This remote part will do two things: first it will download and build the Tomcat source code, and then it will generate a “tomcat-launch” shell script that we’ll use later. These two parts, “my-part” and “tomcat”, will be combined in the final snap, with the Tomcat server automatically knowing about and installing the sample.war webapp.

The “apps” section of the snapcraft.yaml defines the application to be run. In this simple example all we need to execute is the “tomcat-launch” script that was created for us. This sets up the Tomcat environment variables and runtime directories so that it can run fully confined within the snap. And by declaring it to be a simple daemon, we are additionally telling it to auto-start as soon as it’s installed (and after any reboot), which will be handled by systemd.

Now when you run “snapcraft” on this config, you will end up with the file tomcat-sample_0.1_amd64.snap, which contains your web application, the Tomcat application server, and a headless Java JRE to run it all. That way the only thing your users need to do to run your app is to “snap install tomcat-sample”, and everything will be up and running at http://localhost:8080/sample/ right away, no need to worry about installing dependencies or configuring services.
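If you want to try it locally first, the flow looks something like this (a sketch, assuming an amd64 machine; --dangerous is needed because a locally built snap is unsigned, and the service unit name follows snapd’s snap.<name>.<app> convention):

$ snapcraft
$ sudo snap install tomcat-sample_0.1_amd64.snap --dangerous
$ sudo systemctl status snap.tomcat-sample.tomcat   # the daemon should be active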


If you have a webapp that you currently deploy as a .war file, you can snap it yourself in just a few minutes: use the snapcraft.yaml defined above and replace the sample data with your own. To learn more about Snaps and Snapcraft in general you can follow this tutorial, as well as learning how to publish your new snap to the store.

LaMont Jones

The question came up “how do I add an authoritative (secondary) name server for a domain that is managed by MAAS?”

Why would I want to do that?

There are various reasons, including that the region controller may just be busy enough, or the MAAS region spread out enough, that we don’t want to have all DNS go through it.  Another reason would be to avoid exposing the region controller to the internet, while still allowing it to provide authoritative DNS data for machines inside the region.

How do I do that?

First, we’ll need to create a secondary nameserver. For simplicity, we’ll assume that it’s an Ubuntu machine named mysecondary.example.com, and that you have installed the bind9 package. And we’ll assume that you have named the domain maas, that the region controller is named region.example.com with an upstream interface having the IP address a.b.c.d, and that you have a MAAS session called admin.

On mysecondary.example.com, we add this to /etc/bind/named.conf.local:

zone "maas" { type slave; file "db.maas"; masters { a.b.c.d; }; };

Then reload named there via “rndc reload”.

With the MAAS CLI, we then say (note the trailing “.” on rrdata):

maas admin dnsresource-records create name=@ domain=maas rrtype=ns rrdata=mysecondary.example.com.

At that point, mysecondary is both authoritative and named in the NS RRset for the domain.
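You can sanity-check both halves from any host with dig (assuming the dnsutils tools are installed):

dig @mysecondary.example.com maas SOA +short   # did the zone transfer?
dig @mysecondary.example.com maas NS +short    # is mysecondary in the NS RRset?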

What else can I do?

If you call the MAAS domain somename.example.com, then you could add NS records to the example.com DNS zone delegating that zone to the MAAS region and its secondaries.
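Such a delegation might look something like this in the example.com zone file (illustrative only; substitute your own names and add glue records as needed):

; delegate somename.example.com to the MAAS region and its secondary
somename.example.com.    IN NS    region.example.com.
somename.example.com.    IN NS    mysecondary.example.com.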

What are the actual limitations?

  • The region controller is always listed as a name server for the domain. For domains other than the default, see also bug 1672220 about address records.
  • If MAAS is told that it’s authoritative for a domain, it IS the master/primary.
  • The MAAS region does not have zones that are other than “type master”.

Alan Griffiths

Choosing a backend

I got drawn into a discussion today and swiftly realized there is no right answer. But there should be!

The question is deceptively simple: Which order should graphics toolkits probe for backends?

My contention is that the answer is: “it depends”.

Suppose that I’m running a traditional X11 based desktop and am testing with a new technology (obviously Mir, but the same applies to Wayland) running as a window on top of it. (I.e. Mir-on-X or Wayland-on-X)

In this case I want any new application to *default* to connecting to the main X11 desktop – I don’t want my test session to “capture” any applications launched normally.

Now suppose I’m running a new technology desktop that provides an X11 socket as a backup (Xmir/Xwayland). In this case I want any new application to *default* to connecting to the main Mir/Wayland desktop – only if the toolkit doesn’t support Mir/Wayland should it connect to the X11 socket.

Now GDK, for example, provides for this with GDK_BACKEND=mir,wayland,x11 or GDK_BACKEND=x11,mir,wayland (as needed). But that is only one toolkit: off the top of my head, Qt has QT_QPA_PLATFORM and SDL has SDL_VIDEODRIVER. (I’m sure there are others.)
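For instance (illustrative commands; the application names are stand-ins, but the variables are the real per-toolkit switches):

$ GDK_BACKEND=mir,wayland,x11 gedit     # GTK: try Mir, then Wayland, then X11
$ QT_QPA_PLATFORM=wayland some-qt-app   # Qt: pick the Wayland QPA plugin
$ SDL_VIDEODRIVER=x11 some-sdl-game     # SDL: force the X11 video driver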

What is needed is a standard environment variable that all toolkits (and other graphics libs) can use to prioritize backends. One of my colleagues suggested XDG_TOOLKIT_BACKEND (working much the way that GDK_BACKEND does).

That only helps if all the toolkits take notice. Is it worth pursuing?

Alan Griffiths

MirAL 1.2

There’s a new MirAL release (1.2.0) available in ‘Zesty Zapus’ (Ubuntu 17.04) and the so-called “stable phone overlay” ppa for ‘Xenial Xerus’ (Ubuntu 16.04LTS). MirAL is a project aimed at simplifying the development of Mir servers and particularly providing a stable ABI and sensible default behaviors.

Unsurprisingly, given the project’s original goal, the ABI is unchanged.

Since my last update the integration of libmiral into QtMir has progressed and libmiral has been used in the latest updates to Unity8.

The changes in 1.2.0 are:

A new libmirclientcpp-dev package

This is a “C++ wrapper for libmirclient” and has been split from libmiral-dev.

Currently it comprises RAII wrappers for some Mir client library types: MirConnection, MirWindowSpec, MirWindow and MirWindowId. In addition, the WindowSpec wrapper provides named constructors and function chaining to enable code like the following:

auto const window = mir::client::WindowSpec::
    for_normal_window(connection, 50, 50, mir_pixel_format_argb_8888)
    .set_buffer_usage(mir_buffer_usage_software)
    .set_name(a_window.c_str())
    .create_window();

Refresh the “Building and Using MirAL” doc

This has been rewritten (and renamed) to reflect the presence of MirAL in the Ubuntu archives and make installation (rather than “build it yourself”) the default approach.

Bug fixes

  • [libmiral] Chrome-less shell hint does not work any more (LP: #1658117)
  • “$ miral-app -kiosk” fails with “Unknown command line options:
    --desktop_file_hint=miral-shell.desktop” (LP: #1660933)
  • [libmiral] Fix focus and movement rules for Input Method and Satellite
    windows. (LP: #1660691)
  • [libmirclientcpp-dev] WindowSpec::set_state() wrapper for mir_window_spec_set_state()
    (LP: #1661256)

Alan Griffiths

mircade-snap

mircade, miral-kiosk and snapcraft.io

mircade is a proof-of-concept game launcher for use with miral-kiosk. It looks for installed games, works out if they use a toolkit supported by Mir and allows the user to play them.

miral-kiosk is a proof-of-concept Mir server for kiosk style use. It has very basic window management designed to support a single fullscreen application.

snapcraft.io is a packaging system that allows you to package applications (as “snaps”) in a way that runs on multiple Linux distributions. You first need to have snap support (snapd) on your target system (I used a dragonboard with Ubuntu Core as described in my previous article).

The mircade snap takes mircade and a few open games from the Ubuntu archive to create an “arcade style” snap for playing these games.

Setting up the Mir snaps

The mircade snap is based on the “Mir Kiosk Snaps” described here.

Mir support on Ubuntu Core is currently work in progress so the exact incantations for installing the mir-libs and mir-kiosk snaps to work with mircade varies slightly from the referenced articles (to work around bugs) and will (hopefully) change in the near future. Here’s what I found works at the time of writing:

$ snap install mir-libs --channel edge
$ snap install mir-kiosk --channel edge --devmode
$ snap connect mir-kiosk:mir-libs mir-libs:mir-libs
$ sudo reboot

Installing the mircade-snap

I found that installing the mircade snap sometimes ran out of space on the dragonboard /tmp filesystem. So…

$ TMPDIR=/writable/ snap install mircade --devmode --channel=edge
$ snap connect mircade:mir-libs mir-libs:mir-libs
$ snap disconnect mircade:mir;snap connect mircade:mir mir-kiosk:mir
$ snap disable mircade;sudo /usr/lib/snapd/snap-discard-ns mircade;snap enable mircade

Using mircade on the dragonboard

At this point you should see an orange screen with the name of a game. You can change the game by touching/clicking the top or bottom of the screen (or using the arrow keys). Start the current game by touching/clicking the middle of the screen or pressing enter.

Christian Brauner

lxc exec vs ssh

Recently, I’ve implemented several improvements for lxc exec. In case you didn’t know, lxc exec is LXD‘s client tool that uses the LXD client API to talk to the LXD daemon and execute any program the user might want. Here is a small example of what you can do with it:

[asciinema recording]

One of our main goals is to make lxc exec feel as similar to ssh as possible, since ssh is the standard for running commands remotely, interactively or non-interactively. Making lxc exec behave nicely was tricky.

1. Handling background tasks

A long-standing problem was certainly how to correctly handle background tasks. Here’s an asciinema illustration of the problem with a pre LXD 2.7 instance:

[asciinema recording]

What you can see there is that putting a task in the background will lead to lxc exec not being able to exit. A lot of sequences of commands can trigger this problem:

chb@conventiont|~
> lxc exec zest1 bash
root@zest1:~# yes &
y
y
y
.
.
.

Nothing would save you now. yes will simply write to stdout till the end of time as quickly as it can…
The root of the problem lies with stdout being kept open which is necessary to ensure that any data written by the process the user has started is actually read and sent back over the websocket connection we established.
As you can imagine this becomes a major annoyance when you e.g. run a shell session in which you want to run a process in the background and then quickly want to exit. Sorry, you are out of luck. Well, you were.
The first and naive approach is obviously to simply close stdout as soon as you detect that the foreground program (e.g. the shell) has exited. Not quite as good an idea as one might think… The problem becomes obvious when you then run quickly executing programs like:

lxc exec -- ls -al /usr/lib

where the lxc exec process (and the associated forkexec process (don’t worry about it now; just remember that Go + setns() are not on speaking terms…)) exits before all buffered data in stdout has been read. In this case you will cause truncated output, and no one wants that. After a few approaches to the problem that involved disabling pty buffering (wasn’t pretty, I tell you, and it also didn’t work predictably) and other weird ideas, I managed to solve this by employing a few poll() “tricks” (in some sense of the word “trick”). Now you can finally run background tasks and cleanly exit. To wit:
[asciinema recording]

2. Reporting exit codes caused by signals

ssh is a wonderful tool. One thing, however, that I never really liked was the fact that when the command run by ssh received a signal, ssh would always report -1 aka exit code 255. This is annoying when you’d like to have information about what signal caused the program to terminate. This is why I recently implemented the standard shell convention of reporting any signal-caused exits as 128 + n, where n is the number of the signal that caused the executing program to exit. For example, on SIGKILL you would see 128 + SIGKILL = 137 (calculating the exit codes for other deadly signals is left as an exercise to the reader). So you can do:

chb@conventiont|~
> lxc exec zest1 sleep 100

Now, send SIGKILL to the executing program (Not to lxc exec itself, as SIGKILL is not forwardable.):

kill -KILL $(pidof sleep)

and finally retrieve the exit code for your program:

chb@conventiont|~
> echo $?
137

Voila. This obviously only works nicely when a) the exit code doesn’t breach the 8-bit wall-of-computing and b) the executing program doesn’t use 137 to indicate success (which would be… interesting(?)). Neither argument seems too convincing to me. The former because most deadly signals should not breach the range. The latter because (i) that’s the user’s problem, (ii) these exit codes are actually reserved (I think.), (iii) you’d have the same problem running the program locally or otherwise.
The main advantage I see in this is the ability to report back fine-grained exit statuses for executing programs. Note, by no means can we report back all instances where the executing program was killed by a signal; e.g. when your program handles SIGTERM and exits cleanly, there’s no easy way for LXD to detect this and report back that the program was killed by a signal. You will simply receive success aka exit code 0.

3. Forwarding signals

This is probably the least interesting (or maybe it isn’t, no idea) but I found it quite useful. As you saw in the SIGKILL case before, I was explicit in pointing out that one must send SIGKILL to the executing program, not to the lxc exec command itself. This is due to the fact that SIGKILL cannot be handled in a program. The only thing the program can do is die… like right now… this instant… sofort… (You get the idea…). But a lot of other signals – SIGTERM, SIGHUP, and of course SIGUSR1 and SIGUSR2 – can be handled. So when you send signals that can be handled to lxc exec instead of the executing program, newer versions of LXD will forward the signal to the executing process. This is pretty convenient in scripts and so on.
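A minimal sketch of what that buys you (assuming a recent enough LXD; note that $! here is the PID of the backgrounded lxc exec itself, which is exactly the point):

chb@conventiont|~
> lxc exec zest1 -- sleep 100 &
> kill -TERM $!       # SIGTERM goes to lxc exec, which forwards it to sleep
> wait $!; echo $?
143                   # 128 + SIGTERM(15), reported back as described above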

In any case, I hope you found this little lxc exec post/rant useful. Enjoy LXD, it’s a crazy beautiful beast to play with. Give it a try online at https://linuxcontainers.org/lxd/try-it/ and for all you developers out there: check out https://github.com/lxc/lxd and send us patches.

Brandon Schaefer

Mir windowing system in Kodi

Here is a video of Kodi running on Mir [1]. The source for the port is sitting in [2: xbmc/xbmc/windowing/mir]. My plan is to upstream the changes and maintain the port [6]. There are some more things to fix up, but the main part is ready to upstream.

It will be compile-time support, meaning you’ll need to compile with Mir support specifically, as Kodi does not support runtime windowing selection. This also means it’ll pose a low risk to the main code base. The port supports both OpenGL and OpenGL ES. I also need to test this out on an embedded device (such as a raspi/dragonboard). Ideally, creating a nice kiosk Kodi media center running on Mir would be awesome!

I have also snapped up Kodi with Mir support [3], with help from [4] and Alberto Aguirre [5]. This is an amazing use case of a snap: having Kodi in a kiosk-type system. The main goal as well is to get this working on an embedded device, so you install an Ubuntu Core image, then just install this snap through the store and you’ve got a fully working media center!

If you’ve any questions, feel free to poke me (bschaefer) in freenode #ubuntu-mir anytime, or email me at brandon.schaefer@canonical.com.

Brandon Schaefer

Hello world!

Welcome to Canonical Voices. This is your first post. Edit or delete it, then start blogging!

Alan Griffiths

MirAL-0.3

There’s a new MirAL release (0.3.0) available in ‘Zesty Zapus’ (Ubuntu 17.04). MirAL is a project aimed at simplifying the development of Mir servers and particularly providing a stable ABI and sensible default behaviors.

Unsurprisingly, given the project’s original goal, the ABI is unchanged. The changes in 0.3.0 fall into three categories:

  1. bugfixes;
  2. enabling keymap in newer Mir versions; and,
  3. additional features for shell developers to use.

Bugfixes

  • #1631958 Crash after closing Qt dialog
  • #1632325, #1633052 tooltips positioned wrong with mir-0.24
  • #1625849 [Xmir] Alt+` switch between different X11-apps not just windows.
  • Added miral-xrun as a better way to use Xmir
  • (unnumbered) miral-shell splash screen should be fullscreen.
  • (unnumbered) deduplicate logging of WindowSpecification::top_left

Enabling Keyboard Map in newer Mir versions

A new class miral::Keymap allows the keyboard map to be specified either programmatically or (by default) on the command line. Being in the UK I can get a familiar keyboard layout like this:

miral-shell --keymap gb

The class is also provided on Mir versions prior to 0.24.1, but does nothing.

Additional Features For Shell Developers To Use

#1629349 Shell wants way to associate initially requested window creation state with the window later created.

Shell code can now set a userdata property on the WindowSpecification in place_new_surface() and this is transferred to the WindowInfo.

Added miral/version.h to permit compile-time feature detection. If you want to detect different versions of MirAL at compile time you can, for example, write:

#include <miral/version.h>
#if MIRAL_VERSION >= MIR_VERSION_NUMBER(0, 3, 0)
#include <miral/keymap.h>
#endif

A convenient overload of WindowManagerTools::modify_window() that doesn’t require the WindowInfo.

Victor Palau

Ok, ok… sorry for the click-bait headline, but it is mainly true. I recently got a Nextcloud box; it was pretty easy to set up, and here are some great instructions.

But this box is not just a Nextcloud box, it is a box of unlimited possibilities. In just a few hours I added a WiFi access point and a chat server to my personal cloud. So here are some amazing facts you should know about Ubuntu and snaps:

Amazing fact #1 – One box, many apps

With snaps you can transform your single-function device into a box of tricks. You can add software to extend its functionality after you have made it. In this case I created a WiFi access point and added a Rocketchat server to it.

You can release a drone without autonomous capabilities, and once you are sure that you have nailed it, you can publish a new app for it… or even sell a pro-version autopilot snap.

You can add an inexpensive Zigbee and Bluetooth module to your home router, and partner with a security firm to provide home surveillance services… The possibilities are endless.

Amazing fact #2 – Many boxes, One heart

Maybe an infinite box of tricks is attractive to a geek like me, but what is interesting to product makers is this: make one hardware, ship many products.

Compute parts (CPU, memory, storage) make up a large part of the bill of materials of any smart device. So does validation and integration of these components with your software base… and then you need to provide updates for the OS and the kernel for years to come.

What if I told you that you could build (or buy) a single multi-function core, pre-integrated with a Linux OS, and use it to make drones, home routers, digital advertisement signs, industrial and home automation hubs, base stations, DSLAMs, top-of-rack switches,…

This is the real power of Ubuntu Core. With the OS and kernel being their own snaps, you can be sure that nothing has changed in them across these devices, and that you can reliably update them. Not only are you able to share validation and maintenance costs across multiple projects, you will also be able to increase the volume of your part order and get a better price.


How the box of tricks was made:

Ingredients for the WiFi AP:

 

I also installed the Rocketchat server snap from the store.

 


kevin gunn

Hey, just a very quick update. I had some more time to play around today and touch is working after all (the only difference is I left my USB keyboard disconnected today, so maybe it was getting confused).

Anyhow, here are videos of Qt clocks with touch and Qt samegame with touch.

Alan Griffiths

--window-management-trace

A feature added to the lp:miral trunk yesterday is making life a lot easier for developers working on MirAL based servers, and the toolkit extensions that support Mir. The feature is:

miral-shell --window-management-trace

Actually, the --window-management-trace switch will work with any MirAL-based server (so miral-kiosk and egmde support it too).

What this does is cause the server to log every interaction with the window management policy – all the notifications it receives and all the calls it makes to the tools as a result.

This means that it is easy to find out that, for example, a “modal” gtk based dialog window is being created without specifying a parent. (Which is why the behaviour isn’t quite as expected – Mir servers will treat it as non-modal.)

To use this feature before the next MirAL release you do need to build it yourself, but this only takes a minute (depending on your kit). It really is the easiest way to see exactly what is going on and why.

Victor Palau

In order to test k8s you can always deploy a single-node setup locally using minikube; however, it is a bit limited if you want to test interactions that require your services to be externally accessible from a mobile or web front-end.

For this reason, I created a basic k8s setup for a Core OS single node in Azure using https://coreos.com/kubernetes/docs/latest/getting-started.html . Once I did this, I decided to automate its deployment via script.

It requires a running Core OS instance; connect to it and:

git clone https://github.com/vtuson/k8single.git k8
cd k8
./kubeform.sh [myip-address]   # the IP associated with eth; you can find it using ifconfig

This will deploy k8 into a single node; it sets up kubectl on the node and deploys the SkyDNS add-on.
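To sanity-check the result, the usual kubectl queries should work (names may differ slightly in your setup):

kubectl get nodes                          # the single node should report Ready
kubectl get pods --namespace=kube-system   # the skydns pod should be Running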

It also includes a busybox pod file that can be deployed by:

kubectl create -f files/busybox

This might come in useful to debug issues with the setup. To execute commands in busybox, run:
kubectl exec busybox -- [command]

The script and config files can be accessed at https://github.com/vtuson/k8single

If you hit any issues while deploying k8s in a single node, a few things worth checking are:


sudo systemctl status etcd
sudo systemctl status flanneld
sudo systemctl status docker

It is also worth checking what docker containers are running and, if necessary, checking the logs:

docker ps -a
docker logs [container-id]


Victor Palau

I recently blogged about deploying kubernetes in Azure.  After doing so, I wanted to keep an eye on usage of the instances and pods.

Kubernetes recommends Heapster as a cluster aggregator to monitor usage of nodes and pods. Very handy if you are deploying in Google Compute Engine (GCE), as it has a pre-built dashboard to hook it to.

Heapster runs on each node, collects statistics of the system and pods, and pipes them to a storage backend of your choice. A very handy part of Heapster is that it exports user labels as part of the metadata, which I believe can be used to create custom reports on services across nodes.

[monitoring architecture diagram]

If you are not using GCE or just don’t want to use their dashboard, you can deploy a combo of InfluxDB and Grafana as a DIY solution. While this seems promising, the documentation, as usual, is pretty short on details…

Start by using the “detailed” guide to deploy the add-on, which basically consists of:

**Wait! Don’t run this yet; finish reading the article first.**

git clone https://github.com/kubernetes/heapster.git
cd heapster
kubectl create -f deploy/kube-config/influxdb/

These steps expose Grafana and InfluxDB via the API proxy; you can see them in your deployment by doing:

kubectl cluster-info

This didn’t quite work for me, and while rummaging in the yamls, I found out that this is not really the recommended configuration for live deployments anyway…

So here is what I did:

  1. Remove the env variables from influxdb-grafana-controller.yaml
  2. Expose the service as NodePort or LoadBalancer, depending on your preference, in grafana-service.yaml. E.g. under the spec section add: type: NodePort
  3. Now run: kubectl create -f deploy/kube-config/influxdb/

You can see the exposed port for Grafana by running:
kubectl --namespace=kube-system describe service grafana-service

In this deployment, all the services, rc and pods are added under the kube-system namespace, so remember to add the --namespace flag to your kubectl commands.

Now you should be able to access Grafana on any external ip or dns on the port listed under NodePort. But I was not able to see any data.

Log in to Grafana as admin (admin:admin by default), select DataSources>influxdb-datasource and test the connection. The connection is set up as http://monitoring-influxdb:8086; this failed for me.

Since InfluxDB and Grafana are both in the same pod, you can use localhost to access the service. So change the URL to http://localhost:8086, save and test the connection again. This worked for me, and a minute later I was getting real-time data from nodes and pods.

Proxying Grafana

I run an nginx proxy that terminates https requests for my domain, and I created an https://mydomain/monitoring/ endpoint as part of it.

For some reason, Grafana needs to know the root-url format from which it is being accessed in order to work properly. This is defined in a config file… while you could change it and rebuild the image, I preferred to override it via an environment variable in the influxdb-grafana-controller.yaml kubernetes file. Just add this to the Grafana container section:

env:
  - name: GF_SERVER_ROOT_URL
    value: "%(protocol)s://%(domain)s:%(http_port)s/monitoring"

You can do this with any of the Grafana config values, which allows you to reuse the official Grafana docker image straight from the main registry.
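For example, to also override the default admin password, GF_SECURITY_ADMIN_PASSWORD is Grafana’s standard environment override for the security.admin_password setting (the value below is obviously just a placeholder):

env:
  - name: GF_SERVER_ROOT_URL
    value: "%(protocol)s://%(domain)s:%(http_port)s/monitoring"
  - name: GF_SECURITY_ADMIN_PASSWORD
    value: "change-me"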


Victor Palau

I recently looked to do my first live deployment of kubernetes, after having played successfully with minikube.

When trying to deploy kubernetes in a public cloud, there are a couple of base options. You could start from scratch or use one of the turnkey solutions.

There are two turnkey solutions for Azure, Flannel based or Weave based. Basically these are two different networking solutions, but the actual turnkey solutions differ by more than just the networking layer. I tried both and had issues with both, yeay!! However, I liked the Flannel solution over Weave’s straight away. Flannel’s seems to configure and use Azure better. For example, it uses VM scale sets for the slave nodes, and configures external IPs and security groups. This might be because the Flannel solution is sponsored by Microsoft, so I ended up focusing on it over Weave’s.

The documentation is not bad, but a bit short on some basic details. I did the deployment on both Ubuntu 14.04 and OS X 10, and it worked on both. The documentation lists jq and docker as the main dependencies. I found issues with older versions of jq that are part of the 14.04 Ubuntu archive, so make sure to install the latest version from the jq website.

Ultimately, kube-up.sh seems to be a basic configuration wrapper around azkube; a link to it is buried at the end of the kubernetes doc. Cole Mickens is the main developer of azkube and the turnkey solution. While looking around his github profile, I found this very useful link on the status of support for Kubernetes in Azure. I would hope this eventually lands on the main kubernetes doc site.

As part of the first install instructions, you will need to provide the subscription and tenant IDs. I found the subscription ID easily enough from the web console, but the tenant ID was a bit more elusive. Although the tenant ID is not required for installations of 1.3, the script failed to execute without it. It seems like the best way to find it is the Azure CLI tool, which you can get via node.js:


npm install -g azure-cli
azure login
azure account show

This will give you all the details that you need to set it up. You can then just go ahead, or you can edit details in cluster/azure/config-default.sh

You might want to edit the number of VMs that the operation will create. Once you run kube-up.sh, you should hopefully get a working kubernetes deployment.

If for any reason you would like to change the version to be installed, you will need to edit the file called “version” under the kubernetes folder set up by the first installation step.

The deployment comes with a ‘utils’ script that makes it very simple to do a few things. One is to copy the ssh key that will give you access to the slaves to the master.

$ ./util.sh copykey

From the master, you just need to access the internal ip using the “kube” username and specify your private key for authentication.

Next, I would suggest configuring your local kubectl and deploying the SkyDNS add-on. You will really need this to easily access services.

$ ./util.sh configure-kubectl
$ kubectl create -f https://raw.githubusercontent.com/colemickens/azkube/v0.0.5/templates/coreos/addons/skydns.yaml

And that is it: if you run kubectl get nodes, you will be able to see the master and the slaves.

Since Azure does not have direct integration for load balancers, any services that you expose will need to be configured with a self-deployed solution. But it seems that version 1.4 of Kubernetes is coming with support for Azure equivalent to what the current versions boast for AWS and co.


Victor Palau

First of all, I wanted to recommend the following recipe from Digital Ocean on how to roll out your own Docker Registry on Ubuntu 14.04. As with most of their stuff, it is super easy to follow.

I also wanted to share a small improvement on the recipe to include a UI front-end to the registry.

Once you have completed the recipe and have a repository secured and running, you can extend your docker-compose file to look like this:

nginx:
  image: "nginx:1.9"
  ports:
    - 443:443
    - 8080:8080
  links:
    - registry:registry
    - web:web
  volumes:
    - ./nginx/:/etc/nginx/conf.d:ro

web:
  image: hyper/docker-registry-web
  ports:
    - 8000:8080
  links:
    - registry
  environment:
    REGISTRY_HOST: registry

registry:
  image: registry:2
  ports:
    - 127.0.0.1:5000:5000
  environment:
    REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
  volumes:
    - ./data:/data

You will also need to include a configuration file for web in the nginx folder.

file: ~/docker-registry/nginx/web.conf

upstream docker-registry-web {
  server web:8080;
}

server {
  listen 8080;
  server_name [YOUR DOMAIN];

  # SSL
  ssl on;
  ssl_certificate /etc/nginx/conf.d/domain.crt;
  ssl_certificate_key /etc/nginx/conf.d/domain.key;

  location / {
    # To add basic authentication to v2 use auth_basic setting plus add_header
    auth_basic "registry.localhost";
    auth_basic_user_file /etc/nginx/conf.d/registry.password;

    proxy_pass http://docker-registry-web;
    proxy_set_header Host $http_host; # required for docker client's sake
    proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_read_timeout 900;
  }
}

Run docker-compose up and you should have an SSL-secured UI front-end on port 8080 (https://yourdomain:8080/).
If you have any improvement tips I am all ears!


Craig Bender

Hello world!

Welcome to Canonical Voices. This is your first post. Edit or delete it, then start blogging!

Robie Basak

Meeting information

Agenda

Minutes

Review ACTION points from previous meeting

The discussion about “Review ACTION points from previous meeting” started at 16:00.

  • No actions from previous meeting.

Development Release

The discussion about “Development Release” started at 16:00.

Assigned merges/bugwork (rbasak)

The discussion about “Assigned merges/bugwork (rbasak)” started at 16:02.

  • rbasak has triaged some bugs, tagging some “bitesize”, and advised prospective Ubuntu developers to take a look at these.

Server & Cloud Bugs (caribou)

The discussion about “Server & Cloud Bugs (caribou)” started at 16:05.

  • Squid, Samba and MySQL bugs noted; all are in progress. Nothing else in particular to bring up.

Weekly Updates & Questions for the QA Team

The discussion about “Weekly Updates & Questions for the QA Team” started at 16:06.

  • No questions to or from the QA team
  • jgrimm noted that the Canonical Server Team currently have an open QA position and invited applications and recommendations.

Weekly Updates & Questions for the Kernel Team (smb, sforshee, arges)

The discussion about “Weekly Updates & Questions for the Kernel Team (smb, sforshee, arges)” started at 16:07.

  • No questions to/from the kernel team

Upcoming Call For Papers

The discussion about “Upcoming Call For Papers” started at 16:10.

  • No upcoming CfPs of note

Ubuntu Server Team Events

The discussion about “Ubuntu Server Team Events” started at 16:11.

  • No upcoming events of note

Open Discussion

The discussion about “Open Discussion” started at 16:11.

Announce next meeting date, time and chair

The discussion about “Announce next meeting date, time and chair” started at 16:18.

  • The next meeting will be at Tue 17 May 16:00:00 UTC 2016. jamespage will chair.

Generated by MeetBot 0.1.5 (http://wiki.ubuntu.com/meetingology)

David Henningsson

So, assume that you have some new hardware that works for the most part, but you have some problems with your built-in sound card. The problem has been fixed upstream, but if you start using only that particular upstream kernel, you will lose Ubuntu kernel security updates. In some cases, bug fixes will come to Ubuntu kernels too – after some time – but in other cases these fixes won’t, for a variety of reasons.

You want to run a standard Ubuntu kernel, except for your sound driver (or some other driver), which you want to be something different. This actually happens quite often when our team enables hardware that isn’t yet on the market and therefore lacks full support in already released kernels.

DKMS

To the rescue comes DKMS (short for Dynamic kernel module support), which installs the source of the actual driver on the machine, and, whenever the Ubuntu kernel is upgraded, automatically recompiles the driver to fit the new kernel. The compiled modules are installed into the right directory for them to be used at next boot. We’ve used this tool for several years, and found it to be incredibly useful.

Launchpad automation

Launchpad has a feature called recipes, which combines one or more bzr branches and automatically makes a source package whenever one of the branches changes. The source package is then uploaded to a PPA, which builds a binary package from the source package.

What is then the result of all this well-oiled machinery? That every day, you have the latest sound driver which is ready for you to install and use to see if it fixes your sound issues – and because it’s packaged as a normal Debian package, uninstallation is easy in case it does not work. We have had this up and running for the Intel HDA driver for several years now, and it’s been useful for both Canonical and the Ubuntu community.

Details

That’s the bird’s-eye overview. In practice, things are a bit more complicated. Get ready for the mandatory boxes-and-arrows picture:

[boxes-and-arrows diagram: hda-build-flow2]

Preparing for import

Our main source is the master branch of sound.git, maintained by Takashi Iwai. However, Launchpad does not yet support git in recipe builds; therefore, a machine somewhere in the cloud runs a preparation script. This script checks the git branch for updates every hour and, if there is one, starts by filtering out the “sound” directory (this is a simple optimization, because kernel trees are huge). The result is added to a bzr branch.

Actually this cloud machine does one more thing, but it’s more of a bonus: it runs some hda-emu based testing. Hda-emu is a tool for emulating an HD-audio codec, and takes alsa-info as input. So, we contributed a lot of alsa-infos from machines Canonical enables to the upstream hda-emu repository, along with some scripts to run emulation tests on all of them. So, in case something breaks, we get an early warning before the code reaches more people. The most common case for the test to break, however, is not an actual bug, but that the hda-emu tool needs updating to handle changes in the kernel driver. Therefore, the script is not stopped when this happens; it just puts a warning message in the bzr commit log.

The cloud machine runs a bzr server, which Launchpad then checks a few times per day for updates, and imports changes into a Launchpad hosted mirror branch.

Making a DKMS package

As our Launchpad recipe detects that the bzr branch has changed, it re-runs the recipe. The recipe is quite simple – it only copies files from different branches into some directory, creates a source package out of the result, and uploads that package to a PPA. That’s where we combine the upstream source with our DKMS configuration. There is some scripting involved to e.g. figure out the names of the built kernel modules – if you’re making your own DKMS package, it will probably be easier to write that file by hand.
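For the curious, a hand-written dkms.conf only needs a handful of lines. A minimal sketch for a hypothetical out-of-tree sound driver (all names and versions here are made up):

# dkms.conf -- minimal sketch for a hypothetical "oot-sound" driver
PACKAGE_NAME="oot-sound"
PACKAGE_VERSION="1.0"
MAKE[0]="make KVER=$kernelver"
CLEAN="make clean"
BUILT_MODULE_NAME[0]="snd-oot"
DEST_MODULE_LOCATION[0]="/updates/dkms"
AUTOINSTALL="yes"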

Unfortunately, compiling a new driver on an older kernel can be challenging, when the driver starts relying on features only present in the new kernel. Therefore, we regularly need to manually add patches to the new driver to make it compile on the older kernel.

Launchpad build

This part is just like any other build on a Launchpad PPA – it takes a source package and builds a binary package. This is where the backport patches actually get applied to the source. Please note that even though this is a binary package, what’s inside the package is not compiled binaries, it’s the source code for the driver. This is because the final compilation occurs on the end user machine.

(A funny quirk: when DKMS is invoked, it creates a .deb file by itself, but for some reason Launchpad wouldn’t accept this .deb file. I never really figured out why, but instead worked around it by manually unpacking DKMS’s .deb, then repacking it again using low-level dpkg-gencontrol and dpkg-deb tools.)

The binary package is then published in the PPA, downloaded/copied by the end user to his/her machine, and installed just like any other Debian package.

On the end user machine

The final step, where the driver source is combined with a standard Ubuntu kernel, is done on the end user’s machine. DKMS itself installs triggers on the end user machine that will be called every time a new kernel is installed, upgraded or removed.

On installation of a new kernel, DKMS will verify that the relevant kernel header package is also installed, then use these headers to recompile all installed DKMS binary packages against the new kernel. The resulting files are copied into /lib/modules/<kernel>/updates/dkms. On installation of a new DKMS binary package, the default is to recompile the new package against the latest kernel and the currently running kernel.

DKMS also runs depmod to ensure the kernel will pick up the newly compiled modules.
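If you want to watch these steps by hand, the dkms command line exposes them directly (the module name and version below are placeholders matching the hypothetical dkms.conf sketched earlier):

$ dkms status                                    # list modules and their build state
$ sudo dkms build -m oot-sound -v 1.0 -k $(uname -r)
$ sudo dkms install -m oot-sound -v 1.0 -k $(uname -r)
$ modinfo snd-oot | head -n3                     # confirm the module is picked up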

Final remarks

There are some caveats which might be worth mentioning.

First, if you combine the regular Ubuntu kernel (with security updates) with a DKMS driver, you will get security updates for the entire kernel except that specific driver, so in theory you could be left with a security issue if the vulnerability is in the specific driver you use DKMS for. However, in practice the vast majority of security bugs are in userspace-facing code, rather than deep down in hardware-specific drivers.

Second, with every Ubuntu kernel released there is a potential risk of breakage: e.g., if the DKMS driver calls a function in the kernel and that function changes its signature, then the DKMS driver will fail to compile and install on the new kernel. Or even worse, the function changes behavior without changing its signature, so that the DKMS driver will compile just fine, but break in some way when the driver runs. All I can say about that is that, to my knowledge, if this can happen then it happens very rarely – I’ve never seen it cause any problems in practice.
