Canonical Voices

This is the second in a series of blog posts on creating an asynchronous D-Bus service in Python. For part 1, go here.

Last time we created a base for our asynchronous D-Bus service with a simple synchronous server/client. In this post, we’ll start from that base which can be found on Github: https://github.com/larryprice/python-dbus-blog-series/tree/part1. Of course, all of today’s code can be found in the same project with the part2 tag: https://github.com/larryprice/python-dbus-blog-series/tree/part2.

Why Asynchronous?

Before we dive in, we need a reason to make our service asynchronous. Currently, our only D-Bus object contains a single method, quick, which lives up to its name and finishes very quickly. Let's add another method to RandomData which takes a while to finish its job.

random_data.py
import dbus.service
import random
import time

class RandomData(dbus.service.Object):
    def __init__(self, bus_name):
        super().__init__(bus_name, "/com/larry_price/test/RandomData")
        random.seed()

    @dbus.service.method("com.larry_price.test.RandomData",
                         in_signature='i', out_signature='s')
    def quick(self, bits=8):
        return str(random.getrandbits(bits))

    @dbus.service.method("com.larry_price.test.RandomData",
                         in_signature='i', out_signature='s')
    def slow(self, bits=8):
        num = str(random.randint(0, 1))
        while bits > 1:
            num += str(random.randint(0, 1))
            bits -= 1
            time.sleep(1)

        return str(int(num, 2))

Note the addition of the slow method on the RandomData object. slow is a contrived implementation of building an n-bit random number by concatenating 1s and 0s, sleeping for 1 second between each iteration. This will still go fairly quickly for a small number of bits, but could take quite some time for numbers as low as 16 bits.
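To see why slow finishes with int(num, 2): it accumulates a string of '0'/'1' characters and then parses that string as a base-2 number. A quick illustration without the sleeps (the seed is only there to make the example repeatable):

```python
import random

random.seed(42)  # seeded only so the example is repeatable
bits = 8
# build an 8-character string of random '0'/'1' digits, as slow does
num = ''.join(str(random.randint(0, 1)) for _ in range(bits))
value = int(num, 2)  # interpret the bit string as a base-2 integer
assert 0 <= value < 2 ** bits
```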

In order to call the new method, we need to modify our client binary. Let’s add in the argparse module and take in a new argument: --slow. Of course, --slow will instruct the program to call slow instead of quick, which we’ll add to the bottom of the program.

client
#!/usr/bin/env python3

# Take in a single optional integral argument
import sys
import argparse

arg_parser = argparse.ArgumentParser(description='Get random numbers')
arg_parser.add_argument('bits', nargs='?', default=16)
arg_parser.add_argument('-s', '--slow', action='store_true',
                        default=False, required=False,
                        help='Use the slow method')

args = arg_parser.parse_args()

# Create a reference to the RandomData object on the session bus
import dbus, dbus.exceptions
try:
    bus = dbus.SessionBus()
    random_data = bus.get_object("com.larry-price.test", "/com/larry_price/test/RandomData")
except dbus.exceptions.DBusException as e:
    print("Failed to initialize D-Bus object: '%s'" % str(e))
    sys.exit(2)

# Call the appropriate method with the given number of bits
if args.slow:
    print("Your random number is: %s" % random_data.slow(int(args.bits)))
else:
    print("Your random number is: %s" % random_data.quick(int(args.bits)))

Now we can run our client a few times to see the result of running in slow mode. Make sure to start or restart the service binary before running these commands:

$ ./client 4
Your random number is: 2
$ ./client 4 --slow
Your random number is: 15
$ ./client 16
Your random number is: 64992
$ ./client 16 --slow
Traceback (most recent call last):
  File "./client", line 26, in <module>
    print("Your random number is: %s" % random_data.slow(int(args.bits)))
  File "/usr/lib/python3/dist-packages/dbus/proxies.py", line 70, in __call__
    return self._proxy_method(*args, **keywords)
  File "/usr/lib/python3/dist-packages/dbus/proxies.py", line 145, in __call__
    **keywords)
  File "/usr/lib/python3/dist-packages/dbus/connection.py", line 651, in call_blocking
    message, timeout)
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.

Your mileage may vary (it is a random number generator, after all), but you should eventually see a similar crash, caused by a timeout while waiting for the D-Bus server's reply. We know that this algorithm works; it just needs more time to run. Since a synchronous call won't work here, we'll have to switch over to more asynchronous methods…

An Asynchronous Service

At this point, we can go one of two ways. We can use the threading module to spin threads within our process, or we can use the multiprocessing module to create child processes. Child processes will be slightly pudgier, but will give us more functionality. Threads are a little simpler, so we’ll start there. We’ll create a class called SlowThread, which will do the work we used to do within the slow method. This class will spin up a thread that performs our work. When the work is finished, it will set a threading.Event that can be used to check that the work is completed. threading.Event is a cross-thread synchronization object; when the thread calls set on the Event, we know that the thread is ready for us to check the result. In our case, we call is_set on our event to tell a user whether or not our data is ready.
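If threading.Event is unfamiliar, here is a tiny self-contained demonstration of the set/is_set/wait dance described above (the names are illustrative, not part of the service code):

```python
import threading
import time

done = threading.Event()

def worker():
    time.sleep(0.1)  # simulate slow work
    done.set()       # signal any waiters that the result is ready

threading.Thread(target=worker).start()

print(done.is_set())         # almost certainly False this soon
print(done.wait(timeout=2))  # blocks until set() is called, then returns True
```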

random_data.py
# ...

import threading
class SlowThread(object):
    def __init__(self, bits):
        self.finished = threading.Event()
        self.result = ''

        self.thread = threading.Thread(target=self.work, args=(bits,))
        self.thread.start()
        self.thread_id = str(self.thread.ident)

    @property
    def done(self):
        return self.finished.wait(1)

    def work(self, bits):
        num = str(random.randint(0, 1))
        while bits > 1:
            num += str(random.randint(0, 1))
            bits -= 1
            time.sleep(1)

        self.result = str(int(num, 2))
        self.finished.set()

# ...

On the RandomData object itself, we’ll initialize a new thread tracking list called threads. In slow, we’ll initialize a SlowThread object, append it to our threads list, and return the thread identifier from SlowThread. We’ll also want to add a method to try to get the result from a given SlowThread called slow_result, which will take in the thread identifier we returned earlier and try to find the appropriate thread. If the thread is finished (the event is set), we’ll remove the thread from our list and return the result to the caller.
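The bookkeeping described above (start a job, hand back an id, fetch and forget the result later) can be sketched without any D-Bus at all; the class and names below are purely illustrative. As a design note, a dict keyed by id would also avoid the linear scan that the list version does:

```python
import threading
import time

class Job(object):
    """Illustrative stand-in for SlowThread: runs work in a thread
    and sets an Event when the result is available."""
    def __init__(self, seconds):
        self.finished = threading.Event()
        self.result = None
        self.thread = threading.Thread(target=self._work, args=(seconds,))
        self.thread.start()
        self.job_id = str(self.thread.ident)  # ident is assigned once started

    def _work(self, seconds):
        time.sleep(seconds)
        self.result = 'finished'
        self.finished.set()

jobs = {}
job = Job(0.1)
jobs[job.job_id] = job  # store by id; hand job.job_id back to the caller

# later: if the event is set, return the result and drop the job
if jobs[job.job_id].finished.wait(timeout=2):
    result = jobs.pop(job.job_id).result
```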

random_data.py
# ...

class RandomData(dbus.service.Object):
    def __init__(self, bus_name):
        super().__init__(bus_name, "/com/larry_price/test/RandomData")

        random.seed()
        self.threads = []

    # ...

    @dbus.service.method("com.larry_price.test.RandomData",
                         in_signature='i', out_signature='s')
    def slow(self, bits=8):
        thread = SlowThread(bits)
        self.threads.append(thread)
        return thread.thread_id

    @dbus.service.method("com.larry_price.test.RandomData",
                         in_signature='s', out_signature='s')
    def slow_result(self, thread_id):
        thread = [t for t in self.threads if t.thread_id == thread_id]
        if not thread:
            return 'No thread matching id %s' % thread_id

        thread = thread[-1]
        if thread.done:
            result = thread.result
            self.threads.remove(thread)
            return result

        return ''

The last thing we need to do is update the client to use the new methods. We'll call slow as we did before, but this time the value we get back is the thread identifier rather than the result itself. Then we'll use a while loop to poll until the result is ready.

client
# ...

if args.slow:
    import time
    thread_id = random_data.slow(int(args.bits))
    while True:
        result = random_data.slow_result(thread_id)
        if result:
            print("Your random number is: %s" % result)
            break
        time.sleep(1)

# ...

Note that this is not the smartest way to do this; more on that in the next post. Let’s give it a try!

$ ./client 4
Your random number is: 7
$ ./client 4 --slow
Your random number is: 12
$ ./client 16
Your random number is: 5192
$ ./client 16 --slow
Your random number is: 27302
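As an aside, one incremental improvement over the fixed one-second sleep (still polling, which the next post removes entirely) is to back off between attempts. This helper is purely illustrative and not part of the series' code:

```python
import time

def poll(fetch, initial_delay=0.5, factor=2, max_delay=8):
    """Call fetch() until it returns a truthy value, sleeping a little
    longer after each empty response."""
    delay = initial_delay
    while True:
        result = fetch()
        if result:
            return result
        time.sleep(delay)
        delay = min(delay * factor, max_delay)

# hypothetical usage with the D-Bus proxy from the client:
# print("Your random number is: %s" % poll(lambda: random_data.slow_result(thread_id)))
```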

Next time

This polling method works as a naive approach, but we can do better. Next time we’ll look into using D-Bus signals to make our client more asynchronous and remove our current polling implementation.

As a reminder, the end result of our code in this post is MIT Licensed and can be found on Github: https://github.com/larryprice/python-dbus-blog-series/tree/part2.

facundo

PyCamp 2017, in Baradero


Another PyCamp relatively close to home! That meant I could go by car. Well, we went by car: Diego, Nico, Edu and me. We left early and by around nine we were already there.

We spent the first two hours setting up all the infrastructure: hanging the flag, putting up the Lightsaber Antennas, configuring the network with the new router, preparing the badges and other little souvenirs, making mate, etc., etc.

At half past eleven we kicked off the big introductory talk, which was much needed because this PyCamp was the first one for a lot of the attendees. Right after it we presented all the projects we had brought along and voted, roughly, on which ones we'd like to participate in.

Then the first lunch, and the PyCamp took off at full throttle.

Some people working, others in a very basic introductory course

Working in the shade

I took part in several projects, but I put most of my time into these three:

  • Linkode: taking advantage of being face to face, Mati Barriento and I did a lot of thinking about a very big change we are working on, which led us to refactor the database, with a data migration that we ran in production on the last day. A lot of work went into this, and it left us with a simpler model and the code ready for the next big improvement: revamping how we handle the client side.
  • Fades: Gilgamezh and I also put in a fair amount of work here. He mainly worked on the instructional video we put together last year, which needed a lot of editing; with plenty of help from Marian they achieved a fantastic result, which you can see here. In the project itself I landed two tiny fixes, and I helped out with and reviewed two branches by Juan and Facundo (a different one, not me).
  • Recordium: we didn't do much here, but I explained what the project was about and several small improvements to make came out of that, even for the final GUI to aim for. We also touched on a security topic, where Matías told us which detail we should improve so that messages can't be "injected" into it.

Hard at work

Talking design and learning to juggle

But apart from the projects themselves, we also had a ping pong tournament (I made it through the first round, but then lost a match in the second round and was out), the pool (I even got in), the usual group photo, a little football match (on grass, barefoot!), a barbecue, the typical PyAr meeting folded into the PyCamp, and lots and lots of chatting with different groups, seeing what they were up to, trying to toss out an idea or contribute some experience.

The group photo

As activities outside the venue, we had a guided walk one morning (with an actual guide, who told us a great deal about the past and present of Baradero and its surroundings), and a jazz festival one night (very nice, and the people at the place where we were staying kindly put together packed dinners so that those of us going to the festival could have dinner there).

On the last day we also made (as we hope will become a tradition) a video in which everyone who drove a project stepped up and told what got done. It works really well as a summary for those of us who were there and as a record for those who couldn't make it (and for the future). A thousand thanks to José Luis, who always takes care of editing the final video.

Walking around Baradero

Jazz festival

A special mention for the PyCamp "venue". Lots of green just meters from where we were working, which was a big room where we fit relatively comfortably (although day to day there were always groups working in the dining area and/or outdoors). The rooms were fine (considering they were shared) and the bathrooms clean. The food was terrific, including the barbecue, and all the special preparations for vegetarians and people with unusual diets, allergies, etc. were a treat (I asked several of them and they all said it was perfect). Even the internet worked...

In short, another PyCamp has come and gone. I always say it's the best event of the year, and this one was no exception. What's more, this edition, the 10th, was one of the best PyCamps ever!

Sword classes

The park and the pool

PS: talking about this being the tenth edition, we jotted down all the editions so far; I'm leaving the list here for the record...

    2008  Los Cocos, Córdoba
    2009  Los Cocos, Córdoba
    2010  Verónica, Buenos Aires
    2011  La Falda, Córdoba
    2012  Verónica, Buenos Aires
    2013  Villa Giardino, Córdoba
    2014  Villa Giardino, Córdoba
    2015  La Serranita, Córdoba
    2016  La Serranita, Córdoba
    2017  Baradero, Buenos Aires

PS2: photos! Mine and the ones taken by Yami (almost the event's official photographer).


I’ve been working on a D-Bus service to replace some of the management guts of my project for a while now. We started out creating a simple service, but some of our management processes take a long time to run, causing a timeout error when calling these methods. I needed a way to run these tasks in the background and report status to any possible clients. I’d like to outline my approach to making this possible. This will be a multi-part blog series starting from the bottom: a very simple, synchronous D-Bus service. By the end of this series, we’ll have a small codebase with asynchronous tasks which can be interacted with (input/output) from D-Bus clients.

All of this code is written with python3.5 on Ubuntu 17.04 (beta), is MIT licensed, and can be found on Github: https://github.com/larryprice/python-dbus-blog-series/tree/part1.

What is D-Bus?

From Wikipedia:

In computing, D-Bus or DBus (for “Desktop Bus”), a software bus, is an inter-process communication (IPC) and remote procedure call (RPC) mechanism that allows communication between multiple computer programs (that is, processes) concurrently running on the same machine.

D-Bus allows different processes to communicate indirectly through a known interface. The bus can be system-wide or user-specific (session-based). A D-Bus service will post a list of available objects with available methods which D-Bus clients can consume. It’s at the heart of much Linux desktop software, allowing processes to communicate with one another without forcing direct dependencies.

A synchronous service

Let’s start by building a base of a simple, synchronous service. We’re going to initialize a loop as a context to run our service within, claim a unique name for our service on the session bus, and then start the loop.

service
#!/usr/bin/env python3

import dbus, dbus.service, dbus.exceptions
import sys

from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

# Initialize a main loop
DBusGMainLoop(set_as_default=True)
loop = GLib.MainLoop()

# Declare a name where our service can be reached
try:
    bus_name = dbus.service.BusName("com.larry-price.test",
                                    bus=dbus.SessionBus(),
                                    do_not_queue=True)
except dbus.exceptions.NameExistsException:
    print("service is already running")
    sys.exit(1)

# Run the loop
try:
    loop.run()
except KeyboardInterrupt:
    print("keyboard interrupt received")
except Exception as e:
    print("Unexpected exception occurred: '{}'".format(str(e)))
finally:
    loop.quit()

Make this binary executable (chmod +x service) and run it. Your service should run indefinitely and do… nothing. Although we’ve already written a lot of code, we haven’t added any objects or methods which can be accessed on our service. Let’s fix that.

dbustest/random_data.py
import dbus.service
import random

class RandomData(dbus.service.Object):
    def __init__(self, bus_name):
        super().__init__(bus_name, "/com/larry_price/test/RandomData")
        random.seed()

    @dbus.service.method("com.larry_price.test.RandomData",
                         in_signature='i', out_signature='s')
    def quick(self, bits=8):
        return str(random.getrandbits(bits))

We’ve defined a D-Bus object RandomData which can be accessed using the path /com/larry_price/test/RandomData. This style of string is the general style of an object path. We’ve defined an interface implemented by RandomData called com.larry_price.test.RandomData with a single method quick, declared with the @dbus.service.method decorator. quick takes a single parameter, bits, which must be an integer as designated by the in_signature in our decorator. quick returns a string as specified by the out_signature parameter. All that quick does is return a random string given a number of bits. It’s simple and it’s fast.

Now that we have an object, we need to declare an instance of that object in our service to attach it properly. Let’s assume that random_data.py is in a directory dbustest with an empty __init__.py, and our service binary is still sitting in the root directory. Just before we start the loop in the service binary, we can add the following code:

service
# ...
# Run the loop
try:
    # Create our initial objects
    from dbustest.random_data import RandomData
    RandomData(bus_name)

    loop.run()
# ...

We don’t need to do anything with the object we’ve initialized; creating it is enough to attach it to our D-Bus service and prevent it from being garbage collected until the service exits. We pass in bus_name so that RandomData will connect to the right bus name.

A synchronous client

Now that you have an object with an available method on our service, you’re probably interested in calling that method. You can do this on the command line with something like dbus-send, or you could find the service using a GUI tool such as d-feet and call the method directly. But eventually we’ll want to do this with a custom program, so let’s build a very small program to get started.

client
#!/usr/bin/env python3

# Take in a single optional integral argument
import sys
bits = 16
if len(sys.argv) == 2:
    try:
        bits = int(sys.argv[1])
    except ValueError:
        print("input argument must be integer")
        sys.exit(1)

# Create a reference to the RandomData object on the session bus
import dbus, dbus.exceptions
try:
    bus = dbus.SessionBus()
    random_data = bus.get_object("com.larry-price.test", "/com/larry_price/test/RandomData")
except dbus.exceptions.DBusException as e:
    print("Failed to initialize D-Bus object: '%s'" % str(e))
    sys.exit(2)

# Call the quick method with the given number of bits
print("Your random number is: %s" % random_data.quick(bits))

A large chunk of this code is parsing an input argument as an integer. By default, client will request a 16-bit random number unless it gets a number as input from the command line. Next we spin up a reference to the session bus and attempt to find our RandomData object on the bus using our known service name and object path. Once that’s initialized, we can directly call the quick method over the bus with the specified number of bits and print the result.

Make this binary executable also. If you try to run client without running service, you should see an error message explaining that the com.larry-price.test D-Bus service is not running (which would be true). Start service, and then run client with a few different input options and observe the results:

$ ./service & # to kill service later, be sure to note the pid here!
$ ./client
Your random number is: 41744
$ ./client 100
Your random number is: 401996322348922753881103222071
$ ./client 4
Your random number is: 14
$ ./client "new donk city"
input argument must be integer

That’s all there is to it. A simple, synchronous server and client. The server and client do not directly depend on each other but are able to communicate unidirectionally through simple method calls.

Next time

Next time, I’ll go into detail on how we can create an asynchronous service and client, and hopefully utilize signals to add a new direction to our communication.

Again, all the code can be found on Github: https://github.com/larryprice/python-dbus-blog-series/tree/part1.

Inayaili de León Persson

Last month the web team ran its first design sprint as outlined in The Sprint Book, by Google Ventures’ Jake Knapp. Some of us had read the book recently and really wanted to give the method a try, following the book to the letter.

In this post I will outline what we learned from our pilot design sprint, what went well, what could have gone better, and what happened during the five sprint days. I won’t go into too much detail explaining what each step of the design sprint consists of — for that you have the book. If you don’t have that kind of time, but would still like to know what I’m talking about, here’s an 8-minute video that explains the concept:

 

Before the sprint

One of the first things you need to do when running a design sprint is to agree on a challenge you’d like to tackle. Luckily, we had a big challenge that we wanted to solve: ubuntu.com’s navigation system.

 

ubuntu.com’s different levels of navigation (global nav, main nav, second- and third-level nav)

 

Assigning roles

If you’ve decided to run a design sprint, you’ve also probably decided who will be the Facilitator. If you haven’t, you should, as this person will have work to do before the sprint starts. In our case, I was the Facilitator.

My first Facilitator task was to make sure we knew who was going to be the Decider at our sprint.

We also agreed on who was going to participate, and booked one of our meeting rooms for the whole week plus an extra one for testing on Friday.

My suggestion for anyone running a sprint for the first time is to also name an Assistant. There is so much work to do before and during the sprint, that it will make the Facilitator’s life a lot easier. Even though we didn’t officially name anyone, Greg was effectively helping to plan the sprint too.

Evangelising the sprint

In the week that preceded the sprint, I had a few conversations with other team members who told me the sprint sounded really great and they were going to ‘pop in’ whenever they could throughout the week. I had to explain that, sadly, this wasn’t going to be possible.

If you need to do the same, explain why it’s important that the participants commit to the entire week, focusing on the importance of continuity and of accumulated knowledge that the sprint’s team will gather throughout the week. Similarly, be pleasant but firm when participants tell you they will have to ‘pop out’ throughout the week to attend to other matters — only the Decider should be allowed to do this, and even so, there should be a deputy Decider in the room at all times.

Logistics

Before the sprint, you also need to make sure that you have all the supplies you need. I tried as much as possible to follow the suggestions for materials outlined in the book, and I even got a Time Timer. In retrospect, it would have been fine for the Facilitator to just keep time on a phone, or a less expensive gadget if you really want to be strict with the no-phones-in-the-room policy.

Even though the book says you should start recruiting participants for the Friday testing during the sprint, we started a week before that. Greg took over that side of the preparation, sending prompts on social media and mailing lists for people to sign up. When participants didn’t materialise in this manner, Greg sent a call for participants to the mailing list of the office building we work at, which worked wonders for us.

Know your stuff

Assuming you have read the book before your sprint, if it’s your first sprint, I recommend re-reading the chapter for the following day the evening before, and take notes.

I printed out the checklists provided in the book’s website and wrote down my notes for the following day, so everything would be in one place.

 

Facilitator checklists with handwritten notes

 

I also watched the official video for the day (which you can get emailed to you by the Sprint Bot the evening before), and read all the comments in the Q&A discussions linked to from the emails. These questions and comments from other people who have run sprints was incredibly useful throughout the week.

 

Sprint Bot email for the first day of the sprint

 

Does this sound like a lot of work? It was. I think if/when we do another sprint the time spent preparing will probably be reduced by at least 50%. The uncertainty of doing something as involved as this for the first time made it more stressful than preparing for a normal workshop, but it’s important to spend the time doing it so that things run smoothly during the sprint week.

Day 1

The morning of the sprint I got in with plenty of time to spare to set up the room for the kick-off at 10am.

I bought lots of healthy snacks (which were promptly frowned upon by the team, who were hoping for sweeter treats); brought a jug of water, cups, and all the supplies to the room; cleared the whiteboards; and set up the chairs.

What follows are some of the outcomes, questions and other observations from our five days.

Morning

In the morning of day 1 you define a long term goal for your project, list the ways in which the project could fail in question format, and draw a flowchart, or map, of how customers interact with your product.

  • Starting the map was a little bit tricky, as it wasn’t clear how the map should look when there is more than one type of customer, each with potentially different outcomes
  • The book has no examples with more than one type of customer, which meant we had to read and re-read that part until we decided how to proceed, as we have several customer types to cater for
  • Moments like these can drain the team’s confidence in the process; that’s why it’s important for the Facilitator to read everything carefully more than once, and ideally not to be the only person who does so
  • We did the morning exercises much faster than prescribed, but the same didn’t happen in the afternoon!

 

Discussing the target for the sprint in front of the journey map

 

Afternoon

In the afternoon experts from the sprint and guests come into the room and you ask them lots of questions about your product and how things work. Throughout the interviews the team is taking notes in the “How Might We” format (for example, “How might we reduce the amount of copy?”). By the end of the interviews, you group the notes into themes, vote on the ones you find most useful or interesting, move the most voted notes onto their right place within your customer map and pick a target in the map as the focus for the rest of the sprint.

  • If you have time, explain how “How Might We” notes work before the lunch break, so you save that time for interviews in the afternoon
  • Each expert interview should last about 15-30 minutes, which didn’t feel like long enough to get all the valuable knowledge from our experts — we had to interrupt them somewhat abruptly to make sure the interviews didn’t run over. Next time it might be easier to have a list of questions we want to cover before the interviews start
  • Choreographing the expert interviews was a bit tricky as we weren’t sure how long each would take. If possible, tell people you’ll call them a couple of minutes before you need them rather than set a fixed time — we had to send people back a few times because we weren’t yet finished asking all the questions to the previous person!
  • It took us a little longer than expected to organise the notes, but in the end, the most voted notes did cluster around the key section of the map, as predicted in the book!

 

Some of the How Might We notes on the wall after the expert interviews

 

Other thoughts on day 1

  • Sprint participants might cancel at the last minute. If this happens, ask yourself whether they could still appear as experts on Monday afternoon; if not, it’s probably better to write them off the sprint completely
  • There was a lot of checking the book as the day went by, to confirm we were doing the right thing
  • We wondered if this comes up in design sprints frequently: what if the problem you set out to solve pre-sprint doesn’t match the target area of the map at the end of day 1? In our case, we had planned to focus on navigation but the target area was focused on how users learn more about the products/services we offer

A full day of thinking about the problem and mapping it doesn’t come naturally, but it was certainly useful. We conduct frequent user research and usability testing, and are used to watching interviews and analysing findings; nevertheless, the expert interviews and listening to different perspectives from within the company were very interesting and gave us a different type of insight that we could build upon during the sprint.

Day 2

By the start of day 2, it felt like we had been in the sprint for a lot longer than just one day — we had accomplished a lot on Monday!

Morning

The morning of day 2 is spent doing “Lightning Demos” after a quick 20-minute research. These can be anything that might be interesting, from competitor products to previous internal attempts at solving the sprint challenge. Before lunch, the team decides who will sketch what in the afternoon: will everyone sketch the same thing or different parts of the map.

  • We thought the “Lightning Demos” was a great way to do demos — it was fast and captured the most important thing quickly
  • Deciding who would sketch what wasn’t as straightforward as we might have thought. We decided that everyone should do a journey through our cloud offerings so we’d get different ideas on Wednesday, knowing there was the risk of not everything being covered in the sketches
  • Before we started sketching, we made a list of sections/pages that should be covered in the storyboards
  • As on day 1, the morning exercises were done faster than prescribed; we were finished by 12:30, with a 30-minute break from 11 to 11:30

 

Our sketches from the lightning demos

 

Afternoon

In the afternoon, you take a few minutes to walk around the sprint room and take down notes of anything that might be useful for the sketching. You then sketch, starting with quick ideas and moving onto a more detailed sketch. You don’t look at the final sketches until Wednesday morning.

  • We spent the first few minutes of the afternoon looking at the current list of participants for the Friday testing to decide which products to focus on in our sketches, as our options were many
  • We had a little bit of trouble with the “Crazy 8s” exercise, where you’re supposed to sketch 8 variations of one idea in 8 minutes. It wasn’t clear what we had to do so we re-read that part a few times. This is probably the point of the exercise: to remove you from your comfort zone, make you think of alternative solutions and get your creative muscles warmed up
  • We had to look at the examples of detailed sketches in the book to have a better idea of what was expected from our sketches
  • It took us a while to get started sketching but after a few minutes everyone seemed to be confidently and quietly sketching away
  • With complicated product offerings there’s the instinct to want to have access to devices to check product names, features, etc – I assumed this was not allowed but some people were sneakily checking their laptops!
  • Naming your sketch wasn’t as easy as it sounded
  • Contrary to what we expected, the afternoon sketching exercises took longer than the morning’s; at 5pm some people were still sketching

 

Everyone sketching in silence on Tuesday afternoon

 

Tuesday was lots of fun. Starting the day with the demos, without much discussion on the validity of the ideas, creates a positive mood in the team. Sketching in a very structured manner removes some of the fear of the blank page, as you build up from loose ideas to a very well-defined sketch. The silent sketching was also great as it meant we had some quiet time to pause and think a solution through, giving the people who tend to be more quiet an opportunity to have their ideas heard on par with everyone else.

Day 3

No-one had seen the sketches done on Tuesday, so the build-up to the unveiling on day 3 was more exciting than for the usual design review!

Morning

On Wednesday morning, you decide which sketch (or sketches) you will prototype. You stick the sketches on the wall and review them in silence, discuss each sketch briefly, and each person votes on their favourite. After this, the Decider casts three votes, which may or may not follow the votes of the rest of the team. Whatever the Decider votes on will be prototyped. Before lunch, you decide whether you will need to create one or more prototypes, depending on whether the Decider’s (or Deciders’) votes fit together or not.

  • We had 6 sketches to review
  • Although the book wasn’t clear as to when the guest Decider should participate, we invited ours from 10am to 11.30am as it seemed that he should participate in the entire morning review process — this worked out well
  • During the speed critique people started debating the validity or feasibility of solutions, which was expected but meant some work for the Facilitator to steer the conversation back on track
  • The morning exercises put everyone in a positive mood; it was an interesting way to review and select ideas
  • Narrating the sketches was harder than it might seem at first, and narrating your own sketch isn’t much easier either!
  • It was interesting to see that many of the sketches included similar solutions — there were definite patterns that emerged
  • Even though I emphasised that the book recommends more than one prototype, the team wasn’t keen on it and the focus of the pre-lunch discussion was mostly on how to merge all the voted solutions into one prototype
  • As for all other days, and because we decided for an all-in-one prototype, we finished the morning exercises by noon

 

The team reviewing the sketches in silence on Wednesday morning

 

Afternoon

In the afternoon of day 3, you sketch a storyboard of the prototype together, starting one or two steps before the customer encounters your prototype. You should move the existing sketches into the frames of the storyboard when possible, and add only enough detail that will make it easy to build the prototype the following day.

  • Using masking tape was easier than drawing lines for the storyboard frames
  • It was too easy to come up with new ideas while we were drawing the storyboard and it was tricky to tell people that we couldn’t change the plan at this point
  • It was hard to decide the level of detail we needed to discuss and add to the storyboard. We finished the first iteration of the storyboard a few minutes before 3pm. Our first instinct was to start making more detailed wireframes with the remaining time, but we decided to take a break for coffee and come back to see where we needed more detail in the storyboard instead
  • It was useful to keep asking the team what else we needed to define as we drew the storyboard before we started building the prototype the following day
  • Because we read out the different roles in preparation for Thursday, we ended up assigning roles straight away

 

Discussing what to add to our storyboard

 

Other thoughts on day 3

  • One sprint participant couldn’t attend on Tuesday but was back on Wednesday, which wasn’t ideal but didn’t have a negative impact
  • While setting up for the third day, I wasn’t sure if the ideas from the “Lightning Demos” could be erased from the whiteboard, so I took a photo of them and erased them, as even with the luxury of massive whiteboards we wouldn’t have had space for the storyboard later on!

By the end of Wednesday we were past the halfway mark of the sprint, and the excitement in anticipation for the Friday tests was palpable. We had some time left before the clock hit 5 and wondered if we should start building the prototype straight away, but decided against it — we needed a good night’s sleep to be ready for day 4.

Day 4

Thursday is all about prototyping. You need to choose which tools you will use, prioritising speed over perfection, and you also need to assign different roles for the team so everyone knows what they need to do throughout the day. The interviewer should write the interview script for Friday’s tests.

  • For the prototype building day, we assigned: two writers, one interviewer, one stitcher, two makers and one asset collector
  • We decided to build the pages we needed with HTML and CSS (instead of using a tool like Keynote or InVision) as we could build upon our existing CSS framework
  • Early in the afternoon we were on track, but we were soon delayed by a wifi outage which lasted for almost 1.5 hours
  • It’s important to keep communication flowing throughout the day to make sure all the assets and content that are needed are created or collected in time for the stitcher to start stitching
  • We were finished by 7pm; if you don’t count the wifi outage, we probably would have been finished by 6pm. The extra hour could have been saved if there had been just a little bit more detail in the storyboard page wireframes and in the content delivered to the stitcher, and fewer last-minute tiny changes, but all in all we did pretty well!

 

Joana and Greg working on the prototype

 

Other thoughts on day 4

  • We had our sprint in our office, so it would have been possible for us to ask for help from people outside of the sprint, but we didn’t know whether this was “allowed”
  • We could have assigned more work to the asset collector: the makers and the stitcher were looking for assets themselves as they created the different components and pages rather than delegating the search to the asset collector, which is how we normally work
  • The makers were finished with their tasks more quickly than expected — not having to go through multiple rounds of reviews that sometimes can take weeks makes things much faster!

By the end of Thursday there was no denying we were tired, but happy about what we had accomplished in such a small amount of time: we had a fully working prototype and five participants lined up for Friday testing. We couldn’t wait for the next day!

Day 5

We were all really excited about the Friday testing. We managed to confirm all five participants for the day, and had an excellent interviewer and solid prototype. As the Facilitator, I was also happy to have a day where I didn’t have a lot to do, for a change!

Thoughts and notes on day 5

On Friday, you test your prototype with five users, taking notes throughout. At the end of the day, you identify patterns within the notes and based on these you decide which should be the next steps for your project.

  • We’re lucky to work in a building with lots of companies who employ our target audience, but we wonder how difficult it would have been to find and book the right participants within just 4 days if we needed different types of users or were based somewhere else
  • We filled up an entire whiteboard with notes from the first interview and had to go get extra boards during the break
  • Throughout the day, we removed duplicate notes from the boards to make them easier to scan
  • Some participants don’t talk a lot naturally and need a lot of constant reminding to think out loud
  • We had the benefit of having an excellent researcher in our team who already knows and does everything the book recommends doing. It might have been harder for someone with less research experience to make sure the interviews were unbiased and ran smoothly
  • At the end of the interviews, after listing the patterns we found, we weren’t sure whether we could/should do more thorough analysis of the testing later or if we should chuck the post-it notes in the bin and move on
  • Our end-of-sprint decision was to have a workshop the following week where we’d plan a roadmap based on the findings — could this be considered “cheating” as we’re only delaying making a decision?

 

The team observing the interviews on Friday

 

A wall of interview notes

 

The Sprint Book notes that you can have one of two results at the end of your sprint: an efficient failure, or a flawed success. If your prototype doesn’t go down well with the participants, your team has only spent 5 days working on it, rather than weeks or potentially months — you’ve failed efficiently. And if the prototype receives positive feedback from participants, most likely there will still be areas that can be improved and retested — you’ve succeeded imperfectly.

At the end of Friday we all agreed that our prototype was a flawed success: there were things we tested that we’d never have thought to try before and that received great feedback, but some aspects certainly needed a lot more work to get right. An excellent conclusion to 5 intense days of work!

Final words

Despite the hard work involved in planning and getting the logistics right, running the web team’s trial design sprint was fun.

The web team is small and stretched over many websites and products. We really wanted to test this approach so we could propose it to the other teams we work with as an efficient way to collaborate at key points in our release schedules.

We certainly achieved this goal. The people who participated directly in the sprint learned a great deal during the five days. Those in the web team who didn’t participate were impressed with what was achieved in one week and welcoming of the changes it initiated. And the teams we work with seem eager to try the process out in their teams, now that they’ve seen what kind of results can be produced in such a short time.

How about you? Have you run a design sprint? Do you have any advice for us before we do it again? Leave your thoughts in the comments section.

Read more
Stéphane Graber

Introduction

I maintain a number of development systems that are used as throw away machines to reproduce LXC and LXD bugs by the upstream developers. I use MAAS to track who’s using what and to have the machines deployed with whatever version of Ubuntu or Centos is needed to reproduce a given bug.

A number of those systems are proper servers with hardware BMCs on a management network that MAAS can drive using IPMI. Another set of systems are virtual machines that MAAS drives through libvirt.

But I’ve long had another system I wanted to get in there. That machine is a desktop computer but with a server-grade SAS controller and internal and external arrays. It also has a Fibre Channel HBA and an InfiniBand card for even less common setups.

The trouble is that this being a desktop computer, it’s lacking any kind of remote management that MAAS supports. That machine does however have a good PCIe network card which provides reliable wake-on-lan.

Back in the day (MAAS 1.x), there was a wake-on-lan power type that would have covered my use case. This feature was however removed from MAAS 2.x (see LP: #1589140), and the development team suggests that users who want the old wake-on-lan feature instead install Ubuntu 14.04 and the old MAAS 1.x branch.

Implementing Wake on LAN in MAAS 2.x

I am, however, not particularly willing to install an old Ubuntu release and an old version of MAAS just for that one trivial feature, so I instead spent a bit of time implementing just the bits I needed, keeping a patch around to be re-applied whenever MAAS changes.

MAAS doesn’t provide a plugin system for power types, so I unfortunately couldn’t just write a plugin and distribute that as an unofficial power type for those who need WOL. I instead had to resort to modifying MAAS directly to add the extra power type.

The code change needed to re-implement a wake-on-lan power type is pretty simple and only took me a few minutes to sort out. The patch can be found here: https://dl.stgraber.org/maas-wakeonlan.diff

To apply it to your MAAS, do:

sudo apt install wakeonlan
wget https://dl.stgraber.org/maas-wakeonlan.diff
sudo patch -p1 -d /usr/lib/python3/dist-packages/provisioningserver/ < maas-wakeonlan.diff
sudo systemctl restart maas-rackd.service maas-regiond.service

Once done, you’ll now see this in the web UI:

After selecting the new “Wake on LAN” power type, enter the MAC address of the network interface that you have WOL enabled on and save the change.

MAAS will then be able to turn the system on, allowing for the normal commissioning and deployment stages. For everything else, this power type behaves like the “Manual” type, asking the user to manually shut down or reboot the system, as you can’t do that through Wake on LAN.
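For background, waking a machine over the network simply means broadcasting a "magic packet" addressed to the target NIC's MAC. Here is a minimal Python sketch of what the wakeonlan tool does under the hood (the MAC address below is a placeholder, not one of my machines):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """A magic packet is 6 bytes of 0xFF followed by the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet as a UDP datagram on the local network."""
    packet = build_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

print(len(build_magic_packet("00:11:22:33:44:55")))  # 102 bytes
# send_wol("00:11:22:33:44:55")  # placeholder MAC; would wake the target NIC
```

The packet format is all there is to it, which is why the MAAS-side change is so small.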

Note that you’ll have to re-apply part of the patch whenever MAAS is updated. The patch modifies two files and adds a new one. The new file won’t be removed during an upgrade, but the two modified files will get reverted and need patching again.

Conclusion

This is certainly a hack and if your system supports anything better than Wake on LAN, or you’re willing to buy a supported PDU just for that one system, then you should do that instead.

But if the inability to turn a system on is all that stands in your way from adding it to your MAAS, as was the case for me, then that patch may help you.

I hope that in time MAAS will either get that feature back in some way or get a plugin system that I can use to ship that extra power type in its own separate package without needing to alter any of MAAS’ own files.

Read more
UbuntuTouch

We know that for a Python project, we normally just set the plugin to python in our snapcraft.yaml and it downloads the Python version that snapcraft specifies. Some projects, however, need a particular Python version; how do we achieve that? In today's tutorial, we introduce a new feature added in snapcraft 2.27.


Let's first take a look at a project I made:

https://github.com/liu-xiao-guo/python-plugin

snapcraft.yaml

name: python36
version: '0.1' 
summary: This is a simple example not using python plugin
description: |
  This is a python3 example

grade: stable 
confinement: strict

apps:
  python36:
    command: helloworld_in_python
  python-version:
    command: python3 --version

parts:
  my-python-app:
    source: https://github.com/liu-xiao-guo/python-helloworld.git
    plugin: python
    after: [python]
  python:
    source: https://www.python.org/ftp/python/3.6.0/Python-3.6.0.tar.xz
    plugin: autotools
    configflags: [--prefix=/usr]
    build-packages: [libssl-dev]
    prime:
      - -usr/include

Here, our Python project uses the interpreter defined in the project's own python part. That version of Python is downloaded directly from the internet.

We can build our snap and run the application:

$ python36
Hello, world

Clearly our Python works correctly. We can check the Python version with the python36.python-version command:

$ python36.python-version 
Python 3.6.0

This shows that we're currently running Python 3.6, the version downloaded during the snapcraft build.
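As an aside, a script shipped in such a snap could fail fast if it isn't running under the interpreter it expects; a minimal sketch (this check and its version threshold are my own suggestion, not part of the example project):

```python
import sys

def check_python(minimum=(3, 6)):
    """Raise if the bundled interpreter is older than the required version."""
    if sys.version_info[:2] < minimum:
        raise RuntimeError("Python %d.%d or newer is required" % minimum)
    return True

check_python((3, 0))
```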
Posted by UbuntuTouch on 2017/2/20 9:23:30

Read more
UbuntuTouch

Socket.io enables bidirectional, real-time data exchange between our server and clients. Compared with HTTP, it has the advantage of transferring less data, and websockets share the same advantage. You can easily send your data to the server and receive event-driven responses without polling. In today's tutorial, we'll show how to use socket.io and websockets to build two-way communication.


1) Creating a socket.io server


First, let's take a look at the finished project:


Let's start with our snapcraft.yaml file:

snapcraft.yaml

name: socketio
version: "0.1"
summary: A simple shows how to make use of socket io
description: socket.io snap example

grade: stable
confinement: strict

apps:
  socket:
    command: bin/socketio
    daemon: simple
    plugs: [network-bind]

parts:
  nod:
    plugin: nodejs
    source: .
   

This is a Node.js project, using the nodejs plugin. Our package.json file is as follows:

package.json

{
  "name": "socketio",
  "version": "0.0.1",
  "description": "Intended as a nodejs app in a snap",
  "license": "GPL-3.0",
  "author": "xiaoguo, liu",
  "private": true,
  "bin": "./app.js",
  "dependencies": {
    "express": "^4.10.2",
    "nodejs-websocket": "^1.7.1",
    "socket.io": "^1.3.7"
  }
}

Since we need a web server, we install the express framework. We also use socket.io and websockets, so those packages are bundled into our snap as well.

Now let's look at the design of our application, app.js:

app.js

#!/usr/bin/env node

var express = require('express');
var app = require('express')();
var http = require('http').Server(app);
var io = require('socket.io')(http);	

app.get('/', function(req, res){
   res.sendFile(__dirname + '/www/index.html');
});

app.use(express.static(__dirname + '/www'));

//Whenever someone connects this gets executed
io.on('connection', function(socket){
  console.log('A user connected');
  
  setInterval(function(){
	  var value = Math.floor((Math.random() * 1000) + 1);
	  io.emit('light-sensor-value', '' + value);
	  // console.log("value: " + value)
	  
	  // This is another way to send data
	  socket.send(value);
  }, 2000); 

  //Whenever someone disconnects this piece of code executed
  socket.on('disconnect', function () {
    console.log('A user disconnected');
  });

});

http.listen(4000, function(){
  console.log('listening on *:4000');
});

var ws = require("nodejs-websocket")

console.log("Going to create the server")

String.prototype.format = function() {
    var formatted = this;
    for (var i = 0; i < arguments.length; i++) {
        var regexp = new RegExp('\\{'+i+'\\}', 'gi');
        formatted = formatted.replace(regexp, arguments[i]);
    }
    return formatted;
};
 
// Scream server example: "hi" -> "HI!!!" 
var server = ws.createServer(function (conn) {
	
    console.log("New connection")
	var connected = true;
    
    conn.on("text", function (str) {
        console.log("Received "+str)
        conn.sendText(str.toUpperCase()+"!!!")
    })
    
    conn.on("close", function (code, reason) {
        console.log("Connection closed")
        connected = false
    })
        	
  setInterval(function(){
	  var value = Math.floor((Math.random() * 1000) + 1);
	  var data = '{"data":"{0}"}'.format(value)
	  if (connected){
		conn.send(data);
	  }
  }, 2000); 	
}).listen(4001)

In the first part of the code, we create a web server listening on port 4000. We also start the socket.io server, waiting for client connections. Once a client connects, the following code sends some data at regular intervals:

//Whenever someone connects this gets executed
io.on('connection', function(socket){
  console.log('A user connected');
  
  setInterval(function(){
	  var value = Math.floor((Math.random() * 1000) + 1);
	  io.emit('light-sensor-value', '' + value);
	  // console.log("value: " + value)
	  
	  // This is another way to send data
	  socket.send(value);
  }, 2000); 

  //Whenever someone disconnects this piece of code executed
  socket.on('disconnect', function () {
    console.log('A user disconnected');
  });

});

The data here is random, but it serves to show how things work; in a real application it could come from sensors. On the client side, we can open the address the web server is running on:


We can see the data streaming in and being displayed in our client. For the details, see the index.html file in the www directory.


2) Creating a websocket server


In our app.js, the following code implements a websocket server on port 4001.

app.js


var ws = require("nodejs-websocket")

console.log("Going to create the server")

String.prototype.format = function() {
    var formatted = this;
    for (var i = 0; i < arguments.length; i++) {
        var regexp = new RegExp('\\{'+i+'\\}', 'gi');
        formatted = formatted.replace(regexp, arguments[i]);
    }
    return formatted;
};
 
// Scream server example: "hi" -> "HI!!!" 
var server = ws.createServer(function (conn) {
	
    console.log("New connection")
	var connected = true;
    
    conn.on("text", function (str) {
        console.log("Received "+str)
        conn.sendText(str.toUpperCase()+"!!!")
    })
    
    conn.on("close", function (code, reason) {
        console.log("Connection closed")
        connected = false
    })
        	
  setInterval(function(){
	  var value = Math.floor((Math.random() * 1000) + 1);
	  var data = '{"data":"{0}"}'.format(value)
	  if (connected){
		conn.send(data);
	  }
  }, 2000); 	
}).listen(4001)

Likewise, once there is a connection, we send a value to the client every two seconds. To illustrate, we built a QML client.
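To make the payload concrete, here is a quick sketch of building and parsing the '{"data":"..."}' message the server emits (in Python for illustration; the server itself is Node.js and these helper names are my own):

```python
import json

def make_message(value):
    # Mirrors the server's '{"data":"{0}"}'.format(value) template in app.js
    return json.dumps({"data": str(value)})

def parse_message(text):
    # Mirrors JSON.parse(data) followed by reading json.data in Main.qml
    return int(json.loads(text)["data"])

print(parse_message(make_message(417)))  # 417
```

Note the value travels as a string inside the JSON object, which is why the QML client assigns json.data straight to the dialer hand.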

Main.qml


import QtQuick 2.4
import Ubuntu.Components 1.3
import Ubuntu.Components.Pickers 1.3
import Qt.WebSockets 1.0
import QtQuick.Layouts 1.1

MainView {
    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "dialer.liu-xiao-guo"

    width: units.gu(60)
    height: units.gu(85)

    function interpreteData(data) {
        var json = JSON.parse(data)
        console.log("Websocket data: " + data)

        console.log("value: " + json.data)
        mainHand.value = json.data
    }

    WebSocket {
        id: socket
        url: input.text
        onTextMessageReceived: {
            console.log("something is received!: " + message);
            interpreteData(message)
        }

        onStatusChanged: {
            if (socket.status == WebSocket.Error) {
                console.log("Error: " + socket.errorString)
            } else if (socket.status == WebSocket.Open) {
                // socket.sendTextMessage("Hello World....")
            } else if (socket.status == WebSocket.Closed) {
            }
        }
        active: true
    }

    Page {
        header: PageHeader {
            id: pageHeader
            title: i18n.tr("dialer")
        }

        Item {
            anchors {
                top: pageHeader.bottom
                left: parent.left
                right: parent.right
                bottom: parent.bottom
            }

            Column {
                anchors.fill: parent
                spacing: units.gu(1)
                anchors.topMargin: units.gu(2)

                Dialer {
                    id: dialer
                    size: units.gu(30)
                    minimumValue: 0
                    maximumValue: 1000
                    anchors.horizontalCenter: parent.horizontalCenter

                    DialerHand {
                        id: mainHand
                        onValueChanged: console.log(value)
                    }
                }


                TextField {
                    id: input
                    width: parent.width
                    text: "ws://192.168.1.106:4001"
                }

                Label {
                    id: value
                    text: mainHand.value
                }
            }
        }
    }
}

Run our server and client:



We can see the value changing continuously. The client code is at: https://github.com/liu-xiao-guo/dialer

In this article, we've shown how to use socket.io and websockets for bidirectional, real-time communication. In many IoT applications, we can take full advantage of these protocols to design better applications.

Posted by UbuntuTouch on 2017/2/7 16:07:16

Read more
UbuntuTouch

Adding environment variables to your snap application

When developing snap applications, we can use our own wrapper script and set some variables in it so that our application runs correctly. This is particularly handy when porting snaps that need certain paths redirected to the snap's writable directories in order to avoid confinement problems. So how do we implement this without a wrapper?


Let's first look at an example I made:

https://github.com/liu-xiao-guo/helloworld-env

snapcraft.yaml

name: hello
version: "1.0"
summary: The 'hello' of snaps
description: |
    This is a simple snap example that includes a few interesting binaries
    to demonstrate snaps and their confinement.
    * hello-world.env  - dump the env of commands run inside app sandbox

grade: stable
confinement: strict
type: app  #it can be gadget or framework
icon: icon.png

apps:
 env:
   command: bin/env
   environment:
     VAR1: $SNAP/share
     VAR2: "hello, the world"
 evil:
   command: bin/evil
 sh:
   command: bin/sh

parts:
 hello:
  plugin: dump
  source: .

In the example above, we added an environment entry to the "env" app. Inside it, we define two environment variables: VAR1 and VAR2.
Build the snap, then run the hello.env command:

$ hello.env | grep VAR
VAR1=$SNAP/share
VAR2=hello, the world

Here we can see that, without using any wrapper script, we've added the two environment variables VAR1 and VAR2 to our application.
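For illustration, a small Python sketch (my own, not part of the example snap) of how an application would read these variables at runtime; note that the output above shows VAR1 arriving literally, with $SNAP unexpanded:

```python
import os

# Simulate the variables snapd exports for our app, with the values from the
# snapcraft.yaml above (VAR1 is passed through literally, $SNAP and all):
os.environ["VAR1"] = "$SNAP/share"
os.environ["VAR2"] = "hello, the world"

def snap_vars():
    """Read the two variables exactly as the application would see them."""
    return os.environ["VAR1"], os.environ["VAR2"]

print(snap_vars())
```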


Posted by UbuntuTouch on 2017/2/20 10:33:47

Read more
UbuntuTouch

In today's article, we'll look at how to package an HTML5 application as a snap. There are many existing HTML5 applications, but how do we turn them into snaps? In particular, many HTML5 games were developed for Ubuntu phones. With the method described here, we can repackage those earlier click HTML5 apps as snaps and run them on an Ubuntu desktop. Of course, the method isn't limited to HTML5 apps developed for Ubuntu phones; it works for other HTML5 applications too.




1) The HTML5 application


First, let's look at an HTML5 application I previously made for Ubuntu phones. Its address is:


You can get the code as follows:

bzr branch lp:~liu-xiao-guo/debiantrial/wuziqi

In this application, we only care about the contents of its www directory. The project's files are as follows:

$ tree
.
├── manifest.json
├── wuziqi.apparmor
├── wuziqi.desktop
├── wuziqi.png
├── wuziqi.ubuntuhtmlproject
└── www
    ├── css
    │   └── app.css
    ├── images
    │   ├── b.png
    │   └── w.png
    ├── index.html
    └── js
        └── app.js

We want the contents of www to end up packaged inside our snap.

2) Packaging the HTML5 app as a snap


To package our HTML5 app as a snap, run the following command in the project root:

$ snapcraft init

The command above creates a new snap directory in the current directory containing a file called snapcraft.yaml. This is really a template; we edit this snapcraft.yaml to package our application. After running the command, the file structure looks like this:

$ tree
.
├── manifest.json
├── snap
│   └── snapcraft.yaml
├── wuziqi.apparmor
├── wuziqi.desktop
├── wuziqi.png
├── wuziqi.ubuntuhtmlproject
└── www
    ├── css
    │   └── app.css
    ├── images
    │   ├── b.png
    │   └── w.png
    ├── index.html
    └── js
        └── app.js

We modify snapcraft.yaml until it reads:

snapcraft.yaml


name: wuziqi
version: '0.1'
summary: Wuziqi Game. It shows how to snap a html5 app into a snap
description: |
  This is a Wuziqi Game. There are two kind of chesses: white and black. Two players
  play it in turn. The first who puts the same color chesses into a line is the winner.

grade: stable
confinement: strict

apps:
  wuziqi:
    command: webapp-launcher www/index.html
    plugs:
      - browser-sandbox
      - camera
      - mir
      - network
      - network-bind
      - opengl
      - pulseaudio
      - screen-inhibit-control
      - unity7

plugs:
  browser-sandbox:
    interface: browser-support
    allow-sandbox: false
  platform:
    interface: content
    content: ubuntu-app-platform1
    target: ubuntu-app-platform
    default-provider: ubuntu-app-platform

parts:
  webapp:
    after: [ webapp-helper, desktop-ubuntu-app-platform ]
    plugin: dump
    source: .
    stage-packages:
      - ubuntu-html5-ui-toolkit
    organize:
      'usr/share/ubuntu-html5-ui-toolkit/': www/ubuntu-html5-ui-toolkit
    prime:
      - usr/*
      - www/*

Some explanations:
  • Since this is an HTML5 application, we can launch it with webapp-helper. We use the remote part called webapp-helper in our app
  • On Ubuntu phones the underlying web engine is Qt, so we must ship Qt with our application. Because the Qt libraries are large, we instead obtain them from the ubuntu-app-platform snap through the platform interface it provides. See https://developer.ubuntu.com/en/blog/2016/11/16/snapping-qt-apps/
  • Our index.html file contains many references such as <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/core.js"></script>. These clearly depend on ubuntu-html5-ui-toolkit, so we must bundle that package too, which we do by listing it under stage-packages
  • With organize, we move the ubuntu-html5-ui-toolkit directory installed from that package into our project's www directory so that index.html can reference it
Let's look again at our original index.html file:

index.html

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>An Ubuntu HTML5 application</title>
    <meta name="description" content="An Ubuntu HTML5 application">
    <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=0">

    <!-- Ubuntu UI Style imports - Ambiance theme -->
    <link href="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/css/appTemplate.css" rel="stylesheet" type="text/css" />

    <!-- Ubuntu UI javascript imports - Ambiance theme -->
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/fast-buttons.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/core.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/buttons.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/dialogs.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/page.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/pagestacks.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/tab.js"></script>
    <script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/tabs.js"></script>

    <!-- Application script -->
    <script src="js/app.js"></script>
    <link href="css/app.css" rel="stylesheet" type="text/css" />

  </head>

  <body>
        <div class='test'>
          <div>
              <img src="images/w.png" alt="white" id="chess">
          </div>
          <div>
              <button id="start">Start</button>
          </div>
        </div>

        <div>
            <canvas width="640" height="640" id="canvas" onmousedown="play(event)">
                 Your Browser does not support HTML5 canvas
            </canvas>
        </div>
  </body>
</html>

As the code above shows, index.html references files starting from /usr/share. In a confined snap, that path is not accessible (an application can only access files installed under its own project root). So we must change these paths, rewriting the /usr/share/ references to be relative to the project's www directory:

    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/fast-buttons.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/core.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/buttons.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/dialogs.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/page.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/pagestacks.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/tab.js"></script>
    <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/tabs.js"></script>
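If many HTML files need this change, a small script can automate the rewrite; here is a sketch (my own helper, not part of the project):

```python
import re

def relocalize(html):
    """Rewrite absolute /usr/share/ toolkit paths to be relative to www/."""
    return re.sub(r'(src|href)="/usr/share/(ubuntu-html5-ui-toolkit/[^"]*)"',
                  r'\1="\2"', html)

tag = '<script src="/usr/share/ubuntu-html5-ui-toolkit/0.1/ambiance/js/core.js"></script>'
print(relocalize(tag))
# <script src="ubuntu-html5-ui-toolkit/0.1/ambiance/js/core.js"></script>
```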

This is why our snapcraft.yaml contains:

parts:
  webapp:
    after: [ webapp-helper, desktop-ubuntu-app-platform ]
    plugin: dump
    source: .
    stage-packages:
      - ubuntu-html5-ui-toolkit
    organize:
      'usr/share/ubuntu-html5-ui-toolkit/': www/ubuntu-html5-ui-toolkit
    prime:
      - usr/*
      - www/*

Above, organize moves the directory installed from ubuntu-html5-ui-toolkit into the project's www directory so its files can be used directly by our project. After packaging, the file structure looks like this:

$ tree -L 3
.
├── bin
│   ├── desktop-launch
│   └── webapp-launcher
├── command-wuziqi.wrapper
├── etc
│   └── xdg
│       └── qtchooser
├── flavor-select
├── meta
│   ├── gui
│   │   ├── wuziqi.desktop
│   │   └── wuziqi.png
│   └── snap.yaml
├── snap
├── ubuntu-app-platform
├── usr
│   ├── bin
│   │   └── webapp-container
│   └── share
│       ├── doc
│       ├── ubuntu-html5-theme -> ubuntu-html5-ui-toolkit
│       └── webbrowser-app
└── www
    ├── css
    │   └── app.css
    ├── images
    │   ├── b.png
    │   └── w.png
    ├── index.html
    ├── js
    │   ├── app.js
    │   └── jquery.min.js
    └── ubuntu-html5-ui-toolkit
        └── 0.1

We can see that ubuntu-html5-ui-toolkit now lives under the www directory, ready for direct use by our project.

Run the following command in the project root:

$ snapcraft

If all goes well, we get a .snap file, which we can install with:

$ sudo snap install wuziqi_0.1_amd64.snap --dangerous

After installing, since we access the Qt libraries through content sharing, we must also install the following snap and connect the interface:

$ snap install ubuntu-app-platform 
$ snap connect wuziqi:platform ubuntu-app-platform:platform

After running the commands above, we can see:

$ snap interfaces
Slot                          Plug
:account-control              -
:alsa                         -
:avahi-observe                -
:bluetooth-control            -
:browser-support              wuziqi:browser-sandbox
:camera                       -
:core-support                 -
:cups-control                 -
:dcdbas-control               -
:docker-support               -
:firewall-control             -
:fuse-support                 -
:gsettings                    -
:hardware-observe             -
:home                         -
:io-ports-control             -
:kernel-module-control        -
:libvirt                      -
:locale-control               -
:log-observe                  snappy-debug
:lxd-support                  -
:modem-manager                -
:mount-observe                -
:network                      downloader,wuziqi
:network-bind                 socketio,wuziqi
:network-control              -
:network-manager              -
:network-observe              -
:network-setup-observe        -
:ofono                        -
:opengl                       wuziqi
:openvswitch                  -
:openvswitch-support          -
:optical-drive                -
:physical-memory-control      -
:physical-memory-observe      -
:ppp                          -
:process-control              -
:pulseaudio                   wuziqi
:raw-usb                      -
:removable-media              -
:screen-inhibit-control       wuziqi
:shutdown                     -
:snapd-control                -
:system-observe               -
:system-trace                 -
:time-control                 -
:timeserver-control           -
:timezone-control             -
:tpm                          -
:uhid                         -
:unity7                       wuziqi
:upower-observe               -
:x11                          -
ubuntu-app-platform:platform  wuziqi
-                             wuziqi:camera
-                             wuziqi:mir

Our application also declares some redundant plugs, such as camera and mir above. We can see how the wuziqi app is connected to the core and ubuntu-app-platform snaps. Once all the connections are in place, we can type the following command in a terminal:

$ wuziqi

This launches our application. We can also launch it from the desktop dash:






Author: UbuntuTouch, posted 2017/2/13 10:16:55

UbuntuTouch

[Original] Configuring Ubuntu Core

The core snap provides a number of configuration options that allow us to customize how the system runs. As with any other snap, the core snap's options are set with the snap set command:

$ snap set core option=value


An option's current value can be read with the snap get command:

$ snap get core option
value


Below we show how to disable the system's ssh service:

Warning: disabling ssh will leave us unable to access the Ubuntu Core system by the default means. If we do not provide some other way to manage or log in to the system, the device will effectively be bricked. It is recommended to set a user name and password on the system beforehand so you cannot be locked out; any other means of access works too. Once logged in to the Ubuntu Core system, we can create a password with the following command, so that we can log in with a keyboard and monitor:

$ sudo passwd <ubuntu-one id>
<password>

The option accepts the following values:

  • false (default): the ssh service is enabled and responds to connection requests directly
  • true: the ssh service is disabled. Existing ssh connections remain open, but no further connections are possible
$ snap set core service.ssh.disable=true
Once we run the command above, any further access is refused:

$ ssh liu-xiao-guo@192.168.1.106
ssh: connect to host 192.168.1.106 port 22: Connection refused

$ snap set core service.ssh.disable=false
Running the command above lets ssh connections be made again.
We can read the current value with:

$ snap get core service.ssh.disable
false

Further reading: https://docs.ubuntu.com/core/en/reference/core-configuration

Author: UbuntuTouch, posted 2017/2/15 10:29:38

UbuntuTouch

[Repost] Qt on Ubuntu Core

Are you working on an IoT, point of sale or digital signage device? Are you looking for a secure, supported solution to build it on? Do you have needs for graphic performance and complex UI? Did you know you could build great solutions using Qt on Ubuntu and Ubuntu Core? 

To find out how, why not join this upcoming webinar? You will learn the following:

- Introduction to Ubuntu and Qt in IoT and digital signage
- Using Ubuntu and Ubuntu Core in your device
- Packaging your Qt app for easy application distribution 
- Dealing with hardware variants and GPUs


https://www.brighttalk.com/webcast/6793/246523?utm_source=China&utm_campaign=3)%20Device_FY17_IOT_Vertical_DS_Webinar_Qt&utm_medium=Social

Author: UbuntuTouch, posted 2017/2/27 13:07:38

Alan Griffiths

MirAL 1.3.2

There’s a bugfix MirAL release (1.3.2) available in ‘Zesty Zapus’ (Ubuntu 17.04) and the so-called “stable phone overlay” ppa for ‘Xenial Xerus’ (Ubuntu 16.04LTS). MirAL is a project aimed at simplifying the development of Mir servers and particularly providing a stable ABI and sensible default behaviors.

The bugfixes in 1.3.2 are:

In libmiral a couple of “fails to build from source” fixes:

Fix FTBFS against Mir < 0.26 (Xenial, Yakkety)

Update to fix FTBFS against lp:mir (and clang)

In the miral-shell example, a crash fixed:

With latest zesty’s libstdc++-6-dev miral-shell will crash when trying to draw its background text. (LP: #1677550)

Some of the launch scripts have been updated to reflect a change to the way GDK chooses the graphics backend:

change the server and client launch scripts to avoid using the default Mir socket (LP: #1675794)

Update miral-xrun to match GDK changes (LP: #1675115)

In addition a misspelling of “management” has been corrected:

miral/set_window_management_policy.h

Cemil Azizoglu

Yeay, the new Mesa (17.0.2-1ubuntu2) has landed! (Many thanks to Timo.) This new Mesa incorporates a new EGL backend for Mir (as a distro patch). We will be moving away from the old backend by Mir 1.0, but for now both the new and old backends coexist.

This new backend has been implemented as a new platform in Mesa EGL so that we can easily rip out the old platform when we are ready. Being ready means switching _all_ the EGL clients out there to the new Mesa EGL types exported by this backend.

In case you are wondering, the new EGL types are [1]:

MirConnection* –> EGLNativeDisplayType

MirSurface* –> EGLNativeWindowType

Note that we currently use MirRenderSurface for what will soon be renamed to MirSurface. So at the moment, technically we have MirRenderSurface* as the EGLNativeWindowType.

Once we feel confident we will be pushing this patch upstream as well.

There should be no visible differences in your EGL applications due to this change which is a good thing. If you are curious about the code differences that this new backend introduces check out the ‘eglapp’ wrapper that we use in a number of our example apps :

http://bazaar.launchpad.net/~mir-team/mir/development-branch/view/head:/examples/eglapp.c

The new backend is activated by the ‘-r’ switch which sets the ‘new_egl’ flag, so you can see what is done differently in the code by looking at how this flag makes the code change.

Our interfaces are maturing and we are a big step closer to Mir 1.0.

-Cemil

[1] Mir does not support pixmaps.

Brandon Schaefer

When Choosing a Backend Fails

There was a recent GDK release into zesty that now probes for Mir over X11. This can cause issues when still using an X11 desktop such as Unity7 when a Mir server is running at the same time.

A common way to test Mir is to run it on top of X, which is called Mir-on-X. This means there are now two display servers running at the same time.

An example of an issue this can cause is gnome-terminal-server. It will attempt to spawn its clients on Mir instead of X11 once the Mir server is opened. You now attempt to spawn a new terminal which causes the gnome-terminal-server to crash since it now tries to spawn on Mir but you already spawned terminals on X. As you can imagine this is frustrating to your workflow!

A simple workaround is to add this to your ~/.profile:

if [ "$XDG_CURRENT_DESKTOP" = "Unity:Unity7" ]; then
    dbus-update-activation-environment --systemd GDK_BACKEND=x11
fi

Depending on your desktop the “Unity:Unity7” bit will change.

As more toolkits will start to pick other display servers as their first pick more of these issues will become possible. Other environment variables to consider:

SDL_VIDEODRIVER
QT_QPA_PLATFORM
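A slightly more general version of the workaround above can be sketched as a small shell helper. The desktop name matched and the exact variable values chosen here are illustrative assumptions, not an exhaustive mapping:

```shell
#!/bin/sh
# Hypothetical helper: pick explicit backend settings for each toolkit
# when running under an X11 desktop, so clients stop probing for Mir.
backend_exports() {
    case "$1" in
        Unity:Unity7)
            echo "GDK_BACKEND=x11 SDL_VIDEODRIVER=x11 QT_QPA_PLATFORM=xcb" ;;
        *)
            echo "" ;;
    esac
}

backend_exports "$XDG_CURRENT_DESKTOP"
```

The output could then be fed to dbus-update-activation-environment, as in the ~/.profile snippet above.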

A bit more detail on the issue can be found here:

Choosing a Backend

Cemil Azizoglu

Hi, I’ve been wanting to have a blog for a while now. I am not sure if I’ll have the time to post on a regular basis but I’ll try.

First things first : My name is Cemil (pronounced JEH-mil), a.k.a. ‘camako’ on IRC – I work as a developer and am the team-lead in the Mir project.

Recently, I’ve been working on Mir 1.0 tasks, new Mesa EGL platform backend for Mir, Vulkan Mir WSI driver for Mesa, among other things.

Here’s something pretty for you to look at for now :

https://plus.google.com/113725654283519068012/posts/8jmrQnpJxMc

-Cemil

Stéphane Graber

LXD logo

USB devices in containers

It can be pretty useful to pass USB devices to a container. Be that some measurement equipment in a lab or maybe more commonly, an Android phone or some IoT device that you need to interact with.

Similar to what I wrote recently about GPUs, LXD supports passing USB devices into containers. Again, similarly to the GPU case, what’s actually passed into the container is a Unix character device, in this case, a /dev/bus/usb/ device node.

This restricts USB passthrough to those devices and software which use libusb to interact with them. For devices which use a kernel driver, the module should be installed and loaded on the host, and the resulting character or block device be passed to the container directly.

Note that for this to work, you’ll need LXD 2.5 or higher.

Example (Android debugging)

As an example which quite a lot of people should be able to relate to, let's run an LXD container with the Android debugging tools installed, accessing a USB connected phone.

This would for example allow you to have your app’s build system and CI run inside a container and interact with one or multiple devices connected over USB.

First, plug your phone over USB, make sure it’s unlocked and you have USB debugging enabled:

stgraber@dakara:~$ lsusb
Bus 002 Device 003: ID 0451:8041 Texas Instruments, Inc. 
Bus 002 Device 002: ID 0451:8041 Texas Instruments, Inc. 
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 021: ID 17ef:6047 Lenovo 
Bus 001 Device 031: ID 046d:082d Logitech, Inc. HD Pro Webcam C920
Bus 001 Device 004: ID 0451:8043 Texas Instruments, Inc. 
Bus 001 Device 005: ID 046d:0a01 Logitech, Inc. USB Headset
Bus 001 Device 033: ID 0fce:51da Sony Ericsson Mobile Communications AB 
Bus 001 Device 003: ID 0451:8043 Texas Instruments, Inc. 
Bus 001 Device 002: ID 072f:90cc Advanced Card Systems, Ltd ACR38 SmartCard Reader
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Spot your phone in that list, in my case, that’d be the “Sony Ericsson Mobile” entry.

Now let’s create our container:

stgraber@dakara:~$ lxc launch ubuntu:16.04 c1
Creating c1
Starting c1

And install the Android debugging client:

stgraber@dakara:~$ lxc exec c1 -- apt install android-tools-adb
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following NEW packages will be installed:
 android-tools-adb
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 68.2 kB of archives.
After this operation, 198 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu xenial/universe amd64 android-tools-adb amd64 5.1.1r36+git20160322-0ubuntu3 [68.2 kB]
Fetched 68.2 kB in 0s (0 B/s) 
Selecting previously unselected package android-tools-adb.
(Reading database ... 25469 files and directories currently installed.)
Preparing to unpack .../android-tools-adb_5.1.1r36+git20160322-0ubuntu3_amd64.deb ...
Unpacking android-tools-adb (5.1.1r36+git20160322-0ubuntu3) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up android-tools-adb (5.1.1r36+git20160322-0ubuntu3) ...

We can now attempt to list Android devices with:

stgraber@dakara:~$ lxc exec c1 -- adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached

Since we’ve not passed any USB device yet, the empty output is expected.

Now, let’s pass the specific device listed in “lsusb” above:

stgraber@dakara:~$ lxc config device add c1 sony usb vendorid=0fce productid=51da
Device sony added to c1

And try to list devices again:

stgraber@dakara:~$ lxc exec c1 -- adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached 
CB5A28TSU6 device

To get a shell, you can then use:

stgraber@dakara:~$ lxc exec c1 -- adb shell
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
E5823:/ $

LXD USB devices support hotplug by default. So unplugging the device and plugging it back on the host will have it removed and re-added to the container.

The “productid” property isn’t required, you can set only the “vendorid” so that any device from that vendor will be automatically attached to the container. This can be very convenient when interacting with a number of similar devices or devices which change productid depending on what mode they’re in.

stgraber@dakara:~$ lxc config device remove c1 sony
Device sony removed from c1
stgraber@dakara:~$ lxc config device add c1 sony usb vendorid=0fce
Device sony added to c1
stgraber@dakara:~$ lxc exec c1 -- adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached 
CB5A28TSU6 device

The optional “required” property turns off the hotplug behavior, requiring the device be present for the container to be allowed to start.

More details on USB device properties can be found here.

Conclusion

We are surrounded by a variety of odd USB devices, a good number of which come with possibly dodgy software, requiring a specific version of a specific Linux distribution to work. It’s sometimes hard to accommodate those requirements while keeping a clean and safe environment.

LXD USB device passthrough helps a lot in such cases, so long as the USB device uses a libusb based workflow and doesn’t require a specific kernel driver.

If you want to add a device which does use a kernel driver, locate the /dev node it creates, check if it’s a character or block device and pass that to LXD as a unix-char or unix-block type device.
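That check can be sketched in shell; the container name (c1), device name (mydev), and device node used below are placeholders:

```shell
#!/bin/sh
# Hypothetical sketch: map a /dev node to the LXD device type it needs.
devtype() {
    if [ -c "$1" ]; then
        echo unix-char
    elif [ -b "$1" ]; then
        echo unix-block
    else
        echo unknown
    fi
}

dev=/dev/null    # stand-in; a kernel driver might create e.g. /dev/ttyUSB0
echo "lxc config device add c1 mydev $(devtype "$dev") path=$dev"
```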

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Michael Hall

Late last year Amazon introduced a new EC2 image customized for Machine Learning (ML) workloads. To make things easier for data scientists and researchers, Amazon worked on including a selection of ML libraries in these images so they wouldn't have to go through the process of downloading and installing them (and often times building them) themselves.

But while this saved work for the researchers, it was no small task for Amazon's engineers. To keep offering the latest version of these libraries they had to repeat this work every time there was a new release, which was quite often for some of them. Worst of all, they didn't have a ready-made way to update those libraries on instances that were already running!

By this time they'd heard about Snaps and the work we've been doing with them in the cloud, so they asked if it might be a solution to their problems. Normally we wouldn't snap libraries like this; we would encourage applications to bundle them into their own Snap package. But these libraries had an unusual use-case: the applications that needed them weren't meant to be distributed. Instead each application would exist to analyze a specific data set for a specific person. So as odd as it may sound, the application developer was the end user here, and the library was the end product, which made it fit the Snap use case.

To get them started I worked on developing a proof of concept based on MXNet, one of their most used ML libraries. The source code for it is part C++, part Python, and Snapcraft makes working with both together a breeze, even with the extra preparation steps needed by MXNet's build instructions. My snapcraft.yaml could first compile the core library and then build the Python modules that wrap it, pulling in dependencies from the Ubuntu archives and Pypi as needed.

This was all that was needed to provide a consumable Snap package for MXNet. After installing it you would just need to add the snap’s path to your LD_LIBRARY_PATH and PYTHONPATH environment variables so it would be found, but after that everything Just Worked! For an added convenience I provided a python binary in the snap, wrapped in a script that would set these environment variables automatically, so any external code that needed to use MXNet from the snap could simply be called with /snap/bin/mxnet.python rather than /usr/bin/python (or, rather, just mxnet.python because /snap/bin/ is already in PATH).
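The wrapper's job can be sketched roughly as follows. The /snap/mxnet/current layout used here is an assumption for illustration, not the actual structure of the MXNet snap:

```shell
#!/bin/sh
# Hypothetical sketch of what such a wrapper does: prepend the snap's
# library and Python module paths before handing off to the interpreter.
mxnet_env() {
    root="$1"
    echo "LD_LIBRARY_PATH=$root/lib PYTHONPATH=$root/lib/python"
}

# A real wrapper would follow this with something like:
#   env $(mxnet_env "$root") exec python "$@"
mxnet_env "/snap/mxnet/current"
```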

I’m now working with upstream MXNet to get them building regular releases of this snap package to make it available to Amazon’s users and anyone else. The Amazon team is also seeking similar snap packages from their other ML libraries. If you are a user or contributor to any of these libraries, and you want to make it easier than ever for people to get the latest and greatest versions of them, let’s get together and make it happen! My MXNet example linked to above should give you a good starting point, and we’re always happy to help you with your snapcraft.yaml in #snapcraft on rocket.ubuntu.com.

If you're just curious to try it out yourself, you can download my snap and then follow along with the MXNet tutorial, using the above-mentioned mxnet.python for your interactive python shell.

Alan Griffiths

miral gets cut & paste

For some time now I’ve been intending to investigate the cut & paste mechanisms in the Unity8/Mir stack with the intention of ensuring they are supported in MirAL.

I’ve never had the time to do this, so I was surprised to discover that cut & paste is now working! (At least on Zesty.)

I assume that this is piggy-backing off the support being added to enable the “experimental” Unity8 desktop session, so I hope that this “magic” continues to work.

Michael Hall

Java is a well established language for developing web applications, in no small part because of its industry standard framework for building them: Servlets and JSP. Another important part of this standard is the Web Archive, or WAR, file format, which defines how to provide a web application's executables and how they should be run in a way that is independent of the application server that will be running them.

WAR files make life easier for developers by separating the web application from the web server. Unfortunately this doesn't actually make it easier to deploy a webapp; it only shifts some of the burden off of the developers and onto the user, who still needs to set up and configure an application server to host it. One popular option is Apache's Tomcat webapp server, which is both lightweight and packs enough features to support the needs of most webapps.

And here is where Snaps come in. By combining both the application and the server into a single, installable package you get the best of both, and with a little help from Snapcraft you don’t have to do any extra work.

Snapcraft supports a modular build configuration by having multiple “parts”, each of which provides some aspect of your complete runtime environment in a way that is configurable and reusable. This is extended to a feature called “remote parts”, which are pre-defined parts you can easily pull into your snap by name. It's this combination of reusable and remote parts that is going to make snapping up java web applications incredibly easy.

The remote part we are going to use is the “tomcat” part, which will build the Tomcat application server from upstream source and bundle it in your snap ready to go. All that you, as the web developer, need to provide is your .war file. Below is a simple snapcraft.yaml that will bundle Tomcat's “sample” war file into a self-contained snap package.

name: tomcat-sample
version: '0.1'
summary: Sample webapp using tomcat part
description: |
 This is a basic webapp snap using the remote Tomcat part

grade: stable
confinement: strict

parts:
  my-part:
    plugin: dump
    source: .
    organize:
      sample.war: ./webapps/sample.war
    after: [tomcat]

apps:
  tomcat:
    command: tomcat-launch
    daemon: simple
    plugs: [network-bind]

The important bits are the ones in bold; let's go through them one at a time, starting with the part named “my-part”. This uses the simple “dump” plugin, which is just going to copy everything in its source (the current directory in this case) into the resulting snap. Here we have just the sample.war file, which we are going to move into a “webapps” directory, because that is where the Tomcat part is going to look for war files.

Now for the magic: by specifying that “my-part” should come after a “tomcat” part (using after: [tomcat]) that isn't defined elsewhere in the snapcraft.yaml, we trigger Snapcraft to look for a remote part by that same name, which conveniently exists for us to use. This remote part will do two things: first it will download and build the Tomcat source code, and then it will generate a “tomcat-launch” shell script that we'll use later. These two parts, “my-part” and “tomcat”, will be combined in the final snap, with the Tomcat server automatically knowing about and installing the sample.war webapp.

The “apps” section of the snapcraft.yaml defines the application to be run. In this simple example all we need to execute is the “tomcat-launch” script that was created for us. This sets up the Tomcat environment variables and runtime directories so that it can run fully confined within the snap. And by declaring it to be a simple daemon we are additionally telling it to auto-start as soon as it’s installed (and after any reboot) which will be handled by systemd.

Now when you run “snapcraft” on this config, you will end up with the file tomcat-sample_0.1_amd64.snap which contains your web application, the Tomcat application server, and a headless Java JRE to run it all. That way the only thing your users need to do to run your app is to “snap install tomcat-sample” and everything will be up and running at http://localhost:8080/sample/ right away, no need to worry about installing dependencies or configuring services.


If you have a webapp that you currently deploy as a .war file, you can snap it yourself in just a few minutes, use the snapcraft.yaml defined above and replace the sample data with your own. To learn more about Snaps and Snapcraft in general you can follow this tutorial as well as learning how to publish your new snap to the store.
