Canonical Voices

Colin Watson

Well, it’s been a while!  Since we last posted a general update, the Launchpad team has become part of Canonical’s Online Services department, so some of our efforts have gone into other projects.  There’s still plenty happening with Launchpad, though, and here’s a changelog-style summary of what we’ve been up to.

Answers

  • Lock down question title and description edits from random users
  • Prevent answer contacts from editing question titles and descriptions
  • Prevent answer contacts from editing FAQs

Blueprints

  • Optimise SpecificationSet.getStatusCountsForProductSeries, fixing Product:+series timeouts
  • Add sprint deletion support (#2888)
  • Restrict blueprint count on front page to public blueprints

Build farm

  • Add fallback if nominated architecture-independent architecture is unavailable for building (#1530217)
  • Try to load the nbd module when starting launchpad-buildd (#1531171)
  • Default LANG/LC_ALL to C.UTF-8 during binary package builds (#1552791)
  • Convert buildd-manager to use a connection pool rather than trying to download everything at once (#1584744)
  • Always decode build logtail as UTF-8 rather than guessing (#1585324)
  • Move non-virtualised builders to the bottom of /builders; Ubuntu is now mostly built on virtualised builders
  • Pass DEB_BUILD_OPTIONS=noautodbgsym during binary package builds if we have not been told to build debug symbols (#1623256)

Bugs

  • Use standard milestone ordering for bug task milestone choices (#1512213)
  • Make bug activity records visible to anonymous API requests where appropriate (#991079)
  • Use a monospace font for “Add comment” boxes for bugs, to match how the comments will be displayed (#1366932)
  • Fix BugTaskSet.createManyTasks to map Incomplete to its storage values (#1576857)
  • Add basic GitHub bug linking (#848666)
  • Prevent rendering of private team names in bugs feed (#1592186)
  • Update CVE database XML namespace to match current file on cve.mitre.org
  • Fix Bugzilla bug watches to support new versions that permit multiple aliases
  • Sort bug tasks related to distribution series by series version rather than series name (#1681899)

Code

  • Remove always-empty portlet from Person:+branches (#1511559)
  • Fix OOPS when editing a Git repository with over a thousand refs (#1511838)
  • Add Git links to DistributionSourcePackage:+branches and DistributionSourcePackage:+all-branches (#1511573)
  • Handle prerequisites in Git-based merge proposals (#1489839)
  • Fix OOPS when trying to register a Git merge with a target path but no target repository
  • Show an “Updating repository…” indication when there are pending writes
  • Launchpad’s Git hosting backend is now self-hosted
  • Fix setDefaultRepository(ForOwner) to cope with replacing an existing default (#1524316)
  • Add “Configure Code” link to Product:+git
  • Fix Git diff generation crash on non-ASCII conflicts (#1531051)
  • Fix stray link to +editsshkeys on Product:+configure-code when SSH keys were already registered (#1534159)
  • Add support for Git recipes (#1453022)
  • Fix OOPS when adding a comment to a Git-based merge proposal without using AJAX (#1536363)
  • Fix shallow git clones over HTTPS (#1547141)
  • Add new “Code” portlet on Product:+index to make it easier to find source code (#531323)
  • Add missing table around widget row on Product:+configure-code, so that errors are highlighted properly (#1552878)
  • Sort GitRepositorySet.getRepositories API results to make batching reliable (#1578205)
  • Show recent commits on GitRef:+index
  • Show associated merge proposals in Git commit listings
  • Show unmerged and conversation-relevant Git commits in merge proposal views (#1550118)
  • Implement AJAX revision diffs for Git
  • Fix scanning branches with ghost revisions in their ancestry (#1587948)
  • Fix decoding of Git diffs involving non-UTF-8 text that decodes to unpaired surrogates when treated as UTF-8 (#1589411)
  • Fix linkification of references to Git repositories (#1467975)
  • Fix +edit-status for Git merge proposals (#1538355)
  • Include username in git+ssh URLs (#1600055)
  • Allow linking bugs to Git-based merge proposals (#1492926)
  • Make Person.getMergeProposals have a constant query count on the webservice (#1619772)
  • Link to the default git repository on Product:+index (#1576494)
  • Add Git-to-Git code imports (#1469459)
  • Improve preloading of {Branch,GitRepository}.{landing_candidates,landing_targets}, fixing various timeouts
  • Export GitRepository.getRefByPath (#1654537)
  • Add GitRepository.rescan method, useful in cases when a scan crashed

Infrastructure

  • Launchpad’s SSH endpoints (bazaar.launchpad.net, git.launchpad.net, upload.ubuntu.com, and ppa.launchpad.net) now support newer key exchange and MAC algorithms, allowing compatibility with OpenSSH >= 7.0 (#1445619)
  • Make cross-referencing code more efficient for large numbers of IDs (#1520281)
  • Canonicalise path encoding before checking a librarian TimeLimitedToken (#677270)
  • Fix Librarian to generate non-cachable 500s on missing storage files (#1529428)
  • Document the standard DELETE method in the apidoc (#753334)
  • Add a PLACEHOLDER account type for use by SSO-only accounts
  • Add support to +login for acquiring discharge macaroons from SSO via an OpenID exchange (#1572605)
  • Allow managing SSH keys in SSO
  • Re-raise unexpected HTTP errors when talking to the GPG key server
  • Ensure that the production dump is usable before destroying staging
  • Log SQL statements as Unicode to avoid confusing page rendering when the visible_render_time flag is on (#1617336)
  • Fix the librarian to fsync new files and their parent directories
  • Handle running Launchpad from a Git working tree
  • Handle running Launchpad on Ubuntu 16.04 (upgrade currently in progress)
  • Fix delete_unwanted_swift_files to not crash on segments (#1642411)
  • Update database schema for PostgreSQL 9.5 and 9.6
  • Check fingerprints of keys received from the keyserver rather than trusting it implicitly

Registry

  • Make public SSH key records visible to anonymous API requests (#1014996)
  • Don’t show unpublished packages or package names from private PPAs in search results from the package picker (#42298, #1574807)
  • Make Person.time_zone always be non-None, allowing us to easily show the edit widget even for users who have never set their time zone (#1568806)
  • Let latest questions, specifications and products be efficiently calculated
  • Let project drivers edit series and productreleases, as series drivers can; project drivers should have series driver power over all series
  • Fix misleading messages when joining a delegated team
  • Allow team privacy changes when referenced by CodeReviewVote.reviewer or BugNotificationRecipient.person
  • Don’t limit Person:+related-projects to a single batch

Snappy

  • Add webhook support for snaps (#1535826)
  • Allow deleting snaps even if they have builds
  • Provide snap builds with a proxy so that they can access external network resources
  • Add support for automatically uploading snap builds to the store (#1572605)
  • Update latest snap builds table via AJAX
  • Add option to trigger snap builds when top-level branch changes (#1593359)
  • Add processor selection in new snap form
  • Add option to automatically release snap builds to store channels after upload (#1597819)
  • Allow manually uploading a completed snap build to the store
  • Upload *.manifest files from builders as well as *.snap (#1608432)
  • Send an email notification for general snap store upload failures (#1632299)
  • Allow building snaps from an external Git repository
  • Move upload to FAILED if its build was deleted (e.g. because of a deleted snap) (#1655334)
  • Consider snap/snapcraft.yaml and .snapcraft.yaml as well as snapcraft.yaml for new snaps (#1659085)
  • Add support for building snaps with classic confinement (#1650946)
  • Fix builds_for_snap to avoid iterating over an unsliced DecoratedResultSet (#1671134)
  • Add channel track support when uploading snap builds to the store (contributed by Matias Bordese; #1677644)

Soyuz (package management)

  • Remove some more uses of the confusing .dsc component; add the publishing component to SourcePackage:+index in compensation
  • Add include_meta option to SPPH.sourceFileUrls, paralleling BPPH.binaryFileUrls
  • Kill debdiff after ten minutes or 1GiB of output by default, and make sure we clean up after it properly (#314436)
  • Fix handling of << and >> dep-waits
  • Allow PPA admins to set external_dependencies on individual binary package builds (#671190)
  • Fix NascentUpload.do_reject to not send an erroneous Accepted email (#1530220)
  • Include DEP-11 metadata in Release file if it is present
  • Consistently generate Release entries for uncompressed versions of files, even if they don’t exist on the filesystem; don’t create uncompressed Packages/Sources files on the filesystem
  • Handle Build-Depends-Arch and Build-Conflicts-Arch from SPR.user_defined_fields in Sources generation and SP:+index (#1489044)
  • Make index compression types configurable per-series, and add xz support (#1517510)
  • Use SHA-512 digests for GPG signing where possible (#1556666)
  • Re-sign PPAs with SHA-512
  • Publish by-hash index files (#1430011)
  • Show SHA-256 checksums rather than MD5 on DistributionSourcePackageRelease:+files (#1562632)
  • Add a per-series switch allowing packages in supported components to build-depend on packages in unsupported components, used for Ubuntu 16.04 and later
  • Expand archive signing to kernel modules (contributed by Andy Whitcroft; #1577736)
  • Uniquely index PackageDiff(from_source, to_source) (part of #1475358)
  • Handle original tarball signatures in source packages (#1587667)
  • Add signed checksums for published UEFI/kmod files (contributed by Andy Whitcroft; #1285919)
  • Add support for named authentication tokens for private PPAs
  • Show explicit add-apt-repository command on Archive:+index (#1547343)
  • Use a per-archive OOPS timeline in archivepublisher scripts
  • Link to package versions on DSP:+index using fmt:url rather than just a relative link to the version, to avoid problems with epochs (#1629058)
  • Fix RepositoryIndexFile to gzip without timestamps
  • Fix Archive.getPublishedBinaries API call to have a constant query count (#1635126)
  • Include the package name in package copy job OOPS reports and emails (#1618133)
  • Remove headers from Contents files (#1638219)
  • Notify the Changed-By address for PPA uploads if the .changes contains “Launchpad-Notify-Changed-By: yes” (#1633608)
  • Accept .debs containing control.tar.xz (#1640280)
  • Add Archive.markSuiteDirty API call to allow requesting that a given archive/suite be published
  • Don’t allow cron-control to interrupt publish-ftpmaster part-way through (#1647478)
  • Optimise non-SQL time in PublishingSet.requestDeletion (#1682096)
  • Store uploaded .buildinfo files (#1657704)

Translations

  • Allow TranslationImportQueue to import entries from file objects rather than having to read arbitrarily-large files into memory (#674575)

Miscellaneous

  • Use gender-neutral pronouns where appropriate
  • Self-host the Ubuntu webfonts (#1521472)
  • Make the beta and privacy banners float over the rest of the page when scrolling
  • Upgrade to pytz 2016.4 (#1589111)
  • Publish Launchpad’s code revision in an X-Launchpad-Revision header
  • Truncate large picker search results rather than refusing to display anything (#893796)
  • Sync up the lists footer with the main webapp footer a bit (#1679093)

Read more
facundo


Those of us who put together presentations showing little programs or small chunks of code have always faced one problem: how to show that code properly coloured?

By "properly coloured" I don't mean painted up like a teenager going out dancing, or decorated with little flowers, suns, and/or warplanes, but what is typical in the programming world, where editors give different colours to the words that make up the code depending on what kind of word they are: one colour for variables, another for strings, another for function names, another for...

I won't go into detail about what that colouring is (what in English we call "syntax highlighting"), but here's an example:

Example of coloured code

Anyway, back to getting coloured code into LibreOffice. I discussed it quite a bit at the time with several people; the best option seemed to be capturing an image of the code and inserting that, but it's awful because it doesn't survive the slightest resize, and if on top of that you need to change anything in that text, it's impossible.

While searching I also found Coooder, a LibreOffice extension that did exactly that. The verb "did" in the previous sentence is in the past tense because it only works with LibreOffice 3.3 through 3.6 (I currently have 5.1).

I finally found a way to do it! It's not the most direct one, but the result is what I was looking for: coloured text inside LibreOffice. Great!

The steps fall into two big parts:

  • generate a document in RTF format
  • insert that RTF doc into the presentation

How to generate the RTF doc (a scripted sketch follows these steps):

  • Open the code with gvim
  • Type :TOhtml, which will open another window with the HTML corresponding to our coloured text.
  • Type :saveas /tmp/cod.html, which will save that HTML to the path specified there
  • Close any open LibreOffice instance (otherwise the next step fails :/).
  • From a terminal, run unoconv -f rtf /tmp/cod.html, which will leave us a file at /tmp/cod.rtf containing our code in RTF format.
  • Open LibreOffice Impress
  • Go to Menu, Insert, File; a couple of clicks on "next" and the text is in.
  • Select the text we just inserted, and change its font to a monospaced one.
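
For repeat use, here is a hypothetical Python sketch automating the generation steps, assuming vim (with its bundled :TOhtml command) and unoconv are installed; the file names simply mirror the steps above, and LibreOffice must not be running when it executes:

#!/usr/bin/env python3
# Hypothetical automation of the gvim + unoconv steps above.
import subprocess
import sys

source = sys.argv[1]   # the code file to colour
html = "/tmp/cod.html"

# Have vim render the syntax-highlighted buffer as HTML and save it.
subprocess.run(["vim", "-c", "syntax on", "-c", "TOhtml",
                "-c", "saveas! " + html, "-c", "qa!", source], check=True)

# Convert the HTML to RTF; unoconv leaves the result at /tmp/cod.rtf.
subprocess.run(["unoconv", "-f", "rtf", html], check=True)
print("RTF ready at /tmp/cod.rtf")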

Voilà!

Read more
Anthony Dillon

Over the past year, a change has emerged in the design team here at Canonical: we’ve started designing our websites and apps in public GitHub repositories, and therefore sharing the entire design process with the world.

One of the main things we wanted to improve was the design sign-off process, while making it more visible to developers which design was the final one among numerous iterations and inconsistently labelled files and folders.

Here is the process we developed and have been using on multiple projects.

The process

Design work items are initiated by creating a GitHub issue on the design repository relating to the project. Each project consists of two repositories: one for the code base and another for designs. The work item issue contains a short descriptive title followed by a detailed description of the problem or feature.

Once the designer has created one or more designs to present, they upload them to the issue with a description. Each image is titled with a version number to make it easy to reference in subsequent comments.

Whenever the designer updates the GitHub issue everyone who is watching the project receives an email update. It is important for anyone interested or with a stake in the project to watch the design repositories that are relevant to them.

The designer can continue to iterate on the task safe in the knowledge that everyone can see the designs in their own time and provide feedback if needed. The feedback that comes in at this stage is welcomed, as early feedback is usually better than late.

As iterations of the design are created, the designer simply adds them to the existing issue with a comment of the changes they made and any feedback from any review meetings.

Table with actions design from MAAS project

When the design is finalised a pull request is created and linked to the GitHub issue, by adding “Fixes #111” (where #111 is the number of the original issue) to the pull request description. The pull request contains the final design in a folder structure that makes sense for the project.

Just like with code, the pull request is then approved by another designer or the person with the final say. This may seem like an extra step, but it allows another person to look through the issue and make sure the design completes the design goal. On smaller teams, this pull request can be approved by a stakeholder or developer.

Once the pull request is approved it can be merged. This will close and archive the issue and add the final design to the code section of the design repository.

That’s it!

Benefits

If all designers and developers of a project subscribe to the design repository, they will be included in the iterative design process with plenty of email reminders. This increases the visibility of designs in progress to stakeholders, developers and other designers, allowing for wider feedback at all stages of the design process.

Another benefit of this process is having a full history of decisions made and the evolution of a design all contained within a single page.

If your project is open source, this process automatically makes your designs available to your community or anyone interested in the product. This means that anyone who wants to contribute to the project has access to the same information and assets as the team members.

The code section of the design repository becomes the home for all signed off designs. If a developer is ever unsure as to what something should look like, they can reference the relevant folder in the design repository and be confident that it is the latest design.

Canonical is largely a company of remote workers, and sometimes conversations are not documented, which means only some people are aware of the decisions made and the conversations around them. This design process has helped with that issue, as designs and discussions are all in a single place, with nicely laid out emails for every change that anyone may be interested in.

Conclusion

This process has helped our team improve velocity and transparency. Is this something you’ve considered or have done in your own projects? Let us know in the comments; we’d love to hear of any ways we can improve the process.

Read more
Alan Griffiths

Fairer than Death

The changes at Canonical have had an effect both on the priorities for the Mir project and on the resources available for future development. We have been meeting to make new plans. In short:

Mir is alive: there are Canonical IoT projects that use it. Work will continue on Mir to support these, and on cleaning up and upstreaming the distro patches Ubuntu carries to support Mir.

Canonical are no longer working on a desktop environment or phone shell. However, we will maintain the existing support Mir has for compositing and window management. (We’re happy to receive PRs in support of similar efforts.)

Read more
Colin Ian King

Tracking CoverityScan issues on Linux-next

Over the past 6 months I've been running static analysis on linux-next with CoverityScan on a regular basis (to find new issues and fix some of them) as well as keeping a record of the defect count.


Since the beginning of September over 2000 defects have been eliminated by a host of upstream developers and the steady downward trend of outstanding issues is good to see.  A proportion of the outstanding defects are false positives or issues where the code is being overly zealous, for example, bounds checking where some conditions can never happen. Considering there are millions of lines of code, the defect rate is about average for such a large project.

I plan to keep the static analysis running long term and I'll try and post stats every 6 months or so to see how things are progressing.

Read more
Alan Griffiths

unity8-team

In previous posts I’ve alluded to the ecosystem of projects developed to support Unity8. While I have come across most of them during my time with Canonical, I wouldn’t be confident of creating a complete list.

But some of the Unity8 developers (mostly Pete Woods) have been working to make it easy to identify these projects. They have been copied onto github:

https://github.com/unity8-team

This is simply a snapshot for easy reference, not the start of another fork.

Read more
facundo


Exaile has been my go-to music player for quite a while now. It has everything I want, and the rest of the things I don't want aren't intrusive and don't bother me (I don't have to fight the program to use it, let's say).

And it's written in Python :). That's an advantage when it comes to debugging a problem (and if I remember correctly I've sent the odd patch for some bug...).

Exaile

With Ubuntu's comings and goings on the desktop, at some point I had trouble using the official or latest released version, and back then I solved it by running it straight from the project. When I decided to do that I tried master directly, and it worked for me, so I stayed there.

It's a bit risky (stability-wise) because you're running the latest thing the developers commit, but so far we're (almost) fine; bear in mind that I don't update it all the time, only when I'm looking for some specific fix that has landed.

The other day I saw they had fixed something that annoyed me (just a detail, related to dragging songs in the playlist), and I did git pull to update to the latest. Some things improved (precisely what I was looking for, great), but a few minutes later I realised that my keyboard hotkey for pausing and restarting the music no longer worked.

I'm very used to hitting ctrl-shift-space to make the music stop, and the same keystroke to make it start again, and suddenly it didn't work anymore :(.

I started investigating and realised that Exaile no longer had the gnomemmkeys plugin, which is what lets it "receive the multimedia keys you press" (in heavy quotes, because that's not the most accurate description of what happens, but it conveys the idea).

Searching (in the project itself) for when that disappeared, I found a commit referencing mpris2, which turns out to be a D-Bus interface for controlling audio/video players.

Caution, geek

Learning about this technology I found there was a command-line mpris client, so I installed it (sudo apt-get install mpris-remote) and set up the system so that ctrl-shift-space runs mpris-remote pause.

Note: the command above sends the "pause" signal, which pauses and "unpauses"; careful not to confuse it with "play", which starts the next song (it doesn't resume from where it was).

Note 2: after I had implemented this, I was told on the Exaile IRC channel that I could simply run exaile --play-pause from the command line. I kept my original implementation anyway, because it's faster (it only sends a signal; it doesn't start up a whole music player just to send one).
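
Out of curiosity, the same toggle can be sent from Python over D-Bus directly, with no external client. Here is a minimal hypothetical sketch using dbus-python; the object path and interface are fixed by the MPRIS2 spec, while the bus name org.mpris.MediaPlayer2.exaile is an assumption about how Exaile registers itself:

import dbus

# MPRIS2 fixes the object path and player interface; the bus name is
# per-player (org.mpris.MediaPlayer2.exaile is assumed for Exaile).
bus = dbus.SessionBus()
player = bus.get_object('org.mpris.MediaPlayer2.exaile',
                        '/org/mpris/MediaPlayer2')
iface = dbus.Interface(player, dbus_interface='org.mpris.MediaPlayer2.Player')

# PlayPause toggles between playing and paused; the same semantics as the
# "pause" signal described above, not "play".
iface.PlayPause()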

Read more
Alan Griffiths

Why Mir

Mir provides a framework for integration between three parts of the graphics stack.

These parts are:

  1. The drivers that control the hardware
  2. The desktop environment or shell
  3. Applications with a GUI

Mir currently works with mesa-kms graphics, mesa-x11 graphics or android HWC graphics (work has been done on vulkan graphics and is well beyond proof-of-concept but hasn’t been released).

Switching the driver support doesn’t impact the shell or applications. (Servers will run unchanged on mesa, on X11 and android.) Mir provides “abstractions” so that, for example, user input or display configuration changes look the same to servers and client applications regardless of the drivers being used.

Mir supports writing a display server by providing sensible defaults for (e.g.) positioning tooltips without imposing a desktop style. It has always carried example programs demonstrating how to do “fullscreen” (kiosk style), traditional “floating windows” and “tiling” window management to ensure we don’t “bake in” too many policies.

Because the work has been funded by Canonical, features that were important to Ubuntu Phone and the Unity8 desktop have progressed faster and are more complete than others.

When Mir was started we needed a mechanism for client-server communications (and Wayland wasn’t in the state it is today). We did something that worked well enough (libmirclient) and, because it’s just a small, intentionally isolated part of the whole, we knew we could change it later. We never imagined what a “big deal” that decision would become.


[added]

Seeing the initial reactions I can tell I made a farce of explaining this. I’ll try again:

For the author of a shell, what Mir provides is subtly but significantly different from a set of libraries you can build on: it provides a default shell that can be customized.

Read more

This is the third in a series of blog posts on creating an asynchronous D-Bus service in python. For the initial entry, go here. For the previous entry, go here.

Last time we transformed our base synchronous D-Bus service to include asynchronous calls in a rather naive way. In this post, we’ll refactor those asynchronous calls to include D-Bus signals; codewise, we’ll pick up right where we left off after part 2: https://github.com/larryprice/python-dbus-blog-series/tree/part2. Of course, all of today’s code can be found in the same project with the part3 tag: https://github.com/larryprice/python-dbus-blog-series/tree/part3.

Sending Signals

We can fire signals from within our D-Bus service to notify clients of tasks finishing, progress updates, or data availability. Clients subscribe to these signals and act accordingly. Let’s start by changing the signature of the slow_result method of RandomData to be a signal:

random_data.py
# ...
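# Calling this runs the (empty) body, then dbus-python emits the signal with these args.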
@dbus.service.signal("com.larry_price.test.RandomData", signature='ss')
def slow_result(self, thread_id, result):
    pass

We’ve replaced the method decorator with a signal, and we’ve swapped out the guts of this method for a pass; the body itself does nothing, as dbus-python emits the signal with the given arguments whenever the method is called. We now need a way to fire this signal, which we can do from the SlowThread class we were using before. When creating a SlowThread in the slow method, we can pass in this signal as a callback. At the same time, we can remove the threads list we used to use to keep track of existing SlowThread objects.

random_data.py
class RandomData(dbus.service.Object):
    def __init__(self, bus_name):
        super().__init__(bus_name, "/com/larry_price/test/RandomData")

        random.seed()

    @dbus.service.method("com.larry_price.test.RandomData",
                         in_signature='i', out_signature='s')
    def slow(self, bits=8):
        thread = SlowThread(bits, self.slow_result)
        return thread.thread_id

    # ...

Now we can make some updates to SlowThread. The first thing we should do is add a new parameter callback and store it on the object. Because slow_result no longer checks the done property, we can remove that and the finished event. Instead of calling set on the event, we can now simply call the callback we stored with the current thread_id and result. We end up with a couple of unused variables here, so I’ve also gone ahead and refactored the work method on SlowThread to be a little cleaner.

# ...

class SlowThread(object):
    def __init__(self, bits, callback):
        self._callback = callback
        self.result = ''

        self.thread = threading.Thread(target=self.work, args=(bits,))
        self.thread.start()
        self.thread_id = str(self.thread.ident)

    def work(self, bits):
        num = ''

        while True:
            num += str(random.randint(0, 1))
            bits -= 1
            time.sleep(1)

            if bits <= 0:
                break

        self._callback(self.thread_id, str(int(num, 2)))

And that’s it for the service-side. Any callers will need to subscribe to our slow_result signal, call our slow method, and wait for the result to come in.

Receiving Signals

We need to make some major changes to our client program in order to receive signals. We’ll need to introduce a main loop, which we’ll spin up in a separate thread, for communicating on the bus. The way I like to do this is with a context manager, so we can guarantee that the loop will be exited when the program exits. We’ll move the logic we previously used in client to get the RandomData object into a private member method called _setup_object, which we’ll call on context entry after creating the loop. On context exit, we’ll simply call quit on the loop.

client
# Encapsulate calling the RandomData object on the session bus with a main loop
import dbus, dbus.exceptions, dbus.mainloop.glib
import threading
from gi.repository import GLib
class RandomDataClient(object):
    def __enter__(self):
        self._setup_dbus_loop()
        self._setup_object()

        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self._loop.quit()
        return True

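    # Run the GLib main loop on a background thread so signal callbacks are delivered.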
    def _setup_dbus_loop(self):
        dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
        self._loop = GLib.MainLoop()

        self._thread = threading.Thread(target=self._loop.run)
        self._thread.start()

    def _setup_object(self):
        try:
            self._bus = dbus.SessionBus()
            self._random_data = self._bus.get_object("com.larry-price.test",
                                                     "/com/larry_price/test/RandomData")
        except dbus.exceptions.DBusException as e:
            print("Failed to initialize D-Bus object: '%s'" % str(e))
            sys.exit(2)

We can add methods on RandomDataClient to encapsulate quick and slow. quick is easy - we’ll just return self._random_data.quick(bits). slow, on the other hand, will take a bit of effort. We’ll need to subscribe to the slow_result signal, giving a callback for when the signal is received. Since we want to wait for the result here, we’ll create a threading.Event object and wait for it to be set, which we’ll do in our handler. The handler, which we’ll call _finished, will validate that it has received the right result based on the current thread_id and then set the result on the RandomDataClient object. After all this, we’ll remove the signal listener from our bus connection and return the final result.

client
class RandomDataClient(object):
    # ...

    def quick(self, bits):
        return self._random_data.quick(bits)

    def _finished(self, thread_id, result):
        if thread_id == self._thread_id:
            self._result = result
            self._done.set()

    def slow(self, bits):
        self._done = threading.Event()
        self._thread_id = None
        self._result = None

        signal = self._bus.add_signal_receiver(path="/com/larry_price/test/RandomData", handler_function=self._finished,
                                               dbus_interface="com.larry_price.test.RandomData", signal_name='slow_result')
        self._thread_id = self._random_data.slow(bits)
        self._done.wait()
        signal.remove()

        return self._result

Now we’re ready to actually call these methods. We’ll wrap our old calling code with the RandomDataClient context manager, and we’ll directly call the methods as we did before on the client:

client
# ...

# Call the appropriate method with the given number of bits
with RandomDataClient() as client:
    if args.slow:
        print("Your random number is: %s" % client.slow(int(args.bits)))
    else:
        print("Your random number is: %s" % client.quick(int(args.bits)))

This should have feature-parity with our part 2 code, but now we don’t have to deal with an infinite loop waiting for the service to return.

Next time

We have a working asynchronous D-Bus service using signals. Next time I’d like to dive into forwarding command output from a D-Bus service to a client.

As a reminder, the end result of our code in this post is MIT Licensed and can be found on Github: https://github.com/larryprice/python-dbus-blog-series/tree/part3.

Read more
Alan Griffiths

A new hope

Disclaimer: With the changes in progress at Canonical I am not currently in a position to make any commitment about the future of Mir.

It is no secret that I think there’s value to the Mir project and I’d like it to be a valued contribution to the free software landscape.

I’ve written elsewhere about my efforts to make it easy to use Mir for making desktop, phone and “Internet of Things” shells; I won’t repeat that here beyond saying “have a look”.

It is important to me that Mir is GPL. That makes it a contribution to a “commons” that I care about.

The dream of convergence dies hard. Canonical may have abandoned it, but I hope it survives. A lot of the issues have been tackled and knowledge gained.

I read that UBPorts will be using Mir “for the time being”. They sensibly don’t want to maintain Mir and are planning a migration to an (unidentified) Wayland compositor.

However, we can also see from G+ that Mark Shuttleworth is planning to keep “investing in Mir” for the Internet of Things.

This opens up an interesting possibility: there’s no obvious technical reason that Mir could not support clients using libwayland directly. It would take some research to confirm this but I can’t foresee anything technical blocking such an approach.

There could be some benefits to Canonical from this: the current design of Mir client-server interaction makes sense in a traditional Debian (or RPM) repository based world, but less so for Snap (or Flatpak).

In a traditional environment where the libraries are a shared resource, updates simply need to maintain ABI compatibility to work with existing clients. That makes it possible to keep the Mir server and client libraries “in step” while making incompatible changes to the communications protocol.

However, with Snaps the client and server snaps package the libraries they use with the applications. That presents issues for keeping them in step. These issues are soluble but create an additional burden for Mir server and client developers. Using a protocol-based solution would ease this burden.

For the wider community native support for Wayland clients in Mir would make the task of toolkit maintainers and others simpler.

If Canonical could be persuaded to add this feature to Mir and/or maintain it in the project would anyone care?

Is anyone else willing to help with such a feature?

Read more
Alan Griffiths

The end of a dream?

We read in the press that Canonical has pulled out of the dream of “convergence”. With that the current support for a whole family of related projects dies.

That doesn’t mean that the dream has to die, but it does mean changes.

I hope the dream doesn’t die, because Canonical has done a lot of the “heavy lifting” – the foundations are laid, the walls are up, we have windows, plumbing and power. But we’re lacking the paintwork and there’s no buyer.

My expertise is developing working software and I’m going to donate some of that to the dream.

Stable Intermediate Forms is an important principle – keep things working while making changes. If you throw away a large chunk intending to replace it you’ll find re-integration really, really hard. Do things gradually!

So, don’t simply fork Unity8 and plan to get it working on Wayland. You’ll end up with a single wall that falls over before you’ve replaced the rest of the building. (Sorry, I went back to “metaphor”.)

Take the whole infrastructure etc. and keep it in place until any replacements are demonstrably ready.

The Elephant in the room

Many have issues with the way Mir has been presented to the community, but in the opinion of the developers it is a good piece of software and not inherently incompatible with Wayland. (Just look at what the developers have written about it, especially the early posts that addressed this directly.)

There are two plausible evolutions of the dream that reconcile Mir with Wayland.

Plan 1: (my recommendation) Add support to libmirserver for Wayland clients in parallel to the existing protocol. Once this is working, either junk libmirclient or rework its interaction with libmirserver.

Plan 2: Implement an analog of QtMir/MirAL on your choice of Wayland server. Then transition Unity8 to these and junk Mir.

I can’t guarantee that my recommendation of “plan 1” isn’t biased by my history with the Mir project; clearly I know its potential better than that of competing projects, and I would find developing these easier than someone new to the code. In the end, the choice will depend on who takes on the work and what they can achieve most effectively.

Read more

This is the second in a series of blog posts on creating an asynchronous D-Bus service in python. For part 1, go here.

Last time we created a base for our asynchronous D-Bus service with a simple synchronous server/client. In this post, we’ll start from that base which can be found on Github: https://github.com/larryprice/python-dbus-blog-series/tree/part1. Of course, all of today’s code can be found in the same project with the part2 tag: https://github.com/larryprice/python-dbus-blog-series/tree/part2.

Why Asynchronous?

Before we dive in, we need a reason to make our service asynchronous. Currently, our only D-Bus object contains a single method, quick, which lives up to its namesake and finishes very quickly. Let’s add another method to RandomData which takes a while to finish its job.

random_data.py
import dbus.service
import random
import time

class RandomData(dbus.service.Object):
    def __init__(self, bus_name):
        super().__init__(bus_name, "/com/larry_price/test/RandomData")
        random.seed()

    @dbus.service.method("com.larry_price.test.RandomData",
                         in_signature='i', out_signature='s')
    def quick(self, bits=8):
        return str(random.getrandbits(bits))

    @dbus.service.method("com.larry_price.test.RandomData",
                         in_signature='i', out_signature='s')
    def slow(self, bits=8):
        num = str(random.randint(0, 1))
        while bits > 1:
            num += str(random.randint(0, 1))
            bits -= 1
            time.sleep(1)

        return str(int(num, 2))

Note the addition of the slow method on the RandomData object. slow is a contrived implementation of building an n-bit random number by concatenating 1s and 0s, sleeping for 1 second between each iteration. This will still go fairly quickly for a small number of bits, but could take quite some time for numbers as low as 16 bits.

In order to call the new method, we need to modify our client binary. Let’s add in the argparse module and take in a new argument: --slow. Of course, --slow will instruct the program to call slow instead of quick, which we’ll add to the bottom of the program.

client
#!/usr/bin/env python3

# Take in a single optional integral argument
import sys
import argparse

arg_parser = argparse.ArgumentParser(description='Get random numbers')
arg_parser.add_argument('bits', nargs='?', default=16)
arg_parser.add_argument('-s', '--slow', action='store_true',
                        default=False, required=False,
                        help='Use the slow method')

args = arg_parser.parse_args()

# Create a reference to the RandomData object on the  session bus
import dbus, dbus.exceptions
try:
    bus = dbus.SessionBus()
    random_data = bus.get_object("com.larry-price.test", "/com/larry_price/test/RandomData")
except dbus.exceptions.DBusException as e:
    print("Failed to initialize D-Bus object: '%s'" % str(e))
    sys.exit(2)

# Call the appropriate method with the given number of bits
if args.slow:
    print("Your random number is: %s" % random_data.slow(int(args.bits)))
else:
    print("Your random number is: %s" % random_data.quick(int(args.bits)))

Now we can run our client a few times to see the result of running in slow mode. Make sure to start or restart the service binary before running these commands:

$ ./client 4
Your random number is: 2
$ ./client 4 --slow
Your random number is: 15
$ ./client 16
Your random number is: 64992
$ ./client 16 --slow
Traceback (most recent call last):
  File "./client", line 26, in <module>
    print("Your random number is: %s" % random_data.slow(int(args.bits)))
  File "/usr/lib/python3/dist-packages/dbus/proxies.py", line 70, in __call__
    return self._proxy_method(*args, **keywords)
  File "/usr/lib/python3/dist-packages/dbus/proxies.py", line 145, in __call__
    **keywords)
  File "/usr/lib/python3/dist-packages/dbus/connection.py", line 651, in call_blocking
    message, timeout)
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.

Your mileage may vary (it is a random number generator, after all), but you should eventually see a similar crash, caused by a timeout while waiting for the response from the D-Bus service. We know that this algorithm works; it just needs more time to run. Since a synchronous call won’t work here, we’ll have to switch over to more asynchronous methods…

An Asynchronous Service

At this point, we can go one of two ways. We can use the threading module to spin threads within our process, or we can use the multiprocessing module to create child processes. Child processes will be slightly pudgier, but will give us more functionality. Threads are a little simpler, so we’ll start there. We’ll create a class called SlowThread, which will do the work we used to do within the slow method. This class will spin up a thread that performs our work. When the work is finished, it will set a threading.Event that can be used to check that the work is completed. threading.Event is a cross-thread synchronization object; when the thread calls set on the Event, we know that the thread is ready for us to check the result. In our case, the done property waits briefly on the event and reports whether or not our data is ready.

random_data.py
# ...

import threading
class SlowThread(object):
    def __init__(self, bits):
        self.finished = threading.Event()
        self.result = ''

        self.thread = threading.Thread(target=self.work, args=(bits,))
        self.thread.start()
        self.thread_id = str(self.thread.ident)

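    # wait(1) blocks for up to a second; it returns True once work() has called set().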
    @property
    def done(self):
        return self.finished.wait(1)

    def work(self, bits):
        num = str(random.randint(0, 1))
        while bits > 1:
            num += str(random.randint(0, 1))
            bits -= 1
            time.sleep(1)

        self.result = str(num)
        self.finished.set()

# ...

On the RandomData object itself, we’ll initialize a new thread tracking list called threads. In slow, we’ll initialize a SlowThread object, append it to our threads list, and return the thread identifier from SlowThread. We’ll also want to add a method to try to get the result from a given SlowThread called slow_result, which will take in the thread identifier we returned earlier and try to find the appropriate thread. If the thread is finished (the event is set), we’ll remove the thread from our list and return the result to the caller.

random_data.py
# ...

class RandomData(dbus.service.Object):
    def __init__(self, bus_name):
        super().__init__(bus_name, "/com/larry_price/test/RandomData")

        random.seed()
        self.threads = []

    # ...

    @dbus.service.method("com.larry_price.test.RandomData",
                         in_signature='i', out_signature='s')
    def slow(self, bits=8):
        thread = SlowThread(bits)
        self.threads.append(thread)
        return thread.thread_id

    @dbus.service.method("com.larry_price.test.RandomData",
                         in_signature='s', out_signature='s')
    def slow_result(self, thread_id):
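        # Find the worker with this id; hand back its result only once it has finished.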
        thread = [t for t in self.threads if t.thread_id == thread_id]
        if not thread:
            return 'No thread matching id %s' % thread_id

        thread = thread[-1]
        if thread.done:
            result = thread.result
            self.threads.remove(thread)
            return result

        return ''

The last thing we need to do is update the client to use the new methods. We’ll call slow as we did before, but this time we’ll store the intermediate result as the thread identifier. Then we’ll use a while loop to poll until the result is ready.

client
# ...

if args.slow:
    import time
    thread_id = random_data.slow(int(args.bits))
    while True:
        result = random_data.slow_result(thread_id)
        if result:
            print("Your random number is: %s" % result)
            break
        time.sleep(1)

# ...

Note that this is not the smartest way to do this; more on that in the next post. Let’s give it a try!

$ ./client 4
Your random number is: 7
$ ./client 4 --slow
Your random number is: 12
$ ./client 16
Your random number is: 5192
$ ./client 16 --slow
Your random number is: 27302

Next time

This polling method works as a naive approach, but we can do better. Next time we’ll look into using D-Bus signals to make our client more asynchronous and remove our current polling implementation.

As a reminder, the end result of our code in this post is MIT Licensed and can be found on Github: https://github.com/larryprice/python-dbus-blog-series/tree/part2.

Read more
facundo

PyCamp 2017, in Baradero


Once again a PyCamp relatively close to home! That let me go by car. Well, we went by car: Diego, Nico, Edu and I. We left nice and early, and by around nine we were already there.

We spent the first two hours setting up all the infrastructure: hanging the flag, putting up the Lightsaber Antennas, configuring the network with the new router, preparing the badges and other little souvenirs, making mate, etc., etc.

At half past eleven we kicked off the big introductory talk, which was much needed because this PyCamp was the first one for a lot of the attendees. Right after it we presented all the projects we had brought and roughly voted on which ones we would like to take part in.

Then the first lunch, and the PyCamp took off at full speed.

Some people working, others in a super-basic introductory course

Working in the shade

I took part in several projects, but I put most of my time into these three:

  • Linkode: taking advantage of being face to face, Mati Barriento and I did a lot of thinking about a very big change we are making, which led us to a refactor of the database, with a data migration that we ran on the last day in production. A lot of work went into this, and it left us with a simpler model and the code ready for the next big improvement: overhauling how we handle the client side.
  • Fades: Gilgamezh and I also put in a fair bit of work here. He mainly worked on the instructional video we put together last year, which needed a lot of editing; with plenty of help from Marian they got a terrific result, which you can see here. In the project itself I landed two tiny fixes, and helped with and reviewed two branches from Juan and Facundo (a different one, not me).
  • Recordium: we didn't do much here, but I explained what the project was about, and several small improvements to make came out of that, even for the final GUI to aim at. We also touched on a security topic, where Matías told us what detail we should improve so that nobody can "inject" messages into it.

Hard at work

Talking design and learning to juggle

But apart from the projects themselves, we also had a ping-pong tournament (I got through the first round, but then lost a match in the second round and was out), the swimming pool (I even got in), the usual group photo, a little football match (on grass, barefoot!), a barbecue, the traditional PyAr meeting folded into the PyCamp, and lots and lots of chatting with different groups, seeing what they were doing, trying to toss in an idea or apply some experience.

The group photo

As activities away from the grounds, we had a guided walk one morning (with an actual guide, who told us a great deal about the past and present of Baradero and its surroundings), and a jazz festival one night (very nice, and the people at the place we were staying kindly packed us a meal so that those of us going to the festival could have dinner there).

On the last day we also made (as we hope becomes a tradition) a video in which everyone who drove a project stepped up and told what got done. It works really well as a summary for those of us who were there and as a record for those who couldn't make it (and for the future). A thousand thanks to José Luis, who always volunteers to edit the final video.

Walking around Baradero

Jazz Festival

A separate note for the PyCamp "venue" itself. Plenty of green just metres from where we were working, which was a big hall where we all fit fairly comfortably (although day to day there were always groups working in the dining room and/or outdoors). The rooms were fine (considering they were shared) and the bathrooms clean. The food was great, barbecue included, and all the special meals for vegetarians, people with unusual diets, allergies, etc. were a luxury (I asked around and everyone said it was perfect). Even the internet worked...

So, another PyCamp has come and gone. I always say it's the best event of the year, and this one was no exception. What's more, this edition, the 10th, was one of the best PyCamps!

Sword lessons

The park and the swimming pool

PS: talking about this being the tenth edition, we jotted down all the ones so far; I'll leave it here for the record...

    2008  Los Cocos, Córdoba
    2009  Los Cocos, Córdoba
    2010  Verónica, Buenos Aires
    2011  La Falda, Córdoba
    2012  Verónica, Buenos Aires
    2013  Villa Giardino, Córdoba
    2014  Villa Giardino, Córdoba
    2015  La Serranita, Córdoba
    2016  La Serranita, Córdoba
    2017  Baradero, Buenos Aires

PS2: photos! Mine and the ones Yami took (she was practically the event's official photographer).

Read more

I’ve been working on a d-bus service to replace some of the management guts of my project for a while now. We started out creating a simple service, but some of our management processes take a long time to run, causing a timeout error when calling these methods. I needed a way to run these tasks in the background and report status to any possible clients. I’d like to outline my approach to making this possible. This will be a multi-part blog series starting from the bottom: a very simple, synchronous d-bus service. By the end of this series, we’ll have a small codebase with asynchronous tasks which can be interacted with (input/output) from D-Bus clients.

All of this code is written with python3.5 on Ubuntu 17.04 (beta), is MIT licensed, and can be found on Github: https://github.com/larryprice/python-dbus-blog-series/tree/part1.

What is D-Bus?

From Wikipedia:

In computing, D-Bus or DBus (for “Desktop Bus”), a software bus, is an inter-process communication (IPC) and remote procedure call (RPC) mechanism that allows communication between multiple computer programs (that is, processes) concurrently running on the same machine.

D-Bus allows different processes to communicate indirectly through a known interface. The bus can be system-wide or user-specific (session-based). A D-Bus service will post a list of available objects with available methods which D-Bus clients can consume. It’s at the heart of much Linux desktop software, allowing processes to communicate with one another without forcing direct dependencies.

A synchronous service

Let’s start by building a base of a simple, synchronous service. We’re going to initialize a loop as a context to run our service within, claim a unique name for our service on the session bus, and then start the loop.

service
#!/usr/bin/env python3

import dbus, dbus.service, dbus.exceptions
import sys

from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

# Initialize a main loop
DBusGMainLoop(set_as_default=True)
loop = GLib.MainLoop()

# Declare a name where our service can be reached
try:
    bus_name = dbus.service.BusName("com.larry-price.test",
                                    bus=dbus.SessionBus(),
                                    do_not_queue=True)
except dbus.exceptions.NameExistsException:
    print("service is already running")
    sys.exit(1)

# Run the loop
try:
    loop.run()
except KeyboardInterrupt:
    print("keyboard interrupt received")
except Exception as e:
    print("Unexpected exception occurred: '{}'".format(str(e)))
finally:
    loop.quit()

Make this binary executable (chmod +x service) and run it. Your service should run indefinitely and do… nothing. Although we’ve already written a lot of code, we haven’t added any objects or methods which can be accessed on our service. Let’s fix that.

dbustest/random_data.py
import dbus.service
import random

class RandomData(dbus.service.Object):
    def __init__(self, bus_name):
        super().__init__(bus_name, "/com/larry_price/test/RandomData")
        random.seed()

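    # Exposed on the bus: in_signature 'i' (integer bits in), out_signature 's' (string out).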
    @dbus.service.method("com.larry_price.test.RandomData",
                         in_signature='i', out_signature='s')
    def quick(self, bits=8):
        return str(random.getrandbits(bits))

We’ve defined a D-Bus object RandomData which can be accessed using the path /com/larry_price/test/RandomData. This string follows the standard form of a D-Bus object path. We’ve defined an interface implemented by RandomData called com.larry_price.test.RandomData with a single method quick as declared with the @dbus.service.method context decorator. quick will take in a single parameter, bits, which must be an integer as designated by the in_signature in our context decorator. quick will return a string as specified by the out_signature parameter. All that quick does is return a random number, as a string, built from the given number of bits. It’s simple and it’s fast.

Now that we have an object, we need to declare an instance of that object in our service to attach it properly. Let’s assume that random_data.py is in a directory dbustest with an empty __init__.py, and our service binary is still sitting in the root directory. Just before we start the loop in the service binary, we can add the following code:

service
1
2
3
4
5
6
7
8
9
# ...
# Run the loop
try:
    # Create our initial objects
    from dbustest.random_data import RandomData
    RandomData(bus_name)

    loop.run()
# ...

We don’t need to do anything with the object we’ve initialized; creating it is enough to attach it to our D-Bus service and prevent it from being garbage collected until the service exits. We pass in bus_name so that RandomData will connect to the right bus name.

A synchronous client

Now that you have an object with an available method on our service, you’re probably interested in calling that method. You can do this on the command line with something like dbus-send, or you could find the service using a GUI tool such as d-feet and call the method directly. But eventually we’ll want to do this with a custom program, so let’s build a very small program to get started.

client
#!/usr/bin/env python3

# Take in a single optional integral argument
import sys
bits = 16
if len(sys.argv) == 2:
    try:
        bits = int(sys.argv[1])
    except ValueError:
        print("input argument must be integer")
        sys.exit(1)

# Create a reference to the RandomData object on the  session bus
import dbus, dbus.exceptions
try:
    bus = dbus.SessionBus()
    random_data = bus.get_object("com.larry-price.test", "/com/larry_price/test/RandomData")
except dbus.exceptions.DBusException as e:
    print("Failed to initialize D-Bus object: '%s'" % str(e))
    sys.exit(2)

# Call the quick method with the given number of bits
print("Your random number is: %s" % random_data.quick(bits))

A large chunk of this code is parsing an input argument as an integer. By default, client will request a 16-bit random number unless it gets a number as input from the command line. Next we spin up a reference to the session bus and attempt to find our RandomData object on the bus using our known service name and object path. Once that’s initialized, we can directly call the quick method over the bus with the specified number of bits and print the result.

Make this binary executable also. If you try to run client without running service, you should see an error message explaining that the com.larry-price.test D-Bus service is not running (which would be true). Start service, and then run client with a few different input options and observe the results:

$ ./service & # to kill service later, be sure to note the pid here!
$ ./client
Your random number is: 41744
$ ./client 100
Your random number is: 401996322348922753881103222071
$ ./client 4
Your random number is: 14
$ ./client "new donk city"
input argument must be integer

That’s all there is to it. A simple, synchronous server and client. The server and client do not directly depend on each other but are able to communicate unidirectionally through simple method calls.

Next time

Next time, I’ll go into detail on how we can create an asynchronous service and client, and hopefully utilize signals to add a new direction to our communication.

Again, all the code can be found on GitHub: https://github.com/larryprice/python-dbus-blog-series/tree/part1.

Read more
Inayaili de León Persson

Last month the web team ran its first design sprint as outlined in The Sprint Book, by Google Ventures’ Jake Knapp. Some of us had read the book recently and really wanted to give the method a try, following the book to the letter.

In this post I will outline what we’ve learned from our pilot design sprint, what went well, what could have gone better, and what happened during the five sprint days. I won’t go into too much detail explaining what each step of the design sprint consists of — for that you have the book. If you don’t have that kind of time but would still like to know what I’m talking about, here’s an 8-minute video that explains the concept:

 

Before the sprint

One of the first things you need to do when running a design sprint is to agree on a challenge you’d like to tackle. Luckily, we had a big challenge that we wanted to solve: ubuntu.com’s navigation system.

 

ubuntu.com’s different levels of navigation: global nav, main nav, and second- and third-level nav

 

Assigning roles

If you’ve decided to run a design sprint, you’ve also probably decided who will be the Facilitator. If you haven’t, you should, as this person will have work to do before the sprint starts. In our case, I was the Facilitator.

My first Facilitator task was to make sure we knew who was going to be the Decider at our sprint.

We also agreed on who was going to participate, and booked one of our meeting rooms for the whole week plus an extra one for testing on Friday.

My suggestion for anyone running a sprint for the first time is to also name an Assistant. There is so much work to do before and during the sprint, that it will make the Facilitator’s life a lot easier. Even though we didn’t officially name anyone, Greg was effectively helping to plan the sprint too.

Evangelising the sprint

In the week that preceded the sprint, I had a few conversations with other team members who told me the sprint sounded really great and they were going to ‘pop in’ whenever they could throughout the week. I had to explain that, sadly, this wasn’t going to be possible.

If you need to do the same, explain why it’s important that the participants commit to the entire week, focusing on the importance of continuity and of accumulated knowledge that the sprint’s team will gather throughout the week. Similarly, be pleasant but firm when participants tell you they will have to ‘pop out’ throughout the week to attend to other matters — only the Decider should be allowed to do this, and even so, there should be a deputy Decider in the room at all times.

Logistics

Before the sprint, you also need to make sure that you have all the supplies you need. I tried as much as possible to follow the suggestions for materials outlined in the book, and I even got a Time Timer. In retrospect, it would have been fine for the Facilitator to just keep time on a phone, or a less expensive gadget if you really want to be strict with the no-phones-in-the-room policy.

Even though the book says you should start recruiting participants for the Friday testing during the sprint, we started a week before that. Greg took over that side of the preparation, sending prompts on social media and mailing lists for people to sign up. When participants didn’t materialise in this manner, Greg sent a call for participants to the mailing list of the office building we work at, which worked wonders for us.

Know your stuff

Assuming you have read the book before your sprint, if it’s your first sprint I recommend re-reading the chapter for the following day the evening before, and taking notes.

I printed out the checklists provided on the book’s website and wrote down my notes for the following day, so everything would be in one place.

 

Facilitator checklists with handwritten notes

 

I also watched the official video for the day (which you can get emailed to you by the Sprint Bot the evening before), and read all the comments in the Q&A discussions linked to from the emails. These questions and comments from other people who have run sprints were incredibly useful throughout the week.

 

Sprint Bot email for the first day of the sprint

 

Does this sound like a lot of work? It was. I think if/when we do another sprint the time spent preparing will probably be reduced by at least 50%. The uncertainty of doing something as involved as this for the first time made it more stressful than preparing for a normal workshop, but it’s important to spend the time doing it so that things run smoothly during the sprint week.

Day 1

The morning of the sprint I got in with plenty of time to spare to set up the room for the kick-off at 10am.

I bought lots of healthy snacks (which were promptly frowned on by the team, who were hoping for sweeter treats); brought a jug of water, cups, and all the supplies to the room; cleared the whiteboards; and set up the chairs.

What follows are some of the outcomes, questions and other observations from our five days.

Morning

In the morning of day 1 you define a long term goal for your project, list the ways in which the project could fail in question format, and draw a flowchart, or map, of how customers interact with your product.

  • Starting the map was a little bit tricky, as it wasn’t clear how the map should look when there is more than one type of customer, each with potentially different outcomes
  • The book has no examples with more than one type of customer, which meant we had to read and re-read that part until we decided how to proceed, as we have several customer types to cater for
  • Moments like these can shake the team’s confidence in the process, which is why it’s important for the Facilitator to read everything carefully more than once, and ideally not to be the only person to do so
  • We did the morning exercises much faster than prescribed, but the same didn’t happen in the afternoon!

 

The team discussing the target for the sprint in front of the journey map

 

Afternoon

In the afternoon experts from the sprint and guests come into the room and you ask them lots of questions about your product and how things work. Throughout the interviews the team is taking notes in the “How Might We” format (for example, “How might we reduce the amount of copy?”). By the end of the interviews, you group the notes into themes, vote on the ones you find most useful or interesting, move the most voted notes onto their right place within your customer map and pick a target in the map as the focus for the rest of the sprint.

  • If you have time, explain how “How Might We” notes work before the lunch break, so you save that time for interviews in the afternoon
  • Each expert interview should last about 15-30 minutes, which didn’t feel like long enough to get all the valuable knowledge from our experts — we had to interrupt them somewhat abruptly to make sure the interviews didn’t run over. Next time it might be easier to have a list of questions we want to cover before the interviews start
  • Choreographing the expert interviews was a bit tricky as we weren’t sure how long each would take. If possible, tell people you’ll call them a couple of minutes before you need them rather than setting a fixed time — we had to send people back a few times because we weren’t yet finished asking all the questions to the previous person!
  • It took us a little longer than expected to organise the notes, but in the end, the most voted notes did cluster around the key section of the map, as predicted in the book!

 

Some of the How Might We notes on the wall after the expert interviews

 

Other thoughts on day 1

  • Sprint participants might cancel at the last minute. If this happens, ask yourself whether they could still appear as experts on Monday afternoon; if not, it’s probably better to write them off the sprint completely
  • There was a lot of checking the book as the day went by, to confirm we were doing the right thing
  • We wondered if this comes up in design sprints frequently: what if the problem you set out to solve pre-sprint doesn’t match the target area of the map at the end of day 1? In our case, we had planned to focus on navigation but the target area was focused on how users learn more about the products/services we offer

A full day of thinking about the problem and mapping it doesn’t come naturally, but it was certainly useful. We conduct frequent user research and usability testing, and are used to watching interviews and analysing findings; nevertheless, the expert interviews and hearing different perspectives from within the company were very interesting and gave us a different type of insight that we could build upon during the sprint.

Day 2

By the start of day 2, it felt like we had been in the sprint for a lot longer than just one day — we had accomplished a lot on Monday!

Morning

The morning of day 2 is spent doing “Lightning Demos” after a quick 20 minutes of research. These can be anything that might be interesting, from competitor products to previous internal attempts at solving the sprint challenge. Before lunch, the team decides who will sketch what in the afternoon: whether everyone will sketch the same thing, or different parts of the map.

  • We thought “Lightning Demos” were a great way to do demos: fast, and they captured the most important things quickly
  • Deciding who would sketch what wasn’t as straightforward as we might have thought. We decided that everyone should sketch a journey through our cloud offerings so we’d get different ideas on Wednesday, knowing there was the risk of not everything being covered in the sketches
  • Before we started sketching, we made a list of sections/pages that should be covered in the storyboards
  • As on day 1, the morning exercises were done faster than prescribed; we were finished by 12:30, with a 30-minute break from 11-11:30

 

Our sketches from the lightning demos

 

Afternoon

In the afternoon, you take a few minutes to walk around the sprint room and take down notes of anything that might be useful for the sketching. You then sketch, starting with quick ideas and moving onto a more detailed sketch. You don’t look at the final sketches until Wednesday morning.

  • We spent the first few minutes of the afternoon looking at the current list of participants for the Friday testing to decide which products to focus on in our sketches, as our options were many
  • We had a little bit of trouble with the “Crazy 8s” exercise, where you’re supposed to sketch 8 variations of one idea in 8 minutes. It wasn’t clear what we had to do so we re-read that part a few times. This is probably the point of the exercise: to remove you from your comfort zone, make you think of alternative solutions and get your creative muscles warmed up
  • We had to look at the examples of detailed sketches in the book to have a better idea of what was expected from our sketches
  • It took us a while to get started sketching but after a few minutes everyone seemed to be confidently and quietly sketching away
  • With complicated product offerings there’s an instinct to want access to devices to check product names, features, etc. I assumed this was not allowed, but some people were sneakily checking their laptops!
  • Naming your sketch wasn’t as easy as it sounded
  • Contrary to what we expected, the afternoon sketching exercises took longer than the morning’s; at 5pm some people were still sketching

 

Everyone sketching in silence on Tuesday afternoon

 

Tuesday was lots of fun. Starting the day with the demos, without much discussion on the validity of the ideas, creates a positive mood in the team. Sketching in a very structured manner removes some of the fear of the blank page, as you build up from loose ideas to a very well-defined sketch. The silent sketching was also great as it meant we had some quiet time to pause and think a solution through, giving the people who tend to be more quiet an opportunity to have their ideas heard on par with everyone else.

Day 3

No-one had seen the sketches done on Tuesday, so the build-up to the unveiling on day 3 was more exciting than for the usual design review!

Morning

On the Wednesday morning, you decide which sketch (or sketches) you will prototype. You stick the sketches on the wall and review them in silence, then discuss each sketch briefly, and each person votes on their favourite. After this, the Decider casts three votes, which may or may not follow the votes of the rest of the team. Whatever the Decider votes on will be prototyped. Before lunch, you decide whether you will need to create one or more prototypes, depending on whether the Decider’s (or Deciders’) votes fit together or not.

  • We had 6 sketches to review
  • Although the book wasn’t clear as to when the guest Decider should participate, we invited ours from 10am to 11.30am as it seemed that he should participate in the entire morning review process — this worked out well
  • During the speed critique people started debating the validity or feasibility of solutions, which was expected but meant some work for the Facilitator to steer the conversation back on track
  • The morning exercises put everyone in a positive mood; it was an interesting way to review and select ideas
  • Narrating the sketches was harder than it might seem at first, and narrating your own sketch isn’t much easier either!
  • It was interesting to see that many of the sketches included similar solutions — there were definite patterns that emerged
  • Even though I emphasised that the book recommends more than one prototype, the team wasn’t keen on it and the focus of the pre-lunch discussion was mostly on how to merge all the voted solutions into one prototype
  • As for all other days, and because we decided for an all-in-one prototype, we finished the morning exercises by noon

 

The team reviewing the sketches in silence on Wednesday morning

 

Afternoon

In the afternoon of day 3, you sketch a storyboard of the prototype together, starting one or two steps before the customer encounters your prototype. You should move the existing sketches into the frames of the storyboard when possible, and add only enough detail that will make it easy to build the prototype the following day.

  • Using masking tape was easier than drawing lines for the storyboard frames
  • It was too easy to come up with new ideas while we were drawing the storyboard and it was tricky to tell people that we couldn’t change the plan at this point
  • It was hard to decide the level of detail we needed to discuss and add to the storyboard. We finished the first iteration of the storyboard a few minutes before 3pm. Our first instinct was to start making more detailed wireframes with the remaining time, but we decided to take a break for coffee and come back to see where we needed more detail in the storyboard instead
  • It was useful to keep asking the team what else we needed to define as we drew the storyboard before we started building the prototype the following day
  • Because we read out the different roles in preparation for Thursday, we ended up assigning roles straight away

 

Discussing what to add to our storyboard

 

Other thoughts on day 3

  • One sprint participant couldn’t attend on Tuesday but was back on Wednesday, which wasn’t ideal but didn’t have a negative impact
  • While setting up for the third day, I wasn’t sure whether the ideas from the “Lightning Demos” could be erased from the whiteboard, so I took a photo of them and erased them, as even with the luxury of massive whiteboards we wouldn’t have had space for the storyboard later on!

By the end of Wednesday we were past the halfway mark of the sprint, and the excitement in anticipation for the Friday tests was palpable. We had some time left before the clock hit 5 and wondered if we should start building the prototype straight away, but decided against it — we needed a good night’s sleep to be ready for day 4.

Day 4

Thursday is all about prototyping. You need to choose which tools you will use, prioritising speed over perfection, and you also need to assign different roles for the team so everyone knows what they need to do throughout the day. The interviewer should write the interview script for Friday’s tests.

  • For the prototype building day, we assigned: two writers, one interviewer, one stitcher, two makers and one asset collector
  • We decided to build the pages we needed with HTML and CSS (instead of using a tool like Keynote or InVision) as we could build upon our existing CSS framework
  • Early in the afternoon we were on track, but we were soon delayed by a wifi outage which lasted almost 1.5 hours
  • It’s important to keep communication flowing throughout the day to make sure all the assets and content that are needed are created or collected in time for the stitcher to start stitching
  • We were finished by 7pm — if you don’t count the wifi outage, we probably would have been finished by 6pm. The extra hour could have been curtailed if there had been just a little bit more detail in the storyboard page wireframes and in the content delivered to the stitcher, and fewer last-minute tiny changes, but all-in-all we did pretty well!

 

Joana and Greg working on the prototype

 

Other thoughts on day 4

  • We had our sprint in our office, so it would have been possible for us to ask for help from people outside of the sprint, but we didn’t know whether this was “allowed”
  • We could have assigned more work to the asset collector: the makers and the stitcher were looking for assets themselves as they created the different components and pages rather than delegating the search to the asset collector, which is how we normally work
  • The makers were finished with their tasks more quickly than expected — not having to go through multiple rounds of reviews that sometimes can take weeks makes things much faster!

By the end of Thursday there was no denying we were tired, but happy about what we had accomplished in such a small amount of time: we had a fully working prototype and five participants lined up for Friday testing. We couldn’t wait for the next day!

Day 5

We were all really excited about the Friday testing. We managed to confirm all five participants for the day, and had an excellent interviewer and solid prototype. As the Facilitator, I was also happy to have a day where I didn’t have a lot to do, for a change!

Thoughts and notes on day 5

On Friday, you test your prototype with five users, taking notes throughout. At the end of the day, you identify patterns within the notes and based on these you decide which should be the next steps for your project.

  • We’re lucky to work in a building with lots of companies who employ our target audience, but we wonder how difficult it would have been to find and book the right participants within just 4 days if we needed different types of users or were based somewhere else
  • We filled up an entire whiteboard with notes from the first interview and had to go get extra boards during the break
  • Throughout the day, we removed duplicate notes from the boards to make them easier to scan
  • Some participants naturally don’t talk a lot and need constant reminding to think out loud
  • We had the benefit of having an excellent researcher in our team who already knows and does everything the book recommends doing. It might have been harder for someone with less research experience to make sure the interviews were unbiased and ran smoothly
  • At the end of the interviews, after listing the patterns we found, we weren’t sure whether we could/should do more thorough analysis of the testing later or if we should chuck the post-it notes in the bin and move on
  • Our end-of-sprint decision was to have a workshop the following week where we’d plan a roadmap based on the findings — could this be considered “cheating” as we’re only delaying making a decision?

 

The team observing the interviews on Friday

 

A wall of interview notes

 

The Sprint Book notes that you can have one of two results at the end of your sprint: an efficient failure, or a flawed success. If your prototype doesn’t go down well with the participants, your team has only spent 5 days working on it, rather than weeks or potentially months — you’ve failed efficiently. And if the prototype receives positive feedback from participants, most likely there will still be areas that can be improved and retested — you’ve succeeded imperfectly.

At the end of Friday we all agreed that our prototype was a flawed success: we tested things we would never have thought to try before, and they received great feedback, but some aspects certainly needed a lot more work to get right. An excellent conclusion to 5 intense days of work!

Final words

Despite the hard work involved in planning and getting the logistics right, running the web team’s trial design sprint was fun.

The web team is small and stretched over many websites and products. We really wanted to test this approach so we could propose it to the other teams we work with as an efficient way to collaborate at key points in our release schedules.

We certainly achieved this goal. The people who participated directly in the sprint learned a great deal during the five days. Those in the web team who didn’t participate were impressed with what was achieved in one week and welcoming of the changes it initiated. And the teams we work with seem eager to try the process out in their teams, now that they’ve seen what kind of results can be produced in such a short time.

How about you? Have you run a design sprint? Do you have any advice for us before we do it again? Leave your thoughts in the comments section.

Read more
Stéphane Graber

Introduction

I maintain a number of development systems that are used as throwaway machines by the upstream developers to reproduce LXC and LXD bugs. I use MAAS to track who’s using what and to have the machines deployed with whatever version of Ubuntu or CentOS is needed to reproduce a given bug.

A number of those systems are proper servers with hardware BMCs on a management network that MAAS can drive using IPMI. Another set of systems are virtual machines that MAAS drives through libvirt.

But I’ve long had another system I wanted to get in there. That machine is a desktop computer, but with a server-grade SAS controller and internal and external arrays. It also has a Fibre Channel HBA and an InfiniBand card for even less common setups.

The trouble is that, being a desktop computer, it lacks any kind of remote management that MAAS supports. It does, however, have a good PCIe network card which provides reliable wake-on-lan.

Back in the day (MAAS 1.x), there was a wake-on-lan power type that would have covered my use case. This feature was removed in MAAS 2.x (see LP: #1589140), and the development team suggests that users who want the old wake-on-lan feature instead install Ubuntu 14.04 and the old MAAS 1.x branch.

Implementing Wake on LAN in MAAS 2.x

I am, however, not particularly willing to install an old Ubuntu release and an old version of MAAS just for that one trivial feature, so I instead spent a bit of time implementing just the bits I needed, keeping a patch around to be re-applied whenever MAAS changes.

MAAS doesn’t provide a plugin system for power types, so I unfortunately couldn’t just write a plugin and distribute that as an unofficial power type for those who need WOL. I instead had to resort to modifying MAAS directly to add the extra power type.

The code change needed to re-implement a wake-on-lan power type is pretty simple and only took me a few minutes to sort out. The patch can be found here: https://dl.stgraber.org/maas-wakeonlan.diff

To apply it to your MAAS, do:

sudo apt install wakeonlan
wget https://dl.stgraber.org/maas-wakeonlan.diff
sudo patch -p1 -d /usr/lib/python3/dist-packages/provisioningserver/ < maas-wakeonlan.diff
sudo systemctl restart maas-rackd.service maas-regiond.service

Once done, you’ll now see this in the web UI:

After selecting the new “Wake on LAN” power type, enter the MAC address of the network interface that you have WOL enabled on and save the change.

MAAS will then be able to turn the system on, allowing for the normal commissioning and deployment stages. For everything else, this power type behaves like the “Manual” type, asking the user to manually shut down or reboot the system, as you can’t do that through Wake on LAN.
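Incidentally, the wakeonlan tool installed earlier can also be run by hand, which is a quick way to confirm that the NIC really honours magic packets before pointing MAAS at it (the MAC address below is a placeholder; substitute your own):

wakeonlan 00:11:22:33:44:55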

Note that you’ll have to re-apply part of the patch whenever MAAS is updated. The patch modifies two files and adds a new one. The new file won’t be removed during an upgrade, but the two modified files will get reverted and need patching again.

Conclusion

This is certainly a hack and if your system supports anything better than Wake on LAN, or you’re willing to buy a supported PDU just for that one system, then you should do that instead.

But if the inability to turn a system on is all that stands in your way from adding it to your MAAS, as was the case for me, then that patch may help you.

I hope that in time MAAS will either get that feature back in some way or get a plugin system that I can use to ship that extra power type in its own separate package without needing to alter any of MAAS’ own files.

Read more
UbuntuTouch

For Python projects, we normally only need to set the plugin to python in our snapcraft.yaml, and snapcraft will provide the Python version it ships with. Some projects, however, need a specific version of Python. How do we achieve that? In today’s tutorial, we introduce a new capability added in snapcraft 2.27.


Let’s first take a look at a project I made:

https://github.com/liu-xiao-guo/python-plugin

snapcraft.yaml

name: python36
version: '0.1' 
summary: This is a simple example not using python plugin
description: |
  This is a python3 example

grade: stable 
confinement: strict

apps:
  python36:
    command: helloworld_in_python
  python-version:
    command: python3 --version

parts:
  my-python-app:
    source: https://github.com/liu-xiao-guo/python-helloworld.git
    plugin: python
    after: [python]
  python:
    source: https://www.python.org/ftp/python/3.6.0/Python-3.6.0.tar.xz
    plugin: autotools
    configflags: [--prefix=/usr]
    build-packages: [libssl-dev]
    prime:
      - -usr/include

Here, for our Python project, the my-python-app part uses the Python defined in the project’s own python part; that Python is downloaded directly from upstream and built as part of the snap.

We can build the snap and run our application directly:

$ python36
Hello, world

Clearly our Python works as expected. We can check its version with the python36.python-version command:

$ python36.python-version 
Python 3.6.0

This shows that the Python we are running is version 3.6.0, which is exactly the version downloaded during the snapcraft build.

Read more
UbuntuTouch

Socket.io enables bidirectional, real-time communication between server and client, with less data overhead than plain HTTP; websockets share the same advantage. You can easily push data to the server and receive event-driven responses without polling. In today’s tutorial, we’ll use socket.io and websockets to build two-way communication.


1) Creating a socket.io server


First, let’s take a look at the finished project.

We’ll start with our snapcraft.yaml file:

snapcraft.yaml

name: socketio
version: "0.1"
summary: A simple shows how to make use of socket io
description: socket.io snap example

grade: stable
confinement: strict

apps:
  socket:
    command: bin/socketio
    daemon: simple
    plugs: [network-bind]

parts:
  nod:
    plugin: nodejs
    source: .
   

This is a nodejs project, so we use the nodejs plugin. Our package.json file is as follows:

package.json

{
  "name": "socketio",
  "version": "0.0.1",
  "description": "Intended as a nodejs app in a snap",
  "license": "GPL-3.0",
  "author": "xiaoguo, liu",
  "private": true,
  "bin": "./app.js",
  "dependencies": {
    "express": "^4.10.2",
    "nodejs-websocket": "^1.7.1",
    "socket.io": "^1.3.7"
  }
}

Since we need a web server, we install the express framework. We also use socket.io and nodejs-websocket, so all of these packages are bundled into our snap.

Now let’s look at the design of our application, app.js:

app.js

#!/usr/bin/env node

var express = require('express');
var app = require('express')();
var http = require('http').Server(app);
var io = require('socket.io')(http);	

app.get('/', function(req, res){
   res.sendFile(__dirname + '/www/index.html');
});

app.use(express.static(__dirname + '/www'));

// Whenever someone connects, this gets executed
io.on('connection', function(socket){
  console.log('A user connected');

  // Emit a new random value every two seconds
  var timer = setInterval(function(){
    var value = Math.floor((Math.random() * 1000) + 1);
    io.emit('light-sensor-value', '' + value);

    // This is another way to send data
    socket.send(value);
  }, 2000);

  // Whenever someone disconnects, this piece of code is executed
  socket.on('disconnect', function () {
    // Stop the timer so we don't keep emitting to a closed socket
    clearInterval(timer);
    console.log('A user disconnected');
  });
});

http.listen(4000, function(){
  console.log('listening on *:4000');
});

var ws = require("nodejs-websocket")

console.log("Going to create the server")

String.prototype.format = function() {
    var formatted = this;
    for (var i = 0; i < arguments.length; i++) {
        var regexp = new RegExp('\\{'+i+'\\}', 'gi');
        formatted = formatted.replace(regexp, arguments[i]);
    }
    return formatted;
};
 
// Scream server example: "hi" -> "HI!!!"
var server = ws.createServer(function (conn) {
    console.log("New connection")

    // Send a new random value every two seconds
    var timer = setInterval(function(){
        var value = Math.floor((Math.random() * 1000) + 1);
        var data = '{"data":"{0}"}'.format(value)
        conn.send(data);
    }, 2000);

    conn.on("text", function (str) {
        console.log("Received " + str)
        conn.sendText(str.toUpperCase() + "!!!")
    })

    conn.on("close", function (code, reason) {
        console.log("Connection closed")
        // Stop sending once the connection is gone
        clearInterval(timer)
    })
}).listen(4001)

In the first part of the code, we create a web server listening on port 4000 and start the socket.io server, waiting for clients to connect. Once a client connects, we use the following code to send data at regular intervals:

// Whenever someone connects, this gets executed
io.on('connection', function(socket){
  console.log('A user connected');

  // Emit a new random value every two seconds
  var timer = setInterval(function(){
    var value = Math.floor((Math.random() * 1000) + 1);
    io.emit('light-sensor-value', '' + value);

    // This is another way to send data
    socket.send(value);
  }, 2000);

  // Whenever someone disconnects, this piece of code is executed
  socket.on('disconnect', function () {
    // Stop the timer so we don't keep emitting to a closed socket
    clearInterval(timer);
    console.log('A user disconnected');
  });
});

The data here is random, but it is mainly meant to show how things work; in a real application it could come from sensors. On the client side, we can open the address where the web server is running:


We can see data continuously arriving and being displayed in the client. For the details, see the index.html file in the www directory.


2) Creating a websocket server


In our app.js, we use the following code to implement a websocket server, listening on port 4001.

app.js


var ws = require("nodejs-websocket")

console.log("Going to create the server")

String.prototype.format = function() {
    var formatted = this;
    for (var i = 0; i < arguments.length; i++) {
        var regexp = new RegExp('\\{'+i+'\\}', 'gi');
        formatted = formatted.replace(regexp, arguments[i]);
    }
    return formatted;
};
 
// Scream server example: "hi" -> "HI!!!"
var server = ws.createServer(function (conn) {
    console.log("New connection")

    // Send a new random value every two seconds
    var timer = setInterval(function(){
        var value = Math.floor((Math.random() * 1000) + 1);
        var data = '{"data":"{0}"}'.format(value)
        conn.send(data);
    }, 2000);

    conn.on("text", function (str) {
        console.log("Received " + str)
        conn.sendText(str.toUpperCase() + "!!!")
    })

    conn.on("close", function (code, reason) {
        console.log("Connection closed")
        // Stop sending once the connection is gone
        clearInterval(timer)
    })
}).listen(4001)

Likewise, once a connection is established, we send a value to the client every two seconds. To demonstrate, we designed a QML client.

Main.qml


import QtQuick 2.4
import Ubuntu.Components 1.3
import Ubuntu.Components.Pickers 1.3
import Qt.WebSockets 1.0
import QtQuick.Layouts 1.1

MainView {
    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "dialer.liu-xiao-guo"

    width: units.gu(60)
    height: units.gu(85)

    function interpreteData(data) {
        var json = JSON.parse(data)
        console.log("Websocket data: " + data)

        console.log("value: " + json.data)
        mainHand.value = json.data
    }

    WebSocket {
        id: socket
        url: input.text
        onTextMessageReceived: {
            console.log("something is received!: " + message);
            interpreteData(message)
        }

        onStatusChanged: {
            if (socket.status == WebSocket.Error) {
                console.log("Error: " + socket.errorString)
            } else if (socket.status == WebSocket.Open) {
                // socket.sendTextMessage("Hello World....")
            } else if (socket.status == WebSocket.Closed) {
            }
        }
        active: true
    }

    Page {
        header: PageHeader {
            id: pageHeader
            title: i18n.tr("dialer")
        }

        Item {
            anchors {
                top: pageHeader.bottom
                left: parent.left
                right: parent.right
                bottom: parent.bottom
            }

            Column {
                anchors.fill: parent
                spacing: units.gu(1)
                anchors.topMargin: units.gu(2)

                Dialer {
                    id: dialer
                    size: units.gu(30)
                    minimumValue: 0
                    maximumValue: 1000
                    anchors.horizontalCenter: parent.horizontalCenter

                    DialerHand {
                        id: mainHand
                        onValueChanged: console.log(value)
                    }
                }


                TextField {
                    id: input
                    width: parent.width
                    text: "ws://192.168.1.106:4001"
                }

                Label {
                    id: value
                    text: mainHand.value
                }
            }
        }
    }
}
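If you just want to sanity-check the websocket server without building the QML client, a few lines of Python will do. This is a sketch assuming the third-party websocket-client package (pip install websocket-client); adjust the URL to wherever your server runs:

#!/usr/bin/env python3
# Minimal test client for the websocket server on port 4001
import websocket  # pip install websocket-client

ws = websocket.create_connection("ws://192.168.1.106:4001")
for _ in range(3):
    # The server pushes a JSON payload like {"data":"123"} every two seconds
    print(ws.recv())
ws.close()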

Run the server and the client:



We can see the value changing continuously. The client code is at: https://github.com/liu-xiao-guo/dialer

In this article, we’ve shown how to use socket.io and websockets for bidirectional, real-time communication. Many IoT applications can take full advantage of these protocols for a better design.


Read more
UbuntuTouch

A Baidu Cloud (bcloud) snap application

The Baidu Cloud app makes it easy to manage what we store in the cloud. The source code of the application is at:

https://github.com/LiuLang/bcloud


The source for the snap packaging is at:

https://github.com/liu-xiao-guo/bcloud-snap




You can install it from the store with the following command:

sudo snap install bcloud --devmode --beta




Read more