Canonical Voices

facundo

Alcohol


He put four ice cubes in the glass, hesitated a moment, then fished one out with his fingers and tossed it back into the ice bucket. He did not hesitate over the whisky: he filled the glass almost to the brim.

Without leaving the cheap little bar against the living-room wall, he took the first long swallow, and only then did he walk over to the window.

I didn't know whether to look at him or at her, as she pulled her robe closed too tightly, clutching it, tense, the fabric tracing the near absence of curves on her too-thin body.

"You filthy drunk!" she shouted at him, almost in desperation.

He ignored her and kept looking out the window. From where I sat on the couch I couldn't see his face, but I guessed his gaze was lost. He wasn't really looking out the window, I supposed; he was using it as an excuse not to have to look at anything else.

Her voice still hoarse from crying, but much calmer, she said:

"Alcohol: that darkness where cowards go to hide from themselves."

He turned around, surprise written on his face, partly because she wasn't one for that kind of high-flown philosophical pronouncement, but partly (and every time I remember that day I am more certain of it) because it finally touched some chord inside him.

He left the half-finished glass resting on the window sill, opened the door, and we never saw him again.

Read more
Leo Arias

Here at Ubuntu we are working hard on the future of free software distribution. We want developers to release their software to any Linux distro in a way that's safe, simple and flexible. You can read more about this at snapcraft.io.

This work is extremely fun because we have to work constantly with a wild variety of free software projects to make sure that the tools we write are usable and that the workflow we are proposing makes sense to developers and gives them a lot of value in return. Today I want to talk about one of those projects: IPFS.

IPFS is the permanent and decentralized web. How cool is that? You get a peer-to-peer distributed file system where you store and retrieve files. They have a nice demo on their website, and you can give it a try on Ubuntu Trusty, Xenial or later by running:

$ sudo snap install ipfs

screenshot of the IPFS peers
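Once it's installed, a quick way to see it working (a minimal sketch; the exact commands are documented by the IPFS project itself) is to initialise a local repository, start the daemon and then ask it which peers you are connected to:

$ ipfs init
$ ipfs daemon &
$ ipfs swarm peers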

So, here's one of the problems we are trying to solve. We have millions of users on the Trusty version of Ubuntu, released during 2014. We also have millions of users on the Xenial version, released during 2016. Those two versions are stable now and, following the Ubuntu policies, they will get only security updates for 5 years. That means it's very hard, almost impossible, for a young project like IPFS to get into the Ubuntu archives for those releases. There would be no simple way for all those users to enjoy IPFS; they would have to use a Personal Package Archive or install the software from a tarball. Both methods are complex and carry high security risks, and both require the users to put far more trust in the developers than they should ever place in anybody.

We are closing the Zesty release cycle, which will go out in April, so it's too late there too. IPFS could make a deb, put it into Debian, wait for it to sync to Ubuntu, and then it would likely be ready for the October release. Aside from the fact that we would have to wait until October, there are a few other problems. First, making a deb is not simple. It's not too hard either, but it takes quite some time to learn to do it right. Second, I mentioned that IPFS is young; they are at version 0.4.6. So it's very unlikely that they will want to support this early version for as long as Debian and Ubuntu require. And they are not only young, they are also fast. They add new features and bug fixes every day and make new releases almost every week, so they need a feedback loop that's just as fast. A six-month release cycle is way too slow. That works nicely for some kinds of free software projects, but not for one like IPFS.

They have been kind enough to let me play with their project and use it as a test subject to verify our end-to-end workflow. My passion is testing, so I have been focusing on continuous delivery to get happy early adopters and constant feedback about the most recent changes in the project.

I started by making a snapcraft.yaml file that contains all the metadata required for the snap package. The file is pretty simple and to make the first version it took me just a couple of minutes, true story. Since then I've been slowly improving and updating it with small changes. If you are interested in doing the same for your project, you can read the tutorial to create a snap.
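The real file lives in the IPFS snap repository linked a little further down, but as a rough sketch of the shape such a file takes for a Go project (the values here are illustrative, not the actual IPFS metadata):

name: ipfs
version: '0.4.6'
summary: The permanent and decentralized web
description: |
  Peer-to-peer distributed file system to store and retrieve files.
grade: stable
confinement: strict

apps:
  ipfs:
    command: bin/ipfs
    plugs: [network, network-bind, home]

parts:
  ipfs:
    plugin: go
    go-importpath: github.com/ipfs/go-ipfs
    source: https://github.com/ipfs/go-ipfs.git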

I built and tested this snap locally on my machines. It worked nicely, so I pushed it to the edge channel of the Ubuntu Store. In this channel the snap is not visible in user searches; only people who know about the snap can install it. I told a couple of my friends to give it a try, and they came back telling me how cool IPFS was. Great choice for my first test subject, no doubt.

At this point, following the pace of the project by manually building and pushing new versions to the store was too demanding, they go too fast. So, I started working on continuous delivery by translating everything I did manually into scripts and hooking them to travis-ci. After a few days, it got pretty fancy, take a look at the github repo of the IPFS snap if you are curious. Every day, a new version is packaged from the latest state of the master branch of IPFS and it is pushed to the edge channel, so we have a constant flow of new releases for hardcore early adopters. After they install IPFS from the edge channel once, the package will be automatically updated in their machines every day, so they don't have to do anything else, just use IPFS as they normally would.
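The core of that automation is small; roughly, the script Travis runs on every build boils down to something like this (a simplified sketch, not the actual script from the repository):

#!/bin/sh
# Build a fresh snap from the latest upstream code and push it to the edge channel.
# (Assumes store credentials are already configured in the CI environment.)
set -e
snapcraft clean
snapcraft
snapcraft push ipfs_*.snap --release=edge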

Now, with this constant stream of updates, my two friends and I were not enough to validate all the new features. We could never be sure whether the project was stable enough to be pushed to the stable channel and made available to the millions and millions of Ubuntu users out there.

Luckily, the Ubuntu community is huge, and they are very nice people. It was time to use the wisdom of the crowds. I invited the bravest of them to keep the snap installed from edge, and I defined a simple pipeline that leads to the stable release using the four available channels in the Ubuntu store (a sketch of the corresponding commands follows the list):

  • When a revision is tagged in the IPFS master repo, it is automatically pushed to edge channel from travis, just as with any other revision.
  • Travis notifies me about this revision.
  • I install this tagged revision from edge, and run a super quick test to make sure that the IPFS server starts.
  • If it starts, I push the snap to the beta channel.
  • With a couple of my friends, we run a suite of smoke tests.
  • If everything goes well, I push the snap to the candidate channel.
  • I notify the community of Ubuntu testers about a new version in the candidate channel. This is where the magic of crowd testing happens.
  • The Ubuntu testers run the smoke tests in all their machines, which gives us the confidence we need because we are confirming that the new version works on different platforms, distros, distro releases, countries, network topologies, you name it.
  • This candidate release is left for some time in this channel, to let the community run thorough exploratory tests, trying to find weird usage combinations that could break the software.
  • If the tag was for a final upstream release, the community also runs update tests to make sure that the users with the stable snap installed will get this new version without issues.
  • After all the problems found by the community have been resolved or at least acknowledged and triaged as not blockers, I move the snap from candidate to the stable channel.
  • All the users following the stable channel will automatically get a very well tested version, thanks to the community who contributed with the testing and accepted a higher level of risk.
  • And we start again, the never-ending cycle of making free software :)
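In terms of commands, the promotions above boil down to something like this (a sketch; "42" stands for whatever revision number the store assigned):

$ sudo snap install ipfs --edge        # testers track the edge channel
$ snapcraft release ipfs 42 beta       # promote the revision once the quick test passes
$ snapcraft release ipfs 42 candidate  # after the smoke tests
$ snapcraft release ipfs 42 stable     # after the community gives the green light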

Now, let's go back to the discussion about trust. Debian and Ubuntu, and most of the other distros, rely on maintainers and distro developers to package and review every change to the software they put in their archives. That is a lot of work, and it slows down the feedback loop a lot, as we have seen. Here we have automated most of the tasks of a distro maintainer, and new revisions can be delivered directly to the users without any reviews. So users are trusting their upstream developers directly, without intermediaries, but this is very different from the previously existing, unsafe methods. The code of snaps is installed read-only and very well constrained, with access only to its own safe space. Any other access needs to be declared by the snap, and the user is always in control of which access is permitted to the application.

This way upstream developers can go faster but without exposing their users to unnecessary risks. And they just need a simple snapcraft.yaml file and to define their own continuous delivery pipeline, on their own timeline.

By removing the distro as the intermediary between the developers and their users, we are also opening a new world full of possibilities for the Ubuntu community. Now they can collaborate constantly and directly with upstream developers, closing this quick feedback loop. In the future we will tell our children about the good old days when we had to report a bug in Ubuntu, which would be copied to Debian, then sent upstream to the developers, and after 6 months the fix would arrive. It was fun, and it led us to where we are today, but I will not miss it at all.

Finally, what's next for IPFS? After this experiment we got more than 200 unique testers and almost 300 test installs. I now have great confidence in this workflow: new revisions were delivered on time, existing Ubuntu testers became new IPFS contributors, and I can now safely recommend that IPFS users install the stable snap. But there's still plenty of work ahead. There are still manual steps in the pipeline that can be scripted, the smoke tests can be automated to leave more free time for exploratory testing, we can also release to the armhf and arm64 architectures to get IPFS into the IoT world, and, of course, the developers are not stopping; they keep releasing new interesting features. As I said, plenty of opportunities for us as distro contributors.

screenshot of the IPFS snap stats

I'd like to thank everybody who tested the IPFS snap, especially the following people for their help and feedback:

  • freekvh
  • urcminister
  • Carla Sella
  • casept
  • Colin Law
  • ventrical
  • cariboo
  • howefield

<3

If you want to release your project to the Ubuntu store, take a look at the snapcraft docs, the Ubuntu tutorials, and come talk to us in Rocket Chat.

Read more
Stéphane Graber

LXD logo

The LXD demo server

The LXD demo server is the service behind https://linuxcontainers.org/lxd/try-it.
We use it to showcase LXD by leading visitors through an interactive tour of LXD’s features.

Rather than use some javascript simulation of LXD and its client tool, we give our visitors a real root shell using a LXD container with nesting enabled. This environment is using all of LXD’s resource limits as well as a very strict firewall to prevent abuses and offer everyone a great experience.

This is done using lxd-demo-server which can be found at: https://github.com/lxc/lxd-demo-server
The lxd-demo-server is a daemon that offers a public REST API for use from a web browser.
It supports:

  • Creating containers from an existing container or from a LXD image
  • Choosing what command to execute in the containers on connection
  • Applying specific profiles to the containers
  • An API to record user feedback
  • An API to fetch usage statistics for reporting
  • A number of resource restrictions:
    • CPU
    • Disk quota (if using btrfs or zfs as the LXD storage backend)
    • Processes
    • Memory
    • Number of sessions per IP
    • Time limit for the session
    • Total number of concurrent sessions
  • Requiring the user to read and agree to terms of service
  • Recording all sessions in a sqlite3 database
  • A maintenance mode

All of it is configured through a simple yaml configuration file.

Setting up your own

The LXD demo server is now available as a snap package and interacts with the snap version of LXD. To install it on your own system, all you need to do is:

Make sure you don’t have the deb version of LXD installed

ubuntu@djanet:~$ sudo apt remove --purge lxd lxd-client
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following packages will be REMOVED:
 lxd* lxd-client*
0 upgraded, 0 newly installed, 2 to remove and 0 not upgraded.
After this operation, 25.3 MB disk space will be freed.
Do you want to continue? [Y/n] 
(Reading database ... 59776 files and directories currently installed.)
Removing lxd (2.0.9-0ubuntu1~16.04.2) ...
Warning: Stopping lxd.service, but it can still be activated by:
 lxd.socket
Purging configuration files for lxd (2.0.9-0ubuntu1~16.04.2) ...
Removing lxd-client (2.0.9-0ubuntu1~16.04.2) ...
Processing triggers for man-db (2.7.5-1) ...

Install the LXD snap

ubuntu@djanet:~$ sudo snap install lxd
lxd 2.8 from 'canonical' installed

Then configure LXD

ubuntu@djanet:~$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: 
Create a new ZFS pool (yes/no) [default=yes]? 
Name of the new ZFS pool [default=lxd]: 
Would you like to use an existing block device (yes/no) [default=no]? 
Size in GB of the new loop device (1GB minimum) [default=43]: 
Would you like LXD to be available over the network (yes/no) [default=no]? 
Would you like stale cached images to be updated automatically (yes/no) [default=yes]? 
Would you like to create a new network bridge (yes/no) [default=yes]? 
What should the new bridge be called [default=lxdbr0]? 
What IPv4 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 
What IPv6 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 
LXD has been successfully configured.

And finally install lxd-demo-server itself

ubuntu@djanet:~$ sudo snap install lxd-demo-server
lxd-demo-server git from 'stgraber' installed
ubuntu@djanet:~$ sudo snap connect lxd-demo-server:lxd lxd:lxd

At that point, you can hit http://127.0.0.1:8080 and you will be greeted by the demo server's welcome page.

To change the configuration, use:

ubuntu@djanet:~$ sudo lxd-demo-server.configure

And that’s it, you have your own instance of the demo server.

Security

As mentioned at the beginning, the demo server comes with a number of options to prevent users from using all the available resources themselves and bringing the whole thing down.

Those should be tweaked for your particular needs, and you should also adjust the total number of concurrent sessions so that you don't end up over-committing resources.

On the network side of things, the demo server itself doesn’t do any kind of firewalling or similar network restrictions. If you plan on offering sessions to anyone online, you should make sure that the network which LXD is using is severely restricted and that the host this is running on is also placed in a very restricted part of your network.
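As a starting point, assuming the containers sit behind the default lxdbr0 bridge, you can block forwarding from that bridge into your private networks (a rough sketch only; adapt it to your own topology and firewall tooling):

ubuntu@djanet:~$ sudo iptables -I FORWARD -i lxdbr0 -d 10.0.0.0/8 -j REJECT
ubuntu@djanet:~$ sudo iptables -I FORWARD -i lxdbr0 -d 172.16.0.0/12 -j REJECT
ubuntu@djanet:~$ sudo iptables -I FORWARD -i lxdbr0 -d 192.168.0.0/16 -j REJECT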

Containers handed to strangers should never be using “security.privileged” as that’d be a straight route to getting root privileges on the host. You should also stay away from bind-mounting any part of the host’s filesystem into those containers.

I would also very strongly recommend setting up very frequent security updates on your host, along with kernel live patching or at least automatic reboots when a new kernel is installed. This should prevent a new kernel security issue from being immediately exploitable in your environment.

Conclusion

The LXD demo server was initially written as a quick hack to expose a LXD instance to the Internet, so we could let people try LXD online and also offer the upstream team a reliable environment into which we could have people attempt to reproduce their bugs.

It’s since grown a bit with new features contributed by users and with improvements we’ve made to the original experience on our website.

We’ve now served over 36000 sessions to over 26000 unique visitors. This has been a great tool for people to try and experience LXD and I hope it will be similarly useful to other projects.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
Dustin Kirkland

Mobile World Congress is simply one of the biggest trade shows in the entire world.

It's also, perhaps, the best place in the world to see how encompassing the Ubuntu ecosystem actually is.

Canonical and our partners demonstrated Ubuntu running on dozens of devices -- from robots to augmented reality headsets, digital signs, vending machines, IoT gateways, cell tower base stations, phones, tablets, and servers; from supercomputers to tiny, battery-powered embedded controllers.

But that was only a tiny fraction of the Ubuntu running at MWC!

We saw Ubuntu at the heart of demos from Dell, AMD, Intel, IBM, Deutsche Telekom, DJI, and hundreds of other booths, running autonomous drones, national telephone networks, self-driving cars, smart safety helmets, in-flight entertainment systems, and so, so, so much more.

Among the thousands of customers, prospects, fans, competitors, students, and industry executives, we even received a visit from (the somewhat controversial?) King of Spain!

It was an incredible week, with no fewer than 12 hours per day, on our feet, telling the Ubuntu story.
And what a story it is... I hope you enjoy.

Cheers,
Dustin


Read more
Dustin Kirkland



Yesterday, I delivered a talk to a lively audience at ContainerWorld in Santa Clara, California.

If I measured "the most interesting slides" by counting "the number of people who took a picture of the slide", then by far "the most interesting slides" are slides 8-11, which pose and answer the question:
"Should I run my PaaS on top of my IaaS, or my IaaS on top of my PaaS"?
In the Ubuntu world, that answer is super easy -- however you like!  At Canonical, we're happy to support:
  1. Kubernetes running on top of Ubuntu OpenStack
  2. OpenStack running on top of Canonical Kubernetes
  3. Kubernetes running along side OpenStack
In all cases, the underlying substrate is perfectly consistent:
  • you've got 1 to N physical or virtual machines
  • which are dynamically provisioned by MAAS or your cloud provider
  • running a stable, minimal, secure Ubuntu server image
  • carved up into fast, efficient, independently addressable LXD machine containers
With that as your base, we can easily conjure-up a Kubernetes, an OpenStack, or both.  And once you have a Kubernetes or OpenStack, we'll gladly conjure-up one inside the other.
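For instance, with the conjure-up snap (a sketch; spell names vary by version, so check the list conjure-up presents):

$ sudo snap install conjure-up --classic
$ conjure-up     # then pick the Canonical Kubernetes or OpenStack spell from the list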


As always, I'm happy to share my slides with you here.  You're welcome to download the PDF, or flip through the embedded slides below.



Cheers,
Dustin

Read more
Alan Griffiths

MirAL 1.3

There’s a new MirAL release (1.3.0) available in ‘Zesty Zapus’ (Ubuntu 17.04) and the so-called “stable phone overlay” ppa for ‘Xenial Xerus’ (Ubuntu 16.04LTS). MirAL is a project aimed at simplifying the development of Mir servers and particularly providing a stable ABI and sensible default behaviors.

Unsurprisingly, given the project’s original goal, the ABI is unchanged.

The changes in 1.3.0 are:

Support for “workspaces”

This is part of enabling "workspaces" for the Unity8 desktop. MirAL doesn't provide fancy transitions and spreads, but you can see some basic workspace switching in the miral-shell example program:

$ apt install miral-examples
$ miral-app

There are four workspaces (corresponding to F1-F4) and you can switch using Meta-Alt-[F1|F2|F3|F4], or switch taking the active application to the new workspace using Meta-Ctrl-[F1|F2|F3|F4].

Support for “previous window in application”

You can now use Alt-Shift-` to switch to the previous window in an application.

miral-shell adds a background

miral-shell now uses its background for a handy guide to the available keyboard shortcuts.

Bug fixes

Two bug fixes related to shutdown problems: one deals with a possible race in libmiral code, the other works around a bug in Mir.

  • [libmiral] Join internal client threads before server shutdown (LP: #1668651)
  • [miral-shell] Workaround for crash on exit (LP: #1667645)

Read more
Anthony Dillon

Hack day 2

This week, the web team managed to get away for our second hack day. These hack days give us an opportunity to scratch our own itches and work on things we find interesting.

We wrote about our first hack day in August last year.

Getting started

We began by outlining the day and reviewing ideas that had been suggested on a Google Doc throughout the previous week by everyone on the team. We each voted by marking the ideas we would be interested in working on. Then we chose the most-voted ideas and assigned groups of two or three people to each.

The groups broke up and turned their idea into a formal project with a list of the tasks required to produce an MVP. Below is a list of the ideas and outcomes from each team.

Performance audit of the current websites

Team: Rich, Andrea, Robin

The team discovered a tool called Lighthouse by Google Chrome which analyses a web page and returns a full audit of the dependent assets and accessibility issues.

The team spent some time trying to create a service using Lighthouse to produce an API, then realised that the Google Chrome team had done this work already. The service is called Moonlight, a SaaS to test the performance of a page.

As Moonlight takes a single webpage endpoint to test, we needed a way to recursively test pages. The team created a profiling script to gather the referenced endpoints of a site.
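One low-tech way to gather a site's internal URLs with standard tools looks like this (a sketch, not the team's actual script; www.ubuntu.com is just the example target):

$ wget --spider --recursive --level=2 --no-verbose --output-file=crawl.log https://www.ubuntu.com/
$ grep -o 'https://www.ubuntu.com[^ ]*' crawl.log | sort -u > endpoints.txt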

Canonical web team dashboard

Team: Luke, Ant, Yaili

The goal of this project was to motivate the team to improve key areas at a glance. The  metrics we wanted to capture were:

  • Whether the site is up or down
  • Live visitor counts
  • Monthly unique visitors
  • Open issues on the project
  • Open PRs on the project
  • Information about the last commit to the site's code base
  • PageSpeed Insights test results

We gathered a set of sites we would like to collect these metrics on:

  • www.ubuntu.com
  • www.canonical.com
  • maas.io
  • jujucharms.com
  • landscape.canonical.com
  • design.ubuntu.com
  • design.canonical.com
  • insights.ubuntu.com
  • developer.ubuntu.com
  • community.ubuntu.com
  • summit.ubuntu.com

The team used the MERN stack (MongoDB, Express.js, React and Node.js) and modified its sample project to create an interface which could be displayed depending on the state of the data. For example, the up-or-down card would display all sites as up, but once one went down the card would change to an error state and only display the information about the site that is down. By designing for emotion in this way, we can intelligently utilise the limited space available in a dashboard.

The team also used a few plugins to gather some data:

  • ping-monitor to ping our sites to check if they’re up, down or broken
  • node-http-ping to get response times for the same set of Canonical sites

Storing the data in MongoDB to keep historical data and using the /api endpoint to return the response time and status for each site, the team managed to produce a simplified dashboard showing the availability of our list of sites.

Ubuntu.com dev tools

Team: Graham, Karl


As a team, we have been using gulp scripts to lint and test our code locally and in our CI environments for some time, but we had never got around to applying these checks to our flagship website, www.ubuntu.com.

The plan here was to implement gulp scripts to lint Sass and JavaScript, and also to look into further options like spell-checking, auto-prefixing and HTML validation.

The team added Sass linting and borrowed the linting tasks from our styling framework vanilla-framework. This produced a long list of lint issues. The team tracked the lint errors and quickly fixed them to get a passing CI run.

Adding JavaScript linting (jsHint)

The team also implemented JavaScript linting using jsHint on the current JavaScript within the site's code base. This produced a number of JavaScript lint errors, which were fixed, ignoring the plugin code.

Finally, the new linting steps were added to the Travis configuration, so the linting is tested on each pull request.
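Stripped of the gulp wrapping, the checks are the familiar CLI linters; roughly (the paths and excludes here are illustrative, not the exact ones used in the ubuntu.com repository):

$ sass-lint --verbose 'static/sass/**/*.scss'
$ jshint static/js --exclude static/js/vendor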

Vanilla web components prototype

Team: Barry, Will, Robin

The goal was to enable Vanilla on a variety of platforms, which would allow people to use Vanilla in modern web apps.

The team  created a base repository using Polymer’s tools and started creating web components for Vanilla.

They discovered that the styling needs tweaking to be compatible with web components, possibly just by building a shared styles import which is included in each web component.

The team started by importing vanilla-framework from NPM, then built modular scss files containing only the relevant parts from Vanilla, and finally imported the modular style file in each web component.

Inside the repository there is a vanilla.html which imports all of the components. Components can individually be included as needed.

This work includes a demo system with API documentation. The demo system displays the component and the markup used to create it, and is accessed by running `polymer serve` and visiting the site.

This work can be used to build solid web components for use in Polymer and we can also use this work to jumpstart React components.

HTTP/2 on vanillaframework.io

In the midst of all this work, Robin found time to tackle the task of hosting our styling framework's website over HTTP/2. It's currently a proof of concept, but it can now be considered the start of a work item to roll out.

Demo site

Conclusion

Again, this was a successful hack day, with everyone busy working on things that interest them. Although there were fewer completed outcomes this time, we did set up a number of good projects which are ready to be continued.




Read more
Robin Winslow

We’ve been making an effort to secure all our websites with HTTPS. While some Canonical sites have enforced HTTPS for a while (e.g.: landscape.canonical.com, jujucharms.com, launchpad.net), it’s been missing from our other sites until now.

Why HTTPS?

The HTTPS movement has been building for years to help secure internet users against black-hat hackers and spies. The movement became more urgent after Edward Snowden revealed significant efforts by government agencies to spy on the world population.

The EFF have helped create two projects: LetsEncrypt – which massively simplifies the free installation of HTTPS certificates; and HTTPS Everywhere – a browser plugin to help you use HTTPS whenever it’s available. The advent of HTTP/2 has helped negate performance concerns when moving to HTTPS.
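With the Let's Encrypt client, for instance, obtaining and installing a certificate on a typical nginx host is close to a one-liner (illustrative only; our own sites use their existing certificate arrangements):

$ sudo certbot --nginx -d www.example.com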

Google have also made efforts to encourage websites to enable HTTPS: First announcing in 2014 that they would consider HTTPS support in their search ranking algorithm; and last year, that Google Chrome would start visually warning users of “insecure” (non-HTTPS) websites.

Our sites

We made https://www.ubuntu.com HTTPS-only in October of last year, and have since done so on 10 more sites:

We hope to enable HTTPS on our other sites in the coming months.

Although enabling HTTPS can be relatively simple, there were a number of specific challenges we had to overcome for some of our websites. I hope to write more about these in a follow-up post.

Read more
Leo Arias

Last Sunday we went to the Poás Volcano to make free maps.

This is the second geek outing of the JaquerEspéis. From the first one we learned that we had to wait until summer because it's not possible to make maps during a storm. And the day was perfect. It wasn't just sunny, but the crater was totally clear and thus we could add a new spot of Costa Rica to the virtual tour.

In addition to that, this time we arrived much better prepared, with multiple phones with mapillary, osmand and OSMTracker, a 360 cam, a Garmin GPS, a drone and even a notebook and two biologists.

The procession of the MapperSpace

Here's how it works. Everybody with GPS activated on their phone waits until it finds the location. Then each person uses the application of their preference to collect data: pictures, audio, video, text notes, traces, annotations in the notebook...

Later, at home, we upload, publish and share all the collected data. This is useful for improving the free maps of OpenStreetMap. We add everything from really simple things, like the location of a trash bin, to really important things, like how accessible the place is for a person in a wheelchair, together with the location of all the accesses or the places that lack them. Each person improves the map a little, in the region they know or passed through. With more than 3 million users, OpenStreetMap is the best map of the world that exists; and it has a particular importance in regions like ours, without a lot of economic potential for the megacorporations that make and sell closed maps while stealing private data from their users.

Because the maps we make are free, what comes next has no limits. There are groups working on the reconstruction of 3D models from the pictures, on the identification and interpretation of signs, on applications to calculate the optimal route to reach any place using any combination of means of transportation, on applications to assist decision making during the design of the future of a city, and many other things. All of this based on shared knowledge and community.

The image above is the virtual tour in Mapillary. As we recorded it with the 360 cam, you can click and drag with the mouse to see all the angles. You can also click the play button above to follow the path we took. Or you can click any of the green dots on the map to follow your own path.

Thank you very much to everybody who joined us, especially to Denisse and Charles for being our guides and for filling up the trip with interesting information about the flora, fauna, geology and historic importance of El Poás.

Members of the MaperEspeis

(More pictures and videos here)

The next MapperSpace will be on March 12th.

Read more
UbuntuTouch

Starting with snapd 2.20, snaps can be developed in a so-called classic confinement mode. Classic mode lets application developers package their software quickly, because it doesn't require many changes to an existing application. An application running in classic mode can see all of the host system's files under "/", just like a normal application today. After installation, however, all of its own files live under /snap/foo/current, and its executables are exposed under /snap/bin, just like every other snap.

When installing a classic snap we need to use the --classic option, and publishing such an application to the Ubuntu Core store requires a manual review. A classic snap can see all of the files under /snap/core/current and can also operate on files anywhere on the host. The goal is to let developers publish their applications in the snap format quickly and then gradually adopt Ubuntu Core confinement in later development. For the foreseeable future, classic snaps cannot be installed on all-snap systems such as Ubuntu Core 16.

For a classic application, its "/" directory corresponds to the host system's "/". More information is available at: http://snapcraft.io/docs/reference/confinement


Installation

Before we start developing, we install the core snap (not ubuntu-core) on the desktop. We can check with the snap list command:

liuxg@liuxg:~$ snap list
Name          Version  Rev  Developer  Notes
core          16.04.1  714  canonical  -
firefox-snap  0.1      x1              classic
hello         1.0      x1              devmode
hello-world   6.3      27   canonical  -

If your system has ubuntu-core installed instead, it is recommended to use reset-state from the devtools to restore the system to its initial state (with no snaps installed). Future snapd releases will no longer ship the ubuntu-core snap. We can also remove the ubuntu-core snap and install the core snap as follows:

$ sudo apt purge -y snapd
$ sudo apt install snapd
$ sudo snap install core

Also, if you cannot get the latest snapd 2.20 from the stable channel, on your Ubuntu desktop you can open "System Settings" / "Software & Updates" / "Developer Options":


Enable the switch shown above to receive the latest proposed updates for your Ubuntu desktop release; snapd 2.20 is currently in xenial-proposed.

In today's tutorial we will walk through an example:

https://github.com/liu-xiao-guo/helloworld-classic

Its snapcraft.yaml file is as follows:

snapcraft.yaml

name: hello
version: "1.0"
summary: The 'hello-world' of snaps
description: |
    This is a simple snap example that includes a few interesting binaries
    to demonstrate snaps and their confinement.
    * hello-world.env  - dump the env of commands run inside app sandbox
    * hello-world.evil - show how snappy sandboxes binaries
    * hello-world.sh   - enter interactive shell that runs in app sandbox
    * hello-world      - simply output text
grade: stable
confinement: classic
type: app  #it can be gadget or framework

apps:
 env:
   command: bin/env
 evil:
   command: bin/evil
 sh:
   command: bin/sh
 hello-world:
   command: bin/echo
 createfile:
   command: bin/createfile
 createfiletohome:
   command: bin/createfiletohome
 listhome:
   command: bin/listhome
 showroot:
   command: bin/showroot

parts:
 hello:
  plugin: dump
  source: .    

As you can see in the example above, we set the confinement to:

confinement: classic

This declares the snap as a classic application, and we must install it with the --classic option. Attentive developers will notice that we don't define any plugs, i.e. we don't use any interfaces. You can compare this with another of our projects: https://github.com/liu-xiao-guo/helloworld-demo.

As mentioned before, we just want to publish our application as a snap as quickly as possible; in classic mode we don't worry about security for now.

We can build the snap and install it with the following command:

$ sudo snap install hello_1.0_amd64.snap --classic --dangerous

Our showroot script looks like this:

#!/bin/bash

cd /
echo "list all of the content in the root:"
ls

echo "show the home content:"
cd home
ls

When we run the showroot app, we see:

liuxg@liuxg:~/snappy/desktop/helloworld-classic$ hello.showroot 
list all of the content in the root:
bin    core  home	     lib	 media	proc  sbin  sys  var
boot   dev   initrd.img      lib64	 mnt	root  snap  tmp  vmlinuz
cdrom  etc   initrd.img.old  lost+found  opt	run   srv   usr  vmlinuz.old
show the home content:
liuxg  root.ini
liuxg@liuxg:~/snappy/desktop/helloworld-classic$ ls /
bin    core  home            lib         media  proc  sbin  sys  var
boot   dev   initrd.img      lib64       mnt    root  snap  tmp  vmlinuz
cdrom  etc   initrd.img.old  lost+found  opt    run   srv   usr  vmlinuz.old

Clearly, it can see the entire file tree of the host system, and it can in fact operate on the files and directories it sees.
Of course, we can also run the evil script:

#!/bin/sh

set -e
echo "Hello Evil World!"

echo "This example demonstrates the app confinement"
echo "You should see a permission denied error next"

echo "Haha" > /var/tmp/myevil.txt

echo "If you see this line the confinement is not working correctly, please file a bug"
Running it produces:

liuxg@liuxg:~/snappy/desktop/helloworld-classic$ hello.evil
Hello Evil World!
This example demonstrates the app confinement
You should see a permission denied error next
If you see this line the confinement is not working correctly, please file a bug

Clearly, even without using any interfaces we can operate on any other directory and write whatever data we want; confinement has no effect in classic mode. As developers, all we need to do is quickly package our application as a snap.

Finally, as a quick example, let's use classic mode to package Firefox as a snap:

Firefox snapcraft.yaml

name: firefox-snap
version: '0.1'
summary: "A Firefox snap"
description: "Firefox in a classic confined snap"

grade: devel
confinement: classic

apps:
  firefox-snap:
    command: firefox
    aliases: [firefox]

parts:
  firefox:
    plugin: dump
    source: https://download.mozilla.org/?product=firefox-50.1.0-SSL&os=linux64&lang=en-US
    source-type: tar

Here we download the version we need directly and package it. Install and run our Firefox snap:



The full source code for this project is at: https://github.com/liu-xiao-guo/firefox-snap





Author: UbuntuTouch, published 2017/1/6 13:48

Read more
UbuntuTouch

We know that a snap package can define any number of apps. For desktop applications, how do we give each app its own icon and .desktop file? In today's article we will show how to do this. Note that this new feature is only available in snapcraft 2.25+.


First, let's look at a project I have already prepared:

https://github.com/liu-xiao-guo/helloworld-desktop

The file layout of the whole application is as follows:

liuxg@liuxg:~/snappy/desktop/helloworld-desktop$ tree -L 3
.
├── bin
│   ├── createfile
│   ├── createfiletohome
│   ├── echo
│   ├── env
│   ├── evil
│   ├── sh
│   └── writetocommon
├── echo.desktop
├── README.md
├── setup
│   └── gui
│       ├── echo.png
│       ├── helloworld.desktop
│       └── helloworld.png
└── snapcraft.yaml

As shown above, there is a directory called setup/gui containing a file called helloworld.desktop:

helloworld.desktop

[Desktop Entry]
Type=Application
Name=Hello
GenericName=Hello world
Comment=A hello world Ubuntu Desktop
Keywords=hello;world;
Exec=hello-xiaoguo.env
Icon=${SNAP}/meta/gui/helloworld.png
Terminal=true
X-Ubuntu-Touch=false
X-Ubuntu-Default-Department-ID=accessories
X-Ubuntu-Splash-Color=#F5F5F5
StartupNotify=true

Here it specifies the application's icon and the script to execute, hello-xiaoguo.env.

Now let's look at our snapcraft.yaml file:

snapcraft.yaml

name: hello-xiaoguo
version: "1.0"
summary: The 'hello-world' of snaps
description: |
    This is a simple snap example that includes a few interesting binaries
    to demonstrate snaps and their confinement.
    * hello-world.env  - dump the env of commands run inside app sandbox
    * hello-world.evil - show how snappy sandboxes binaries
    * hello-world.sh   - enter interactive shell that runs in app sandbox
    * hello-world      - simply output text
grade: stable
confinement: strict
type: app  #it can be gadget or framework

apps:
 env:
   command: bin/env
 evil:
   command: bin/evil
 sh:
   command: bin/sh
 hello-world:
   command: bin/echo
   desktop: usr/share/applications/echo.desktop
 createfile:
   command: bin/createfile
 createfiletohome:
   command: bin/createfiletohome
 writetocommon:
   command: bin/writetocommon

plugs:
    home:
        interface: home

parts:
 hello:
  plugin: dump
  source: .
  organize:
    echo.desktop: usr/share/applications/echo.desktop

This file also defines other apps, such as hello-world. So how do we give that one its own .desktop file too? The answer is:

 hello-world:
   command: bin/echo
   desktop: usr/share/applications/echo.desktop

We can specify a dedicated .desktop file underneath its command. Here our echo.desktop file is as follows:

echo.desktop

[Desktop Entry]
Type=Application
Name=Echo
GenericName=Hello world
Comment=A hello world Ubuntu Desktop
Keywords=hello;world;
Exec=hello-xiaoguo.hello-world
Icon=${SNAP}/meta/gui/echo.png
Terminal=true
X-Ubuntu-Touch=false
X-Ubuntu-Default-Department-ID=accessories
X-Ubuntu-Splash-Color=#F5F5F5
StartupNotify=true

Here it specifies its own executable and its own icon. Build and install the snap; in the Ubuntu Desktop dash we can then see:



Running the "Hello World" app shows:



Running our "echo" app:








Author: UbuntuTouch, published 2017/1/23 10:43

Read more
UbuntuTouch

Quite often we want to turn a website into a snap application, so that it can be downloaded directly from the store and used right away, without typing the site's address into a browser. Often our game lives on a website too, for example http://hexgl.bkcore.com/play/, and we can package that URL directly into a snap so it can be downloaded from the store and run. In today's tutorial we show how to package a website's URL into an application.

For demonstration purposes we will use www.sina.com.cn:

snapcraft.yaml


name: sina-webapp
version: '1.0'
summary: Sina webapp
description: |
  Webapp version of the Sina web application.

grade: stable
confinement: strict

apps:
  sina-webapp:
    command: webapp-launcher --enable-back-forward --webappUrlPatterns=http?://www.sina.com/* http://www.sina.com/ %u
    plugs:
      - browser-sandbox
      - camera
      - network
      - network-bind
      - opengl
      - pulseaudio
      - screen-inhibit-control
      - unity7
      - network-control
      - mount-observe

plugs:
  browser-sandbox:
    interface: browser-support
    allow-sandbox: false
  platform:
    interface: content
    content: ubuntu-app-platform1
    target: ubuntu-app-platform
    default-provider: ubuntu-app-platform

parts:
  webapp-container:
    after: [desktop-ubuntu-app-platform,webapp-helper]
    stage-packages:
      - fonts-wqy-zenhei
      - fcitx-frontend-qt5
    plugin: nil

Here we use the desktop-ubuntu-app-platform cloud part. Note that we also add support for Chinese fonts and the Fcitx input method:

      - fonts-wqy-zenhei
      - fcitx-frontend-qt5

You can refer to my earlier article "Using the platform interface provided by ubuntu-app-platform to reduce the size of Qt applications" to install and use ubuntu-app-platform:platform. Concretely, we must install it as follows:

$ sudo snap install ubuntu-app-platform

In a terminal, type:
$ snapcraft
to package the application in snap format. We install it with the following command:
$ sudo snap install sina-webapp_1.0_amd64.snap --dangerous
Once it's installed, we can see:

liuxg@liuxg:~$ snap list
Name                 Version  Rev  Developer  Notes
amazon-webapp        1.3      x1              -
azure                0.1      x2              -
core                 16.04.1  714  canonical  -
hello-world          6.3      27   canonical  -
sina-webapp          1.0      x1              -
snappy-debug         0.26     25   canonical  -
ubuntu-app-platform  1        22   canonical  -

Our sina-webapp has been installed successfully. We can also see the ubuntu-app-platform and core snaps. Because our application defines the following plugs:
  • camera
  • mount-observe
  • network-control
  • content
According to the documentation at http://snapcraft.io/docs/reference/interfaces, these interfaces must be connected manually, so we need to run the following commands:

$ sudo snap connect sina-webapp:platform ubuntu-app-platform:platform
$ sudo snap connect sina-webapp:camera core:camera
$ sudo snap connect sina-webapp:network-control core:network-control
$ sudo snap connect sina-webapp:mount-observe core:mount-observe

The commands above make the manual connections. If, because of a reinstall or for some other reason, running the application produces an error such as:

You need to connect the ubuntu-app-platform package with your application   
to reuse shared assets, please run:  
snap install ubuntu-app-platform  
snap connect sina-webapp:platform ubuntu-app-platform:platform  

then we need to use the following tool:
$ sudo /usr/lib/snapd/snap-discard-ns sina-webapp  
to clear the previous namespace, and then manually connect the interfaces listed above before running the application again.

We can find the application's icon in the desktop dash and run it.





The full source code for this project is at https://github.com/liu-xiao-guo/sina-webapp. We can install the application from the store with:

$ sudo snap install sina-webapp --beta




Author: UbuntuTouch, published 2017/1/22 14:09

Read more
UbuntuTouch

When building snap packages, we often find that after every small change to our code or snapcraft.yaml, rerunning the snapcraft command downloads the required packages from the Ubuntu archive all over again. If a package is large, this takes a long time to complete. On the desktop we can sometimes work around it with a VPN, but the problem is especially painful when building on ARM boards, where I can't even run a VPN. So how do we solve this?

Fortunately, our colleague ogra has designed a snap called packageproxy. We can install it with the following command:

$ sudo snap install packageproxy

After installation, snap list shows:

liu-xiao-guo@localhost:~$ snap list
Name            Version       Rev  Developer  Notes
classic         16.04         17   canonical  devmode
core            16.04.1       716  canonical  -
grovepi-server  1.0           x1              devmode
packageproxy    0.1           3    ogra       -
pi2             16.04-0.17    29   canonical  -
pi2-kernel      4.4.0-1030-3  22   canonical  -

On an ARM board such as a Raspberry Pi, we install the classic snap and enter the classic environment:

$ sudo snap install classic --devmode --edge
$ sudo classic

For the detailed steps, see the article "How to install Ubuntu Core on a Raspberry Pi and build inside the snap system". Once inside the classic environment, we need to modify the sources.list file in /etc/apt. To be safe, we can first back up the original sources.list with the following command:

(classic)liu-xiao-guo@localhost:/etc/apt$ sudo cp sources.list sources.list.bak

The original file is now saved as sources.list.bak. If we open sources.list, we see the following contents:

sources.list

# See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
# newer versions of the distribution.
deb http://ports.ubuntu.com/ubuntu-ports/ xenial main restricted
# deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial main restricted

## Major bug fix updates produced after the final release of the
## distribution.
deb http://ports.ubuntu.com/ubuntu-ports/ xenial-updates main restricted
# deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial-updates main restricted

## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team. Also, please note that software in universe WILL NOT receive any
## review or updates from the Ubuntu security team.
deb http://ports.ubuntu.com/ubuntu-ports/ xenial universe
# deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial universe
deb http://ports.ubuntu.com/ubuntu-ports/ xenial-updates universe
# deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial-updates universe

## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team, and may not be under a free licence. Please satisfy yourself as to
## your rights to use the software. Also, please note that software in
## multiverse WILL NOT receive any review or updates from the Ubuntu
## security team.
deb http://ports.ubuntu.com/ubuntu-ports/ xenial multiverse
# deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial multiverse
deb http://ports.ubuntu.com/ubuntu-ports/ xenial-updates multiverse
# deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial-updates multiverse

## N.B. software from this repository may not have been tested as
## extensively as that contained in the main release, although it includes
## newer versions of some applications which may provide useful features.
## Also, please note that software in backports WILL NOT receive any review
## or updates from the Ubuntu security team.
deb http://ports.ubuntu.com/ubuntu-ports/ xenial-backports main restricted universe multiverse
# deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial-backports main restricted universe multiverse

## Uncomment the following two lines to add software from Canonical's
## 'partner' repository.
## This software is not part of Ubuntu, but is offered by Canonical and the
## respective vendors as a service to Ubuntu users.
# deb http://archive.canonical.com/ubuntu xenial partner
# deb-src http://archive.canonical.com/ubuntu xenial partner

deb http://ports.ubuntu.com/ubuntu-ports/ xenial-security main restricted
# deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial-security main restricted
deb http://ports.ubuntu.com/ubuntu-ports/ xenial-security universe
# deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial-security universe
deb http://ports.ubuntu.com/ubuntu-ports/ xenial-security multiverse
# deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial-security multiverse

Clearly, all the sources in the file above point at http://ports.ubuntu.com/ubuntu-ports/, which means that every time we rebuild our snap, packages are downloaded from that address; with large packages this makes builds take far too long, which is obviously not what we want. If we replace http://ports.ubuntu.com/ubuntu-ports/ with http://localhost:9999/ubuntu-ports/, the whole sources.list file becomes:

sources.list

# See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
# newer versions of the distribution.
deb http://localhost:9999/ubuntu-ports/ xenial main restricted
# deb-src http://localhost:9999/ubuntu-ports/ xenial main restricted

## Major bug fix updates produced after the final release of the
## distribution.
deb http://localhost:9999/ubuntu-ports/ xenial-updates main restricted
# deb-src http://localhost:9999/ubuntu-ports/ xenial-updates main restricted

## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team. Also, please note that software in universe WILL NOT receive any
## review or updates from the Ubuntu security team.
deb http://localhost:9999/ubuntu-ports/ xenial universe
# deb-src http://localhost:9999/ubuntu-ports/ xenial universe
deb http://localhost:9999/ubuntu-ports/ xenial-updates universe
# deb-src http://localhost:9999/ubuntu-ports/ xenial-updates universe

## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team, and may not be under a free licence. Please satisfy yourself as to
## your rights to use the software. Also, please note that software in
## multiverse WILL NOT receive any review or updates from the Ubuntu
## security team.
deb http://localhost:9999/ubuntu-ports/ xenial multiverse
# deb-src http://localhost:9999/ubuntu-ports/ xenial multiverse
deb http://localhost:9999/ubuntu-ports/ xenial-updates multiverse
# deb-src http://localhost:9999/ubuntu-ports/ xenial-updates multiverse

## N.B. software from this repository may not have been tested as
## extensively as that contained in the main release, although it includes
## newer versions of some applications which may provide useful features.
## Also, please note that software in backports WILL NOT receive any review
## or updates from the Ubuntu security team.
deb http://localhost:9999/ubuntu-ports/ xenial-backports main restricted universe multiverse
# deb-src http://localhost:9999/ubuntu-ports/ xenial-backports main restricted universe multiverse

## Uncomment the following two lines to add software from Canonical's
## 'partner' repository.
## This software is not part of Ubuntu, but is offered by Canonical and the
## respective vendors as a service to Ubuntu users.
# deb http://archive.canonical.com/ubuntu xenial partner
# deb-src http://archive.canonical.com/ubuntu xenial partner

deb http://localhost:9999/ubuntu-ports/ xenial-security main restricted
# deb-src http://localhost:9999/ubuntu-ports/ xenial-security main restricted
deb http://localhost:9999/ubuntu-ports/ xenial-security universe
# deb-src http://localhost:9999/ubuntu-ports/ xenial-security universe
deb http://localhost:9999/ubuntu-ports/ xenial-security multiverse
# deb-src http://localhost:9999/ubuntu-ports/ xenial-security multiverse

In other words, every time we download packages, they now come through the local address http://localhost:9999/ubuntu-ports/.
  • If a package has been downloaded before, packageproxy serves it directly from the local cache, so no new download is needed
  • If a package has never been downloaded, packageproxy fetches it from the network and keeps a local copy for later reuse
To modify the sources.list file from the command line, we can use the following command:

sudo sed -i 's/http:\/\/ports.ubuntu.com\/ubuntu-ports/http:\/\/localhost:9999\/ubuntu-ports/g' /etc/apt/sources.list

This is clearly a very good approach. The first build may still take some time, but later builds can fetch packages locally, which speeds up our builds considerably.
After completing the steps above, we can update the system and install the build tools with the following commands:

$ sudo apt-get update
$ sudo apt install snapcraft git-core build-essential

With that, our build environment is ready. With this configuration, the first build of an application will still be slower if the required packages have never been downloaded before, but the second build is noticeably faster. Of course, instead of localhost we can point the address at the IP of a particular device so that everyone pulls the packages they need from the same machine (see the sketch below); this is well suited to poor network conditions and especially to hackathons. As a demonstration, http://paste.ubuntu.com/23789982/ shows that after cleaning the project, the build speed improved dramatically.
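A sketch of that, assuming the machine running packageproxy is reachable at 192.168.1.10 (an illustrative address; use your own proxy host's IP):

sudo sed -i 's/http:\/\/localhost:9999/http:\/\/192.168.1.10:9999/g' /etc/apt/sources.list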

If you hit an error like the following while using it:

(classic)liu-xiao-guo@localhost:~$ sudo apt-get update
Err:1 http://localhost:9999/ubuntu-ports xenial InRelease
  Could not connect to localhost:9999 (127.0.0.1). - connect (111: Connection refused) [IP: 127.0.0.1 9999]
Err:2 http://localhost:9999/ubuntu-ports xenial-updates InRelease
  Unable to connect to localhost:9999: [IP: 127.0.0.1 9999]
Err:3 http://localhost:9999/ubuntu-ports xenial-backports InRelease
  Unable to connect to localhost:9999: [IP: 127.0.0.1 9999]
Err:4 http://localhost:9999/ubuntu-ports xenial-security InRelease
  Unable to connect to localhost:9999: [IP: 127.0.0.1 9999]

This usually means packageproxy ran into a problem at runtime; we can fix it by deleting a file under the following path:

liu-xiao-guo@localhost:/var/snap/packageproxy/3$ ls
approx.conf  config.yaml  hosts.allow  hosts.deny  lockfile.lock  var

Deleting the lockfile.lock shown above resolves the problem.

This method also works for building snaps on the Ubuntu desktop; simply change "ubuntu-ports" above to "ubuntu". That exercise is left to the reader.

If you want to delete all the downloaded packages to reclaim storage space:

  • remove the application with snap remove packageproxy
  • delete all the cached files with rm -rf /var/snap/packageproxy/3/var/cache/approx/*









Author: UbuntuTouch, published 2017/1/13 10:04

Read more
UbuntuTouch

LXD is a container hypervisor that provides a much richer user experience on top of LXC. In today's tutorial we show how to use LXD to build our snap applications against different Ubuntu releases from the desktop.


1) Install LXD and the command-line tools


We can follow https://linuxcontainers.org/lxd/getting-started-cli/ to install LXD. For convenience, we can use a ready-made Ubuntu image:

liuxg@liuxg:~$ lxc launch ubuntu:yakkety
Creating flying-snake
Starting flying-snake

This creates a container called flying-snake (the name is generated automatically), based on Ubuntu 16.10 (yakkety).
If you want to choose the container's name yourself, you can create it with:

$ lxc launch ubuntu:yakkety foobar

Here foobar becomes the name of the container instead of the auto-generated flying-snake above.

We can list our containers with:

liuxg@liuxg:~$ lxc list
+----------------------+---------+-------------------+------+------------+-----------+
|         NAME         |  STATE  |       IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+----------------------+---------+-------------------+------+------------+-----------+
| flying-snake         | RUNNING | 10.0.1.143 (eth0) |      | PERSISTENT | 0         |
+----------------------+---------+-------------------+------+------------+-----------+
| immortal-feline      | STOPPED |                   |      | PERSISTENT | 0         |
+----------------------+---------+-------------------+------+------------+-----------+
| vivid-x86-armhf      | STOPPED |                   |      | PERSISTENT | 0         |
+----------------------+---------+-------------------+------+------------+-----------+
| xenial-desktop-amd64 | STOPPED |                   |      | PERSISTENT | 0         |
+----------------------+---------+-------------------+------+------------+-----------+

2) Create a user


We can create a user of our own inside the container with:

liuxg@liuxg:~$ lxc exec flying-snake -- adduser liuxg
Adding user `liuxg' ...
Adding new group `liuxg' (1001) ...
Adding new user `liuxg' (1001) with group `liuxg' ...
Creating home directory `/home/liuxg' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
Changing the user information for liuxg
Enter the new value, or press ENTER for the default
	Full Name []: liuxg
	Room Number []: 
	Work Phone []: 
	Home Phone []: 
	Other []: 
Is the information correct? [Y/n] y

Note that flying-snake is the name of the container we just created; substitute your own container's name. I created a user called liuxg in this container. Now give the user administrator rights:

liuxg@liuxg:~$ lxc exec flying-snake -- adduser liuxg sudo
Adding user `liuxg' to group `sudo' ...
Adding user liuxg to group sudo
Done.

$ lxc exec flying-snake -- visudo
The command above launches the editor; at the end of the file, add:

<username>   ALL=(ALL) NOPASSWD: ALL



Here liuxg is the user name we just created; replace <username> with your own user name.

Update the system and install the required tools:

$ lxc exec flying-snake -- apt update -qq
$ lxc exec flying-snake -- apt upgrade -qq
$ lxc exec flying-snake -- apt install -qq -y snapcraft build-essential


3) Log in and build our application


We can log in with the following command:

$ lxc exec flying-snake -- sudo -iu liuxg

Note that liuxg is the user we created earlier.

liuxg@liuxg:~$ lxc exec flying-snake -- sudo -iu liuxg
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

liuxg@flying-snake:~$ 
liuxg@flying-snake:~$ ls -al
total 20
drwxr-xr-x 2 liuxg liuxg 4096 Jan  4 02:52 .
drwxr-xr-x 4 root  root  4096 Jan  4 02:52 ..
-rw-r--r-- 1 liuxg liuxg  220 Jan  4 02:52 .bash_logout
-rw-r--r-- 1 liuxg liuxg 3771 Jan  4 02:52 .bashrc
-rw-r--r-- 1 liuxg liuxg  655 Jan  4 02:52 .profile
liuxg@flying-snake:~$ mkdir apps
liuxg@flying-snake:~$ cd apps/
liuxg@flying-snake:~/apps$ git clone https://github.com/liu-xiao-guo/alias
Cloning into 'alias'...
remote: Counting objects: 4, done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 4 (delta 0), reused 4 (delta 0), pack-reused 0
Unpacking objects: 100% (4/4), done.
Checking connectivity... done.
liuxg@flying-snake:~/apps$ ls
alias
liuxg@flying-snake:~/apps$ cd alias/
liuxg@flying-snake:~/apps/alias$ ls
hello.sh  snapcraft.yaml
liuxg@flying-snake:~/apps/alias$ snapcraft 
Preparing to pull aliases 
Pulling aliases 
Preparing to build aliases 
Building aliases 
Staging aliases 
Priming aliases 
Snapping 'my-alias' |                                                                
Snapped my-alias_0.1_amd64.snap

We can see that we have packaged our application as a snap inside a yakkety (16.10) environment.

We can use the lxc file pull command to copy files from the container to our host, for example:

lxc file pull flying-snake/home/liuxg/apps/alias/my-alias_0.1_amd64.snap .
We can use:

$ lxc stop flying-snake

to stop our container.

liuxg@liuxg:~/tmp$ lxc stop flying-snake
liuxg@liuxg:~/tmp$ lxc list
+----------------------+---------+------+------+------------+-----------+
|         NAME         |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+----------------------+---------+------+------+------------+-----------+
| flying-snake         | STOPPED |      |      | PERSISTENT | 0         |
+----------------------+---------+------+------+------------+-----------+
| immortal-feline      | STOPPED |      |      | PERSISTENT | 0         |
+----------------------+---------+------+------+------------+-----------+
| vivid-x86-armhf      | STOPPED |      |      | PERSISTENT | 0         |
+----------------------+---------+------+------+------------+-----------+
| xenial-desktop-amd64 | STOPPED |      |      | PERSISTENT | 0         |
+----------------------+---------+------+------+------------+-----------+

For more details, see: https://linuxcontainers.org/lxd/getting-started-cli/








Author: UbuntuTouch, published 2017/1/4 11:50

Read more
UbuntuTouch

In the earlier article "Using snapweb to manage our Ubuntu Core applications", we saw that some applications show a distinctive icon of their own while others only show the default Ubuntu logo. The reason is that the icon is not defined in their snapcraft.yaml file.


Let's look at a project I have already created:

https://github.com/liu-xiao-guo/helloworld-icon

Its snapcraft.yaml is defined as follows:

snapcraft.yaml


name: hello-icon
version: "1.0"
summary: The 'hello-world' of snaps
description: |
    This is a simple snap example that includes a few interesting binaries
    to demonstrate snaps and their confinement.
    * hello-world.env  - dump the env of commands run inside app sandbox
    * hello-world.evil - show how snappy sandboxes binaries
    * hello-world.sh   - enter interactive shell that runs in app sandbox
    * hello-world      - simply output text
grade: stable
confinement: strict
type: app  #it can be gadget or framework
icon: icon.png

apps:
 env:
   command: bin/env
 evil:
   command: bin/evil
 sh:
   command: bin/sh

parts:
 hello:
  plugin: dump
  source: .

Here we define:

icon: icon.png

 

After building and installing the snap, we reopen snapweb to check:



Here we can see the new icon. In this way we can give our snap application an icon of its own.


Author: UbuntuTouch, published 2017/2/6 9:02:57

Read more
UbuntuTouch

For some snaps we would like to run a script of our own at install time to do whatever we need, for example creating a directory. How can we get hold of that event? In the earlier article "如何为我们的Ubuntu Core应用进行设置" (How to configure our Ubuntu Core application) we already showed how to configure a snap; the configure script there is invoked whenever settings are changed. In fact, it is also invoked automatically at install time. Let's illustrate this with the following example:

https://github.com/liu-xiao-guo/helloworld-install

In the example above, our configure script is as follows:

configure

#!/bin/sh

echo "This is called during the installation!"
exit 1

This is a very simple script. During installation it returns "1", indicating failure, so the snap will not be installed successfully:

liu-xiao-guo@localhost:~/apps/helloworld-install$ sudo snap install *.snap --dangerous
error: cannot perform the following tasks:
- Run configure hook of "hello-install" snap if present (This is called during the installation!)
liu-xiao-guo@localhost:~/apps/helloworld-install$ snap list
Name            Version       Rev  Developer  Notes
classic         16.04         17   canonical  devmode
core            16.04.1       716  canonical  -
grovepi-server  1.0           x1              devmode
packageproxy    0.1           3    ogra       -
pi2             16.04-0.17    29   canonical  -
pi2-kernel      4.4.0-1030-3  22   canonical  -
snapweb         0.21.2        25   canonical  -

As shown above, helloworld-install was not installed on our system.

If we change the configure script to:

configure

#!/bin/sh

echo "This is called during the installation!"
exit 0

This time the script returns "0", indicating success, so the installation goes through:

liu-xiao-guo@localhost:~/apps/helloworld-install$ sudo snap install *.snap --dangerous
hello-install 1.0 installed
liu-xiao-guo@localhost:~/apps/helloworld-install$ snap list
Name            Version       Rev  Developer  Notes
classic         16.04         17   canonical  devmode
core            16.04.1       716  canonical  -
grovepi-server  1.0           x1              devmode
hello-install   1.0           x1              -
packageproxy    0.1           3    ogra       -
pi2             16.04-0.17    29   canonical  -
pi2-kernel      4.4.0-1030-3  22   canonical  -
snapweb         0.21.2        25   canonical  -
liu-xiao-guo@localhost:~/apps/helloworld-install$ vi /var/log/syslog
liu-xiao-guo@localhost:~/apps/helloworld-install$ sudo vi /var/log/syslog

We can find the output of this script in the system's /var/log/syslog:



Clearly the script was run during installation. By running such a hook we can perform some initialization for our application and lay the groundwork for it to run afterwards.
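
As a slightly more useful sketch than the echo above, a configure hook could prepare a writable data directory for the snap at install time. This is only an illustration, not the script from the example repository; it relies on the $SNAP_DATA variable that snapd sets for hooks:

#!/bin/sh
set -e

# $SNAP_DATA points at the snap's writable data directory, e.g. /var/snap/<name>/current
mkdir -p "$SNAP_DATA/config"

# exiting with 0 lets the installation continue; any non-zero value would abort it
exit 0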

Author: UbuntuTouch, published 2017/1/16 10:30:29

Read more
UbuntuTouch

In the earlier article "如何为我们的Ubuntu Core应用进行设置" (How to configure our Ubuntu Core application), we used the copy plugin to copy the configure file into the directory where it is needed. The implementation looked like this:

snapcraft.yaml

parts:  
 hello:  
  plugin: copy  
  files:  
    ./bin: bin  
 config:  
  plugin: dump  
  source: .  
  organize:  
    configure: meta/hooks/configure  

Since snapcraft 2.25 provides direct support for hooks, all we need to do now is create a directory called snap/hooks at the root of our project and copy our configure file into it:

liuxg@liuxg:~/snappy/desktop/helloworld-hook$ tree -L 4
.
├── bin
│   ├── createfile
│   ├── createfiletohome
│   ├── echo
│   ├── env
│   ├── evil
│   └── sh
├── setup
│   ├── gui
│   │   ├── helloworld.desktop
│   │   └── helloworld.png
│   └── license.txt
├── snap
│   └── hooks
│       └── configure
└── snapcraft.yaml

With this file layout in place, snapcraft automatically copies the configure file into the meta/hooks directory for us. Below is the content of our prime directory:

liuxg@liuxg:~/snappy/desktop/helloworld-hook/prime$ tree -L 3
.
├── bin
│   ├── createfile
│   ├── createfiletohome
│   ├── echo
│   ├── env
│   ├── evil
│   └── sh
├── command-createfiletohome.wrapper
├── command-createfile.wrapper
├── command-env.wrapper
├── command-evil.wrapper
├── command-hello-world.wrapper
├── command-sh.wrapper
├── meta
│   ├── gui
│   │   ├── helloworld.desktop
│   │   └── helloworld.png
│   ├── hooks
│   │   └── configure
│   └── snap.yaml
└── snap
    └── hooks
        └── configure

Keep in mind that this feature is only available in snapcraft 2.25 and later. We can see a file called configure under meta/hooks/.

We install this snap and then run the following command:

$ sudo snap set hello username=foo password=bar

We can read this value back with the following command:

$ sudo snap get hello username
foo

Clearly we get back the value we set. The complete source code is at https://github.com/liu-xiao-guo/helloworld-hook. Another example can be found under hooks in the snapcraft project.
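
For reference, inside the hook these values are normally read back with snapctl. The following is only a sketch of what such a configure script might look like, not the exact script from the repository above:

#!/bin/sh
set -e

# snapctl get reads the options stored with "snap set hello username=... password=..."
username="$(snapctl get username)"
password="$(snapctl get password)"

# nothing has been set yet (for example right after installation): nothing to do
if [ -z "$username" ]; then
    exit 0
fi

echo "configure hook received username=$username"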

Further reading: https://github.com/snapcore/snapcraft/blob/master/docs/hooks.md. As described there, the same thing can also be implemented in another way; see the pyhooks example. The advantage of that approach is that the configuration logic can be written in Python. Running it looks like this:

liuxg@liuxg:~$ sudo snap set pyhooks fail=true
error: cannot perform the following tasks:
- Run configure hook of "pyhooks" snap (Failing as requested.)
liuxg@liuxg:~$ sudo snap set pyhooks fail=false


Further reading: https://snapcraft.io/docs/build-snaps/hooks

Author: UbuntuTouch, published 2017/1/20 14:05:49

Read more
UbuntuTouch

In today's tutorial we show how to use Azure IoT Hub with Ubuntu Core to develop our application. Azure IoT Hub provides a framework for managing our IoT devices and can visualize our data through its preconfigured solutions. In this article we describe how to connect our device to the remote monitoring preconfigured solution.


1) Provision the remote monitoring preconfigured solution


We can follow Microsoft's official documentation:


to create our own preconfigured solution, ending up with a configuration like the following:



Here my solution is named "sensors".

If our IoT device is sending data up, we can see it in the "Telemetry History" panel on the right, usually rendered as curves.
While creating the device, we need to write down the values shown in the screen below so that we can use them later in our code:



We can also open azure.cn to view all the resources we have created:


The "connection string - primary key" shown here will be very useful for our programming later, so take special note of it.


2) Build a snap application written in C


In this section we show how to develop a client in C and turn it into a snap. This application will communicate with the remote monitoring preconfigured solution created in the previous section. We develop the snap on an Ubuntu 16.04 desktop; if you are not yet familiar with setting up a snap development environment, please refer to my earlier articles to get your environment ready. As described there, we first install the required packages:

$ sudo apt-get install cmake gcc g++

Add the AzureIoT repository to the machine:

$ sudo add-apt-repository ppa:aziotsdklinux/ppa-azureiot
$ sudo apt-get update

Install the azure-iot-sdk-c-dev package:

$ sudo apt-get install -y azure-iot-sdk-c-dev

With that, all the packages we need are installed. For various reasons some errors show up while compiling the sample, so we have to make the following manual change:

/usr/include/azureiot/inc$ sudo mv azure_c_shared_utility ..

With this change, the header files will be found during the build below.
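
Moving system headers around is a bit of a hack. An alternative that should achieve the same thing, since the missing azure_c_shared_utility headers simply live one level deeper, is to add both directories to the include path in the sample's CMakeLists.txt instead (untested here, just a sketch):

include_directories(/usr/include/azureiot /usr/include/azureiot/inc)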

Let's first take a look at a project I have already prepared:

snapcraft.yaml

name: remote-monitor 
version: '0.1' 
summary: This is a remote-monitor snap for azure
description: |
  This is a remote-monitor sample snap for azure

grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  remote-monitor:
    command: bin/sample_app
    plugs: [network]

parts:
  remote:
    plugin: cmake
    source: ./src

This is a cmake project. Since the snap name and the app name are the same, once it is installed we can simply type remote-monitor to run our application. Before making any changes, open the remote_monitoring.c file and note the following code:

static const char* deviceId = "mydevice";
static const char* deviceKey = "[Device Key]";
static const char* hubName = "sensorsf8f61";
static const char* hubSuffix = "azure-devices.cn";

The meaning of these fields is as follows:

static const char* deviceId = "[Device Id]";
static const char* deviceKey = "[Device Key]";
static const char* hubName = "[IoTHub Name]";
static const char* hubSuffix = "[IoTHub Suffix, i.e. azure-devices.net]";

We need to replace these values according to our own account. In a real application we can modify the following code in remote_monitoring.c:

while (1)
{
	unsigned char*buffer;
	size_t bufferSize;
	
	srand(time(NULL));
	int r = rand() % 50;  
	int r1 = rand() % 55;
	int r2 = rand() % 50;
	printf("r: %d, r1: %d, r2: %d\n", r, r1, r2);
	thermostat->Temperature = r;
	thermostat->ExternalTemperature = r1;
	thermostat->Humidity = r2;
	
	(void)printf("Sending sensor value Temperature = %d, Humidity = %d\r\n", thermostat->Temperature, thermostat->Humidity);

	if (SERIALIZE(&buffer, &bufferSize, thermostat->DeviceId, thermostat->Temperature, thermostat->Humidity, thermostat->ExternalTemperature) != CODEFIRST_OK)
	{
		(void)printf("Failed sending sensor value\r\n");
	}
 ...
}
so as to send up the data we need. Here we have simply written some random values.
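
Since the part uses the cmake plugin with source: ./src, snapcraft drives a normal CMake build in that directory; the sample ships its own CMakeLists.txt. Roughly speaking it has to build sample_app and link it against the Azure IoT SDK, along the lines of the sketch below. This is only an approximation, not the exact file from the project; the source file list and the library names are assumptions based on the azure-iot-sdk-c-dev packaging:

cmake_minimum_required(VERSION 2.8.11)
project(remote_monitoring)

# build the client shown above; add the sample's other source files here as needed
add_executable(sample_app remote_monitoring.c)

# library names are assumptions based on the azure-iot-sdk-c layout
target_link_libraries(sample_app
    serializer
    iothub_client
    iothub_client_amqp_transport
    aziotsharedutil
    uamqp
    pthread curl ssl crypto m)

# the install step is what places the binary in bin/, matching "command: bin/sample_app"
install(TARGETS sample_app RUNTIME DESTINATION bin)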

Note that the "deviceKey" here is the "Device Key" shown in the screenshot in the previous section.

In the terminal we simply type the following command:

$ snapcraft

This produces the snap package for our project. We can install it with the following command:

liuxg@liuxg:~/snappy/desktop/azure/remote-monitor$ sudo snap install remote-monitor_0.1_amd64.snap --dangerous
[sudo] password for liuxg: 
remote-monitor 0.1 installed

liuxg@liuxg:~$ snap list
Name            Version  Rev  Developer  Notes
azure           0.1      x1              -
core            16.04.1  714  canonical  -
hello           1.0      x1              devmode
hello-world     6.3      27   canonical  -
hello-xiaoguo   1.0      x1              -
remote-monitor  0.1      x2    
Clearly our remote-monitor snap has been installed successfully. In the terminal we type the following command:

liuxg@liuxg:~$ remote-monitor 
IoTHubClient accepted the message for delivery
r: 30, r1: 37, r2: 4
Sending sensor value Temperature = 30, Humidity = 4
IoTHubClient accepted the message for delivery
r: 45, r1: 23, r2: 35
Sending sensor value Temperature = 45, Humidity = 35
IoTHubClient accepted the message for delivery
r: 16, r1: 39, r2: 25
Sending sensor value Temperature = 16, Humidity = 25
IoTHubClient accepted the message for delivery
r: 16, r1: 33, r2: 14
Sending sensor value Temperature = 16, Humidity = 14
IoTHubClient accepted the message for delivery
r: 20, r1: 29, r2: 32

Clearly our client application keeps sending data to the Azure IoT Hub. We can go to https://www.azureiotsuite.cn/ to see the data that has been received.



Below that we can see how the maximum and minimum of the device data change over time.

If you would like to run this application on an ARM device such as a Raspberry Pi, please refer to my article "如何为树莓派安装Ubuntu Core并在Snap系统中进行编译" (How to install Ubuntu Core on a Raspberry Pi and build inside the snap system). The installation is exactly the same as described here; please give it a try yourself.


3) Build a snap application written in Node.js


In this section we describe how to develop our snap application with Node.js. We can refer to the article "适用于 Node.js 的 Azure IoT 中心入门" (Get started with Azure IoT Hub for Node.js). As that article explains, what interests us most is its third console application, SimulatedDevice.js.

SimulatedDevice.js

#!/usr/bin/env node

var clientFromConnectionString = require('azure-iot-device-amqp').clientFromConnectionString;
var Message = require('azure-iot-device').Message;

var connectionString = 'HostName=sensorsf8f61.azure-devices.cn;DeviceId=mydevice;SharedAccessKey={Device Key}';

var client = clientFromConnectionString(connectionString);

function printResultFor(op) {
  return function printResult(err, res) {
    if (err) console.log(op + ' error: ' + err.toString());
    if (res) console.log(op + ' status: ' + res.constructor.name);
  };
}

var connectCallback = function (err) {
  if (err) {
    console.log('Could not connect: ' + err.amqpError);
  } else {
    console.log('Client connected');

    // Create a message and send it to the IoT Hub every 5 seconds (the interval below is 5000 ms)
    setInterval(function(){
        var temp = 10 + (Math.random() * 4);
        var windSpeed = 10 + (Math.random() * 4);
        var data = JSON.stringify({ deviceId: 'mydevice', temp: temp, windSpeed: windSpeed});
        var message = new Message(data);
        console.log("Sending message: " + message.getData());
        client.sendEvent(message, printResultFor('send'));

    }, 5000);
  }
};

client.open(connectCallback);

Note that in the code above we need to modify the connectionString by hand:
var connectionString = 'HostName=sensorsf8f61.azure-devices.cn;DeviceId=mydevice;SharedAccessKey={yourdevicekey}';
As described in that article, its general form is:

var connectionString = 'HostName={youriothostname};DeviceId=myFirstNodeDevice;SharedAccessKey={yourdevicekey}';

We need to adjust the string above according to the parameters we set up in the first section. You can refer to my project:

snapcraft.yaml

name: azure 
version: '0.1' # just for humans, typically '1.2+git' or '1.3.2'
summary: This is an azure snap app
description: |
  This is an azure client snap to send a message

grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  azure:
    command: bin/send
    plugs: [network]

parts:
  node:
    plugin: nodejs
    source: .
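
For the nodejs plugin to produce the bin/send command referenced above, the project also needs a package.json that pulls in the Azure client libraries and exposes SimulatedDevice.js as the send executable. A minimal sketch of what such a package.json might contain (the name and version numbers are placeholders, not necessarily what the actual project uses):

{
  "name": "azure-send",
  "version": "0.1.0",
  "description": "Sends simulated telemetry to Azure IoT Hub",
  "bin": {
    "send": "./SimulatedDevice.js"
  },
  "dependencies": {
    "azure-iot-device": "^1.0.0",
    "azure-iot-device-amqp": "^1.0.0"
  }
}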

Likewise, we can type the snapcraft command to build the corresponding package and then install it:

liuxg@liuxg:~/snappy/desktop/azurenode-snap$ snap list
Name            Version  Rev  Developer  Notes
azure           0.1      x2              -
core            16.04.1  714  canonical  -
hello           1.0      x1              devmode
hello-world     6.3      27   canonical  -
hello-xiaoguo   1.0      x1              -
remote-monitor  0.1      x2         

We can then run the azure command directly:

liuxg@liuxg:~/snappy/desktop/azurenode-snap$ azure
Client connected
Sending message: {"deviceId":"mydevice","temp":11.826184131205082,"windSpeed":11.893792165443301}
send status: MessageEnqueued
Sending message: {"deviceId":"mydevice","temp":10.594819721765816,"windSpeed":10.54138664342463}
send status: MessageEnqueued
Sending message: {"deviceId":"mydevice","temp":11.27814894542098,"windSpeed":10.962828870862722}
send status: MessageEnqueued
Sending message: {"deviceId":"mydevice","temp":13.068702490068972,"windSpeed":10.28670579008758}
send status: MessageEnqueued
Sending message: {"deviceId":"mydevice","temp":11.723079251125455,"windSpeed":12.173830625601113}
send status: MessageEnqueued
Sending message: {"deviceId":"mydevice","temp":12.595101269893348,"windSpeed":12.120747512206435}
send status: MessageEnqueued
Sending message: {"deviceId":"mydevice","temp":11.431507185101509,"windSpeed":11.76255036983639}
send status: MessageEnqueued
Sending message: {"deviceId":"mydevice","temp":12.488932724110782,"windSpeed":13.200456796213984}
send status: MessageEnqueued

We can go to https://www.azureiotsuite.cn/ to see the data that has been received:



Above we can see the curves for Temp and Wind Speed. Likewise, if you would like to run this application on an ARM device such as a Raspberry Pi, please refer to my article "如何为树莓派安装Ubuntu Core并在Snap系统中进行编译" (How to install Ubuntu Core on a Raspberry Pi and build inside the snap system); please give it a try yourself.


Author: UbuntuTouch, published 2017/1/19 16:59:05

Read more