Canonical Voices

Nicholas Skaggs

Prepping for a Summer of Code!

The time to apply is here! Ubuntu has applied for GSOC 2016, but we need project ideas for prospective students, and mentors to mentor them.

What is GSOC?
GSOC stands for Google Summer of Code. The event brings together university students and open source organizations like Ubuntu. It happens over the course of the summer, with mentors guiding students on a one-to-one basis. Mentors propose project ideas, and students select them, pairing up with the mentor to make the idea a reality.

I'll be a mentor!
Mentors need to be around to help a student from May through August. You'll be mentoring a student on the project you propose, so you'll need to be capable of completing the project yourself. As the time commitment is long, it's helpful to have a friend who can pitch in if needed. We've put together all the information you need to know as a mentor on community.u.c, including links to some mentoring guides. These will give you more details about what to expect.

I'm in. What do I need to do?
To make sure your ideas are included in our application, you need to have them on the Ideas wiki by February 19th, 2016. When you are ready, simply add your idea. Assuming we are accepted as an organization, students will read our ideas, and we'll have a period of time to finalize the details with interested students.

I have a question!
If you have questions about what all this mentoring might entail, feel free to reach out to me or anyone on the community team. This is a great way to make some needed ideas a reality and grow the community at the same time!

Read more
Nicholas Skaggs

Google Code In 2015: Complete!

Google Code In 2015 is now complete! Overall, we had a total of 215 students finish more than 500 tasks for Ubuntu! The students made contributions to documentation, created wallpapers and other art, fixed Unity 7 issues, hacked on the core apps for the phone, performed tests, wrote automated and manual tests, and worked on tools like the qatracker. A big thank you to all of the students and mentors who helped out.

Here are our winners!

 * Daniyaal Rasheed
 * Matthew Allen

And our Finalists

 * Evan McIntire
 * Girish Rawat
 * Malena Vasquez Currie

The students amazed everyone, myself included, with the level of skill they displayed in their work. You all should be very proud. It was lovely to have you as part of the community, and I've been delighted to see some of your faces sticking around and still contributing! Thank you, and welcome to the community!

    Read more
    Dustin Kirkland

    There's no shortage of excitement, controversy, and readership, any time you can work "Docker" into a headline these days.  Perhaps a bit like "Donald Trump", but for CIO tech blogs and IT news -- a real hot button.  Hey, look, I even did it myself in the title of this post!

    Sometimes an article even starts out about CoreOS, but gets diverted into a discussion about Docker, like this one, where shykes (Docker's founder and CTO) announced that Docker's default image would be moving away from Ubuntu to Alpine Linux.

    I have personally been Canonical's business and technical point of contact with Docker Inc, since September of 2013, when I co-presented at an OpenStack Meetup in Austin, Texas, with Ben Golub and Nick Stinemates of Docker.  I can tell you that, as for most of the rest of the Docker community, this casual declaration in an unrelated Hacker News thread came as a surprise to nearly all of us!

    Docker's default container image is certainly Docker's decision to make.  But it would be prudent to examine a few facts:

    (1) Check DockerHub and you may notice that while Busybox (Alpine Linux) has surpassed Ubuntu in the number of downloads (66M to 40M), Ubuntu is still by far the most "popular" by number of "stars" -- likes, favorites, +1's, whatever (3.2K to 499).

    (2) Ubuntu's compressed, minimal root tarball is 59 MB, which is what is downloaded over the Internet.  That's different from the 188 MB uncompressed root filesystem, which has been quoted a number of times in the press.

    (3) The real magic of Docker is such that you only ever download that base image, one time!  And you only store one copy of the uncompressed root filesystem on your disk! Just once, sudo docker pull ubuntu, on your laptop at home or work, and then launch thousands of images at a coffee shop or airport lounge with its spotty wifi.  Build derivative images, FROM ubuntu, etc. and you only ever store the incremental differences.
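    To picture the storage arithmetic, here is a small conceptual sketch of layer deduplication (hypothetical image names and layer sizes; this models the idea of copy-on-write layers, not Docker's actual storage driver):

```python
# Conceptual model of Docker's layered images: an image is a chain of
# layers, and a layer shared by several images is stored only once.
# Layer names and sizes below are hypothetical, for illustration only.

def disk_usage(images):
    """Total MB stored once identical layers are deduplicated."""
    unique_layers = {}
    for layers in images.values():
        for name, size_mb in layers:
            unique_layers[name] = size_mb
    return sum(unique_layers.values())

base = [("ubuntu-root", 188)]            # the uncompressed base filesystem
web = base + [("nginx-layer", 60)]       # a derivative image, FROM ubuntu
db = base + [("postgres-layer", 120)]    # another derivative, FROM ubuntu

# Two derivative images, but the 188 MB base is stored just once:
print(disk_usage({"web": web, "db": db}))  # 368, not 188 + 60 + 188 + 120
```

    Each additional FROM ubuntu image only adds its own increment to that total.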

    Actually, I encourage you to test that out yourself...  I just launched a t2.micro -- Amazon's cheapest instance type with the lowest networking bandwidth.  It took 15.938s to install Docker with apt, and 9.230s to sudo docker pull ubuntu.  It takes less time to download Ubuntu than to install Docker!

    ubuntu@ip-172-30-0-129:~⟫ time sudo apt install -y
    real 0m15.938s
    user 0m2.146s
    sys 0m0.913s

    As compared to...

    ubuntu@ip-172-30-0-129:~⟫ time sudo docker pull ubuntu
    latest: Pulling from ubuntu
    f15ce52fc004: Pull complete
    c4fae638e7ce: Pull complete
    a4c5be5b6e59: Pull complete
    8693db7e8a00: Pull complete
    ubuntu:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
    Digest: sha256:457b05828bdb5dcc044d93d042863fba3f2158ae249a6db5ae3934307c757c54
    Status: Downloaded newer image for ubuntu:latest
    real 0m9.230s
    user 0m0.021s
    sys 0m0.016s

    Now, sure, it takes even less than that to download Alpine Linux (0.747s by my test), but again you only ever do that once!  After you have your initial image, launching Docker containers takes the exact same amount of time (0.233s), and the incremental storage behaviour is identical.  See:

    ubuntu@ip-172-30-0-129:/tmp/docker⟫ time sudo docker run alpine /bin/true
    real 0m0.233s
    user 0m0.014s
    sys 0m0.001s
    ubuntu@ip-172-30-0-129:/tmp/docker⟫ time sudo docker run ubuntu /bin/true
    real 0m0.234s
    user 0m0.012s
    sys 0m0.002s

    (4) I regularly communicate sincere, warm congratulations to our friends at Docker Inc, on its continued growth.  shykes publicly mentioned the hiring of the maintainer of Alpine Linux in that Hacker News post.  As a long time Linux distro developer myself, I have tons of respect for everyone involved in building a high quality Linux distribution.  In fact, Canonical employs over 700 people, in 44 countries, working around the clock, all calendar year, to make Ubuntu the world's most popular Linux OS.  Importantly, that includes a dedicated security team that has an outstanding track record over the last 12 years, keeping Ubuntu servers, clouds, desktops, laptops, tablets, and phones up-to-date and protected against the latest security vulnerabilities.  I don't know Natanael personally, but I'm intimately aware of what a spectacular amount of work it is to maintain and secure an OS distribution, as it makes its way into enterprise and production deployments.  Good luck!

    (5) There are currently 5,854 packages available via apk in Alpine Linux (sudo docker run alpine apk search -v).  There are 8,862 packages in Ubuntu Main (officially supported by Canonical), and 53,150 binary packages across all of Ubuntu Main, Universe, Restricted, and Multiverse, supported by the greater Ubuntu community.  Nearly all 50,000+ packages are updated every 6 months, on time, every time, and we release an LTS version of Ubuntu and the best of open source software in the world every 2 years.  Like clockwork.  Choice.  Velocity.  Stability.  That's what Ubuntu brings.

    Docker holds a special place in the Ubuntu ecosystem, and Ubuntu has been instrumental in Docker's growth over the last 3 years.  Where we go from here, is largely up to the cross-section of our two vibrant communities.

    And so I ask you honestly...what do you want to see?  How would you like to see Docker and Ubuntu operate together?

    I'm Canonical's Product Manager for Ubuntu Server, I'm responsible for Canonical's relationship with Docker Inc, and I will read absolutely every comment posted below.


    p.s. I'm speaking at Container Summit in New York City today, and wrote this post from the top of the (inspiring!) One World Observatory at the World Trade Center this morning.  Please come up and talk to me, if you want to share your thoughts (at Container Summit, not the One World Observatory)!

    Read more
    Daniel Holbach

    It’s been a while since our last Snappy Clinic, so we asked for your input on which topics to cover. Thanks for the feedback so far.

    In our next session Sergio Schvezov is going to talk about what’s new in Snapcraft and the changes in the 2.x series. Be there and you are going to be up-to-date on how to publish your software on Snappy Ubuntu Core. There will be time for questions afterwards.

    Join us on the 12th February 2016 at 16:00 UTC on

    Read more

    mgo r2016.02.04

    This is one of the most packed releases of the mgo driver for Go in recent times. There are new features, important fixes, and relevant internal restructuring to support the on-going server improvements.

    As usual for the driver, compatibility is being preserved both with old applications and with old servers, so updating should be a smooth experience.

    Release r2016.02.04 of mgo includes the following changes which were requested, proposed, and performed by a great community.


    Exposed access to individual bulk error cases

    Accessing the individual errors obtained while attempting a set of bulk operations is now possible via the new mgo.BulkError error type and its Cases method which returns a slice of mgo.BulkErrorCases which are properly indexed according to the operation order used. There are documented server limitations for MongoDB version 2.4 and older.

    This change completes the bulk API. It can now perform optimized bulk queries when communicating with recent servers (MongoDB 2.6+), perform the same operations in older servers using compatible but less performant options, and in both cases provide more details about obtained errors.

    Feature first requested by pjebs.
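    The indexed-cases pattern described above is easy to picture. Here is a hedged Python model of the shape of the API; it is not mgo's actual Go implementation:

```python
# Illustrative model of the bulk-error pattern: every operation in the
# batch is attempted, and each failure is reported together with the
# index that operation had in the submitted batch.
# This mimics the shape of mgo's BulkError/Cases API; it is not mgo code.

class BulkErrorCase:
    def __init__(self, index, err):
        self.index = index   # position of the failing op in the batch
        self.err = err

class BulkError(Exception):
    def __init__(self, cases):
        self._cases = cases
    def cases(self):
        return self._cases

def run_bulk(ops):
    """Attempt every operation; report all failures at once, indexed."""
    failures = []
    for i, op in enumerate(ops):
        try:
            op()
        except Exception as e:
            failures.append(BulkErrorCase(i, e))
    if failures:
        raise BulkError(failures)

def ok(): pass
def dup(): raise ValueError("duplicate key")

try:
    run_bulk([ok, dup, ok, dup])
except BulkError as e:
    print([c.index for c in e.cases()])  # [1, 3]
```

    The point is that a caller can map each error back to the exact operation in the batch that produced it.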

    New fields in CollectionInfo

    The CollectionInfo type has new fields for dealing with the recently introduced document validation MongoDB feature, and also the storage engine-specific options.

    Features requested by nexcode and pjebs.

    New Find and GetMore command support

    MongoDB is moving towards replacing the old wire protocol with a command-based implementation, and every recent release introduced changes around that. This release of mgo introduces support for the find and getMore commands which were added to MongoDB 3.2. These are exercised whenever querying the database or iterating over queried results.

    Previous server releases will continue to use the classical mechanism, and the two approaches should be compatible. Please report any issues in that regard.

    Do not fallback to Monotonic mode improperly

    Recent driver changes adapted the Pipe.Iter, Collection.Indexes, and Database.CollectionNames methods to work with recent server releases. These changes also introduced a bug that could cause the driver to talk to a secondary server improperly, when that operation was the first operation performed on the session. This has been fixed.

    Problem reported by Sundar.

    Fix crash in new bulk update API

    The new methods introduced in the bulk update API in the last release were crashing when a connection error occurred.

    Fix contributed by Maciej Galkowski.

    Enable TCP keep-alives for all connections

    As requested by developers, TCP keep-alives are now enabled for all connections. No timing is specified, so the default operating system setting will be used.

    Feature requested by Hunor Kovács, Berni Varga, and Martin Garton.

    ChangeInfo.Updated now behaves as documented

    The ChangeInfo.Updated field is documented to report the number of documents that were changed, but in fact that was not possible in old releases of the driver, since the server did not provide that information. Instead, the server only reported the number of documents matched by the selection document.

    This has been fixed, so starting with MongoDB 2.6 the driver will behave as documented and report the number of documents that were indeed updated. This is related to the next driver change:

    New ChangeInfo.Matched field

    The new ChangeInfo.Matched field will report the number of documents that matched the selection document, whether the performed change was a removal, an update, or an upsert.

    Feature requested by Žygimantas and other list members.

    ObjectId now supports TextMarshaler/TextUnmarshaler

    ObjectId now knows how to marshal/unmarshal itself as text in hex format when using its encoding.TextMarshaler and TextUnmarshaler interfaces.

    Contributed by Jack Spirou.
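    The effect is straightforward to picture: the 12 raw bytes of an ObjectId round-trip through their 24-character hex form. A rough Python analogue (not the mgo code itself):

```python
# An ObjectId is 12 raw bytes; as text it is the 24-character hex
# encoding of those bytes. Rough analogue of what the TextMarshaler /
# TextUnmarshaler support described above does for bson.ObjectId.

def marshal_text(oid: bytes) -> str:
    assert len(oid) == 12, "ObjectIds are exactly 12 bytes"
    return oid.hex()

def unmarshal_text(text: str) -> bytes:
    assert len(text) == 24, "hex ObjectIds are exactly 24 characters"
    return bytes.fromhex(text)

oid = bytes.fromhex("4d88e15b60f486e428412dc9")   # an example id
assert marshal_text(oid) == "4d88e15b60f486e428412dc9"
assert unmarshal_text(marshal_text(oid)) == oid   # round-trips cleanly
```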

    Created GridFS index is now unique

    The index on {“files_id”, “n”} automatically created for GridFS chunks when a file write completes now enforces the uniqueness of the key.

    Contributed by Wisdom Omuya.

    Use SIGINT in dbtest.DBServer

    The dbtest.DBServer was stopping the server with SIGKILL, which would not give it enough time for a clean shutdown. It will now stop it with SIGINT.

    Contributed by Haijun Wang.

    Ancient field tag logic dropped

    The very old field tag format parser, in use several years back even before Go 1 was released, was still around in the code base for no benefit.

    This has been removed by Alexandre Cesaro.

    Documentation improvements

    Documentation improvements were contributed by David Glasser, Ryan Chipman, and Shawn Smith.

    Fixed BSON skipping of incorrect slice types

    The BSON parser was crashing when an array value was unmarshaled into an existing field that was not of an appropriate type for such values. This has been fixed so that the bogus field is ignored and the value skipped.

    Fix contributed by Gabriel Russel.

    Read more
    Inayaili de León Persson

    A new look for tablet

    Today we launched a new and redesigned tablet section that introduces all the cool features of the upcoming BQ Aquaris M10 Ubuntu Edition tablet.

    Breaking out of the box

    In this redesign, we have broken out of the box, removing the container that previously held the content of the pages. This makes each page feel more spacious, giving the text and the images plenty of room to shine.

    This is something we’ve wanted to do for a while across the entire site, so we thought that having the beautiful, large tablet photos to work with gave us a good excuse to try out this new approach.


    The overview page of the tablet section, before (left) and after


    For most of the section, we’ve used existing patterns from our design framework, but the removal of the container box allowed us to play with how the images behave across different screen sizes. You will notice that if you look at the tablet pages on a medium to small screen, some of the images will be cropped by the edge of the viewport, but if you see the same image on a large screen, you can see it in its entirety.


    From the top: the same row on a large, medium and small screen


    How we did it

    This project was a concerted effort across the design, marketing, and product management teams.

    To understand the key goals for this redesign, we collected the requirements and messaging from the key stakeholders of the project. We then translated all this information into wireframes that guide the reader through what Ubuntu Tablet is. These went through a few rounds of testing and iteration with both users and stakeholders. Finally, we worked with a copywriter to refine the words of each section of the tablet pages.


    Some of the wireframes


    To design the pages, we started by exploring the flow of each page on large and small screens in flat mockups, which were quickly built into a fully functioning prototype that we could keep experimenting and testing on.


    Some of the flat mockups created for the redesign


    This design process, where we start with flat mockups and move swiftly into a real prototype, is how we design and develop most of our projects, and it is made easier by the existence of a flexible framework and design patterns that we use (and sometimes break!) as needed.


    Testing the new tablet section on real devices


    To showcase the beautiful tablet screen designs on the new BQ tablet, we coordinated with professional photographers to deliver the stunning images of the real device that you can enjoy on every new page of the section.


    One of the many beautiful device photos used across the new tablet section


    Many people were involved in this project, making it possible to deliver a redesign that looks great and was completed on time — which is always a good thing :)

    In the future

    In the near future, we want to remove the container box from the other sections of the site, although you may see this change being done gradually, section by section, rather than all in one go. We will also be looking at redesigning our navigation, so there’s lots to look forward to.

    Now go and experience the new tablet section for yourself and let us know what you think!

    Read more
    Joseph Williams

    Embeddable cards for Juju

    Juju is a cloud orchestration tool with a lot of unique terminology. This is not so much of a problem when discussing or explaining terms or features within the site or the GUI, but, when it comes to external sources, the context is sometimes lost and everything can start to get a little confusing.

    So a project was started to create embeddable widgets of information, not only to give context to blog posts mentioning features of Juju, but also to help user adoption by providing direct access to the information.

    This project was started by Anthony Dillon, one of the developers, to create embeddable information cards for three topics in particular: charms, bundles and user profiles. These cards would function similarly to embedded YouTube videos, or to embedding a song from SoundCloud on your own site, as seen below:



    Multiple breakpoints of the cards were established (small: 300px and below; medium: 301px to 625px; large: 626px and up) so that they would work responsively, and therefore work in a breadth of different situations and complement the user’s content referring to a charm, bundle or a user profile without any additional effort for the user.

    We started the process by determining what information we would want to include within the card and then refining that information as we went through the different breakpoints. Here are some of the initial ideas that we put together:

    charm  bundle  profile

    We wrote down all the information that could be related to each type of card, discussed how that might carry down to smaller card sizes, and removed the unnecessary information as we went through the process. For the profile cards, we felt there was not enough information to display a profile card above the 625px breakpoint, so we limited the card to the medium size.
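    The breakpoint rules can be summarised in a small sketch (illustrative only; the real cards are plain HTML and CSS, and the profile cap reflects the decision described above):

```python
# Illustrative summary of the card breakpoints: small is 300px and
# below, medium is 301-625px, large is 626px and up. Profile cards are
# capped at medium, as there is not enough information to fill a large
# card. This is a sketch of the rules, not the cards' actual code.

def card_size(width_px, kind):
    if width_px <= 300:
        return "small"
    if width_px <= 625:
        return "medium"
    return "medium" if kind == "profile" else "large"

assert card_size(300, "charm") == "small"
assert card_size(500, "bundle") == "medium"
assert card_size(800, "charm") == "large"
assert card_size(800, "profile") == "medium"  # profiles never go large
```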

    Just enter the bundle or the charm name and the card will be generated for you to copy the code snippet to embed into your own content.


    You can create your own here:

    Below are some examples of the responsive cards at different widths:



    Read more
    Dustin Kirkland

    People of earth, waving at Saturn, courtesy of NASA.
    “It Doesn't Look Like Ubuntu Reached Its Goal Of 200 Million Users This Year”, says Michael Larabel of Phoronix, in a post he seems to have been itching to publish for months.

    Why the negativity?!? Are you sure? Did you count all of them?

    No one has.

    How many people in the world use Ubuntu?

    Actually, no one can count all of the Ubuntu users in the world!

    Canonical, unlike Apple, Microsoft, Red Hat, or Google, does not require each user to register their installation of Ubuntu.

    Of course, you can buy laptops preloaded with Ubuntu from Dell, HP, Lenovo, and Asus.  And there are millions of them out there.  And you can buy servers powered by Ubuntu from IBM, Dell, HP, Cisco, Lenovo, Quanta, and compatible with the OpenCompute Project.

    In 2011, hardware sales might have been how Mark Shuttleworth hoped to reach 200M Ubuntu users by 2015.

    But in reality, hundreds of millions of PCs, servers, devices, virtual machines, and containers have booted Ubuntu to date!

    Let's look at some facts...
    • Docker users have launched Ubuntu images over 35.5 million times.
    • HashiCorp's Vagrant images of Ubuntu 14.04 LTS 64-bit have been downloaded 10 million times.
    • At least 20 million unique instances of Ubuntu have launched in public clouds, private clouds, and bare metal in 2015 alone.
      • That's Ubuntu in clouds like AWS, Microsoft Azure, Google Compute Engine, Rackspace, Oracle Cloud, VMware, and others.
      • And that's Ubuntu in private clouds like OpenStack.
      • And Ubuntu at scale on bare metal with MAAS, often managed with Chef.
    • In fact, over 2 million new Ubuntu cloud instances launched in November 2015.
      • That's 67,000 new Ubuntu cloud instances launched per day.
      • That's 2,800 new Ubuntu cloud instances launched every hour.
      • That's 46 new Ubuntu cloud instances launched every minute.
      • That's nearly one new Ubuntu cloud instance launched every single second of every single day in November 2015.
    • And then there are Ubuntu phones from Meizu.
    • And more Ubuntu phones from BQ.
    • Of course, anyone can install Ubuntu on their Google Nexus tablet or phone.
    • Or buy a converged tablet/desktop preinstalled with Ubuntu from BQ.
    • Oh, and the Tesla entertainment system?  All electric Ubuntu.
    • Google's self-driving cars?  They're self-driven by Ubuntu.
    • George Hotz's home-made self-driving car?  It's a homebrewed Ubuntu autopilot.
    • Snappy Ubuntu downloads and updates for Raspberry Pis and BeagleBone Blacks -- the response has been tremendous.  Download numbers are astounding.
    • Drones, robots, network switches, smart devices, the Internet of Things.  More Snappy Ubuntu.
    • How about Walmart?  Everyday low prices.  Everyday Ubuntu.  Lots and lots of Ubuntu.
    • Are you orchestrating containers with Kubernetes or Apache Mesos?  There's plenty of Ubuntu in there.
    • Kicking PaaS with Cloud Foundry?  App instances are Ubuntu LXC containers.  Pivotal has lots of serious users.
    • And Heroku?  You bet your PaaS those hosted application containers are Ubuntu.  Plenty of serious users here too.
    • Tianhe-2, the world's largest supercomputer.  Merely 80,000 Xeons, 1.4 PB of memory, 12.4 PB of disk, all number crunching on Ubuntu.
    • Ever watch a movie on Netflix?  You were served by Ubuntu.
    • Ever hitch a ride with Uber or Lyft?  Your mobile app is talking to Ubuntu servers on the backend.
    • Did you enjoy watching The Hobbit?  Hunger Games?  Avengers?  Avatar?  All rendered on Ubuntu at WETA Digital.  Among many others.
    • Do you use Instagram?  Say cheese!
    • Listen to Spotify?  Music to my ears...
    • Doing a deal on Wall Street?  Ubuntu is serious business for Bloomberg.
    • Paypal, Dropbox, Snapchat, Pinterest, Reddit. Airbnb.  Yep.  More Ubuntu.
    • Wikipedia and Wikimedia, among the busiest sites on the Internet with 8 - 18 billion page views per month, are hosted on Ubuntu.
    How many "users" of Ubuntu are there ultimately?  I bet there are over a billion people today, using Ubuntu -- both directly and indirectly.  Without a doubt, there are over a billion people on the planet benefiting from the services, security, and availability of Ubuntu today.
    • More people use Ubuntu than we know.
    • More people use Ubuntu than you know.
    • More people use Ubuntu than they know.
    More people use Ubuntu than anyone actually knows.

    Because of who we all are.


    Read more
    David Henningsson

    13 ways to PulseAudio

    All roads lead to Rome, but PulseAudio is not far behind! In fact, the PulseAudio client library goes through no less than 13 different steps when determining how to connect to the PulseAudio server. Here they are, in priority order:

    1) As an application developer, you can specify a server string in your call to pa_context_connect. If you do that, that’s the server string used, nothing else.

    2) If the PULSE_SERVER environment variable is set, that’s the server string used, and nothing else.

    3) Next, it goes to X to check if there is an X11 property named PULSE_SERVER. If there is, that’s the server string, nothing else. (There is also a PulseAudio module called module-x11-publish that sets this property. It is loaded by the start-pulseaudio-x11 script.)

    4) It also checks client.conf, if such a file is found, for the default-server key. If that’s present, that’s the server string.

    So, if none of the four methods above gives any result, several items will be merged and tried in order.

    First up is trying to connect to a user-level PulseAudio, which means finding the right path where the UNIX socket exists. That in turn has several steps, in priority order:

    5) If the PULSE_RUNTIME_PATH environment variable is set, that’s the path.

    6) Otherwise, if the XDG_RUNTIME_DIR environment variable is set, the path is the “pulse” subdirectory below the directory specified in XDG_RUNTIME_DIR.

    7) If not, and the “.pulse” directory exists in the current user’s home directory, that’s the path. (This is for historical reasons – a few years ago PulseAudio switched from “.pulse” to using XDG compliant directories, but ignoring “.pulse” would throw away some settings on upgrade.)

    8) Failing that, if XDG_CONFIG_HOME environment variable is set, the path is the “pulse” subdirectory to the directory specified in XDG_CONFIG_HOME.

    9) Still no path? Then fall back to using the “.config/pulse” subdirectory below the current user’s home directory.

    Okay, so maybe we can connect to the UNIX socket inside that user-level PulseAudio path. But if it does not work, there are still a few more things to try:

    10) Using a path of a system-level PulseAudio server. This directory is /var/run/pulse on Ubuntu (and probably most other distributions), or /usr/local/var/run/pulse in case you compiled PulseAudio from source yourself.

    11) By checking client.conf for the key “auto-connect-localhost”. If it’s set, also try connecting to tcp4:…

    12) …and tcp6:[::1], too. Of course we cannot leave IPv6-only systems behind.

    13) As the last straw of hope, the library checks client.conf for the key “auto-connect-display”. If it’s set, it checks the DISPLAY environment variable, and if it finds a hostname (i.e., something before the “:”), then that host will be tried too.

    To summarise: first the client library checks for a server string in steps 1-4; if there is none, it builds a server string out of one item from steps 5-9, plus up to four more items from steps 10-13.
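    As a sketch, the decision order in steps 1-9 can be mirrored in a few lines of Python (illustrative only; the real logic is C code inside the PulseAudio client library):

```python
import os

# Mirrors the priority order described above: steps 1-4 pick a server
# string; steps 5-9 pick the user-level runtime path for the UNIX socket.
# Illustrative sketch only, not the libpulse implementation.

def server_string(explicit=None, x11_prop=None, client_conf=None):
    client_conf = client_conf or {}
    if explicit:                              # 1) pa_context_connect argument
        return explicit
    if os.environ.get("PULSE_SERVER"):        # 2) environment variable
        return os.environ["PULSE_SERVER"]
    if x11_prop:                              # 3) PULSE_SERVER X11 property
        return x11_prop
    return client_conf.get("default-server")  # 4) client.conf, else None

def user_runtime_path(home, have_dot_pulse=False):
    if os.environ.get("PULSE_RUNTIME_PATH"):                       # 5
        return os.environ["PULSE_RUNTIME_PATH"]
    if os.environ.get("XDG_RUNTIME_DIR"):                          # 6
        return os.path.join(os.environ["XDG_RUNTIME_DIR"], "pulse")
    if have_dot_pulse:                                             # 7
        return os.path.join(home, ".pulse")
    if os.environ.get("XDG_CONFIG_HOME"):                          # 8
        return os.path.join(os.environ["XDG_CONFIG_HOME"], "pulse")
    return os.path.join(home, ".config", "pulse")                  # 9
```

    If none of steps 1-4 yields a server string, the path from steps 5-9 is where the client looks for the user-level UNIX socket, before falling back to steps 10-13.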

    And that’s all. If you ever want to customize how you connect to a PulseAudio server, you have a smorgasbord of options to choose from!

    Read more
    Colin Ian King

    One issue when running parallel processes is contention of shared resources such as the Last Level Cache (aka LLC or L3 Cache).  For example, a server may be running a set of Virtual Machines with processes that are memory and cache intensive, hence producing a large amount of cache activity. This can impact the other VMs and is known as the "Noisy Neighbour" problem.

    Fortunately the next generation Intel processors allow one to monitor and also fine tune cache allocation using Intel Cache Monitoring Technology (CMT) and Cache Allocation Technology (CAT).

    Intel kindly loaned me a 12-thread development machine with CMT and CAT support to experiment with this technology using the Intel pqos tool.   For my experiment, I installed Ubuntu Xenial Server on the machine. I then installed KVM and a VM instance of Ubuntu Xenial Server.   I then loaded the instance using stress-ng running a memory bandwidth stressor:

     stress-ng --stream 1 -v --stream-l3-size 16M  
    ...which allocates 16MB in 4 buffers and performs various reads, computes and writes on these, hence causing a "noisy neighbour".

    Using pqos,  one can monitor and see the cache/memory activity:
    sudo apt-get install intel-cmt-cat
    sudo modprobe msr
    sudo pqos -r
    TIME 2016-02-04 10:25:06
    CORE IPC MISSES LLC[KB] MBL[MB/s] MBR[MB/s]
    0 0.59 168259k 9144.0 12195.0 0.0
    1 1.33 107k 0.0 3.3 0.0
    2 0.20 2k 0.0 0.0 0.0
    3 0.70 104k 0.0 2.0 0.0
    4 0.86 23k 0.0 0.7 0.0
    5 0.38 42k 24.0 1.5 0.0
    6 0.12 2k 0.0 0.0 0.0
    7 0.24 48k 0.0 3.0 0.0
    8 0.61 26k 0.0 1.6 0.0
    9 0.37 11k 144.0 0.9 0.0
    10 0.48 1k 0.0 0.0 0.0
    11 0.45 2k 0.0 0.0 0.0
    Now to run a stress-ng stream stressor on the host and see the performance while the noisy neighbour is also running:
    stress-ng --stream 4 --stream-l3-size 2M --perf --metrics-brief -t 60
    stress-ng: info: [2195] dispatching hogs: 4 stream
    stress-ng: info: [2196] stress-ng-stream: stressor loosely based on a variant of the STREAM benchmark code
    stress-ng: info: [2196] stress-ng-stream: do NOT submit any of these results to the STREAM benchmark results
    stress-ng: info: [2196] stress-ng-stream: Using L3 CPU cache size of 2048K
    stress-ng: info: [2196] stress-ng-stream: memory rate: 1842.22 MB/sec, 736.89 Mflop/sec (instance 0)
    stress-ng: info: [2198] stress-ng-stream: memory rate: 1847.88 MB/sec, 739.15 Mflop/sec (instance 2)
    stress-ng: info: [2199] stress-ng-stream: memory rate: 1833.89 MB/sec, 733.56 Mflop/sec (instance 3)
    stress-ng: info: [2197] stress-ng-stream: memory rate: 1847.16 MB/sec, 738.86 Mflop/sec (instance 1)
    stress-ng: info: [2195] successful run completed in 60.01s (1 min, 0.01 secs)
    stress-ng: info: [2195] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s
    stress-ng: info: [2195] (secs) (secs) (secs) (real time) (usr+sys time)
    stress-ng: info: [2195] stream 22101 60.01 239.93 0.04 368.31 92.10
    stress-ng: info: [2195] stream:
    stress-ng: info: [2195] 547,520,600,744 CPU Cycles 9.12 B/sec
    stress-ng: info: [2195] 69,959,954,760 Instructions 1.17 B/sec (0.128 instr. per cycle)
    stress-ng: info: [2195] 11,066,905,620 Cache References 0.18 B/sec
    stress-ng: info: [2195] 11,065,068,064 Cache Misses 0.18 B/sec (99.98%)
    stress-ng: info: [2195] 8,759,154,716 Branch Instructions 0.15 B/sec
    stress-ng: info: [2195] 2,205,904 Branch Misses 36.76 K/sec ( 0.03%)
    stress-ng: info: [2195] 23,856,890,232 Bus Cycles 0.40 B/sec
    stress-ng: info: [2195] 477,143,689,444 Total Cycles 7.95 B/sec
    stress-ng: info: [2195] 36 Page Faults Minor 0.60 sec
    stress-ng: info: [2195] 0 Page Faults Major 0.00 sec
    stress-ng: info: [2195] 96 Context Switches 1.60 sec
    stress-ng: info: [2195] 0 CPU Migrations 0.00 sec
    stress-ng: info: [2195] 0 Alignment Faults 0.00 sec
    ...so about 1842 MB/sec memory rate and 736 Mflop/sec per CPU across 4 CPUs.  And pqos shows the cache/memory activity as:
    sudo pqos -r
    TIME 2016-02-04 10:35:27
    CORE IPC MISSES LLC[KB] MBL[MB/s] MBR[MB/s]
    0 0.14 43060k 1104.0 2487.9 0.0
    1 0.12 3981523k 2616.0 2893.8 0.0
    2 0.26 320k 48.0 18.0 0.0
    3 0.12 3980489k 1800.0 2572.2 0.0
    4 0.12 3979094k 1728.0 2870.3 0.0
    5 0.12 3970996k 2112.0 2734.5 0.0
    6 0.04 20k 0.0 0.3 0.0
    7 0.04 29k 0.0 1.9 0.0
    8 0.09 143k 0.0 5.9 0.0
    9 0.15 0k 0.0 0.0 0.0
    10 0.07 2k 0.0 0.0 0.0
    11 0.13 0k 0.0 0.0 0.0
    Using pqos again, we can find out how much LLC cache the processor has:
    sudo pqos -v
    NOTE: Mixed use of MSR and kernel interfaces to manage
    CAT or CMT & MBM may lead to unexpected behavior.
    INFO: Monitoring capability detected
    INFO: CPUID.0x7.0: CAT supported
    INFO: CAT details: CDP support=0, CDP on=0, #COS=16, #ways=12, ways contention bit-mask 0xc00
    INFO: LLC cache size 9437184 bytes, 12 ways
    INFO: LLC cache way size 786432 bytes
    INFO: L3CA capability detected
    INFO: Detected PID API (perf) support for LLC Occupancy
    INFO: Detected PID API (perf) support for Instructions/Cycle
    INFO: Detected PID API (perf) support for LLC Misses
    ERROR: IPC and/or LLC miss performance counters already in use!
    Use -r option to start monitoring anyway.
    Monitoring start error on core(s) 5, status 6
    So this CPU has 12 cache "ways", each of 786432 bytes (768K). One or more "Class of Service" (COS) types can be defined, each using one or more of these ways. A bitmap, with each bit representing a way, indicates how the ways are used by a COS. For example, to use all 12 ways on my example machine, the bitmap is 0xfff (111111111111). A way can be exclusively mapped to a COS, shared, or not used at all. Note that the ways in the bitmap must be contiguously allocated, so a mask such as 0xf3f (111100111111) is invalid and cannot be used.
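    To make the contiguity rule concrete, here is a small standalone check (plain JavaScript, not part of pqos or the CAT interface) that validates a capacity bitmask the way CAT requires:

```javascript
// Check that the set bits in a CAT capacity bitmask are contiguous,
// as Intel CAT requires (e.g. 0xffe is valid, 0xf3f is not).
function isValidCosMask(mask) {
    if (mask === 0) return false;       // a COS must use at least one way
    // Strip trailing zero bits, then the remaining value must be all ones.
    while ((mask & 1) === 0) mask >>>= 1;
    return (mask & (mask + 1)) === 0;
}

console.log(isValidCosMask(0xfff)); // true  (all 12 ways)
console.log(isValidCosMask(0xffe)); // true  (ways 1-11)
console.log(isValidCosMask(0xf3f)); // false (hole in the mask)
```

    pqos itself rejects such masks; the check above just mirrors that rule for illustration.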

    In my experiment, I want to create 2 COS types. The first COS will have just 1 cache way assigned to it, with CPU 0 bound to this COS and the VM instance pinned to CPU 0. The second COS will have the other 11 cache ways assigned to it, and all the other CPUs can use this COS.

    So, create COS #1 with just 1 way of cache, and bind CPU 0 to this COS, and pin the VM to CPU 0:
    sudo pqos -e llc:1=0x0001
    sudo pqos -a llc:1=0
    sudo taskset -apc 0 $(pidof qemu-system-x86_64)
    And create COS #2, with 11 ways of cache and bind CPUs 1-11 to this COS:
    sudo pqos -e "llc:2=0x0ffe"
    sudo pqos -a "llc:2=1-11"
    And let's see the new configuration:
    sudo pqos  -s
    NOTE: Mixed use of MSR and kernel interfaces to manage
    CAT or CMT & MBM may lead to unexpected behavior.
    L3CA COS definitions for Socket 0:
    L3CA COS0 => MASK 0xfff
    L3CA COS1 => MASK 0x1
    L3CA COS2 => MASK 0xffe
    L3CA COS3 => MASK 0xfff
    L3CA COS4 => MASK 0xfff
    L3CA COS5 => MASK 0xfff
    L3CA COS6 => MASK 0xfff
    L3CA COS7 => MASK 0xfff
    L3CA COS8 => MASK 0xfff
    L3CA COS9 => MASK 0xfff
    L3CA COS10 => MASK 0xfff
    L3CA COS11 => MASK 0xfff
    L3CA COS12 => MASK 0xfff
    L3CA COS13 => MASK 0xfff
    L3CA COS14 => MASK 0xfff
    L3CA COS15 => MASK 0xfff
    Core information for socket 0:
    Core 0 => COS1, RMID0
    Core 1 => COS2, RMID0
    Core 2 => COS2, RMID0
    Core 3 => COS2, RMID0
    Core 4 => COS2, RMID0
    Core 5 => COS2, RMID0
    Core 6 => COS2, RMID0
    Core 7 => COS2, RMID0
    Core 8 => COS2, RMID0
    Core 9 => COS2, RMID0
    Core 10 => COS2, RMID0
    Core 11 => COS2, RMID0
    ..showing Core 0 bound to COS1, and Cores 1-11 bound to COS2, with COS1 with 1 cache way and COS2 with the remaining 11 cache ways.
    Now re-run the stream stressor and see if the VM has less impact on the LLC:
    stress-ng --stream 4 --stream-l3-size 1M --perf --metrics-brief -t 60
    stress-ng: info: [2232] dispatching hogs: 4 stream
    stress-ng: info: [2233] stress-ng-stream: stressor loosely based on a variant of the STREAM benchmark code
    stress-ng: info: [2233] stress-ng-stream: do NOT submit any of these results to the STREAM benchmark results
    stress-ng: info: [2233] stress-ng-stream: Using L3 CPU cache size of 1024K
    stress-ng: info: [2235] stress-ng-stream: memory rate: 2616.90 MB/sec, 1046.76 Mflop/sec (instance 2)
    stress-ng: info: [2233] stress-ng-stream: memory rate: 2562.97 MB/sec, 1025.19 Mflop/sec (instance 0)
    stress-ng: info: [2234] stress-ng-stream: memory rate: 2541.10 MB/sec, 1016.44 Mflop/sec (instance 1)
    stress-ng: info: [2236] stress-ng-stream: memory rate: 2652.02 MB/sec, 1060.81 Mflop/sec (instance 3)
    stress-ng: info: [2232] successful run completed in 60.00s (1 min, 0.00 secs)
    stress-ng: info: [2232] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s
    stress-ng: info: [2232] (secs) (secs) (secs) (real time) (usr+sys time)
    stress-ng: info: [2232] stream 62223 60.00 239.97 0.00 1037.01 259.29
    stress-ng: info: [2232] stream:
    stress-ng: info: [2232] 547,364,185,528 CPU Cycles 9.12 B/sec
    stress-ng: info: [2232] 97,037,047,444 Instructions 1.62 B/sec (0.177 instr. per cycle)
    stress-ng: info: [2232] 14,396,274,512 Cache References 0.24 B/sec
    stress-ng: info: [2232] 14,390,808,440 Cache Misses 0.24 B/sec (99.96%)
    stress-ng: info: [2232] 12,144,372,800 Branch Instructions 0.20 B/sec
    stress-ng: info: [2232] 1,732,264 Branch Misses 28.87 K/sec ( 0.01%)
    stress-ng: info: [2232] 23,856,388,872 Bus Cycles 0.40 B/sec
    stress-ng: info: [2232] 477,136,188,248 Total Cycles 7.95 B/sec
    stress-ng: info: [2232] 44 Page Faults Minor 0.73 sec
    stress-ng: info: [2232] 0 Page Faults Major 0.00 sec
    stress-ng: info: [2232] 72 Context Switches 1.20 sec
    stress-ng: info: [2232] 0 CPU Migrations 0.00 sec
    stress-ng: info: [2232] 0 Alignment Faults 0.00 sec
    Now, with the noisy neighbour VM constrained to use just 1 way of LLC, the stream stressor on the host can achieve about 2592 MB/sec and about 1030 Mflop/sec per CPU across 4 CPUs.

    This is a relatively simple example. With the ability to monitor cache and memory bandwidth activity, one can carefully tune a system to make the best use of the limited LLC resource and maximise throughput where needed.

    There are many applications where Intel CMT/CAT can be useful, for example fine-tuning containers or VM instances, or pinning user-space networking buffers to cache ways in DPDK for improved throughput.

    Read more
    Barry McGee

    Maybe, like me, you’ve seen more of the inside of your gym in January than you had for the six months previous. New year, new diet, new me… or something like that.

    A big creeping problem in recent years is that websites have been on an all-out binge, and not just over the winter holidays — big videos, big images, fancy fonts, third-party libraries — they just can’t get enough of ’em.

    Average page weights increased by 15% in 2014, and although I haven’t yet seen any similar research for 2015, I’m willing to bet that trend did not reverse.

    Last week I was tasked with making some performance optimisations to the Ubuntu online tour.

    This legacy codebase stretches all the way back to 2012, and as such was not benefitting from some of the modern tools we now have at our disposal as web developers.

    We have been maintaining our largest codebases to ensure they are as performant as they can be, but this Ubuntu tour repository slipped through the cracks somewhat.

    We have users all over the world and many of them don’t enjoy the luxury of fat internet pipes that we enjoy in our London office. Time to trim the fat…

    At first look, I noted on load of the site it required 235 HTTP requests to download 2.7MB of data. Chunky Charlie!


    Network waterfall screenshot


    Delving into the codebase, I immediately spotted some big areas ripe for improvement:

    • The CSS files were not being concatenated nor were they minified.
    • The JavaScript was also being loaded in separate files, also un-minified.
    • The image assets were uncompressed.
    • The HTML was un-minified.

    Beyond that, I ran the site URL through Google’s PageSpeed Insights and also discovered:

    • Browser caching was not being leveraged, as static assets did not have any Expires headers specified.
    • There were quite a few CSS and JavaScript dependencies blocking rendering of the page.

    As you can see, the site was scoring a lowly 46/100. Not great.


    Google Page Speed Insights screenshot


    For jobs such as this, my first weapon of choice is the task runner Gulp. It’s quick and easy to drop Gulp on top of any existing site and use some of its wide array of plugins to optimise source assets for performance.

    For this job I used gulp-concat, gulp-htmlmin, gulp-imagemin, gulp-minify-css, gulp-rename, gulp-uglify, gulp with critical & gulp-rev.

    Explaining how to use each of them is beyond the scope of this article but you can view my Gulpfile.js and accompanying package.json file to see what I did.
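    For a flavour of what such a setup looks like, here is a minimal illustrative Gulpfile using some of the plugins listed above. The task names and file paths are assumptions for the sketch, not the tour’s actual build configuration:

```javascript
// gulpfile.js — illustrative sketch only; paths and task layout are
// assumptions, not the Ubuntu online tour's actual Gulpfile.
var gulp = require('gulp');
var concat = require('gulp-concat');
var minifyCss = require('gulp-minify-css');
var uglify = require('gulp-uglify');
var htmlmin = require('gulp-htmlmin');
var imagemin = require('gulp-imagemin');

// Concatenate and minify all stylesheets into one file.
gulp.task('css', function () {
    return gulp.src('css/src/*.css')
        .pipe(concat('styles.min.css'))
        .pipe(minifyCss())
        .pipe(gulp.dest('css'));
});

// Concatenate and uglify scripts. Order matters: list libraries such as
// jQuery before the code that depends on them.
gulp.task('js', function () {
    return gulp.src(['js/src/jquery.js', 'js/src/*.js'])
        .pipe(concat('scripts.min.js'))
        .pipe(uglify())
        .pipe(gulp.dest('js'));
});

// Minify the HTML in place.
gulp.task('html', function () {
    return gulp.src('src/*.html')
        .pipe(htmlmin({ collapseWhitespace: true }))
        .pipe(gulp.dest('.'));
});

// Losslessly compress image assets.
gulp.task('images', function () {
    return gulp.src('img/src/*')
        .pipe(imagemin())
        .pipe(gulp.dest('img'));
});

gulp.task('default', ['css', 'js', 'html', 'images']);
```

    This is gulp 3-era syntax, which is what a 2016 project would have used; running `gulp` then performs all four optimisation passes.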

    When retro-optimising a site, you might find you have to make certain compromises, such as placing “src” folders inside the folders you are optimising to store the original documents, then outputting the optimised versions into the original folder, to ensure everything is backwards compatible and you haven’t broken any relative links. You should also be careful when globbing JavaScript files, as they may need to be loaded in a certain order to prevent race conditions. This is also true when concatenating and including JavaScript libraries such as jQuery.

    In an ideal world, you would not deploy any files from the repository that you have compiled locally. They should be ignored by version control and compiled on the fly by running your task runner on the server, using a continuous integration engine such as Jenkins or Travis CI. This is much cleaner and will prevent merge conflicts when multiple developers are working on the same codebase.

    So — when we have all of the above configured and then run it over our legacy codebase, how much weight did it shave?


    Network Waterfall - After


    Good news! To load the site we now need only 166 HTTP requests (-29%) to download 2.2MB (-18%) of data. Slim(mer) Jim for the win!
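    The percentages quoted can be reproduced from the raw before/after numbers (the post rounds 29.4% and 18.5% down to 29% and 18%):

```javascript
// Compute the percentage reduction between a before and after measurement,
// returned as a string with one decimal place.
function pctDrop(before, after) {
    return (100 * (before - after) / before).toFixed(1);
}

console.log(pctDrop(235, 166) + "% fewer HTTP requests"); // 29.4% fewer HTTP requests
console.log(pctDrop(2.7, 2.2) + "% less data");           // 18.5% less data
```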

    This should mean our users with slower connections will have a much improved experience.

    When we run the leaner site, now deployed, through Google PageSpeed Insights, we get a much healthier score too.


    Google Pagespeed - After


    This was a valuable exercise for our team. It reminded us that we not only have a responsibility to keep all our new and upcoming work performant, but that we should also address any legacy sites still in use wherever possible.

    A leaner web is a faster web and I’m sure that’s something we can all get behind.


    Read more
    Zsombor Egri

    UI Toolkit for OTA9

    Hello folks, it’s been a while since the last update from our busy toolkit ants. As OTA9 came out recently, it’s time for a refreshment from our side, to show you the latest and greatest cocktail of features our barmen have prepared. Besides the bugfixes we’ve provided, here is a list of the big changes we’ve introduced in OTA9. Enjoy!


    One of the most eagerly awaited components is the PageHeader. It is now possible to have a detached header component, which can then be used in a Page, a Rectangle, an Item, wherever you wish. It is composed of a plain base Header component, which has no layout of its own but handles the default behavior: showing and hiding the header, and auto-hiding when an attached Flickable is moved. Part of that API was introduced in OTA8, but because it wasn’t yet polished enough, we decided not to announce it then and to provide the more distilled functionality now.

    The PageHeader then adds the navigation and the trailing actions through the - hopefully - well known ActionBar component.


    Yes, it’s back. Voldemort is back! But this time it is back as a detached component :) The API is pretty similar to PageHeader (it contains a leading and trailing ActionBar), and you can place it wherever you wish. The only restriction so far is that its layout only supports horizontal orientation.

    Facelifted Scrollbar

    Yes, finally we got a loan headcount to help us out in creating some nice facelift for the Scrollbar. The design follows the same principles we have for the upcoming 16.04 desktop, with the scroll handler residing inside the bar, and having two pointers to drive page up/down scrolling.

    This guy also convinced us that we need a Scrollview, like in QtQuick Controls v1, so we can handle the “buddy” scrollbars, the situation when horizontal and vertical scrollbars are needed at the same time and their overlapping should be dealt with. So, we have that one too :) And let's name the barman: Andrea Bernabei aka faenil is the one!

    The unified BottomEdge experience

    Finally we have a complete design pattern for the bottom edge behavior, so it was about time to build a component around the pattern. It can be placed within any component, and its content can be staged, meaning it can be changed while the content is dragged. The content is always loaded asynchronously for now; we will add support to force synchronous loading in upcoming releases.

    Focus handling in CheckBox, Switch, Button and ActionBar

    Starting now, pressing Tab and Shift+Tab on a keyboard will show a focus ring on components that support it. CheckBox, Switch, Button and ActionBar have this right now, others will follow soon.

    Action mnemonics

    As we head towards the implementation of contextual menus, we are preparing a few features as prerequisite work for the menus. One of these is adding mnemonic handling to Action.

    So far there was only one way to define shortcuts for an Action: through the shortcut property. A shortcut can now also be specified as a mnemonic in the text property of the Action, using the ‘&’ character. This character is converted into a shortcut and, if a hardware keyboard is attached, the mnemonic is underlined.
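    The conversion described above can be sketched in plain JavaScript. This is a hypothetical illustration of the idea, not the toolkit’s actual implementation; the function name and the Alt-modifier convention are assumptions:

```javascript
// Hypothetical sketch of mnemonic extraction from an Action's text,
// loosely modelled on the behaviour described above (not UITK code).
function extractMnemonic(text) {
    var i = text.indexOf('&');
    if (i < 0 || i + 1 >= text.length) {
        return { displayText: text, shortcut: null };
    }
    var key = text[i + 1];
    return {
        // The mnemonic character is underlined when a hardware keyboard
        // is attached; here we mark it with rich-text underline tags.
        displayText: text.slice(0, i) + '<u>' + key + '</u>' + text.slice(i + 2),
        shortcut: 'Alt+' + key.toUpperCase()
    };
}

console.log(extractMnemonic("&Save"));
// { displayText: '<u>S</u>ave', shortcut: 'Alt+S' }
```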

    Read more





    • ParticleSystem - manages shared time-line between emitters
    • Emitter - emits logical particles into the system
    • ParticlePainter - particles are visualized by a particle painter
    • Direction - vector space for emitted particles
    • ParticleGroup - every particle is a member of a group
    • Affector - manipulates particles after they have been emitted




    import QtQuick 2.0
    import QtQuick.Particles 2.0
    import Ubuntu.Components 1.1

    MainView {
        // objectName for functional testing purposes (autopilot-qt5)
        objectName: "mainView"

        // Note! applicationName needs to match the "name" field of the click manifest
        applicationName: "particle1.liu-xiao-guo"

        Page {
            ParticleSystem {
                id: particle
                anchors.fill: parent
                running: true

                ImageParticle {
                    anchors.fill: parent
                    // source: "qrc:///particleresources/star.png"
                    source: "images/starfish_1.png"
                    alpha: 0.5
                    alphaVariation: 0.2
                    colorVariation: 1.0
                }

                Emitter {
                    anchors.centerIn: parent
                    emitRate: 400
                    lifeSpan: 5000
                    size: 20
                    sizeVariation: 8
                    velocity: AngleDirection { angleVariation: 180; magnitude: 60 }
                }

                Turbulence {
                    anchors.fill: parent
                    strength: 2
                }
            }
        }
    }



    The source code for the example above is available online. Our code above can also be written in the following format:

        Page {
            ParticleSystem {
                id: particleSystem
            }

            Emitter {
                id: emitter
                anchors.centerIn: parent
                anchors.fill: parent
                system: particleSystem
                emitRate: 10
                lifeSpan: 2000
                lifeSpanVariation: 500
                size: 54
                endSize: 32
            }

            ImageParticle {
                source: "images/realLeaf1.png"
                system: particleSystem
            }
        }


    Let's modify the example above to use a Gravity affector. With Gravity, we can specify an acceleration and an angle. The complete code of the example is:

    import QtQuick 2.0
    import Ubuntu.Components 1.1
    import QtQuick.Particles 2.0

    /*!
        \brief MainView with a Label and Button elements.
    */
    MainView {
        // objectName for functional testing purposes (autopilot-qt5)
        objectName: "mainView"

        // Note! applicationName needs to match the "name" field of the click manifest
        applicationName: "particle2.liu-xiao-guo"

        Page {
            ParticleSystem {
                anchors.centerIn: parent
                running: true

                ImageParticle {
                    anchors.fill: parent
                    // source: "qrc:///particleresources/star.png"
                    source: "images/starfish_0.png"
                    alpha: 0.5
                    alphaVariation: 0.2
                    colorVariation: 1.0
                }

                Emitter {
                    emitRate: 20
                    size: 50
                    lifeSpan: 5000
                    velocity: AngleDirection { magnitude: 100; angleVariation: 360 }
                }

                Gravity {
                    angle: 90
                    magnitude: 100
                }

                Turbulence {
                    anchors.fill: parent
                    strength: 2
                }
            }
        }
    }



    Author: UbuntuTouch, published 2016/2/2 11:08:08

    Read more

    [Original] Displaying image tags in a QML application




    text : string
    The text to display. Text supports both plain and rich text strings.

    From the description above, we can see that Text supports both plain and rich text. In other words, it supports HTML-like formatted text output. Based on this feature, we can design our program as follows:


    import QtQuick 2.0

    Text {
        width: parent.width
        font.pointSize: 30
        wrapMode: Text.WordWrap
        textFormat: Text.StyledText
        horizontalAlignment: main.hAlign
    }


    import QtQuick 2.0
    import Ubuntu.Components 1.1

    /*!
        \brief MainView with a Label and Button elements.
    */
    MainView {
        // objectName for functional testing purposes (autopilot-qt5)
        objectName: "mainView"

        // Note! applicationName needs to match the "name" field of the click manifest
        applicationName: "imagetag.liu-xiao-guo"

        /*
         This property enables the application to change orientation
         when the device is rotated. The default is false.
        */
        //automaticOrientation: true

        // Removes the old toolbar and enables new features of the new header.
        useDeprecatedToolbar: false

        Page {
            id: main
            title: i18n.tr("Image Tags")
            property var hAlign: Text.AlignLeft

            Flickable {
                anchors.fill: parent
                contentWidth: parent.width
                contentHeight: col.height + 20

                Column {
                    id: col
                    x: 10; y: 10
                    spacing: 20
                    width: parent.width - 20

                    TextWithImage {
                        text: "This is a <b>happy</b> face<img src=\"images/face-smile.png\">"
                    }
                    TextWithImage {
                        text: "This is a <b>very<img src=\"images/face-smile-big.png\" align=\"middle\"/>happy</b> face vertically aligned in the middle."
                    }
                    TextWithImage {
                        text: "This is a tiny<img src=\"images/face-smile.png\" width=\"15\" height=\"15\">happy face."
                    }
                    TextWithImage {
                        text: "This is a<img src=\"images/starfish_2.png\" width=\"50\" height=\"50\" align=\"top\">aligned to the top and a<img src=\"images/heart200.png\" width=\"50\" height=\"50\">aligned to the bottom."
                    }
                    TextWithImage {
                        text: "Qt logos<img src=\"images/qtlogo.png\" width=\"55\" height=\"60\" align=\"middle\"><img src=\"images/qtlogo.png\" width=\"37\" height=\"40\" align=\"middle\"><img src=\"images/qtlogo.png\" width=\"18\" height=\"20\" align=\"middle\">aligned in the middle with different sizes."
                    }
                    TextWithImage {
                        text: "Some hearts<img src=\"images/heart200.png\" width=\"20\" height=\"20\" align=\"bottom\"><img src=\"images/heart200.png\" width=\"30\" height=\"30\" align=\"bottom\"> <img src=\"images/heart200.png\" width=\"40\" height=\"40\"><img src=\"images/heart200.png\" width=\"50\" height=\"50\" align=\"bottom\">with different sizes."
                    }
                    TextWithImage {
                        text: "Resized image<img width=\"48\" height=\"48\" align=\"middle\" src=\"\">from the internet."
                    }
                    TextWithImage {
                        text: "Image<img align=\"middle\" src=\"\">from the internet."
                    }
                    TextWithImage {
                        height: 120
                        verticalAlignment: Text.AlignVCenter
                        text: "This is a <b>happy</b> face<img src=\"images/face-smile.png\"> with an explicit height."
                    }
                }
            }

            Keys.onUpPressed: main.hAlign = Text.AlignHCenter
            Keys.onLeftPressed: main.hAlign = Text.AlignLeft
            Keys.onRightPressed: main.hAlign = Text.AlignRight

            Row {
                id: buttons
                anchors.bottom: parent.bottom
                anchors.horizontalCenter: parent.horizontalCenter

                Button {
                    text: "Align Left"
                    onClicked: {
                        main.hAlign = Text.AlignLeft
                    }
                }
                Button {
                    text: "Align Center"
                    onClicked: {
                        main.hAlign = Text.AlignHCenter
                    }
                }
                Button {
                    text: "Align Right"
                    onClicked: {
                        main.hAlign = Text.AlignRight
                    }
                }
            }
        }
    }


    Author: UbuntuTouch, published 2016/1/14 15:18:16

    Read more



           Text {
                text: myText
                color: "lightsteelblue"
                width: parent.width
                wrapMode: Text.WordWrap
                font.pixelSize: size
            }

    For detailed usage, please refer to the QML Text documentation. Of course, we can also use a more concise format:

           Text {
                text: myText
                color: "lightsteelblue"
                width: parent.width
                wrapMode: Text.WordWrap
                horizontalAlignment: Text.AlignHCenter
                font { family: "Times"; pixelSize: size; capitalization: Font.AllUppercase }
            }



        FontLoader { id: fixedFont; name: "Courier" }
        FontLoader { id: localFont; source: "content/fonts/tarzeau_ocr_a.ttf" }
        FontLoader { id: webFont; source: "" }




    import QtQuick 2.0

    Rectangle {
        property string myText: "The quick brown fox jumps over the lazy dog."
        width: 320; height: 480
        color: "steelblue"

        FontLoader { id: fixedFont; name: "Courier" }
        FontLoader { id: localFont; source: "content/fonts/tarzeau_ocr_a.ttf" }
        FontLoader { id: webFont; source: "" }

        property int size: 40

        Column {
            anchors { fill: parent; leftMargin: 10; rightMargin: 10; topMargin: 10 }
            spacing: 15

            Text {
                text: myText
                color: "lightsteelblue"
                width: parent.width
                wrapMode: Text.WordWrap
                font.pixelSize: size
            }
            Text {
                text: myText
                color: "lightsteelblue"
                width: parent.width
                wrapMode: Text.WordWrap
                horizontalAlignment: Text.AlignHCenter
                font { family: "Times"; pixelSize: size; capitalization: Font.AllUppercase }
            }
            Text {
                text: myText
                color: "lightsteelblue"
                width: parent.width
                horizontalAlignment: Text.AlignRight
                wrapMode: Text.WordWrap
                font { family: fixedFont.name; pixelSize: size; weight: Font.Bold; capitalization: Font.AllLowercase }
            }
            Text {
                text: myText
                color: "lightsteelblue"
                width: parent.width
                wrapMode: Text.WordWrap
                font { family: localFont.name; pixelSize: size; italic: true; capitalization: Font.SmallCaps }
            }
            Text {
                text: myText
                color: "lightsteelblue"
                width: parent.width
                wrapMode: Text.WordWrap
                font { family: webFont.name; pixelSize: size; capitalization: Font.Capitalize }
            }
            Text {
                text: {
                    if (webFont.status == FontLoader.Ready) myText
                    else if (webFont.status == FontLoader.Loading) "Loading..."
                    else if (webFont.status == FontLoader.Error) "Error loading font"
                }
                color: "lightsteelblue"
                width: parent.width
                wrapMode: Text.WordWrap
                font.family: webFont.name; font.pixelSize: size
            }
        }
    }


    Author: UbuntuTouch, published 2016/1/4 8:27:17

    Read more

    In this article, we list all of the fonts available on an Ubuntu phone, so that you can pick the typeface that suits your needs. The earlier article "How to use different fonts in QML" already showed how to use the font property to display different typefaces.



    import QtQuick 2.0
    import Ubuntu.Components 1.1

    Rectangle {
        color: "steelblue"
        property int size: 60

        ListView {
            clip: true
            anchors.fill: parent
            model: Qt.fontFamilies()

            delegate: Item {
                width: ListView.view.width

                Row {
                    height: parent.height
                    width: parent.width

                    Text {
                        anchors.verticalCenter: parent.verticalCenter
                        text: "I love you!"
                        font { family: modelData; pixelSize: size }
                    }
                    Text {
                        anchors.verticalCenter: parent.verticalCenter
                        text: modelData
                        color: "white"
                    }
                }
                Rectangle {
                    color: "red"
                    height: 2
                    width: parent.width
                }
            }
        }
    }


    import QtQuick 2.0
    import Ubuntu.Components 1.1

    /*!
        \brief MainView with a Label and Button elements.
    */
    MainView {
        // objectName for functional testing purposes (autopilot-qt5)
        objectName: "mainView"

        // Note! applicationName needs to match the "name" field of the click manifest
        applicationName: "fontlist.liu-xiao-guo"

        /*
         This property enables the application to change orientation
         when the device is rotated. The default is false.
        */
        //automaticOrientation: true

        // Removes the old toolbar and enables new features of the new header.
        useDeprecatedToolbar: false

        Page {
            title: i18n.tr("Font list")

            Text {
                id: txt
                anchors.horizontalCenter: parent.horizontalCenter
                text: "我爱你 " + font.family + " " + font.pixelSize
            }
            AvailableFonts {
                anchors.fill: parent
            }
        }
    }

    Author: UbuntuTouch, published 2016/1/4 9:25:26

    Read more

    [Original] snappy Ubuntu Core demos

    Based on my understanding of snappy Ubuntu Core, I have built a few demo applications:

    1) Using snappy Ubuntu Core to control a PiGlow

    2) Using snappy Ubuntu Core to collect sensor data and control LEDs

    3) Using snappy Ubuntu Core to monitor a webcam

    For more about snappy Ubuntu Core, see the article "What exactly is Snappy Ubuntu?". You can also visit our global website for more information.
    Author: UbuntuTouch, published 2016/1/12 14:09:45

    Read more

    [Original] Creating an Ubuntu Scope with JavaScript

    In the earlier tutorial "Creating a dianping Scope on Ubuntu OS (Qt JSON)" we learned how to develop a Scope for the Ubuntu platform in C++, and the article "Designing an Ubuntu Scope with golang" showed how to develop one in Go. In today's article, we show how to develop a Scope in JavaScript. For web developers this is great news: you can easily build a Scope of your own without having to learn another language. More knowledge about Scope development can be found on the developer website.


    First, we must emphasise that JavaScript support for Scope development starts with Ubuntu 15.04 (vivid) and later. Before developing, you must install the SDK as described in the article "Installing the Ubuntu SDK", and then install the JS Scope development tools as follows:

    $ sudo apt install unity-js-scopes-dev
    $ unity-js-scopes-tool setup

    Note that the commands above can only be run after the Ubuntu SDK has been installed, and the SDK's chroots must be fully set up. With that done, we have essentially finished installing all of the required tools.

    2) JS Scope development documentation

    No development happens without the necessary technical documentation. The JS Scope development docs can be found in an early build; you can also install the unity-js-scopes-doc package for help.


    Webservice API



    {"error":0,"status":"success","date":"2016-01-18","results":[{"currentCity":"北京","pm25":"13","index":[{"title":"穿衣","zs":"寒冷","tipt":"穿衣指数","des":"天气寒冷,建议着厚羽绒服、毛皮大衣加厚毛衣等隆冬服装。年老体弱者尤其要注意保暖防冻。"},{"title":"洗车","zs":"较适宜","tipt":"洗车指数","des":"较适宜洗车,未来一天无雨,风力较小,擦洗一新的汽车至少能保持一天。"},{"title":"旅游","zs":"一般","tipt":"旅游指数","des":"天气较好,温度稍低,而且风稍大,让您感觉有些冷,会对外出有一定影响,外出注意防风保暖。"},{"title":"感冒","zs":"极易发","tipt":"感冒指数","des":"天气寒冷,昼夜温差极大且空气湿度较大,易发生感冒,请注意适当增减衣服,加强自我防护避免感冒。"},{"title":"运动","zs":"较不宜","tipt":"运动指数","des":"天气较好,但考虑天气寒冷,风力较强,推荐您进行室内运动,若在户外运动请注意保暖并做好准备活动。"},{"title":"紫外线强度","zs":"弱","tipt":"紫外线强度指数","des":"紫外线强度较弱,建议出门前涂擦SPF在12-15之间、PA+的防晒护肤品。"}],"weather_data":[{"date":"周一 01月18日 (实时:-8℃)","dayPictureUrl":"","nightPictureUrl":"","weather":"晴","wind":"北风3-4级","temperature":"-4 ~ -11℃"},{"date":"周二","dayPictureUrl":"","nightPictureUrl":"","weather":"晴转多云","wind":"微风","temperature":"-1 ~ -8℃"},{"date":"周三","dayPictureUrl":"","nightPictureUrl":"","weather":"多云转阴","wind":"微风","temperature":"0 ~ -7℃"},{"date":"周四","dayPictureUrl":"","nightPictureUrl":"","weather":"阴转多云","wind":"微风","temperature":"-3 ~ -6℃"}]}]}



    In this section, we create a JS Scope. We can use the template provided in the Ubuntu SDK to create one easily. First, open the SDK and select "New File or Project":






    The result is shown below. There is nothing particularly special about it: by default it is a weather Scope, and we can type in the names of cities we are interested in to get their current weather. We can select the Desktop or Ubuntu Desktop SDK kit in the lower-left corner of the SDK to run it in a desktop environment; to run it on a phone, we must select the Ubuntu SDK for armhf kit:




    liuxg@liuxg:~/release/chinaweatherjs$ tree
    ├── chinaweatherjs.apparmor
    ├── CMakeLists.txt
    ├── CMakeLists.txt.user
    ├── po
    │   ├── chinaweatherjs.pot
    │   ├── CMakeLists.txt
    │   ├──
    │   ├──
    │   └──
    └── src
        ├── chinaweatherjs.js
        ├── CMakeLists.txt
        ├── data
        │   ├──
        │   ├──
        │   ├── icon.png
        │   └── logo.png
        ├── etc
        └── node_modules
            ├── last-build-arch.txt
            └── unity-js-scopes
                ├── bin
                │   └── unity-js-scopes-launcher
                ├── index.js
                ├── lib
                │   └── scope-core.js
                └── unity_js_scopes_bindings.node
    8 directories, 20 files



    Attentive developers may have noticed a directory called node_modules: the framework the JS Scope uses is essentially npm + Scope. We can conveniently use unity-js-scopes-tool to add the npm packages we need to our Scope project, with the following command:

    $ unity-js-scopes-tool install <path/to/project/src/node_modules> <npm package> 




    The basic structure of a JavaScript Scope

    • Import the JavaScript Scope module into your code
    • Set up your Scope's runtime context

    var scopes = require('unity-js-scopes')
    scopes.self.initialize({}, {});

    Once imported, the unity-js-scopes core module is the entry point for interacting with the Scope runtime. The runtime sets up our Scope, interacts with the Dash, and displays the results produced by the user's interactions with the Scope.


            get: function() {
                if (! self) {
                    self = new Scope();
                }
                return self;
            }

    Besides defining the runtime elements of your Scope, the runtime context also allows you to inspect the current Scope's settings and to receive notifications of changes in the scope runtime environment.

    Runtime elements

    Once our Scope is connected to the runtime and launched by the user, the scope runtime forwards all of the actions generated by the user. These actions are ultimately dispatched to the API callbacks our Scope defined during initialize.


    • run: called when a scope is ready to run.
    • start: called when a scope is about to start.
    • stop: called when a scope is about to stop.
    • search: called when the user requests a search. The runtime provides all the information needed for the search; the developer's task is to push the possible results back to the runtime through its API. You can also control how those results are displayed.
    • preview: shows a preview of a result produced by search above. The runtime provides all the information needed for the preview.

    var scopes = require('unity-js-scopes')
    scopes.self.initialize({}, {
        run: function() {
        },
        start: function(scope_id) {
            console.log('Starting scope id: ' + scope_id + ', ' + scopes.self.scope_config)
        },
        search: function(canned_query, metadata) {
            return null
        },
        preview: function(result, metadata) {
            return null
        }
    });

    Each scope runtime callback corresponds to a user interaction, and the scope runtime expects your scope to send back an object describing what that key interaction needs.
    A SearchQuery object can define a run callback, invoked when a search takes place, and it can also define a cancel callback, invoked when a search is stopped.
    The scope runtime also passes in a SearchReply object, which can be used to push results back to the scope runtime.

    上面的这种交互模式是贯穿了整个scope及scope rumtime设计的核心交互模式.
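    To make this callback-and-reply pattern concrete, here is a small plain-Node sketch that mimics the interaction. The SearchQuery, runSearch and myScope names here are hypothetical stand-ins for illustration, not the real unity-js-scopes API:

    ```javascript
    // Hypothetical stand-in mirroring the SearchQuery idea: the scope hands
    // the runtime an object with a run and a cancel callback.
    function SearchQuery(run, cancel) {
        this.run = run;       // called when a search happens
        this.cancel = cancel; // called when a search is cancelled
    }

    // A minimal mock "runtime": it asks the scope for a SearchQuery and
    // hands it a reply object whose push() collects results.
    function runSearch(scope, queryString) {
        var results = [];
        var reply = { push: function(r) { results.push(r); } };
        var query = scope.search(queryString);
        query.run(reply);
        return results;
    }

    // A toy scope: its search callback returns the query object, and the
    // run callback pushes results through the reply it is given.
    var myScope = {
        search: function(queryString) {
            return new SearchQuery(
                function(searchReply) {                           // run
                    searchReply.push({ title: "result for " + queryString });
                },
                function() {}                                     // cancel
            );
        }
    };

    console.log(runSearch(myScope, "weather")); // → [ { title: 'result for weather' } ]
    ```

    The point of the indirection is that the runtime, not the scope, decides when run and cancel fire; the scope only declares what should happen.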


    The most important search interaction described above is that our scope pushes the results it produces to the scope runtime. Results are pushed through the SearchReply object, which expects CategorisedResult objects to be created and pushed to the scope runtime. A result object lets our scope define information such as the title, icon, and uri.


    var query_host = ""
    var weather_path = "/telematics/v3/weather?output=json&ak=DdzwVcsGMoYpeg5xQlAFrXQt&location="
    var URI = "";


    search: function(canned_query, metadata) {
        return new scopes.lib.SearchQuery(
            // run
            function(search_reply) {
                var qs = canned_query.query_string();
                if (!qs) {
                    qs = "北京"; // default query: Beijing
                }
                console.log("query string: " + qs);
                var weather_cb = function(response) {
                    var res = '';
                    // Another chunk of data has been received, so append it to res
                    response.on('data', function(chunk) {
                        res += chunk;
                    });
                    // The whole response has been received
                    response.on('end', function() {
                        var r = JSON.parse(res);
                        var city = r.results[0].currentCity;
                        console.log("city: " + city);
                        var pm25 = r.results[0].pm25;
                        console.log("pm25: " + pm25);
                        var category_renderer = new scopes.lib.CategoryRenderer(JSON.stringify(WEATHER_TEMPLATE));
                        var category = search_reply.register_category("Chineweather", city, "", category_renderer);
                        try {
                            var length = r.results[0].weather_data.length;
                            console.log("length: " + length);
                            for (var i = 0; i < length; i++) {
                                var date = r.results[0].weather_data[i].date;
                                console.log("date: " + date);
                                var dayPictureUrl = r.results[0].weather_data[i].dayPictureUrl;
                                console.log("dayPictureUrl: " + dayPictureUrl);
                                var nightPictureUrl = r.results[0].weather_data[i].nightPictureUrl;
                                console.log("nightPictureUrl: " + nightPictureUrl);
                                var weather = r.results[0].weather_data[i].weather;
                                console.log("weather: " + weather);
                                var wind = r.results[0].weather_data[i].wind;
                                console.log("wind: " + wind);
                                var temperature = r.results[0].weather_data[i].temperature;
                                console.log("temperature: " + temperature);
                                // One result for the daytime forecast...
                                var day_result = new scopes.lib.CategorisedResult(category);
                                day_result.set("weather", weather);
                                day_result.set("wind", wind);
                                day_result.set("temperature", temperature);
                                day_result.set("subtitle", weather);
                                day_result.set_art(dayPictureUrl);
                                day_result.set_title("白天: " + date);
                                search_reply.push(day_result);
                                // ...and one for the night forecast
                                var night_result = new scopes.lib.CategorisedResult(category);
                                night_result.set("weather", weather);
                                night_result.set("wind", wind);
                                night_result.set("temperature", temperature);
                                night_result.set("subtitle", weather);
                                night_result.set_art(nightPictureUrl);
                                night_result.set_title("夜晚: " + date);
                                search_reply.push(night_result);
                            }
                            // We are done, call finished() on our search_reply
                            // search_reply.finished();
                        } catch (e) {
                            // Forecast not available
                            console.log("Forecast for '" + qs + "' is unavailable: " + e);
                        }
                    });
                };
                console.log("request string: " + query_host + weather_path + qs);
                http.request({host: query_host, path: weather_path + encode_utf8(qs)}, weather_cb).end();
            },
            // cancelled
            function() {}
        );
    },

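    The request above relies on an encode_utf8 helper that is not shown in this excerpt. A common way to write it, and an assumption here since the blog's own helper may differ, is the classic encodeURIComponent/unescape trick that turns a JavaScript (UTF-16) string into a string of raw UTF-8 bytes:

    ```javascript
    // Assumed implementation of the encode_utf8 helper used above:
    // encodeURIComponent percent-encodes the string as UTF-8, and
    // unescape collapses each %XX escape back into a single byte.
    function encode_utf8(s) {
        return unescape(encodeURIComponent(s));
    }

    console.log(encode_utf8("abc"));  // ASCII is unchanged: "abc"
    console.log(encode_utf8("北京")); // six UTF-8 bytes: e5 8c 97 e4 ba ac
    ```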

    Once our search results have been pushed to the scope runtime and displayed, the user can tap a result and request a preview of it. The scope runtime displays that preview through the preview callback defined in your scope.

    Just as with search above, the scope runtime expects your scope to return a PreviewQuery object as the bridge for the interaction. This object must define a run and a cancel function, with the same semantics described for search above, so we will not repeat them here.

    A preview has two key elements: column layouts and preview widgets. As their names suggest, a column layout defines how the preview components are laid out on the preview page, and preview widgets are the components the page is built from.


    preview: function(result, action_metadata) {
        return new scopes.lib.PreviewQuery(
            // run
            function(preview_reply) {
                var layout1col = new scopes.lib.ColumnLayout(1);
                var layout2col = new scopes.lib.ColumnLayout(2);
                var layout3col = new scopes.lib.ColumnLayout(3);
                // Single column: all widgets stacked
                layout1col.add_column(["imageId", "headerId", "temperatureId", "windId"]);
                // Two columns: image on the left, text widgets on the right
                layout2col.add_column(["imageId"]);
                layout2col.add_column(["headerId", "temperatureId", "windId"]);
                // Three columns: image, header, then the remaining text widgets
                layout3col.add_column(["imageId"]);
                layout3col.add_column(["headerId"]);
                layout3col.add_column(["temperatureId", "windId"]);
                preview_reply.register_layout([layout1col, layout2col, layout3col]);
                var header = new scopes.lib.PreviewWidget("headerId", "header");
                header.add_attribute_mapping("title", "title");
                header.add_attribute_mapping("subtitle", "subtitle");
                var image = new scopes.lib.PreviewWidget("imageId", "image");
                image.add_attribute_mapping("source", "art");
                var temperature = new scopes.lib.PreviewWidget("temperatureId", "text");
                temperature.add_attribute_mapping("text", "temperature");
                var wind = new scopes.lib.PreviewWidget("windId", "text");
                wind.add_attribute_mapping("text", "wind");
                preview_reply.push([image, header, temperature, wind]);
            },
            // cancelled
            function() {}
        );
    },






    $ bzr branch lp:~davidc3/+junk/github-js-scope
    liuxg@liuxg:~/scope/github-js-scope/src$ unity-js-scopes-tool install ./node_modules github 


    Author: UbuntuTouch, published 2016/1/18 15:25:35. Original link

    Read more



    url resolvedUrl(url url)

    Returns url resolved relative to the URL of the caller.
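    Outside QML, the same relative-to-absolute resolution can be reproduced in plain JavaScript with the standard WHATWG URL constructor, where the base is the URL of the file doing the resolving. A small sketch, with a made-up base path:

    ```javascript
    // Resolve a relative path against a base URL, mirroring what
    // Qt.resolvedUrl() does relative to the calling QML file.
    // The .qml file path below is a made-up example.
    const base = "file:///home/liuxg/qml/resolveurl/resolveurl.qml";
    const resolved = new URL("images/girl.jpg", base).href;
    console.log(resolved); // file:///home/liuxg/qml/resolveurl/images/girl.jpg
    ```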


            Image {
                anchors.fill: parent
                source: "images/girl.jpg"
                Component.onCompleted: {
                    // This prints 'false'. Although "images/girl.jpg" was the input string,
                    // it's been converted from a string to a URL, so these two are not the same.
                    console.log(source == "images/girl.jpg")
                    // Qt.resolvedUrl() converts the string into a URL with the
                    // correctly resolved path
                    console.log("resolvedurl: " + Qt.resolvedUrl("images/girl.jpg"))
                    // This prints 'true', as both sides are now resolved URLs
                    console.log(source == Qt.resolvedUrl("images/girl.jpg"))
                    // This prints the absolute path, e.g. "file:///path/to/images/girl.jpg"
                    console.log(source)
                }
            }

    Starting /usr/ubuntu-sdk-dev/bin/qmlscene...
    qml: false
    qml: resolvedurl: file:///home/liuxg/qml/resolveurl/images/girl.jpg
    qml: true
    qml: file:///home/liuxg/qml/resolveurl/images/girl.jpg



    Author: UbuntuTouch, published 2016/1/25 16:24:22. Original link

    Read more