Canonical Voices

David Planella

Announcing the Scope Showdown winners

We're thrilled to announce the results of the Ubuntu Scope Showdown: a contest to develop a scope in six weeks and win exciting prizes. Fleshing out Ubuntu's innovative take on the content and services experience, participants had the opportunity to use the new development tools to create a complete scope from scratch and publish it to the store in just a few weeks.

Contest submissions were reviewed by an international panel of judges that included Canonical employees and members of the wider Ubuntu community.

The winner is: Cinema scope

The Cinema scope by Daniele Laudani is the winner of this Scope Showdown edition. Daniele's scope earned the best judge ratings for its visual appeal, usability, general interest and use of scope features. He takes home the Dell XPS 13 developer laptop preloaded with Ubuntu. Enjoy!

Outstanding runners-up

The quality of all winning scopes was impressive, which resulted in a tie among some of the runners-up. In the end, rather than breaking the tie, we decided to add an extra prize in recognition of their outstanding work. So, without further ado, we're proud to present the additional winners.

Discerning Duck

Riccardo Padovani, of Ubuntu Core Apps fame, takes home the Logitech UE Boom Bluetooth speaker, compatible with the Ubuntu phone, for his Discerning Duck scope.

Mixcloud

Developer Bogdan Cuza, with the Mixcloud scope, scores a Nexus 7 tablet running Ubuntu with all winning scopes preinstalled.

Places

Sam Segers, with the Places scope, is the winner of an Ubuntu bundle, including:

  • An Ubuntu messenger bag
  • An Ubuntu infographic T-shirt
  • An Ubuntu neoprene laptop sleeve

RSS Feeds

Matthew Jump, with the RSS Feeds scope, also wins an Ubuntu bundle, including:

  • An Ubuntu backpack
  • An Ubuntu Circle of Friends Dot Design T-shirt
  • An Ubuntu neoprene laptop sleeve

Your Shot

Kunal Parmar, another Core Apps developer, wins yet another Ubuntu bundle with his Your Shot scope:

  • An Ubuntu backpack
  • An Ubuntu Circle of Friends Dot Design T-shirt
  • An Ubuntu neoprene laptop sleeve

Scopes everywhere

Congratulations to all winners, and to all participants: everyone did a fantastic job. Given how early in their adoption scopes and the developer tools are, judges and reviewers were particularly impressed by the quality of the submissions.

Remember that you can install any of the winning scopes, and more, from the Ubuntu Software Store. It's now time to start thinking beyond the apps grid and to bring interesting scopes that give Ubuntu phone users the data that matters to them. Looking forward to seeing scopes everywhere!

Get started writing a scope today >

Read more
Daniel Holbach

Building a great community

Xubuntu
In last night’s Community Council meeting, we met up with the Xubuntu team. They have done a great job inviting new members into their part of the community. Just read this:

<pleia2> elfy notes all contributors in his announcements
<dholbach> that's really really nice
<pleia2> we do blog posts, emails directly to all the testing members and to -devel list
<dholbach> wow
<pleia2> this cycle we're giving out stickers to some of our top testers
<elfy> if we get that sorted
<pleia2> share on social media too

This is just fantastic. I’m very happy with what the Xubuntu folks are doing and I’m glad to be part of such an open and welcoming community as well.

If you think that’s great too and want to get involved, have a look at their “Get involved” page. They particularly need testers for the new release.

Xubuntu team: keep up the great work! :-)

Read more
bmichaelsen

No-no-no, light speed is too slow!
Yes, we’ll have to go right to… ludicrous speed!

– Dark Helmet, Spaceballs

So, I recently brought up the topic of Writer’s nodes in the LibreOffice ESC call; more specifically, the SwNodeIndex class, which is, if one simplifies broadly, an iterator over the container holding all the paragraphs of a text document. Before my modifications, the SwNodes container class kept all these SwNodeIndices in a homegrown intrusive doubly linked list, to be able to ensure they stay valid, e.g. if a SwNode gets deleted/removed.

Still — as usual with performance topics — wild guesses aren’t helpful, and measurements should trump intuition. I used valgrind for that, and measured the number of instructions needed for loading the ODF spec. Since I did the same years and years ago on the old OpenOffice.org performance project, I just checked whether we had regressed against that. It’s comforting that we did not at all — we were much faster, but that measurement has to be taken with a few pounds of salt, as a lot of other things differ between these two measurements (e.g. we now have a completely new build system, different compiler versions etc.). But it’s good that we are moving in the right direction.

implementation      SwNodes     SwNodeIndex    total instructions  performance               linedelta
DEV300_m45          71,727,655  73,784,052     9,823,158,471       ?                         ?
master@fc93c17a     84,553,232  60,987,760     6,170,762,825       0%                        0
std::list           18,461,317  103,461,317    14,502,230,571      -5,725% (-235% of total)  +12/-70
std::vector         18,986,848  3,707,286,032  9,811,541,380       -2,502%                   +22/-70
std::unordered_map  18,984,984  82,843,000     7,083,620,244       -627% (-15% of total)     +16/-70
std::vector rbegin  18,986,848  143,851,229    6,214,602,532       -30% (-0.7% of total)     +23/-70
sw::Ring<>          23,447,256  inlined        6,154,660,709       11% (2.6% of total)       +108/-229

With that comforting knowledge, I started to play around with the code. The first thing I did was to replace the handcrafted intrusive list with a std::list of pointers to the SwNodeIndex instances, kept as a member in the SwNodes class. This was expected to slow things down, as two allocations are now needed: one for the SwNodeIndex instance and one for its node entry in the std::list. To be honest though, I didn’t expect this to slow down the code handling the nodes by a factor of ~57 for the loading of the example document. The whole document loading time (not just the node handling) slowed down by a factor of ~2.4. So OK, this establishes for certain that this part of the code is highly performance sensitive.
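
To make the cost concrete, here is a minimal sketch (with made-up names, not the actual SwNodes/SwNodeIndex code) contrasting intrusive registration, where the link pointers live inside the index itself, with the std::list variant, which has to allocate a separate list node per registration:

#include <list>

struct IntrusiveIndex {
    IntrusiveIndex* prev = nullptr;  // the links are embedded in the object,
    IntrusiveIndex* next = nullptr;  // so registering allocates nothing
};

struct IntrusiveContainer {
    IntrusiveIndex* head = nullptr;

    void registerIndex(IntrusiveIndex& idx) {    // O(1), zero allocations
        idx.next = head;
        if (head) head->prev = &idx;
        head = &idx;
    }

    void unregisterIndex(IntrusiveIndex& idx) {  // O(1), zero deallocations
        if (idx.prev) idx.prev->next = idx.next;
        else head = idx.next;
        if (idx.next) idx.next->prev = idx.prev;
        idx.prev = idx.next = nullptr;
    }
};

struct PlainIndex {};

struct ListContainer {
    std::list<PlainIndex*> indices;  // every registration additionally
                                     // allocates a list node
    void registerIndex(PlainIndex& idx) { indices.push_back(&idx); }
};

int main() {
    IntrusiveContainer ic;
    IntrusiveIndex a, b;
    ic.registerIndex(a);
    ic.registerIndex(b);
    ic.unregisterIndex(a);

    ListContainer lc;
    PlainIndex p;
    lc.registerIndex(p);  // the second allocation happens here, for the node
}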

The next thing I tried, to get a feel for how the performance reacts, was using a std::vector in the SwNodes class. When reserving some memory early, this should severely reduce the number of allocations needed. And indeed this was quicker than the std::list, even with a naive approach just doing a push_back() for insertion and a std::find() followed by an erase() for removal. However, the node indices are often created temporarily and quickly destroyed again, so adding new indices at the end while searching from the start is certainly not ideal: this way, the vector is also slower than the intrusive list that was on master, by a factor of ~25 for the code doing the node handling.

Searching for a SwNodeIndex from the end of the vector, where we likely just inserted it, and then swapping it with the last entry for removal makes the std::vector almost competitive with the original implementation: but still 30% slower. (The total loading time would only have increased by 0.7% using the vector like this.)
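
The removal trick described above looks roughly like this (again a minimal sketch with made-up names): search from the back, where a short-lived index was most likely just pushed, then swap the hit with the last element so that removal is a cheap pop_back() instead of shifting the tail of the vector:

#include <algorithm>
#include <cassert>
#include <vector>

struct Index {};

struct VectorContainer {
    std::vector<Index*> indices;

    void registerIndex(Index& idx) { indices.push_back(&idx); }

    void unregisterIndex(Index& idx) {
        // reverse search: short-lived indices are found near the back
        auto rit = std::find(indices.rbegin(), indices.rend(), &idx);
        assert(rit != indices.rend());
        // swap with the last element and drop it; order does not matter
        // here, and nothing needs to be shifted as with a middle erase()
        std::iter_swap(rit, indices.rbegin());
        indices.pop_back();
    }
};

int main() {
    VectorContainer c;
    c.indices.reserve(64);  // reserving early avoids most reallocations
    Index a, b;
    c.registerIndex(a);
    c.registerIndex(b);
    c.unregisterIndex(b);   // found immediately at the back
    c.unregisterIndex(a);
}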

For completeness, I also had a look at a std::unordered_map. It did a bit better than I expected, but would still have slowed down loading by 15% in the example experiment.

Having ruled out that standard containers would do much good here without lots of tweaking, I tried the sw::Ring<> class that I had recently rewritten based on Boost.Intrusive as an inline, header-only class. This was 11% quicker than the old implementation, resulting in 2.6% quicker loading for the whole document. Not exactly a heroic achievement, but also not too bad for just some 200 lines touched. So this is now on master.
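
The generic Boost.Intrusive flavour of this idea looks roughly like the sketch below (the real sw::Ring<> lives in the LibreOffice tree; this is only an illustration of the technique). The hook embeds the link pointers in the element, and since everything is a header-only template, linking and unlinking can be fully inlined:

#include <boost/intrusive/list.hpp>

namespace bi = boost::intrusive;

// the base hook adds the prev/next pointers to the element itself
struct NodeIndex : bi::list_base_hook<bi::link_mode<bi::auto_unlink>> {
    int pos = 0;
};

// auto_unlink hooks require a list without constant-time size()
using IndexList = bi::list<NodeIndex, bi::constant_time_size<false>>;

int main() {
    IndexList indices;        // the list itself never allocates:
    NodeIndex a, b;           // elements own their own link pointers
    indices.push_back(a);
    indices.push_back(b);
    {
        NodeIndex temp;
        indices.push_back(temp);
    }                         // temp unlinks itself in its destructor
    return static_cast<int>(indices.size());  // 2, counted by walking
}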

Why does this linked list outperform the old linked list? Inlining. In the old implementation, the constructors were not inlined, and the destructor called a trivial non-inlined member function. On top of that, the constructors and the function called by the destructor called two non-inlined friend functions from a different compilation unit, making it extra hard for a compiler to optimize. Now, link time optimization (LTO) could maybe do something about that someday. However, with LTO being in different states on different platforms, and with developers possibly building without LTO for build-time performance for some time to come, requiring the compiler/linker to be extra clever might be a mixed blessing: developers might run into “the map is not the territory” problems.
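
A made-up minimal example of that difference (not the actual Writer code): imagine the two halves below living in separate files, so that a caller’s compilation unit only ever sees the declarations of SlowIndex:

// "header" part: all a caller's compilation unit gets to see
class SlowIndex {
public:
    SlowIndex();   // out-of-line: an opaque call the optimizer cannot
    ~SlowIndex();  // look into, and each one calls yet another
                   // out-of-line helper elsewhere
};

class FastIndex {
    FastIndex* m_prev = nullptr;
    FastIndex* m_next = nullptr;
public:
    FastIndex() = default;  // fully visible: inlined away entirely
    ~FastIndex() {          // fully visible: a few loads and stores
        if (m_prev) m_prev->m_next = m_next;
        if (m_next) m_next->m_prev = m_prev;
    }
};

// "other compilation unit" part, normally invisible to callers
static void bookKeeping() {}  // stand-in for the non-inlined friend
SlowIndex::SlowIndex() { bookKeeping(); }
SlowIndex::~SlowIndex() { bookKeeping(); }

int main() {
    SlowIndex s;  // two calls the compiler must emit as-is
    FastIndex f;  // optimizes down to nothing
}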

My personal take-aways:

  • The SwNodeIndex has quite a relevant impact on performance. If you touch it, handle with care (and with valgrind).
  • The current code has decent performance; further improvements likely need deeper structural work (see e.g. Kendy’s B+ tree work).
  • Intrusive linked lists might be cumbersome, but for some scenarios they are really fast.
  • Inlining can really help (doh).
  • LTO might help someday (or not).
  • friend declarations for non-inline functions across compilation units can be a code smell hinting at a possible performance optimization.

Please excuse the extensive writing for a meager 2.6% performance improvement — the intention is to save somebody (including me) from redoing some or all of the work above just to come to the same conclusion.


Note: Here is how this was measured:

  • gcc 4.8.3
  • boost 1.55.0
  • test document: ODF spec
  • valgrind --tool=callgrind "--toggle-collect=*LoadOwnFormat*" --callgrind-out-file=somefilename.cg ./instdir/program/soffice.bin
  • ./autogen.sh --disable-gnome-vfs --disable-odk --disable-postgresql-sdbc --disable-report-builder --disable-scripting-beanshell --enable-gio --enable-symbols --with-external-tar=... --with-junit=... --with-hamcrest=... --with-system-libs --without-doxygen --without-help --without-myspell-dicts --without-system-libmwaw --without-system-mdds --without-system-orcus --without-system-sane --without-system-vigra --without-system-libodfgen --without-system-libcmis --disable-firebird-sdbc --without-system-libebook --without-system-libetonyek --without-system-libfreehand --without-system-libabw --disable-gnome-vfs --without-system-glm --without-system-glew --without-system-librevenge --without-system-libcdr --without-system-libmspub --without-system-libvisio --without-system-libwpd --without-system-libwps --without-system-libwpg --without-system-libgltf --without-system-libpagemaker --without-system-coinmp --with-jdk-home=...


Read more
Prakash

8 Free Online Tech Courses

From Introduction to Linux to web development, CIO magazine covers 8 online courses that are completely free.

Read More: http://www.cio.com/article/2847396/it-skills/8-free-online-courses-to-grow-your-tech-skills.html

Read more
Ben Howard


Snappy Launches


When we launched Snappy, we introduced it on Microsoft Azure [1], Google’s GCE [2], Amazon’s AWS [3] and our KVM images [4]. Immediately, our developers were asking questions like “where are the Vagrant images?”; we launched those yesterday [5].

The one remaining question was “where are the images for <insert hypervisor>?” We had inquiries about VirtualBox, VMware Desktop/Fusion, interest in VMware Air, Citrix XenServer, etc.

OVA to the rescue

OVA is an industry standard for cross-hypervisor image support. The OVA spec [6] allows you to import a single image into:

  • VMware products
    • ESXi
    • Desktop
    • Fusion
    • vSphere
  • VirtualBox
  • Citrix XenServer
  • Microsoft SCVMM
  • Red Hat Enterprise Virtualization
  • SuSE Studio
  • Oracle VM

Okay, so where can I get the OVA images?

You can get the latest OVA image from here [7]. From there, you will need to follow your hypervisor’s instructions on importing OVA images.

Or if you want a short URL, http://goo.gl/xM89p7


---

[1] http://www.ubuntu.com/cloud/tools/snappy#snappy-azure
[2] http://www.ubuntu.com/cloud/tools/snappy#snappy-google
[3] http://www.ubuntu.com/cloud/tools/snappy#snappy-amazon
[4] http://www.ubuntu.com/cloud/tools/snappy#snappy-local
[5] https://blog.utlemming.org/2015/01/snappy-images-for-vagrant.htm
[6] https://en.wikipedia.org/wiki/Open_Virtualization_Format
[7] http://cloud-images.ubuntu.com/snappy/devel/core/current/devel-core-amd64-cloud.ova

Read more
niemeyer

This post provides the background for a deliberate and important decision in the design of gopkg.in that people often wonder about: while the service does support full versions in tag and branch names (as in “v1.2” or “v1.2.3”), the URL must contain only the major version (as in “gopkg.in/mgo.v2”), which gets mapped to the best matching version in the repository.

As will be detailed, there are multiple reasons for that behavior. The critical one is ensuring that all packages in a build tree that depend on the same API of a given dependency (different majors mean different APIs) may use the exact same version of that dependency. Without that, an application might easily end up with multiple copies, unnecessarily and perhaps incorrectly.

Consider this example:

  • Application A depends on packages B and C
  • Package B depends on D 3.0.1
  • Package C depends on D 3.0.2

Under that scenario, when someone executes go get on application A, two independent copies of D would be embedded in the binary. This happens because both B and C have exact control of the version in use. When everybody can pick their own preferred version, it’s easy to end up with multiple copies.

The current gopkg.in implementation solves that problem by requiring that both B and C depend only on the major version that defines the API they were coded against. So the scenario becomes:

  • Application A depends on packages B and C
  • Package B depends on D 3.*
  • Package C depends on D 3.*
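
In Go source, both B and C would then declare that dependency with the same major-only import path (“d” here stands for the hypothetical package D from the example above):

import "gopkg.in/d.v3"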

With that approach, when someone runs go get to fetch the application, it gets the newest version of D that is still compatible with both B and C (which might be 3.0.3, 3.1, etc.), and both packages use that one version. While by default this would just pick up the most recent compatible version, the dependency might also be moved back to 3.0.2 or 3.0.1 without touching the code. So the approach in fact empowers the person assembling the binary to experiment with specific versions, and gives package authors a framework where the default behavior tends to remain sane.

This is the most important reason why gopkg.in works like this, but there are others. For example, if import paths encoded the micro version of a dependency, the import paths in dependent code (both internal and external to the package itself) would have to be patched on every single minor release of the package, and the code would have to be repositioned in the local system to please the go tool. This is rather inconvenient in practice.

It’s worth noting that the issues above describe the problem in terms of minor and patch versions, but the problem exists and is intensified when using individual source code revision numbers to refer to import paths, as it would be equivalent in this context to having a minor version on every single commit.

Finally, when you do want exact control over what builds, godep may be used as a complement to gopkg.in. That partnership offers exact reproducibility via godep, and gives people stable APIs they can rely upon over longer periods with gopkg.in. Good match.

Read more
Ben Howard

I am pleased to announce the initial Vagrant images for Snappy [1, 2]. These images are bit-for-bit the same as the KVM images, but have a cloud-init configuration that allows Snappy to work within the Vagrant workflow.

Vagrant enables a cross-platform developer experience on MacOS, Windows or Linux [3].

Note: due to the way that Snappy works, shared file systems within Vagrant are not possible at this time. We are working on enabling shared file system support, but it will take us a little while.

If you want to use the Vagrant packaged in the Ubuntu archives, run the following in a terminal:

  • sudo apt-get -y install vagrant
  • cd <WORKSPACE>
  • vagrant box add snappy http://goo.gl/6eAAoX
  • vagrant init snappy
  • vagrant up
  • vagrant ssh

If you use Vagrant from [4] (i.e. on Windows or Mac, or to install the latest Vagrant), then you can run:

  • vagrant init ubuntu/ubuntu-core-devel-amd64
  • vagrant up
  • vagrant ssh

These images are a work in progress. If you encounter any issues, please report them to snappy-devel@lists.ubuntu.com or ping me (utlemming) on Launchpad.net.

---

[1] http://cloud-images.ubuntu.com/snappy/devel/core/current/devel-core-amd64-vagrant.box
[2] https://atlas.hashicorp.com/ubuntu/boxes/ubuntu-core-devel-amd64
[3] https://docs.vagrantup.com/v2/why-vagrant/index.html
[4] https://www.vagrantup.com/downloads.html

Read more
Prakash

The Indian smartphone market grew 82% from a year ago and 27% over the preceding quarter, making it the second consecutive quarter of more than 80% year-on-year shipment growth for smartphones.

There were 23.3 million smartphone handsets shipped in the reporting quarter, comprising 32.1% of the overall mobile phone market, which touched 72.5 million units in the September quarter of 2014, recording 9% growth from a year ago and a 15% rise over the preceding quarter.

Read more at: http://www.livemint.com/Consumer/jssXBoP3nMkZOOGbXOCdeL/India-fastest-growing-smartphone-mkt-in-Asia-Pacific-IDC.html

Read more
niemeyer

Improvements on gopkg.in

Early last year the gopkg.in service was introduced with the goal of encouraging Go developers to establish strategies that enable existing software to remain working while package APIs evolve. After the initial discussions and experimentation that went into defining the (simple) design and feature set of the service, it’s great to see that the approach is proving reasonable in practice, with steady growth in usage. Meanwhile, the service has been up and unchanged for several months while we learned more about which areas needed improvement.

Now it’s time to release some of these improvements:

Source code links

Thanks to Gary Burd, godoc.org was improved to support custom source code links, which means all packages in gopkg.in can now properly reference, for any given package version, the exact location of functions, methods, structs, etc. For example, the function name in the documentation at gopkg.in/mgo.v2#Dial is clickable, and redirects to the correct source code line in GitHub.

Unstable releases

As detailed in the gopkg.in documentation, a major version must not have any breaking changes done to it so that dependent packages can rely on the exposed API once it goes live. Often, though, there’s a need to experiment with the upcoming changes before they are ready for prime time, and while small packages can afford to have that testing done locally, it’s usual for non-trivial software to have external validation with experienced developers before the release goes fully public.

To support that scenario properly, gopkg.in now allows the version string in exposed branch and tag names to be suffixed with “-unstable”. For example:

gopkg.in/mgo.v2-unstable

Such unstable versions are hidden from the version list in the package page, except for the specific version being looked at, and their use in released software is also explicitly discouraged via a warning message.

For the package to work properly during development, any imports (both internal and external to the package) must be modified to import the unstable version. While doing that by hand is easy, Roger Peppe’s govers offers a very convenient way to do it.

For example, to use mgo.v2-unstable, run:

govers gopkg.in/mgo.v2-unstable

and to go back:

govers gopkg.in/mgo.v2

Repositories with no master branch

Some people have opted to omit the traditional “master” branch altogether and have only versioned branches and tags. Unfortunately, gopkg.in did not accept such repositories as valid. This was fixed.

These changes are all live right now at gopkg.in.

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20150113 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Vivid Development Kernel

Both the master and master-next branches of our Vivid kernel have been rebased to the v3.18.2 upstream stable kernel. This has also been uploaded to the archive, i.e. 3.18.0-9.10. Please test and let us know your results. We are also starting to track the v3.19 kernel on our unstable branch and have pushed preliminary packages to our PPA.
—–
Important upcoming dates:
Thurs Jan 22 – Vivid Alpha 2 (~1 week away)
Thurs Feb 5 – 14.04.2 Point Release (~3 weeks away)
Thurs Feb 26 – Beta 1 Freeze (~6 weeks away)


Status: CVEs

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Utopic/Trusty/Precise/Lucid

Status for the main kernels, as of today:

  • Lucid – Kernel prep week.
  • Precise – Kernel prep week.
  • Trusty – Kernel prep week.
  • Utopic – Kernel prep week.

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, the SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 09-Jan through 31-Jan
    ====================================================================
    09-Jan Last day for kernel commits for this cycle
    11-Jan – 17-Jan Kernel prep week.
    18-Jan – 31-Jan Bug verification; Regression testing; Release


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more
Prakash

Amazon EC2 was the most reliable compute and Google Cloud Storage was the most reliable storage.

The Laguna Beach, California-based company tracks the status of more than 30 clouds, from AWS to Zettagrid:

Service provider                  Outages  Downtime
Amazon EC2                        20       2.41 hours
Amazon S3                         23       2.69 hours
Google Compute Engine             72       4.41 hours
Google Cloud Storage              8        14.23 minutes
Microsoft Azure Virtual Machines  92       40 hours
Microsoft Azure Object Storage    141      10.97 hours

Read More: https://gigaom.com/2015/01/07/amazon-web-services-tops-list-of-most-reliable-public-clouds/

Read more
Nicholas Skaggs

PSA: Community Requests

As you plan your Ubuntu-related activities this year, I wanted to highlight an opportunity for you to request materials and funds to help make your plans a reality. The funds are donations made by other Ubuntu enthusiasts to support Ubuntu, and specifically to enable community requests. In other words, if you need help traveling to a conference to support Ubuntu, planning a local event, holding a hackathon, etc., the community donations fund can help.

Check out the funding page for more information on how to apply and on the requirements. In short, if you are an Ubuntu member and want to do something to further Ubuntu, you can request materials and funding to help. Global Jam is less than a month away; is your LoCo ready? Flavors, are you trying to plan events or hold other activities? I'd encourage all of you to submit requests if money or materials can help enable or enhance your efforts to spread Ubuntu. Here's to sharing the joy of Ubuntu this year!

Read more
facundo

A fatal dream


The other day I dreamed that I died.

Well, the dream wasn't about me dying... I died during the dream, as a detail of the events. It was a plot point, let's say, not the central part of the story.

I want to stress that it wasn't a nightmare, but a normal dream. Well, normal...

The story was that there was a virus on the loose, or a curse, or something like that, which was transmitted by contact, and if you didn't infect somebody else within a certain time, you died. I remember that somebody infected me, and that after infecting a few people I got careless and the time ran out on me. I realized it, I realized I was going to die... and at the moment of death there was something like a "ffffssssup!" and nothing more; I was still there (I was aware of myself, I knew where I was and what was happening), only I was no longer attached to my body. The story went on even more bizarrely: for some reason I don't remember, I wanted to communicate with somebody who was alive, so I learned and practiced (with the help of a couple of others) how to move objects in the "physical world", until at some point somebody summoned us to go somewhere, and I don't remember any more.

Anyway, what I want to highlight is that this was the first time I died in a dream, and that it wasn't traumatic at all :p

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20150106 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Vivid Development Kernel

Both the master and master-next branches of our Vivid kernel have been rebased to the v3.18.1 upstream stable kernel. We have also uploaded our first 3.18-based kernel to the archive (3.18.0-8.9). Please test and let us know your results. We are also starting to track the v3.19 kernel on our unstable branch.
—–
Important upcoming dates:
Fri Jan 9 – 14.04.2 Kernel Freeze (~3 days away)
Thurs Jan 22 – Vivid Alpha 2 (~2 weeks away)
Thurs Feb 5 – 14.04.2 Point Release (~4 weeks away)
Thurs Feb 26 – Beta 1 Freeze (~7 weeks away)


Status: CVEs

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Utopic/Trusty/Precise/Lucid

Status for the main kernels, as of today:

  • Lucid – Verification & Testing
  • Precise – Verification & Testing
  • Trusty – Verification & Testing
  • Utopic – Verification & Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, the SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 12-Dec through 10-Jan
    ====================================================================
    12-Dec Last day for kernel commits for this cycle
    14-Dec – 20-Dec Kernel prep week.
    21-Dec – 10-Jan Bug verification; Regression testing; Release


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more
Michael Hall

Whenever a user downloads Ubuntu from our website, they are asked if they would like to make a donation, and if so, how they want their money used. When the “Community” option is chosen, that money is made available to members of our community to use in ways that they feel will benefit Ubuntu.

I’m a little late getting this report published, but it’s finally done. You can read the report here: https://docs.google.com/a/canonical.com/document/d/1HrBqGjqKe-THdK7liXFDobDU2LVW9JWtKxoa8IywUU4/edit#heading=h.yhstkxnvuk7s

We pretty consistently spend less than we take in each quarter, which means we have money sitting around that could be used by the community. If you want to travel to an event, would like us to sponsor an event, need hardware for development or testing, or need anything else that you feel will make Ubuntu the project and the community better, please go and fill out the request form.

 

Read more
bmichaelsen

Auld Lang Syne

we’ll tak’ a cup o’ kindness yet,
for auld lang syne.

– Die Roten Rosen, Auld Lang Syne

Eike already greeted the Year of Our Lady of Discord 3181 four days ago, but I’d like to take this opportunity to have a look at the state of the LibreOffice project — the bug tracker status, that is.

By the end of 2014:

[chart: unconfirmed bugs in the LibreOffice bug tracker]

And a special “Thank You!” goes out to everyone who created one of the over 100 Easy Hacks written for LibreOffice in 2014, and to everyone who helped, mentored or reviewed patches by new contributors to the LibreOffice project. Easy Hacks are a good way for someone curious about the code of LibreOffice to get started in the project with the help of more experienced developers. If you are interested in that, you can find more information on Easy Hacks on the TDF wiki. Note that there are also Easy Hacks on Design-related topics and on topics related to QA.

If “I should contribute to LibreOffice once in 2015” wasn’t part of your New Year’s resolutions yet, you are invited to add it, as Easy Hacks might convince you that it’s worthwhile and … easy.


Read more
Colin Ian King

During idle moments in the final few weeks of 2014 I have been adding some more stressors and features to stress-ng, as well as tidying up the code and fixing some small bugs that crept in during the last development spin. Stress-ng aims to stress a machine with various simple torture tests to trip overheating and kernel race conditions.

The mmap stressor now has the '--mmap-file' option to use synchronous file-backed memory mapping instead of the default anonymous mapping, and the '--mmap-async' option enables asynchronous file mapping if desired.

For socket stressing, the '--sock-domain unix' option now allows AF_UNIX (aka AF_LOCAL) sockets to be used. This complements the existing AF_INET and AF_INET6 IPv4 and IPv6 protocols available with this stress test.

The CPU stressor now includes mixed integer and floating point methods, covering 32 and 64 bit integer mixes with the float, double and long double floating point types. The generated object code contains a nice mix of operations that should exercise various functional units in the CPU. For example, when running on a hyper-threaded CPU one notices a performance hit because these CPU stressor methods heavily contend on the CPU's math functional blocks.

File-based locking has been extended with the new lockf stressor. This stresses repeated locking and unlocking of portions of a small file, and the default blocking mode can be turned into a CPU-consuming rapid polling retry with the '--lockf-nonblock' option.

The dup(2) system call is also now stressed with the new dup stressor. This just repeatedly dups a file opened on /dev/zero until all the free file slots are full, and then closes them. It is very much like the open stressor.

The fcntl(2) F_SETLEASE command is stress tested with the new lease stressor. This has a parent process that rapidly locks and unlocks a file-based lease, while one or more child processes try to open this file and cause lease-breaking signal notifications to the parent.

For x86 CPUs, the cache stressor includes two new cache-specific options. The '--cache-fence' option forces write serialization on each store operation, while the '--cache-flush' option forces a cache flush on each store operation. The code has been specifically written to incur no overhead if these options are not enabled or are not available on non-x86 CPUs.
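
As a rough illustration, several of the new options might be combined on a single command line like this (a hypothetical invocation following the option names above; consult the manual for the exact current syntax):

stress-ng --mmap 2 --mmap-file --sock 2 --sock-domain unix --lockf 2 --lockf-nonblock --cache 1 --cache-flush --timeout 60s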

This release also includes the stress-ng project mascot too; a humble CPU being tortured and stressed by a rather angry flame.

For more information, visit the stress-ng project page, or check out the stress-ng manual.

Read more
Colin Ian King

Controlling data flow using sluice

Earlier this year I was instrumenting wifi power consumption and needed a way to produce and also consume data at a specific rate, to pipe over netcat for my measurements. I had some older, crufty code around to do this, but it needed some polishing up, so I eventually got around to turning it into a more usable tool called "sluice". (A sluice gate controls the flow of water; the sluice tool controls the rate of data through a pipe.)

The sluice package is currently available in ppa:colin-king/white and is built for Ubuntu Trusty, Utopic and Vivid, but the git repository and tarballs are available too if one wants to build it from source.

The following starts a netcat 'server' to send largefile at a rate of 1MB a second using 1K buffers and reports the transfer stats to stderr with the -v verbose mode enabled:

cat largefile | sluice -r 1MB -i 1K -v | nc -l 127.0.0.1 1234 

Sluice also allows one to adjust the read/write buffer sizes dynamically to try to avoid buffer underflows or overflows while matching the specified transfer rate.

Sluice can be used as a data sink on the receiving side and also has an option to throw the data away if one just wants to stream data and test the data link rates, e.g., get data from somehost.com on port 1234 and throw it away at 2MB a second:

nc somehost.com 1234 | sluice -d -r 2MB -i 8K

And finally, sluice has a "tee" mode, where data is copied to stdout and also to a specified output file using the -t option.
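
For example, the 1MB-a-second server example above could keep a local copy of everything it sends (a hypothetical invocation, assuming -t simply takes the name of the copy; see the project page for the exact usage):

cat largefile | sluice -r 1MB -i 1K -t copy.dat | nc -l 127.0.0.1 1234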

For more details, refer to the sluice project page.

Read more
facundo


Hace rato que debía este post, pero no fue hasta la semana pasada que pasé a buscar todos los audios por la biblioteca de ETER.

El curso estuvo bueno, dio un pantallazo general y superficial de mucho contenido que se profundiza en la carrera de Locución. La mayoría de mis compañeros estaban ahí porque no se decidían si encarar la carrera o no, o (como yo) porque les interesaba la temática en sí, con actividades en mayor o menor medida relacionadas al hablar en público, o la radio.

Estaba dividido en dos patas principales: la de Foniatría (dada por Ariel Aguirre), donde básicamente uno aprende a preparar y usar el instrumento (que no es sólo la boca, o cuerdas vocales, sino todo el cuerpo), y la de Locución (dada por Cristina Taboada), donde íbamos realizando distintas actividades en un estudio de verdad, aprendiendo no sólo a locutar sino también a interactuar con el operador, usar el estudio, y muchos detalles más.

De la parte de Foniatría no me queda más que algunas fotocopias a modo de apunte, pero de la parte de Locución tengo todos los audios que fuimos realizando, porque el operador los grababa:

  • Presentación: la primera vez que entramos en un estudio, con la idea de contar un poco de uno mismo y por qué estábamos ahí.
  • Cuento para niños: había que llevar un cuento infantil, para practicar lo que es una lectura muy colorida, con muchas inflexiones, apuntando a un público infantil.
  • Hablando raro: un juego donde había que leer un determinado texto con distintas voces (a mí me tocó 'gangoso', 'sensual' y 'neutro', pero habían muchos más como 'riéndose', o 'gritando', o 'enojada/o', etc.)
  • Que profesional: las mismas palabras con muchos significados distintos.
  • Texto difícil: un texto bastante complicado de leer... no llega a ser un trabalenguas, pero hay mil trampitas por ahí.
  • Leyenda Sioux: Un radioteatro, donde entre varios representábamos una historia bastante conocida; a mí me tocó ser el Relator.
  • Vecinos: Otro radioteatro, pero este sólo entre dos personas, representando una situación en el pasillo de un departamento.
  • Publicidades: Varios textos cortos que había que leer con el punto neutro pero alegre de las publicidades, hecho a dúo con la profesora.
  • Recitado: la consignar era llevar la letra de una canción con la idea de recitarla, y también una música (que no sea de esa canción) para poner de fondo.

Este último se lo dedico a mi hermana, porque es una letra que a nosotros siempre nos gustó mucho, ella hizo un grabado con esa temática (tengo una impresión colgada en la pared, y tuve una remera mucho tiempo), y también porque ella está cumpliendo un poco con la consigna de la canción, lo que me llena de alegría.

Read more
rvr

For the last few years I've been using an iPhone, and I recently wanted to transfer the contact list to my phone running Ubuntu Touch. These are the steps I followed.

What is required:

  • iPhone.
  • Mac OSX.
  • Ubuntu Desktop.
  • Ubuntu Touch device.

How to do it:

  1. Export Contacts from the iPhone.
  2. Transfer the contact file to Ubuntu Touch.
  3. Import the contact file.

Let's go step by step.

Exporting the Contact list

There are different ways to export the Contact list from the iPhone. One option is to install an application like My Contacts Backup and then transfer the file containing the contacts to the computer. This option is preferable because it avoids the need for a Mac desktop.

In my case, I already had my iPhone contacts synchronized with iCloud (an option that I don't remember having activated) and had a MacBook Air at hand, so these are the steps that I followed. First, the iCloud account must be set up to synchronize Contacts. Open Contacts in OSX, go to the application menu and press Accounts...

Then select iCloud and enter your account credentials.

Finally, enable Contacts synchronization.

After the sync is done, the contacts are available in the application. The next step is to export the whole list to a vCard file. To do that, select all contacts.

And export them.

Give a name to the file, and we are done in OSX.

The /Users/<user>/Documents/phone-contacts.vcf file is a text file in vCard format that contains all the details of our contacts. Use a USB key to copy that file to an Ubuntu desktop.

Re-formatting

Once the file is on the Ubuntu desktop, we need to tweak it a little bit, since the import tool in Ubuntu Touch doesn't like the vcf file as exported directly by OSX. The problem is that the contact entries in the file are not separated by new lines, so these must be added afterwards. Open a Terminal and type this command:

$ cat phone-contacts.vcf | sed -e "s/END:VCARD/END:VCARD\n/" > phone-contacts-touch.vcf

A new (and correct) file will be created, phone-contacts-touch.vcf.

Transfer to Ubuntu Touch

Now connect the Ubuntu Touch device to the Ubuntu desktop and install some packages:

$ sudo apt-get install phablet-tools android-tools-adb

On the Ubuntu Touch phone, go to System Settings > About this phone > Developer Mode and enable developer mode. Connect the phone to Ubuntu via USB and type this command to transfer the file to Ubuntu Touch:

$ adb push phone-contacts-touch.vcf /home/phablet/Documents

And log into Ubuntu Touch:

$ phablet-shell

Importing the contacts

And now, the final step: importing the contacts. This is done using the SyncEvolution tool that comes by default with Ubuntu Touch.

$ syncevolution --import ~/Documents/phone-contacts-touch.vcf backend=evolution-contacts database=Personal

And that's it. Now, all the contacts are imported and available in the Address Book app in Ubuntu Touch. Enjoy!

Wrap up

Of course, with some development effort this task could be made much easier. A path that I don't know whether it is possible is making direct calls to iCloud APIs from Ubuntu Touch, but I doubt it. I've taken a quick look at the libimobiledevice library. Many years ago there was a tool based on it, python-idevicesync, which provided a contact export feature; it's now out of sync with the library API. Another good step would be an Ubuntu Touch app that could read vCard files, avoiding the need to call SyncEvolution from the command line. I'm sure the community will be looking into these features soon.

Read more