Canonical Voices

Posts tagged with 'ubuntu'

Dustin Kirkland

It's hard for me to believe that I have sat on this draft blog post for almost 6 years.  But I'm stuck on a plane this evening, inspired by Elon Musk and Tesla's (cleverly titled) announcement, "All Our Patent Are Belong To You."  Musk writes:

Yesterday, there was a wall of Tesla patents in the lobby of our Palo Alto headquarters. That is no longer the case. They have been removed, in the spirit of the open source movement, for the advancement of electric vehicle technology.
When I get home, I'm going to take down a plaque that has proudly hung in my own home office for nearly 10 years now.  In 2004, I was named an IBM Master Inventor, recognizing sustained contributions to IBM's patent portfolio.

Musk continues:
When I started out with my first company, Zip2, I thought patents were a good thing and worked hard to obtain them. And maybe they were good long ago, but too often these days they serve merely to stifle progress, entrench the positions of giant corporations and enrich those in the legal profession, rather than the actual inventors. After Zip2, when I realized that receiving a patent really just meant that you bought a lottery ticket to a lawsuit, I avoided them whenever possible.
And I feel the exact same way!  When I was an impressionable newly hired engineer at IBM, I thought patents were wonderful expressions of my own creativity.  IBM rewarded me for the work, and recognized them as important contributions to my young career.  Remember, in 2003, IBM was defending the Linux world against evil SCO.  (Confession: I think I read Groklaw every single day.)

Yeah, I filed somewhere around 75 patents in about 4 years, 47 of which have been granted by the USPTO to date.

I'm actually really, really proud of a couple of them.  I was the lead inventor on a couple of early patents defining the invention you might know today as Swype (Android) or Shapewriter (iPhone) on your mobile devices.  In 2003, I called it QWERsive, as it was basically applying "cursive handwriting" to a "qwerty keyboard."  Along with one of my co-inventors, we presented a paper at the 27th UNICODE conference in Berlin in 2005, and IBM sold the patent to Lenovo a year later.  (To my knowledge, thankfully, that patent has never been enforced, as I use Swype every single day.)

QWERsive

But that enthusiasm evaporated very quickly between 2005 and 2007, as I reviewed thousands of invention disclosures by my IBM colleagues, and hundreds of software patents by IBM competitors in the industry.

I spent most of 2005 working onsite at Red Hat in Westford, MA, and came to appreciate how much more efficiently innovation happened in a totally open source world, free of invention disclosures, black out periods, gag orders, and software patents.  I met open source activists in the free software community, such as Jon maddog Hall, who explained the wake of destruction behind, and the impending doom ahead, in a world full of software patents.

Finally, in 2008, I joined an amazing little free software company called Canonical, which was far too busy releasing Ubuntu every 6 months on time, and building an amazing open source software ecosystem, to fart around with software patents.  To my delight, our founder, Mark Shuttleworth, continues to share the same enlightened view, as he states in this TechCrunch interview (2012):
“People have become confused,” Shuttleworth lamented, “and think that a patent is incentive to create at all.” No one invents just to get a patent, though — people invent in order to solve problems. According to him, patents should incentivize disclosure. Software is not something you can really keep secret, and as such Shuttleworth’s determination is that “society is not benefited by software patents at all.”

Software patents, he said, are a bad deal for society. The remedy is to shorten the duration of patents, and reduce the areas people are allowed to patent. “We’re entering a third world war of patents,” Shuttleworth said emphatically. “You can’t do anything without tripping over a patent!” One cannot possibly check all possible patents for your invention, and the patent arms race is not about creation at all.
And while I'm still really proud of some of my ideas today, I'm ever so ashamed that they're patented.

If I could do what Elon Musk did with Tesla's patent portfolio, you have my word, I absolutely would.  However, while my name is listed as the "inventor" on four dozen patents, all of them are "assigned" to IBM (or Lenovo).  That is to say, they're not mine to give, or open up.

What I can do is speak up, and formally apologize.  I'm sorry I filed software patents.  A lot of them.  I have no intention of ever doing so again.  The system desperately needs a complete overhaul.  Both the technology and business worlds are healthier, better, more innovative environments without software patents.

I do take some consolation in the fact that IBM seems to be "one of the good guys", inasmuch as our modern day IBM has not been as litigious as others, and hasn't, to my knowledge, used any of the patents for which I'm responsible in an offensive manner.

No longer hanging on my wall.  Tucked away in a box in the attic.
But there are certainly those that do.  Patent trolls.

Another former employer of mine, Gazzang, was acquired earlier this month (June 3rd) by Cloudera -- a super sharp, up-and-coming big data open source company with very deep pockets and tremendous market potential.  Want to guess what happened 3 days later?  A super shady patent infringement lawsuit was filed, of course!
Protegrity Corp v. Gazzang, Inc.
Complaint for Patent Infringement, Civil Action No. 3:14-cv-00825 (no judge yet assigned), filed on June 6, 2014 in the U.S. District Court for the District of Connecticut.
Patent in case: 7,305,707, “Method for intrusion detection in a database system” by Mattsson. Prosecuted by Neuner, George W.; Cohen, Steven M.; Edwards Angell Palmer & Dodge LLP. Includes 22 claims (2 indep.). Was application 11/510,185. Granted 12/4/2007.
Yuck.  And the reality is that this happens every single day, and in places where the stakes are much, much higher.  See: Apple v. Google, for instance.

Musk concludes his post:
Technology leadership is not defined by patents, which history has repeatedly shown to be small protection indeed against a determined competitor, but rather by the ability of a company to attract and motivate the world’s most talented engineers. We believe that applying the open source philosophy to our patents will strengthen rather than diminish Tesla’s position in this regard.
What a brave, bold, ballsy, responsible assertion!

I've never been more excited to see someone back up their own rhetoric against software patents, with such a substantial, palpable, tangible assertion.  Kudos, Elon.

Moreover, I've also never been more interested in buying a Tesla.   Coincidence?

Maybe it'll run an open source operating system and apps, too.  Do that, and I'm sold.

:-Dustin

Read more
bmichaelsen

Free Four

The memories of a man in his old age

are the deeds of a man in his prime

– Free Four, Obscured by Clouds, Pink Floyd

I just donated to:

  • the Wikimedia Foundation
  • the OpenBSD project

and became:

Being involved in a project that is heavily driven by donations, I keep reminding myself of the importance of putting my money where my mouth is.

Some of these donations were triggered by recent events and initiatives in these projects. GNOME’s Outreach Program for Women, for example. Or OpenBSD’s bold initiative in starting LibreSSL, which is doing what needed to be done and vitalizing an overlooked area of open source development. Watching them explain the status quo and how they are attacking it reminds me of LibreOffice — beyond the name. Plus, I don’t want to be compared with a My Little Pony character again.

goals of LibreSSL — they remind me of something

Others are already working examples of the long tail, crowd funding and the meshed society (Wikipedia), or are trailblazing to become one (Krautreporter), beyond the world of software. The latter might also have been influenced by one of the last wishes of a man who unexpectedly died way too early. May he rest in peace.


Read more
jdstrand

Last time I discussed AppArmor, I talked about new features in Ubuntu 13.10 and a bit about ApplicationConfinement for Ubuntu Touch. With the release of Ubuntu 14.04 LTS, several improvements were made:

  • Mediation of signals
  • Mediation of ptrace
  • Various policy updates for 14.04, including new tunables, better support for XDG user directories, and Unity7 abstractions
  • Parser policy compilation performance improvements
  • A Google Summer of Code (SUSE-sponsored) Python rewrite of the userspace tools

Signal and ptrace mediation

Prior to Ubuntu 14.04 LTS, a confined process could send signals to other processes (subject to DAC) and ptrace other processes (subject to DAC and YAMA). AppArmor on 14.04 LTS adds mediation of both signals and ptrace which brings important security improvements for all AppArmor confined applications, such as those in the Ubuntu AppStore and qemu/kvm machines as managed by libvirt and OpenStack.

When developing policy for signal and ptrace rules, it is important to remember that AppArmor performs a cross check, verifying that:

  • the process sending the signal/performing the ptrace is allowed to send the signal to/ptrace the target process
  • the target process receiving the signal/being ptraced is allowed to receive the signal from/be ptraced by the sender process

Signal(7) permissions use the ‘signal’ rule with the ‘receive/send’ permissions governing signals. Ptrace(2) permissions use the ‘ptrace’ rule with the ‘trace/tracedby’ permissions governing ptrace(2) and the ‘read/readby’ permissions governing certain proc(5) filesystem accesses, kcmp(2), futexes (get_robust_list(2)) and perf trace events.

Consider the following denial:

Jun 6 21:39:09 localhost kernel: [221158.831933] type=1400 audit(1402083549.185:782): apparmor="DENIED" operation="ptrace" profile="foo" pid=29142 comm="cat" requested_mask="read" denied_mask="read" peer="unconfined"

This demonstrates that the ‘cat’ binary running under the ‘foo’ profile was unable to read the contents of a /proc entry (in my test, /proc/11300/environ). To allow this process to read /proc entries for unconfined processes, the following rule can be used:

ptrace (read) peer=unconfined,

If the receiving process was confined, the log entry would say ‘peer="<profile name>"’ and you would adjust the ‘peer=unconfined’ in the rule to match that in the log denial. In this case, because unconfined processes implicitly can be readby all other processes, we don’t need to specify the cross check rule. If the target process was confined, the profile for the target process would need a rule like this:

ptrace (readby) peer=foo,

Likewise for signal rules, consider this denial:

Jun 6 21:53:15 localhost kernel: [222005.216619] type=1400 audit(1402084395.937:897): apparmor="DENIED" operation="signal" profile="foo" pid=29069 comm="bash" requested_mask="send" denied_mask="send" signal=term peer="unconfined"

This shows that ‘bash’ running under the ‘foo’ profile tried to send the ‘term’ signal to an unconfined process (in my test, I used ‘kill 11300’) and was blocked. Signal rules use ‘receive’ and ‘send’ to determine access, so we can add a rule like so to allow sending of the signal:

signal (send) set=("term") peer=unconfined,

Like with ptrace, a cross-check is performed with signal rules but implicit rules allow unconfined processes to send and receive signals. If pid 11300 were confined, you would adjust the ‘peer=’ in the rule of the foo profile to match the denial in the log, and then adjust the target profile to have something like:

signal (receive) set=("term") peer=foo,

Signal and ptrace rules are very flexible and the AppArmor base abstraction in Ubuntu 14.04 LTS has several rules to help make profiling and transitioning to the new mediation easier:

# Allow other processes to read our /proc entries, futexes, perf tracing and
# kcmp for now
ptrace (readby),
 
# Allow other processes to trace us by default (they will need
# 'trace' in the first place). Administrators can override
# with:
# deny ptrace (tracedby) ...
ptrace (tracedby),
 
# Allow unconfined processes to send us signals by default
signal (receive) peer=unconfined,
 
# Allow us to signal ourselves
signal peer=@{profile_name},
 
# Checking for PID existence is quite common so add it by default for now
signal (receive, send) set=("exists"),

Note the above uses the new ‘@{profile_name}’ AppArmor variable, which is particularly handy with ptrace and signal rules. See man 5 apparmor.d for more details and examples.
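Putting the pieces together, here is a minimal sketch of the two sides of the cross check (the profile names and attachment paths are hypothetical):

profile foo /usr/bin/foo {
  # sender side: foo may send SIGTERM to, and ptrace-read the /proc
  # entries of, processes confined by the 'bar' profile
  signal (send) set=("term") peer=bar,
  ptrace (read) peer=bar,
}

profile bar /usr/bin/bar {
  # target side: the matching half of the cross check, allowing bar
  # to receive SIGTERM from foo and to be ptrace-read by foo
  signal (receive) set=("term") peer=foo,
  ptrace (readby) peer=foo,
}

Both halves are required: if either rule is missing, the kernel denies the operation and logs a denial like the ones shown above.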

14.10

Work still remains and some of the things we’d like to do for 14.10 include:

  • Finishing mediation for non-networking forms of IPC (eg, abstract sockets). This will be done in time for the phone release.
  • Have services integrate with AppArmor and the upcoming trust-store to become trusted helpers (also for phone release)
  • Continue work on networking IPC (for 15.04)
  • Continue to work with the upstream kernel on kdbus
  • Continue work on LXC stacking; we hope to have stacked profiles within the current namespace for 14.10. Full support for stacked profiles, where different host and container policy can apply to the same binary at the same time, should be ready by 15.04
  • Various fixes to the python userspace tools for remaining bugs. These will also be backported to 14.04 LTS

Until next time, enjoy!


Filed under: canonical, security, ubuntu, ubuntu-server

Read more
David Murphy (schwuk)

This is really inspiring to me, on several levels: as an Ubuntu member, as a Canonical employee, and as a school governor.

Not only are they deploying Ubuntu and other open-source software to their students, they are encouraging those students to tinker with their laptops, and – better yet – some of those same students are directly involved in the development and distribution of that software, and in providing support for their peers. All of those students will take incredibly valuable experience with them into their future careers.

Well done.

Read more
David Planella

As part of the Ubuntu App Developer Week, I just ran a live on-air session on how to internationalize your Ubuntu apps. Some of the participants on the live chat asked me if I could share the slides somewhere online.

So here they are for your viewing pleasure :) If you’ve got any questions on i18n or Ubuntu app development in general, feel free to ask in the comments or ping me (dpm) on IRC.

The video

The slides

Enjoy!

The post Internationalizing your apps at the Ubuntu App Developer Week appeared first on David Planella.

Read more
David Planella

A sample of the wider Ubuntu Community team, with Canonicalers and volunteer core app developers

A sample of the wider Ubuntu Community team, with Canonicalers and volunteer core app developers

After the recent news of Jono stepping down as the Ubuntu Community Manager to seek new challenges at XPRIZE, a new era in Ubuntu begins. Jono’s leadership, passion and drive to continually push the boundaries have been contagious over the years, and have been the catalyst for growing the unique community of individuals that defines Ubuntu today.

Jono is now joining the ranks of non-Canonical Ubuntu members, and while this will change the angle of participation, I’m certain that it won’t change his energy and dedication one bit. But most importantly, it’s a testament to his work that his former team will continue to thrive and take up the torch in pushing those boundaries.

For us, it will be business as usual in the sense of implementing our roadmap, continuing to grow a strong and open community, being innovative in how we do it, and coordinating the logistics around our plans. So not much will be different in that regard, but obviously some organizational bits will change.

As part of the transition, the Ubuntu Community Team at Canonical in full, that is, Michael Hall, Daniel Holbach, Alan Pope, Nicholas Skaggs and myself, will now be hosting the weekly Ubuntu Q&A, starting today at 18:00 UTC on Ubuntu On Air (click here for the time at your location).

The Ubuntu Community Team Q&A

Openness, in being both a transparent and a welcoming community, is one of the core values of Ubuntu, and we believe the channels should always be open for a healthy information flow and to help contributors get involved.

As such, the Ubuntu Community Team Q&A will continue to provide a weekly, 1-hour-long session open for participation to anyone who wants to ask their questions about Ubuntu. In fact, as in former editions, you can ask the Community Team just about anything about Free Software, Technology, or whatever you come up with. As before, the only questions we won’t answer are those related to technical support, where you’ll be much better served using Ask Ubuntu, the Ubuntu forums or IRC.

Join the Ubuntu Community Team Q&A at 18:00 UTC today and ask your questions >

The Ubuntu Online Summit is coming soon!

Also, following the thread of events and participation, the new Ubuntu Online Summit (UOS) is coming up very soon, and it’s an excellent opportunity to learn about getting involved in Ubuntu, organizing or presenting the plans of the different Ubuntu teams for the next months.

UOS will be held on June 10th – 12th and it will be a combination of the former Ubuntu Developer Summit and the more user-facing events we’ve been organizing in the past. This opens the door to a wider audience that can follow a richer mix of developer and user or contributor content.

If you want to learn about the details, check out Michael’s UOS post on how it’s going to work. If you want to contribute and make a difference in Ubuntu, do register a session too!

Looking forward to seeing you soon!

The post A new era for the Ubuntu community team, or business as usual appeared first on David Planella.

Read more
mark

This is a series of posts on reasons to choose Ubuntu for your public or private cloud work & play.

We run an extensive program to identify issues and features that make a difference to cloud users. One result of that program is that we pioneered dynamic image customisation and wrote cloud-init. I’ll tell the story of cloud-init as an illustration of the focus the Ubuntu team has on making your devops experience fantastic on any given cloud.

 

Ever struggled to find the “right” image to use on your favourite cloud? Ever wondered how you can tell if an image is safe to use, what keyloggers or other nasties might be installed? We set out to solve that problem a few years ago and the resulting code, cloud-init, is one of the more visible pieces Canonical designed and built, and very widely adopted.

Traditionally, people used image snapshots to build a portfolio of useful base images. You’d start with a bare OS, add some software and configuration, then snapshot the filesystem. You could use those snapshots to power up fresh images any time you need more machines “like this one”. And that process works pretty amazingly well. There are hundreds of thousands, perhaps millions, of such image snapshots scattered around the clouds today. It’s fantastic. Images for every possible occasion! It’s a disaster. Images with every possible type of problem.

The core issue is that an image is a giant binary blob that is virtually impossible to audit. Since it’s a snapshot of an image that was running, and to which anything might have been done, you will need to look in every nook and cranny to see if there is a potential problem. Can you afford to verify that every binary is unmodified? That every configuration file and every startup script is safe? No, you can’t. And for that reason, that whole catalogue of potential is a catalogue of potential risk. If you wanted to gather useful data sneakily, all you’d have to do is put up an image that advertises itself as being good for a particular purpose and convince people to run it.

There are other issues, even if you create the images yourself. Each image slowly gets out of date with regard to security updates. When you fire it up, you need to apply all the updates since the image was created, if you want a secure machine. Eventually, you’ll want to re-snapshot for a more up-to-date image. That requires administration overhead and coordination, so most people don’t do it.

That’s why we created cloud-init. When your virtual machine boots, cloud-init is run very early. It looks out for some information you send to the cloud along with the instruction to start a new machine, and it customises your machine at boot time. When you combine cloud-init with the regular fresh Ubuntu images we publish (roughly every two weeks for regular updates, and whenever a security update is published), you have a very clean and elegant way to get fresh images that do whatever you want. You design your image as a script which customises the vanilla, base image. And then you use cloud-init to run that script against a pristine, known-good standard image of Ubuntu. Et voila! You now have purpose-designed images of your own on demand, always built on a fresh, secure, trusted base image.
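To make this concrete, here is a minimal sketch of cloud-init user-data in the #cloud-config format (the package and command are arbitrary examples) that you could send along with the instruction to start a new machine:

#cloud-config
# bring the pristine base image fully up to date at first boot
package_update: true
package_upgrade: true
# then layer your own purpose on top of it
packages:
  - nginx
runcmd:
  - echo 'customised at boot by cloud-init' > /var/www/html/index.html

Because this recipe is re-applied to a fresh, known-good image every time, it is auditable in a way a binary snapshot never can be.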

Auditing your cloud infrastructure is now straightforward, because you have the DNA of that image in your script. This is devops thinking, turning repetitive manual processes (hacking and snapshotting) into code that can be shared and audited and improved. Your infrastructure DNA should live in a version control system that requires signed commits, so you know everything that has been done to get you where you are today. And all of that is enabled by cloud-init. And if you want to go one level deeper, check out Juju, which provides you with off-the-shelf scripts to customise and optimise that base image for hundreds of common workloads.
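As a rough illustration of that next level, deploying and wiring up a whole workload with Juju looks something like this (the charm names are real, the workload choice is arbitrary):

juju bootstrap
juju deploy mysql
juju deploy wordpress
juju add-relation wordpress mysql
juju expose wordpress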

Read more
Michael Hall

Last year the main Ubuntu download page was changed to include a form for users to make a donation to one or more parts of Ubuntu, including to the community itself. Those donations made for “Community projects” were made available to members of our community who knew of ways to use them that would benefit the Ubuntu project.

Every dollar given out is an investment in Ubuntu and the community that built it. This includes sponsoring community events, sending community representatives to those events with booth supplies and giveaway items, purchasing hardware to improve development and testing, and more.

But these expenses don’t cover the time, energy, and talent that went along with them, without which the money itself would have been wasted.  Those contributions, made by the recipients of these funds, can’t be adequately documented in a financial report, so thank you to everybody who received funding for their significant and sustained contributions to Ubuntu.

As part of our commitment to openness and transparency we said that we would publish a report highlighting both the amount of donations made to this category, and how and where that money was being used. Linked below is the first of those reports.

View the Report

Read more
David Planella

Ubuntu emulator guide

Following the initial announcement, the Ubuntu emulator is going to become a primary engineering platform for development. Quoting Alexander Sack, when ready, the goal is to

[…] start using the emulator for everything you usually would do on the phone. We really want to make the emulator a class A engineering platform for everyone

While the final emulator is still a work in progress, we’re continually seeing improvements in finishing all the pieces to make it a first-class citizen for development, both for the platform itself and for app developers. However, as it stands today, the emulator is already functional, so I’ve decided to prepare a quickstart guide to highlight the great work the Foundations and Phonedations teams (along with many other contributors) are producing to make it possible.

While you should consider this guide a preview, you can already use it to start getting familiar with the emulator for testing, platform development and writing apps.

Requirements

To install and run the Ubuntu emulator, you will need:

  • Ubuntu 14.04 or later (see installation notes for older versions)
  • 512MB of RAM dedicated to the emulator
  • 4GB of disk space
  • OpenGL-capable desktop drivers (most graphics drivers/cards are)

Installing the emulator

If you are using Ubuntu 14.04, installation is as easy as opening a terminal with Ctrl+Alt+T and running these commands, pressing Enter after each:

sudo add-apt-repository ppa:ubuntu-sdk-team/ppa && sudo apt-get update
sudo apt-get install ubuntu-emulator

Alternatively, if you are running an older stable release such as Ubuntu 12.04, you can install the emulator by manually downloading its packages first:

Show me how

  1. Create a folder for the downloaded packages in your home directory
  2. Go to the goget-ubuntu-touch packages page in Launchpad
  3. Scroll down to Trusty Tahr and click on the arrow to the left to expand it
  4. Scroll further to the bottom of the page and click on the ubuntu-emulator package corresponding to your architecture (i386 or amd64) to download it into the folder you created
  5. Now go to the Android packages page in Launchpad
  6. Scroll down to Trusty Tahr and click on the arrow to the left to expand it
  7. Scroll further to the bottom of the page and click on the ubuntu-emulator-runtime package to download it into the same folder
  8. Open a terminal with Ctrl+Alt+T
  9. Change to the folder where you downloaded the packages (with cd)
  10. Then install the packages by running: sudo dpkg -i *.deb
  11. Once the installation is successful you can close the terminal and remove the folder and its contents

Installation notes

  • Downloaded images are cached at ~/.cache/ubuntuimage — using the standard XDG_CACHE_HOME location.
  • Instances are stored at ~/.local/share/ubuntu-emulator — using the standard XDG_DATA_HOME location.
  • While an image upgrade feature is in the works, for now you can simply create an instance of a newer image over the previous one.

Running the emulator

Ubuntu emulator guide

The ubuntu-emulator tool makes it really easy to manage instances and run the emulator. Typically, you’ll open a terminal and run these commands the first time you create an instance (where myinstance is the name you’ve chosen for it):

sudo ubuntu-emulator create myinstance --arch=i386
ubuntu-emulator run myinstance

You can create any instances you need for different purposes. And once the instance has been created, you’ll be generally using the ubuntu-emulator run myinstance command to start an emulator session based on that instance.

Notice how in the command above the --arch parameter was specified to override the default architecture (armhf). Using the i386 arch will make the emulator run at a (much faster) native speed.

Other parameters you might want to experiment with are also: --scale=0.7 and --memory=720. In these examples, we’re scaling down the UI to be 70% of the original size (useful for smaller screens) and specifying a maximum of 720MB for the emulator to use (on systems with memory to spare).

There are 3 main elements you’ll be interacting with when running the emulator:

  • The phone UI – this is the visual part of the emulator, where you can interact with the UI in the same way you’d do with a phone. You can use your mouse to simulate taps and slides. Bonus points if you can recognize the phone model the UI is running on ;)
  • The remote session on the terminal – upon starting the emulator, a terminal will also be launched alongside. Use the phablet username and the same password to log in to an interactive ADB session on the emulator. You can also launch other terminal sessions using other communication protocols –see the link at the end of this guide for more details.
  • The ubuntu-emulator tool – with this CLI tool, you can manage the lifetime and runtime of Ubuntu images. Common subcommands of ubuntu-emulator include create (to create new instances), destroy (to destroy existing instances), run (as we’ve already seen, to run instances), snapshot (to create and restore snapshots of a given point in time) and more. Use ubuntu-emulator --help to learn about these commands and ubuntu-emulator command --help to learn more about a particular command and its options.
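For example, a typical session with the ubuntu-emulator tool might look like this (the instance name is arbitrary, and --scale/--memory are the run options mentioned above):

sudo ubuntu-emulator create testinstance --arch=i386
ubuntu-emulator run testinstance --scale=0.7 --memory=720
ubuntu-emulator destroy testinstance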

Runtime notes

  • Make sure you’ve got enough space to install the emulator and create new instances, otherwise the operation will fail (or take a long time) without warning.
  • At this time, the emulator takes a while to load for the first time. During that time, you’ll see a black screen inside the phone skin. Just wait a bit until it’s finished loading and the welcome screen appears.
  • By default the latest built image from the devel-proposed channel is used. This can be changed during creation with the --channel and --revision options.
  • If your host has a network connection, the emulator will use that transparently, even though the network indicator might show otherwise.
  • To talk to the emulator, you can use standard adb. The emulator should appear under the list of the adb devices command.
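For instance, once an emulator instance is running, the standard adb tooling on your desktop should see it:

adb devices   # the running emulator instance should be listed here
adb shell     # open a shell on the emulator (user: phablet)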

Learn more and contribute

I hope this guide has whetted your appetite to start testing the emulator! You can also contribute to making the emulator a first-class target for Ubuntu development. The easiest way is to install it and give it a go. If something is not working, you can then file a bug.

If you want to fix a bug yourself or contribute to code, the best thing is to ask the developers about how to get started by subscribing to the Ubuntu phone mailing list.

If you want to learn more about the emulator, including how to create instance snapshots and other cool features, head out to the Ubuntu Emulator wiki page.

And next… support for the tablet form factor and SDK integration. Can’t wait for those features to land!

The post A quickstart guide to the Ubuntu emulator appeared first on David Planella.

Read more
Michael Hall

A couple of months ago Jono announced the dates for the Ubuntu Online Summit, June 10th – 12th, and those dates are almost upon us now.  The schedule is open, the track leads are on board, all we need now are sessions.  And that’s where you come in.

Ubuntu Online Summit is a change for us: we’re trying to mix the previous online UDS events with our Open Week, Developer Week and User Days events, to try and bring people from every part of our community together to celebrate, educate, and improve Ubuntu. So in addition to the usual planning sessions we had at UDS, we’re also looking for presentations from our various community teams on the work they do, walk-throughs for new users learning how to use Ubuntu, as well as instructional sessions to help new distro developers, app developers, and cloud devops get the most out of it as a platform.

What we need from you are sessions.  It’s open to anybody, on any topic, any way you want to do it.  The only requirement is that you can start and run a Google+ OnAir Hangout, since those are what provide the live video streaming and recording for the event.  There are two ways you can propose a session: the first is to register a Blueprint in Launchpad, which is good for planning sessions that will result in work items; the second is to propose a session directly in Summit, which is good for any kind of session.  Instructions for how to do both are available on the UDS Website.

There will be Track Leads available to help you get your session on the schedule, and provide some technical support if you have trouble getting your session’s hangout setup. When you propose your session (or create your Blueprint), try to pick the most appropriate track for it, that will help it get approved and scheduled faster.

Ubuntu Development

Many of the development-oriented tracks from UDS have been rolled into the Ubuntu Development track. So anything that would previously have been in Client, Core/Foundations or Cloud and Server will be in this one track now. The track leads come from all parts of Ubuntu development, so whatever your session’s topic, there will be a lead there who is familiar with it.

Track Leads:

  • Łukasz Zemczak
  • Steve Langasek
  • Leann Ogasawara
  • Antonio Rosales
  • Marc Deslauriers

Application Development

Introduced a few cycles back, the Application Development track will continue to have a focus on improving the Ubuntu SDK, tools and documentation we provide for app developers.  We also want to introduce sessions focused on teaching app development using the SDK, the various platform services available, as well as taking a deeper dive into specific parts of the Ubuntu UI Toolkit.

Track Leads:

  • Michael Hall
  • David Planella
  • Alan Pope
  • Zsombor Egri
  • Nekhelesh Ramananthan

Cloud DevOps

This is the counterpart of the Application Development track for those with an interest in the cloud.  This track will have a dual focus on planning improvements to the DevOps tools like Juju, as well as bringing DevOps up to speed with how to use them in their own cloud deployments.  Learn how to write charms, create bundles, and manage everything in a variety of public and private clouds.

Track Leads:

  • Jorge Castro
  • Marco Ceppi
  • Patricia Gaughen
  • Jose Antonio Rey

Community

The community track has been a staple of UDS for as long as I can remember, and it’s still here in the Ubuntu Online Summit.  However, just like the other tracks, we’re looking beyond just planning ways to improve the community structure and processes.  This time we also want to have sessions showing users how they can get involved in the Ubuntu community, what teams are available, and what tools they can use in the process.

Track Leads:

  • Daniel Holbach
  • Jose Antonio Rey
  • Laura Czajkowski
  • Svetlana Belkin
  • Pablo Rubianes

Users

This is a new track and one I’m very excited about. We are all users of Ubuntu, and whether we’ve been using it for a month or a decade, there are still things we can all learn about it. The focus of the Users track is to highlight ways to get the most out of Ubuntu, on your laptop, your phone or your server.  From detailed how-to sessions, to tips and tricks, and more, this track can provide something for everybody, regardless of skill level.

Track Leads:

  • Elizabeth Krumbach Joseph
  • Nicholas Skaggs
  • Valorie Zimmerman

So once again, it’s time to get those sessions in.  Visit this page to learn how, then start thinking of what you want to talk about during those three days.  Help the track leads out by finding more people to propose more sessions, and let’s get that schedule filled out. I look forward to seeing you all at our first ever Ubuntu Online Summit.

Read more
Michael Hall

I’ve just finished the last day of a week-long sprint for Ubuntu application development. There were many people here: designers, SDK developers, QA folks and, what excited me the most, several of the Core Apps developers from our community!

I haven’t been in attendance at many conferences over the past couple of years, and without an in-person UDS I haven’t had a chance to meet up and hang out with anybody outside of my own local community. So this was a very nice treat for me personally to spend the week with such awesome and inspiring contributors.

It wasn’t a vacation though, sprints are lots of work, more work than UDS.  All of us were jumping back and forth between high information density discussions on how to implement things, and then diving into some long heads-down work to get as much implemented as we could. It was intense, and now we’re all quite tired, but we all worked together well.

I was particularly pleased to see the community guys jumping right in and thriving in what could have very easily been an overwhelming event. Not only did they all accomplish a lot of work, fix a lot of bugs, and implement some new features, but they also gave invaluable feedback to the developers of the toolkit and tools. They never cease to amaze me with their talent and commitment.

It was a little bitter-sweet though, as this was also the last sprint with Jono at the head of the community team.  As most of you know, Jono is leaving Canonical to join the XPrize foundation.  It is an exciting opportunity to be sure, but his experience and his insights will be sorely missed by the rest of us. More importantly though he is a friend to so many of us, and while we are sad to see him leave, we wish him all the best and can’t wait to hear about the things he will be doing in the future.

Read more
bmichaelsen

Train Stops

And the sons of pullman porters and the sons of engineers
Ride their father’s magic carpets made of steel
Mothers with their babes asleep are rockin’ to the gentle beat
And the rhythm of the rails is all they feel

– The City of New Orleans, Willie Nelson interpreting Steve Goodman

So, LibreOffice does its releases on a train release schedule, and since we recently modified the schedule a bit (by putting out the alpha1 release earlier), I took the opportunity to take a closer look and explain a bit of what we are doing. With this, every 6 months of LibreOffice development currently looks roughly like this:
week after x.y.0 | development release | rc (fresh) | rc (stable)   | final (fresh) | final (stable)
 0               |                     |            |               | x.y.0         |
 1               |                     | x.y.1~rc1  | x.(y-1).4~rc1 |               |
 3               |                     | x.y.1~rc2  | x.(y-1).4~rc2 |               |
 4               |                     |            |               | x.y.1         | x.(y-1).4
 6               |                     | x.y.2~rc1  |               |               |
 8               |                     | x.y.2~rc2  | x.(y-1).5~rc1 |               |
 9               |                     |            |               | x.y.2         |
10               |                     |            | x.(y-1).5~rc2 |               |
11               |                     | x.y.3~rc1  |               |               | x.(y-1).5
13               | x.(y+1)~alpha1      | x.y.3~rc2  |               |               |
14               |                     |            |               | x.y.3         |
18               | x.(y+1)~beta1       |            |               |               |
20               | x.(y+1)~beta2       |            | x.(y-1).6~rc1 |               |
22               | x.(y+1)~rc1         |            | x.(y-1).6~rc2 |               |
23               |                     |            |               |               | x.(y-1).6
24               | x.(y+1)~rc2         |            |               |               |
25               | x.(y+1)~rc3         |            |               |               |
(weeks without any release activity are omitted)
The last two columns are the most visible to visitors of the LibreOffice website. Those are the versions found on the LibreOffice Fresh and LibreOffice Stable download pages. We are roughly at week 18 after the 4.2.0 release now, and the versions available are 4.2.4 fresh and 4.1.6 stable. A careful reader will note that according to that schedule we should be at 4.2.3 and 4.1.5 — that is true, but the 4.2 series had an extra 4.2.1 intermediate release to adjust the schedule of 4.2 in the direction of the current plan. This is not expected for future releases (also note that there is always some flexibility in the plan to allow for holidays etc.)

If you count all the prereleases, release candidates and releases, you will find that we do 25 of those in 26 weeks. Besides the fact that this is a lot of work for release engineers, one might wonder if anyone can keep up with that, and if so — how? The answer to that depends on how you are using LibreOffice.

self deployment on LibreOffice fresh

If you are a user or a small business installing LibreOffice yourself, you will probably run LibreOffice fresh, and the table above simplifies for you as follows:

week after x.y.0 | development release | release candidates | finalized releases
 0               |                     |                    | x.y.0
 1               |                     | x.y.1~rc1          |
 4               |                     |                    | x.y.1
 6               |                     | x.y.2~rc1          |
 9               |                     |                    | x.y.2
11               |                     | x.y.3~rc1          |
14               |                     |                    | x.y.3
18               | x.(y+1)~beta1       |                    |
20               | x.(y+1)~beta2       |                    |
22               | x.(y+1)~rc1         |                    |
24               | x.(y+1)~rc2         |                    |
25               | x.(y+1)~rc3         |                    |

The last column shows the releases you are running. If you are a member of the LibreOffice community, it would be very helpful if you also spent some time during this 6-month period on three actions:

  • running at least one of the release candidates in the table (available for download here) before the final is released.
  • running at least one of the beta releases in the table. Note that there will be a bug hunting session on the 4.3.0 beta release this week, which will help you get started.
  • running a nightly build once anywhere in the weeks 1-18. Note that if you are getting excited about seeing the latest and greatest builds while they are still steaming, there are tools that can help you with this on Linux and Windows.

If you do each of these three things once in the timeframe of six months and report any issues you find, you are already helping LibreOffice a lot — and you are making sure that the finalized releases of the fresh series not only contain all the latest features, but are also free of severe regressions.

bigger deployments on LibreOffice stable

If you are not installing LibreOffice yourself, but instead have a major deployment administrated centrally, things are a bit different. You might be more conservative and interested in the releases from LibreOffice stable. And you probably have professional support from a certified developer or a company employing certified developers.

week after x.y.0 | development release | release candidates | finalized releases
 1               |                     | x.(y-1).4~rc1      |
 4               |                     |                    | x.(y-1).4
 8               |                     | x.(y-1).5~rc1      |
11               |                     |                    | x.(y-1).5
13               | x.(y+1)~alpha1      |                    |
18               | x.(y+1)~beta1       |                    |
20               |                     | x.(y-1).6~rc1      |
23               |                     |                    | x.(y-1).6

If you intend to deploy one series of LibreOffice (e.g. 4.3), there are two things that are highly recommended to be done:

  • make the alpha or beta releases available early to interested volunteers in your deployment. They might find bugs or regressions that are specific to your use of the software.
  • make the release candidates of versions that you intend to deploy available early to your users.

Of these two actions, the first is by far the more important: it identifies issues early in the life cycle and gives both your support provider and the LibreOffice developer community at large time to resolve the issue. In fact, I would argue that if you have a major deployment, the only excuse for not making prereleases available is that you are making nightly builds available instead.

Ubuntu

So, Ubuntu qualifies as a “bigger deployment” and I have to take care of LibreOffice on it. Also people want to be able to run the latest and greatest LibreOffice releases from the LibreOffice fresh series. Do I follow my recommendations here? Yes, mostly I do:

  • both LibreOffice fresh and LibreOffice stable series are available from PPAs for Ubuntu and are updated regularly and quickly when an rc2 is available.
  • prereleases are made available as bibisect repositories rather quickly (built on Ubuntu 12.04 LTS). In addition, fully packaged versions of LibreOffice are built in the prereleases PPA, starting as early as beta1.

So, you are invited to run or test builds from these PPAs — or download the bibisect repositories — to keep LibreOffice releases coming in the steady and stable fashion they do. Finally, there is a bug hunting session for LibreOffice this week and as said above, no matter if you are running a huge deployment or installing on your own, you are helping LibreOffice — and yourself, as a user of LibreOffice — a lot by testing the prereleases:


Read more
Prakash Advani

Ubuntu 32-Bit or 64-Bit ?

I often get asked this question: 32 or 64-bit? In the past I have told people to stick to 32-bit because of compatibility issues, but things have changed now. Quoting from Phoronix, which recently did a benchmark test between 32-bit and 64-bit:

Going back years we have run 32-bit vs. 64-bit Linux benchmarks. While the results seldom change, we keep running them as the question of choosing between a 32-bit and 64-bit Linux distribution image is still a popular question… These tests drive in a surprising amount of traffic and I continue to be flabbergasted by the number of people still asking this question when nearly all modern x86 Intel/AMD hardware fully supports x86_64 and it generally means much better performance. Usually the only caveat in not using a 64-bit Linux image is if running a system with less than 2GB of RAM.

In the past there were issues surrounding the Java and Flash support for 64-bit Linux along with an assortment of other possible problems (e.g. with Wine), but all those major issues are a matter of the past. 64-bit Linux is in great shape and as long as you have a decent amount of RAM you really should be running 64-bit Linux.

If you are still in doubt, read the full article for the benchmark results.

Read more
Michael Hall

Ubuntu has always been about breaking new ground. We broke the ground with the desktop back in 2004, we have broken the ground with cloud orchestration across multiple clouds and providers, and we are building a powerful, innovative mobile and desktop platform that is breaking ground with convergence.

The hardest part about breaking new ground and innovating is not having the vision and creating the technology, it is getting people on board to be part of it.

We knew this was going to be a challenge when we first took the wraps off the Ubuntu app developer platform: we had a brand new platform that was still being developed, and when we started, many of the key pieces were not there, such as a solid developer portal, documentation, API references, training and more. Today the story is very different, with a compelling, end-to-end developer story for building powerful convergent apps.

We believed and always have believed in the power of this platform, and every single one of those people who also believed in what we are doing and wrote apps have shared the same spirit of pioneering a new platform that we have.

As such, we want to acknowledge those people.

And with this, I present Ubuntu Pioneers.

The idea is simple: we want to celebrate the first 200 app developers who get their apps into Ubuntu. We are doing this in two ways.

Firstly, we have created http://developer.ubuntu.com/pioneers, which displays all of these developers and lists the apps that they have created. This will provide a permanent record of those who were there right at the beginning.

Secondly, we have designed a custom, limited-edition Ubuntu Pioneers t-shirt that we want to send to all of our pioneers. For those of you who are listed on this page, please ensure that your email address is correct in MyApps as we will be getting in touch soon.

Thank you so much to every single person listed on that page. You are an inspiration to me, my team, and the wider Ubuntu project.

If you have that pioneering spirit and wished you were up there, fear not! We still have some space before we hit 200 developers, so go here to get started building an app.

Original by Jono Bacon

Read more
Dustin Kirkland


It was September of 2009.  I answered a couple of gimme trivia questions and dropped my business card into a hat at a Linux conference in Portland, Oregon.  A few hours later, I received an email...I had just "won" a developer edition HTC Dream -- the Android G1.  I was quite anxious to have a hardware platform where I could experiment with Android.  I had, of course, already downloaded the SDK, compiled Android from scratch, and fiddled with it in an emulator.  But that experience fell far short of Android running on real hardware.  Until the G1.  The G1 was the first device to truly showcase the power and potential of the Android operating system.

And with that context, we are delighted to introduce the Orange Box!


The Orange Box


Conceived by Canonical and custom built by TranquilPC, the Orange Box is a 10-node cluster computer that fits in a suitcase.

Ubuntu, MAAS, Juju, Landscape, OpenStack, Hadoop, CloudFoundry, and more!

The Orange Box provides a spectacular development platform, showcasing in mere minutes the power of hardware provisioning and service orchestration with Ubuntu, MAAS, Juju, and Landscape.  OpenStack, Hadoop, CloudFoundry, and hundreds of other workloads deploy in minutes, to real hardware -- not just instances in AWS!  It also makes one hell of a Steam server -- there's a charm for that ;-)


OpenStack, deployed by Juju, takes merely 6 minutes on an Orange Box

Most developers here certainly recognize the term "SDK", or "Software Development Kit"...  You can think of the Orange Box as an "HDK", or "Hardware Development Kit".  Pair an Orange Box with MAAS and Juju, and you have yourself a compact cloud.  Or a portable big data number cruncher.  Or a lightweight cluster computer.


The underside of an Orange Box, with its cover off


Want to get your hands on one?

Drop us a line, and we'd be delighted to hand-deliver an Orange Box to your office, and conduct 2 full days of technical training, covering MAAS, Juju, Landscape, and OpenStack.  The box is yours for 2 weeks, as you experiment with the industry leading Ubuntu ecosystem of cloud technologies at your own pace and with your own workloads.  We'll show back up, a couple of weeks later, to review what you learned and discuss scaling these tools up, into your own data center, on your own enterprise hardware.  (And if you want your very own Orange Box to keep, you can order one from our friends at TranquilPC.)


Manufacturers of the Orange Box

Gear head like me?  Interested in the technical specs?


Remember those posts late last year about Intel NUCs?  Someone took notice, and we set out to build this ;-)


Each Orange Box chassis contains:
  • 10x Intel NUCs
  • All 10x Intel NUCs contain
    • Intel HD Graphics 4000 GPU
    • 16GB of DDR3 RAM
    • 120GB SSD root disk
    • Intel Gigabit ethernet
  • D-Link DGS-1100-16 managed gigabit switch with 802.1q VLAN support
    • All 10 nodes are internally connected to this gigabit switch
  • 100-240V AC/DC power supply
    • Adapter supplied for US, UK, and EU plug types
    • 19V DC power supplied to each NUC
    • 5V DC power supplied to internal network switch


Intel NUC D53427RKE board

That's basically an Amazon EC2 m3.xlarge ;-)

The first node, node0, additionally contains:
  • A 2TB Western Digital HDD, preloaded with a full Ubuntu archive mirror
  • USB and HDMI ports are wired and accessible from the rear of the box

Most planes fly in clouds...this cloud flies in planes!


In aggregate, this micro cluster effectively fields 40 cores, 160GB of RAM, 1.2TB of solid state storage, and is connected over an internal gigabit network fabric.  A single fan quietly cools the power supply, while all of the nodes are passively cooled by aluminum heat sinks spanning each side of the chassis. All in a chassis the size of a tower PC!

It fits in a suit case, and can travel anywhere you go.


Pelican iM2875 Storm Case

How are we using them at Canonical?

If you're here at the OpenStack Summit in Atlanta, GA, you'll see at least a dozen Orange Boxes, in our booth, on stage during Mark Shuttleworth's keynote, and in our breakout conference rooms.


Canonical sales engineer, Ameet Paranjape,
demonstrating OpenStack on the Orange Box in the Ubuntu booth
at the OpenStack Summit in Atlanta, GA
We are also launching an update to our OpenStack Jumpstart program, where we'll deliver an Orange Box and 2 full days of training to your team, and leave you the box while you experiment with OpenStack, MAAS, Juju, Hadoop, and more for 2 weeks.  Without disrupting your core network or production data center workloads, prototype your OpenStack experience within a private sandbox environment. You can experiment with various storage alternatives, practice scaling services, destroy and rebuild the environment repeatedly. Safe. Risk free.


This is Cloud, for the Free Man.

:-Dustin

Read more
bmichaelsen

I hope that someone gets my message in a bottle

– Message in a Bottle, the Police

So, there was some minor confusion about the wording in the LibreOffice 4.2.4 release notes.

This needs some background first: LibreOffice 4.2 modified the UNO API for popping up a message box in a slight way relative to LibreOffice 4.1. This was properly announced in our LibreOffice 4.2 release notes many moons ago:

The following UNO interfaces and services were changed […] com.sun.star.awt.XMessageBox, com.sun.star.awt.XMessageBoxFactory

Luckily, LibreOffice extensions can specify a minimal version, so extensions using the new MessageBox-API can explicitly request a version of LibreOffice 4.2 or newer. This change in our sdk-examples shows how an extension can be updated to use the new API and explicitly require a version of LibreOffice 4.2 and higher. All this happened already with LibreOffice 4.2.0 being released and has nothing yet to do with the change in LibreOffice 4.2.4.
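For illustration, here is a minimal sketch of the relevant dependency stanza in an extension’s description.xml (quoted from memory — see the sdk-examples change linked above for the authoritative form):

<description xmlns="http://openoffice.org/extensions/description/2006"
             xmlns:d="http://openoffice.org/extensions/description/2006"
             xmlns:lo="http://libreoffice.org/extensions/description/2011">
  <dependencies>
    <!-- explicitly require LibreOffice 4.2 or newer for the new MessageBox API -->
    <lo:LibreOffice-minimal-version value="4.2" d:name="LibreOffice 4.2"/>
  </dependencies>
</description>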

So what was changed in LibreOffice 4.2.4? Well, in addition to the LibreOffice version, old extensions sometimes just ask for an “OpenOffice.org version”. Most LibreOffice versions answered that their version was “3.4”, so this old backwards-compatible check was not very helpful anyway. So in LibreOffice 4.2.4 this value was changed to “4.1”, which might make some old extensions aware of the incompatible API change. That’s all.

Note that:

So, the short answer to the question “what changed in LibreOffice 4.2.4?” is: nothing, if your extension uses LibreOffice-minimal-version as recommended.


Read more
Nicholas Skaggs

Building click packages should be easy. And to a reasonable extent, qtcreator and click-buddy do make it easy. Things, however, can get a bit more complicated when you need to build a package that needs to run on an armhf device (you know, like your phone!). Since your PC is almost certainly based on x86, you need to use, create or fake an armhf environment for building the package.

So then what options exist for getting a proper build of a project that will install properly on your device?

A phone can be more than a phone
It can also be a development environment!? Although it's not my recommendation, you can always use the target device itself to compile the package. The downsides of this are namely speed and storage space. Nevertheless, it will build a click.

  1. shell into your device (adb shell / ssh mydevice)
  2. checkout the code (bzr branch lp:my-project)
  3. install the needed dependencies and sdk (apt-get install ubuntu-sdk)
  4. build with click-buddy (click-buddy --dir .)
Chroot to the rescue
The click tools contain a handy way to build a chroot expressly suited for use with click-buddy to build things. Basically, we can create a nice fake environment and pretend it's armhf, even though we're not running that architecture.

sudo click chroot -a armhf -f ubuntu-sdk-14.04 create
click-buddy --dir . --arch armhf

Most likely your package will require extra dependencies, which for now will need to be specified and passed in with the --extra-deps argument. These arguments are package names, just like you would apt-get. Like so:

click-buddy --dir . --arch armhf --extra-deps "libboost-dev:armhf libssl-dev:armhf"

Notice we specified the arch as well, armhf. If we also add a --maint-mode, our extra installed packages will persist. This is handy if you will only ever be building a single project and don't want to constantly update the base chroot with your build dependencies.

Qtcreator build it for me!
Cmake makes all things possible. Qt Creator can not only build the click for you, it can also hold your hand through creating a chroot [1]. To create a chroot in qtcreator, do the following:
  1. Open Qt Creator
  2. Navigate to Tools > Options > Ubuntu > Click
  3. Click on Create Click Target
  4. After the click target is finished, add the dependencies needed for building. You can do this by clicking the maintain button.  
  5. Apt-get add what you need or otherwise setup the environment. Once ready, exit the chroot.
Now you can use this chroot for your project
  1. Open qt creator and open the project
  2. Select armhf when prompted
    1. You can also manually add the chroot to the project via Projects > Add kit and then select the UbuntuSDK armhf kit.
  3. Navigate to Projects tab and ensure the UbuntuSDK for armhf kit is selected.
  4. Build!
Rolling your own chroot
So, click can setup a chroot for you, and qt creator can build and manage one too. And these are great options for building one project. However if you find yourself building a plethora of packages or you simply want more control, I recommend setting up and using your own chroot to build. For my own use, I've picked pbuilder, but you can setup the chroot using other tools (like schroot which Qt Creator uses).

sudo apt-get install qemu-user-static ubuntu-dev-tools
pbuilder-dist trusty armhf create
pbuilder-dist trusty armhf login --save-after-login


Then, from inside the chroot shell, install a couple of things you will always want available; namely the build tools and bzr/git/etc for grabbing the source you need. Be careful here and don't install too much. We want to maintain an otherwise pristine environment for building our packages. By default, changes you make inside the chroot will be wiped. That means the package-specific dependencies we'll install each time to build something won't persist.

apt-get install ubuntu-sdk bzr git phablet-tools
exit

Upon exiting, you'll notice pbuilder updates the base tarball with our changes. Now, when you want to build something, simply do the following:

pbuilder-dist trusty armhf login
bzr branch lp:my-project
apt-get install build-dependencies-you-need

Now, you can build as usual using click tools, so something like

click-buddy --dir .

works as expected. You can even add --provision to send the resulting click to your device. If you want to grab the resulting click yourself, you'll need to copy it out before exiting the chroot, whose filesystem is mounted on your machine under /var/cache/pbuilder/build/. Look for the last line after you issue your login command (pbuilder-dist trusty armhf login); you should see something like:

File extracted to: /var/cache/pbuilder/build//26213

If you cd into this directory on your local machine, you'll see the chroot's filesystem. Navigate to your source directory and grab a copy of the resulting click. Copy it somewhere safe (outside of the chroot) before exiting, or you will lose your build!
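For example, from a second terminal on the host while the session is still open (a sketch: 26213 is the session directory from the output above, and the checkout path is hypothetical; navigate to wherever you branched the source inside the chroot):

cd /var/cache/pbuilder/build/26213/root/my-project    # path inside this session's filesystem
cp *.click ~/                                         # stash the click outside the chroot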

But wait, there's more!
Since you have access to the chroot while it's open (and you can log in several times if you wish to create several sessions from the base tarball), you can iteratively build packages as needed, hack on code, etc. The chroot is your playground.
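For instance, each login extracts its own working copy of the base tarball (hence the unique session directory shown earlier), so two sessions can run side by side without stepping on each other:

pbuilder-dist trusty armhf login    # terminal 1: hack on one project
pbuilder-dist trusty armhf login    # terminal 2: a second, independent session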

Remember, click is your friend. Happy hacking!

[1] Thanks to David Planella for this info.

Read more
Dustin Kirkland


Upon learning about the Heartbleed vulnerability in OpenSSL, my first thoughts were pretty desperate.  I basically lost all faith in humanity's ability to write secure software.  It's really that bad.

I spent the next couple of hours drowning in the sea of passwords and certificates I would personally need to change...ugh :-/

As the hangover of that sobering reality arrived, I then started thinking about various systems over the years that I've designed, implemented, or was otherwise responsible for, and how Heartbleed affected those services.  Another throbbing headache set in.

I patched DivItUp.com within minutes of Ubuntu releasing an updated OpenSSL package, and re-keyed the SSL certificate as soon as GoDaddy declared that it was safe for re-keying.

Likewise, the Ubuntu entropy service was patched and re-keyed, along with all Ubuntu-related https services by Canonical IT.  I pushed a new package of the pollinate client with updated certificates to Ubuntu 14.04 LTS (trusty) the same day.

That said, I did enjoy a bit of measured satisfaction, in one controversial design decision that I made in January 2012, when creating Gazzang's zTrustee remote key management system.

All default network communications, between zTrustee clients and servers, are encrypted twice.  The outer transport layer network traffic, like any https service, is encrypted using OpenSSL.  But the inner payloads are also signed and encrypted using GnuPG.
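The inner layer behaves roughly like the following GnuPG invocations (a minimal sketch; the key names and payload are hypothetical, and this is not zTrustee's actual code):

# client: sign with its own key, encrypt to the server's key
gpg --local-user client@example.com --recipient server@example.com --sign --encrypt payload.json

# server: decrypt, verifying the client's signature in the same step
gpg --decrypt payload.json.gpg > payload.json

Even an attacker who recovers the TLS session keys via Heartbleed sees only the still-encrypted payload.json.gpg blob inside.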

Hundreds of times, zTrustee and I were questioned or criticized about that design -- by customers, prospects, partners, and probably competitors.

In fact, at one time, there was pressure from a particular customer/partner/prospect to disable the inner GPG encryption entirely, and have zTrustee rely solely on the transport-layer OpenSSL, for performance reasons.  Try as I might, I eventually lost that fight, and we added the "feature" (as a non-default option).  That someone might have some re-keying to do...

But even in the face of the Internet-melting Heartbleed vulnerability, I'm absolutely delighted that the inner payloads of zTrustee communications are still protected by GnuPG asymmetric encryption and are NOT vulnerable to Heartbleed style snooping.

In fact, these payloads are some of the very encryption keys that guard YOUR health care and financial data stored in public and private clouds around the world by Global 2000 companies.

Truth be told, the insurance against crypto library vulnerabilities that zTrustee bought by using GnuPG and OpenSSL in combination was really the secondary objective.

The primary objective was actually to leverage asymmetric encryption, to both sign AND encrypt all payloads, in order to cryptographically authenticate zTrustee clients, ensure payload integrity, and enforce key revocations.  We technically could have used OpenSSL for both layers and even realized a few performance benefits -- OpenSSL is faster than GnuPG in our experience, and can leverage crypto accelerator hardware more easily.  But I insisted that the combination of GPG over SSL would buy us protection against vulnerabilities in either protocol, and that was worth any performance cost in a key management product like zTrustee.
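Key revocation, in particular, comes along for free with GnuPG's keyring machinery. A sketch of the generic mechanics (hypothetical key id; not zTrustee's actual workflow):

gpg --output revoke.asc --gen-revoke client@example.com    # interactive; asks for a revocation reason
gpg --import revoke.asc    # once imported, gpg warns the key is revoked on every signature check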

In retrospect, this makes me wonder why diverse, backup, redundant encryption isn't more prevalent in the design of security systems...

Every elevator you have ever used has redundant safety mechanisms.  Your car has both seat belts and air bags.  Your friendly cashier will double bag your groceries if you ask.  And I bet you've tied your shoes with a double knot before.

Your servers have redundant power supplies.  Your storage arrays have redundant hard drives.  You might even have two monitors.  You might be carrying a laptop, a tablet, and a smart phone.

Moreover, important services on the Internet are often highly available, redundant, fault tolerant or distributed by design.

But the underpinnings of the privacy and integrity of the very Internet itself are usually protected only once, with transport-layer encryption of the traffic in motion.

At this point, can we afford the performance impact of additional layers of security?  Or, rather, at this point, can we afford not to use all available protection?

Dustin

p.s. I use both dm-crypt and eCryptfs on my Ubuntu laptop ;-)

Read more
Prakash Advani

The short answer is no. Ubuntu was patched on April 7, 2014, and the bug was widely reported on April 8, 2014. If you are using other operating systems, you need to worry, especially if they are non-Linux based.

Read More: http://news.softpedia.com/news/Dear-Ubuntu-Users-Stop-Saying-the-Ubuntu-Is-Unprotected-Against-the-Heartbleed-Exploit-437846.shtml

The Heartbleed vulnerability that was discovered just last week took the world by surprise, but most of the affected services and operating systems have been patched. Unfortunately, some Ubuntu users haven't understood how the patching process works and have started to flood the forums and other social media with the message that Ubuntu is vulnerable.

Before the OpenSSL issue became known to the general public, most of the Linux distributions affected by it had already been patched. Most of the media reported on the problem on April 8, but the patch for the Heartbleed vulnerability was already in place on April 7. This is what the security notification looks like in Ubuntu.

Read more