Canonical Voices

David Planella

As part of the Ubuntu App Developer Week, I just ran a live on-air session on how to internationalize your Ubuntu apps. Some of the participants on the live chat asked me if I could share the slides somewhere online.

So here they are for your viewing pleasure :) If you’ve got any questions on i18n or in Ubuntu app development in general, feel free to ask in the comments or ping me (dpm) on IRC.

The video

The slides

Enjoy!

The post Internationalizing your apps at the Ubuntu App Developer Week appeared first on David Planella.

Read more
Martin Albisetti

As previously announced, the file services for Ubuntu One will be discontinued starting today.
In order to make it easier for you to move your content away, we have rolled out a new feature that lets you download all your content with just one click.
As a reminder, on July 31st 2014, all content in the file services will be deleted.

Just log in to our website and you will see the button placed prominently on the page.


For those looking for another Cloud storage option, we’ve talked to a number of companies that wanted to tell Ubuntu One users about their services.

Mover.io

Mover is a web service that helps you migrate and back up files between cloud storage providers. The Mover team tells us they process more than two billion files each month, so users can migrate files of all shapes and sizes, from a personal photo collection to all of a workgroup’s files.

With Mover you can transfer files between services such as Dropbox, Box, Google Drive, Amazon S3 and a range of others.

Mover is helping U1 users transfer files for free. Simply create a Mover account, and any transfers made in or out of the U1 connector will be free.

To find out more about their service see the blog post.

SpiderOak

SpiderOak is a cloud file service similar to Ubuntu One, with all files stored in the Cloud. They have a ‘Zero-Knowledge’ privacy environment that provides end-to-end encryption to ensure you – and only you – can view your data. The SpiderOak team tell us they’ve natively supported Linux from their start in 2007.

SpiderOak is offering two special deals for Ubuntu One customers:

  • 20 GB for $24.99/year

  • 40 GB for $29.99/year

To take advantage of one of these offers use the Promotion Code LongLiveUbuntu. The steps are:

1. Click here to create your account.
2. Download and install the client.
3. Click ‘Buy More Space’ in the client itself, or via the web portal (which you can only access once you’ve downloaded the client). In the web portal, go to Account, and then choose Upgrade My Plan.
4. Choose YEARLY and enter the following promotion code: LongLiveUbuntu. (NOTE: You MUST type in the promotion code as it will not work if you cut & paste.)
5. Congratulations – SpiderOak now has your back(up)!

For some more information on SpiderOak’s Linux support see their blog post.

 

ownCloud

ownCloud is Open Source file sync and share software that you install on your own servers. It’s a great option for those who want Cloud storage capability but would like to control where the files are stored. The ownCloud team told us that their Community version is well supported on Ubuntu, and that they also support Windows and Mac desktop machines, while on the mobile side there are clients for Android and iPhone/iPad. For those who need advanced enterprise capabilities, there’s also a commercial version that adds those features.

You can read about how to install ownCloud Server and the Desktop clients here.

 

Refunds and Support

For those of you who have been paying customers, we have started processing refunds, and you should see them reflected in your credit card or PayPal accounts within the next 7-10 days.

 

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140603 Meeting Agenda


ARM Status

No new update this week.


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

http://people.canonical.com/~kernel/reports/kt-meeting.txt


Milestone Targeted Work Items

   apw    core-1405-kernel    2 work items   
   ogasawara    core-1405-kernel    2 work items   


Status: Utopic Development Kernel

We have most recently rebased our Utopic kernel to v3.15-rc8 and
uploaded (3.15.0-5.10). We are planning on converging on the v3.16
kernel for Utopic. It also appears that the Utopic release date has
been pushed out a week to Thurs Oct 23 in order to not conflict with
the Linux Plumbers Conference.
—–
Important upcoming dates:
Mon-Wed June 10 – 12, UOS – Ubuntu Online Summit (~1 week away)
Thurs Jun 26 – Alpha 1 (~3 weeks away)
Fri Jun 27 – Kernel Freeze for 12.04.5 and 14.04.1 (~3 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Status for the main kernels, until today (June 3):

  • Lucid – Verification and Testing
  • Precise – Verification and Testing
  • Quantal – No changes this cycle
  • Saucy – Verification and Testing
  • Trusty – Verification and Testing

    Details of currently open tracking bugs:

  • http://people.canonical.com/~kernel/reports/kernel-sru-workflow.html

    For SRUs, the SRU report is a good source of information:

  • http://people.canonical.com/~kernel/reports/sru-report.html

    Schedule:

    cycle: 18-May through 07-Jun
    ====================================================================
    16-May Last day for kernel commits for this cycle
    18-May – 24-May Kernel prep week.
    25-May – 31-May Bug verification & Regression testing.
    01-Jun – 07-Jun Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

Read more
David Planella

A sample of the wider Ubuntu Community team, with Canonicalers and volunteer core app developers

After the recent news of Jono stepping down as the Ubuntu Community Manager to seek new challenges at XPRIZE, a new era in Ubuntu begins. Jono’s leadership, passion and drive to continually push the boundaries have been contagious over the years, and have been the catalyst for growing the unique community of individuals that defines Ubuntu today.

Jono is now joining the ranks of non-Canonical Ubuntu members, and while this will change the angle of participation, I’m certain that it won’t change his energy and dedication one bit. But most importantly, it’s a testament to his work that his former team will continue to thrive and take up the torch in pushing those boundaries.

For us, it will be business as usual in the sense of implementing our roadmap, continuing to grow a strong and open community, being innovative in how we do it, and coordinating the logistics around our plans. So not much will be different in that regard, but obviously some organizational bits will change.

As part of the transition, the Ubuntu Community Team at Canonical in full, that is, Michael Hall, Daniel Holbach, Alan Pope, Nicholas Skaggs and myself, will now be hosting the weekly Ubuntu Q&A, starting today at 18:00 UTC on Ubuntu On Air (click here for the time at your location).

The Ubuntu Community Team Q&A

Openness, in being both a transparent and a welcoming community, is one of the core values of Ubuntu, and we believe the channels should always be open for a healthy flow of information and to help contributors get involved.

As such, the Ubuntu Community Team Q&A will continue to provide a weekly, 1-hour-long session open to anyone who wants to ask questions about Ubuntu. In fact, as in former editions, you can ask the Community Team just about anything about Free Software, Technology, or whatever else you come up with. As before, the only questions we won’t answer are those related to technical support, where you’ll be much better served using Ask Ubuntu, the Ubuntu forums or IRC.

Join the Ubuntu Community Team Q&A at 18:00 UTC today and ask your questions >

The Ubuntu Online Summit is coming soon!

Also, following the thread of events and participation, the new Ubuntu Online Summit (UOS) is coming up very soon, and it’s an excellent opportunity to learn about getting involved in Ubuntu and to organize or present the plans of the different Ubuntu teams for the coming months.

UOS will be held on June 10th – 12th and it will be a combination of the former Ubuntu Developer Summit and the more user-facing events we’ve been organizing in the past. This opens the door to a wider audience that can follow a richer mix of developer and user or contributor content.

If you want to learn about the details, check out Michael’s UOS post on how it’s going to work. If you want to contribute and make a difference in Ubuntu, do register a session too!

Looking forward to seeing you soon!

The post A new era for the Ubuntu community team, or business as usual appeared first on David Planella.

Read more
Nicholas Skaggs

Calling for your UOS users session!

Ubuntu Online Summit is approaching, happening on June 10th-12th. This time it's a bit different from how vUDS has been in the past. Rather than the narrower developer focus, this intends to be a full-blown community summit. If you've attended things like Ubuntu Open Week or a classroom session in the past, all of those types of sessions are welcome and encouraged too.

To help foster these types of sessions, there is a special Users track.

"The focus of the Users track is to highlight ways to get the most out of Ubuntu, on your laptop, your phone or your server. From detailed how-to sessions, to tips and tricks, and more, this track can provide something for everybody, regardless of skill level."

Track Leads:
Elizabeth Krumbach Joseph
Nicholas Skaggs
Valorie Zimmerman

I'm excited to be a track lead for this track along with Liz and Val. We are all inviting you to consider scheduling a session to share your knowledge of Ubuntu. Share an idea, discuss your passion, give a how-to, etc. The sessions in this track are meant for other users of Ubuntu like yourself, so feel free to share.

Regardless of your desire to contribute a session, I would encourage everyone to take a look at the schedule as it evolves and consider joining in on sessions they find interesting.

Remember, this track is your track, filled with your sessions. Let's help make the online summit a success.

So, ready to propose a session? Check out this page and feel free to ping Val, Liz or myself for help. Don't forget to register to attend and check out the currently scheduled sessions!

Read more
Inayaili de León Persson

This post is part of the series ‘Making ubuntu.com responsive’.

One of the biggest challenges when making the move to responsive was tackling the navigation on ubuntu.com. This included rethinking not only the main navigation, with its first, second and third level links, but also a big 3-tier footer and global navigation.

Desktop size main navigation and footer on ubuntu.com.

1. Brainstorming

Instead of assigning this task to a single UX designer, and with the availability of everyone in the team very tight, we gathered two designers and two UX designers in a room for a few hours for an intensive brainstorming session. The goal of the meeting was to find an initial solution for how our navigation could work in small screens.

We started by analysing examples of small screen navigation from other sites and discussing how they could work (or not) for ubuntu.com. This was a good way of generating ideas to solve the navigation for our specific case.

Some of the small screen navigation examples we analysed, from left to right: the Guardian, BBC and John Lewis.

We decided to keep to existing common design patterns for small screen navigation rather than trying to think of original solutions, so we stuck with the typical menu icon on the top right with a dropdown on click for the top level links.

Settling on a solution for second and third level navigation was trickier, as we wanted to keep a balance between exposing our content and making the site easy to navigate without the menus getting in the way.

We thought keeping to simple patterns would make it easier for users to understand the mechanics of the navigation quickly, and assumed that in smaller screens users tend to forgive extra clicks if that means simpler and uncluttered navigation.

Some of the ideas we sketched during the brainstorming session.

2. Prototyping

With little time on our hands, we decided we’d deliver our solution as paper sketches for a prototype to be created by our developers. The starting point for the styling of the navigation was to follow that of Ubuntu Insights closely, with the remaining tweaks designed and built directly in code.

Navigation of Ubuntu Insights on large and small screens.

We briefed Ant with the sketches and some design and UX direction and he quickly built a prototype of the main navigation and footer for us to test and further improve.

First navigation prototype.

3. Improving

We gathered again to test and review the prototype once it was ready, and suggest improvements.

Everyone agreed that the mechanics of the navigation were right, and that visual tweaks could make it easier to understand, providing the user with more cues about the hierarchy of the content.

Initially, when scaling down the screen, the search and navigation overlapped slightly before the small screen menu icon kicked in, so we also thought it would be nice to animate the change in padding between widths.

We also made sure that if JavaScript is not available, when the user clicks on the menu icon the page scrolls down to the footer, where the navigation is accessible.

Final navigation prototype, after some tweaks.

Some final thoughts

When time is of the essence but you still want to experiment and generate as many ideas as possible, spending a couple of hours in a room with other team members, talking through case studies and how they can be applied to your situation, proved a really useful and quick way to advance the project. And, time and time again, inviting people who are not directly involved with the project has proved very useful in giving the team valuable new ideas and insights.

We’re now planning to test the navigation in our next quarterly set of usability testing, which will surely provide us with useful feedback to make the website easier to navigate across all screen sizes.

Read the next post in this series: “Making ubuntu.com responsive: dealing with responsive images”

Reading list

Read more
mark

This is a series of posts on reasons to choose Ubuntu for your public or private cloud work & play.

We run an extensive program to identify issues and features that make a difference to cloud users. One result of that program is that we pioneered dynamic image customisation and wrote cloud-init. I’ll tell the story of cloud-init as an illustration of the focus the Ubuntu team has on making your devops experience fantastic on any given cloud.

 

Ever struggled to find the “right” image to use on your favourite cloud? Ever wondered how you can tell if an image is safe to use, what keyloggers or other nasties might be installed? We set out to solve that problem a few years ago and the resulting code, cloud-init, is one of the more visible pieces Canonical designed and built, and very widely adopted.

Traditionally, people used image snapshots to build a portfolio of useful base images. You’d start with a bare OS, add some software and configuration, then snapshot the filesystem. You could use those snapshots to power up fresh images any time you need more machines “like this one”. And that process works pretty amazingly well. There are hundreds of thousands, perhaps millions, of such image snapshots scattered around the clouds today. It’s fantastic. Images for every possible occasion! It’s a disaster. Images with every possible type of problem.

The core issue is that an image is a giant binary blob that is virtually impossible to audit. Since it’s a snapshot of an image that was running, and to which anything might have been done, you will need to look in every nook and cranny to see if there is a potential problem. Can you afford to verify that every binary is unmodified? That every configuration file and every startup script is safe? No, you can’t. And for that reason, that whole catalogue of potential is a catalogue of potential risk. If you wanted to gather useful data sneakily, all you’d have to do is put up an image that advertises itself as being good for a particular purpose and convince people to run it.

There are other issues, even if you create the images yourself. Each image slowly gets out of date with regard to security updates. When you fire it up, you need to apply all the updates released since the image was created if you want a secure machine. Eventually, you’ll want to re-snapshot for a more up-to-date image. That requires administration overhead and coordination, and most people don’t do it.

That’s why we created cloud-init. When your virtual machine boots, cloud-init is run very early. It looks out for some information you send to the cloud along with the instruction to start a new machine, and it customises your machine at boot time. When you combine cloud-init with the regular fresh Ubuntu images we publish (roughly every two weeks for regular updates, and whenever a security update is published), you have a very clean and elegant way to get fresh images that do whatever you want. You design your image as a script which customises the vanilla, base image. And then you use cloud-init to run that script against a pristine, known-good standard image of Ubuntu. Et voila! You now have purpose-designed images of your own on demand, always built on a fresh, secure, trusted base image.
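To make the idea concrete, here is a minimal sketch of such a customisation script. User data that starts with a shebang is executed by cloud-init on first boot; the packages and configuration below are purely illustrative, not from the original post:

    #!/bin/sh
    # Hypothetical user-data script: cloud-init runs this once, on first boot,
    # turning a pristine Ubuntu base image into a purpose-built machine.
    apt-get update
    apt-get -y upgrade        # pick up all security updates since the image was built
    apt-get -y install nginx  # install whatever software this machine exists to run
    # ...further configuration steps go here, all of them reviewable in one place

Because the script, rather than a snapshot, is the artifact you keep, it can live in version control and be audited line by line.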

Auditing your cloud infrastructure is now straightforward, because you have the DNA of that image in your script. This is devops thinking, turning repetitive manual processes (hacking and snapshotting) into code that can be shared and audited and improved. Your infrastructure DNA should live in a version control system that requires signed commits, so you know everything that has been done to get you where you are today. And all of that is enabled by cloud-init. And if you want to go one level deeper, check out Juju, which provides you with off-the-shelf scripts to customise and optimise that base image for hundreds of common workloads.

Read more
Michael Hall

Last year the main Ubuntu download page was changed to include a form for users to make a donation to one or more parts of Ubuntu, including to the community itself. Those donations made for “Community projects” were made available to members of our community who knew of ways to use them that would benefit the Ubuntu project.

Every dollar given out is an investment in Ubuntu and the community that built it. This includes sponsoring community events, sending community representatives to those events with booth supplies and giveaway items, purchasing hardware to improve development and testing, and more.

But these expenses don’t cover the time, energy, and talent that went along with them, without which the money itself would have been wasted.  Those contributions, made by the recipients of these funds, can’t be adequately documented in a financial report, so thank you to everybody who received funding for their significant and sustained contributions to Ubuntu.

As part of our commitment to openness and transparency we said that we would publish a report highlighting both the amount of donations made to this category, and how and where that money was being used. Linked below is the first of those reports.

View the Report

Read more
Inayaili de León Persson

This post is part of the series ‘Making ubuntu.com responsive’.

A big part of converting our existing fixed-width desktop site to be responsive was to make sure we had a flexible grid that would flow seamlessly from small to large screens.

From the start, we decided that we were going to approach the move as simply as possible: we wanted the content our grid holds to become easier to read and browse on any screen size, but that doesn’t necessarily mean creating anything too complex.

Our existing fixed-width grid

Before the transition, our grid consisted of 12 columns with 20px gutters.

The width of each column could be variable, but we were working with 57px columns and 40px padding on each side of the main content container.

Our existing fixed-width desktop grid.

Inside that grid, content can be divided into one, two, three or four columns. In extreme cases, we can also use five columns, but we avoid this.

Our grid laid over an existing page of the site.

We also try to keep text content below eight columns, as it becomes harder to read otherwise.

Adding flexibility

When we first created our web style guide, we decided that, since we were getting our hands dirty with the refactoring of the CSS, we’d go ahead and convert our grid to use percentages instead of pixels.

The idea was that it would be useful for the future, while keeping everything looking almost exactly the same, since the content would still be all held within a fixed-width container for the time being.

A fixed-width container holding percentage-based columns.

This proved one of the best decisions we made and it made the transition to responsive much smoother than we initially thought.
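To make the conversion concrete: a column’s fluid width is its pixel width divided by the grid’s total width, which with the numbers above is 12 columns of 57px plus 11 gutters of 20px, i.e. 904px. A quick sketch of the arithmetic, assuming percentages are computed relative to that 904px content area:

    # back-of-the-envelope check of the px-to-percentage conversion
    echo "scale=4; 57 * 100 / 904" | bc   # one column -> 6.3053%
    echo "scale=4; 20 * 100 / 904" | bc   # one gutter -> 2.2123%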

The CSS

We used Gridinator to initially create our basic grid, removed any unnecessary rules and properties from the CSS it generated and added others we needed.

The settings we’ve input, in case you’re wondering, were:

  • Number of columns: 12
  • Column width: 57px
  • Column margins: 20px (technically, the gutters)
  • Container margin: 40px
  • Body font size: 16px
  • Option: percentage units

Screenshot of our Gridinator settings.

We could have created this CSS from scratch, but we found this tool saved us some precious time when creating all the variations we needed when using the grid.

You can have a peek into our current grid stylesheet now.

First prototypes

The first two steps we took when creating the initial responsive prototype of ubuntu.com were:

  • Remove the fixed-width container and see how the content would flow freely and fluidly
  • Remove all the floats and positioning rules from the existing CSS and see how the content would flow in a linear manner, and test this on small screen devices

When we removed the fixed-width container, all the content became fluid. Obviously, there were no media queries behind this, so the content was free to grow to huge sizes, with really long line lengths, and equally the columns could shrink to unreasonably narrow sizes. But it was fluid, and we were happy!

Similarly, when checking the float-free prototype, even though there were quite a few issues with background images and custom, absolutely positioned elements, the results were promising.

Our first float-free prototype: some issues but overall a good try.

These tests showed us that, even though the bulk of the work was still ahead of us, a lot had been accomplished simply by converting our initial pixel-based columns into percentage-based ones. This is a test we think other teams could run before moving on to a complete revamp of their CSS.

Defining breakpoints

We didn’t want to relate our breakpoints to specific devices, but it was important that we understood what kind of screen sizes people were using to visit our site on.

To give you an idea, here are the top 10 screen sizes (not window size, mind) between 4 March and 3 April 2014, in pixels, on ubuntu.com:

  1. 1366 x 768: 26.15%
  2. 1920 x 1080: 15.4%
  3. 1280 x 800: 7.98%
  4. 1280 x 1024: 7.26%
  5. 1440 x 900: 6.29%
  6. 1024 x 768: 6.26%
  7. 1600 x 900: 5.51%
  8. 1680 x 1050: 4.94%
  9. 1920 x 1200: 2.73%
  10. 1024 x 600: 2.12%

The first small screen (360×640) comes up at number 17 in the list, followed by 320×568 and 320×480 at numbers 21 and 22, respectively.

We decided to test the common approach and try breakpoints at:

  • Under 768px: small screen styles
  • From 768px to under 986px: medium screen styles
  • 986px and up: large screen styles

This worked well for our content and was in line with what we had seen in our analytics numbers.

At the small screen breakpoint we have:

  • Reduced typographic scale
  • Reduced margins and padding
  • Reduced main content container padding

At this scale, following what we had done in Ubuntu Resources (now Ubuntu Insights), we reused the grid from the Ubuntu phone designs, which divides the portrait screen into 40 squares, horizontally.

The phone grid.

At the medium screen breakpoint we have:

  • Increased (from small screen) typographic scale
  • Increased margins and padding
  • Increased main content container padding

At the large screen break point we have:

  • The original typographic scale
  • The original margins and padding
  • The original main content container padding

Comparison between small, medium and large screen spacing.

Ideas for the future

In the future, we would like to use more component-specific breakpoints. Some of our design components would work better if they reflowed or resized at different points than the rest of the site, so more granular control over specific elements would be useful. This usually depends on the type and amount of content the component holds.

What about you? We’d love to know how other people have tackled this issue, and what suggestions you have to create flexible and robust grids. Have you used any useful tools or techniques? Let us know in the comments section.

Read the next post in this series: “Making ubuntu.com responsive: adapting our navigation to small screens”

Reading list

Read more
David Planella

Ubuntu emulator guide

Following the initial announcement, the Ubuntu emulator is going to become a primary engineering platform for development. Quoting Alexander Sack, when ready, the goal is to:

[...] start using the emulator for everything you usually would do on the phone. We really want to make the emulator a class A engineering platform for everyone

While the final emulator is still a work in progress, we’re continually seeing improvements as all the pieces come together to make it a first-class citizen for development, both for the platform itself and for app developers. However, as it stands today, the emulator is already functional, so I’ve decided to prepare a quickstart guide to highlight the great work the Foundations and Phonedations teams (along with many other contributors) are producing to make it possible.

While you should consider this guide a preview, you can already use it to start getting familiar with the emulator for testing, platform development and writing apps.

Requirements

To install and run the Ubuntu emulator, you will need:

  • Ubuntu 14.04 or later (see installation notes for older versions)
  • 512MB of RAM dedicated to the emulator
  • 4GB of disk space
  • OpenGL-capable desktop drivers (most graphics drivers/cards are)

Installing the emulator

If you are using Ubuntu 14.04, installation is as easy as opening a terminal with Ctrl+Alt+T and running the following commands, pressing Enter after each:

sudo add-apt-repository ppa:ubuntu-sdk-team/ppa && sudo apt-get update
sudo apt-get install ubuntu-emulator

Alternatively, if you are running an older stable release such as Ubuntu 12.04, you can install the emulator by manually downloading its packages first:


  1. Create a folder in your home directory to hold the downloaded packages
  2. Go to the goget-ubuntu-touch packages page in Launchpad
  3. Scroll down to Trusty Tahr and click on the arrow to the left to expand it
  4. Scroll further to the bottom of the page and click on the ubuntu-emulator package corresponding to your architecture (i386 or amd64) to download it into the folder you created
  5. Now go to the Android packages page in Launchpad
  6. Scroll down to Trusty Tahr and click on the arrow to the left to expand it
  7. Scroll further to the bottom of the page and click on the emulator runtime package to download it into the same folder
  8. Open a terminal with Ctrl+Alt+T
  9. Change to the directory where you downloaded the packages, typing cd followed by the folder’s path in the terminal
  10. Install both packages by running: sudo dpkg -i *.deb
  11. Once the installation is successful, you can close the terminal and remove the folder and its contents

Installation notes

  • Downloaded images are cached at ~/.cache/ubuntuimage, using the standard XDG_CACHE_DIR location.
  • Instances are stored at ~/.local/share/ubuntu-emulator, using the standard XDG_DATA_DIR location.
  • While an image upgrade feature is in the works, for now you can simply create an instance of a newer image over the previous one.

Running the emulator


The ubuntu-emulator tool makes it really easy to manage instances and run the emulator. Typically, you’ll open a terminal and run these commands the first time you create an instance (where myinstance is the name you’ve chosen for it):

sudo ubuntu-emulator create myinstance --arch=i386
ubuntu-emulator run myinstance

You can create any instances you need for different purposes. And once the instance has been created, you’ll be generally using the ubuntu-emulator run myinstance command to start an emulator session based on that instance.

Notice how in the command above the --arch parameter was specified to override the default architecture (armhf). Using the i386 arch will make the emulator run at a (much faster) native speed.

Other parameters you might want to experiment with are also: --scale=0.7 and --memory=720. In these examples, we’re scaling down the UI to be 70% of the original size (useful for smaller screens) and specifying a maximum of 720MB for the emulator to use (on systems with memory to spare).
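For example, combining those options in a single run (reusing the instance name from above):

ubuntu-emulator run myinstance --scale=0.7 --memory=720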

There are 3 main elements you’ll be interacting with when running the emulator:

  • The phone UI – this is the visual part of the emulator, where you can interact with the UI the same way you would on a phone. You can use your mouse to simulate taps and slides. Bonus points if you can recognize the phone model the UI is shown in ;)
  • The remote session on the terminal – upon starting the emulator, a terminal will also be launched alongside it. Use the phablet username and the same password to log in to an interactive ADB session on the emulator. You can also launch other terminal sessions using other communication protocols; see the link at the end of this guide for more details.
  • The ubuntu-emulator tool – with this CLI tool, you can manage the lifetime and runtime of Ubuntu images. Common subcommands of ubuntu-emulator include create (to create new instances), destroy (to destroy existing instances), run (as we’ve already seen, to run instances), snapshot (to create and restore snapshots of a given point in time) and more. Use ubuntu-emulator --help to learn about these commands and ubuntu-emulator command --help to learn more about a particular command and its options, as shown below.
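For instance, to explore the tool from the command line:

ubuntu-emulator --help            # lists the available subcommands
ubuntu-emulator snapshot --help   # shows the options of one particular subcommand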

Runtime notes

  • Make sure you’ve got enough space to install the emulator and create new instances, otherwise the operation will fail (or take a long time) without warning.
  • At this time, the emulator takes a while to load for the first time. During that time, you’ll see a black screen inside the phone skin. Just wait a bit until it’s finished loading and the welcome screen appears.
  • By default the latest built image from the devel-proposed channel is used. This can be changed during creation with the --channel and --revision options.
  • If your host has a network connection, the emulator will use that transparently, even though the network indicator might show otherwise.
  • To talk to the emulator, you can use standard adb. The emulator should appear in the output of the adb devices command, as in the sketch below.
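As a quick sketch, a minimal adb session against a running emulator might look like this (the exact device name will vary):

adb devices   # the running emulator instance should appear in this list
adb shell     # open a shell on the emulator (log in as the phablet user)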

Learn more and contribute

I hope this guide has whetted your appetite to start testing the emulator! You can also contribute to making the emulator a first-class target for Ubuntu development. The easiest way is to install it and give it a go. If something is not working, you can then file a bug.

If you want to fix a bug yourself or contribute to code, the best thing is to ask the developers about how to get started by subscribing to the Ubuntu phone mailing list.

If you want to learn more about the emulator, including how to create instance snapshots and other cool features, head out to the Ubuntu Emulator wiki page.

And next… support for the tablet form factor and SDK integration. Can’t wait for those features to land!

The post A quickstart guide to the Ubuntu emulator appeared first on David Planella.

Read more
Michael Hall

A couple of months ago Jono announced the dates for the Ubuntu Online Summit, June 10th – 12th, and those dates are almost upon us now.  The schedule is open, the track leads are on board, all we need now are sessions.  And that’s where you come in.

Ubuntu Online Summit is a change for us: we’re trying to mix the previous online UDS events with our Open Week, Developer Week and User Days events, to bring people from every part of our community together to celebrate, educate, and improve Ubuntu. So in addition to the usual planning sessions we had at UDS, we’re also looking for presentations from our various community teams on the work they do, walk-throughs for new users learning how to use Ubuntu, and instructional sessions to help new distro developers, app developers, and cloud devops get the most out of it as a platform.

What we need from you are sessions.  It’s open to anybody, on any topic, any way you want to do it.  The only requirement is that you can start and run a Google+ OnAir Hangout, since those provide the live video streaming and recording for the event.  There are two ways you can propose a session: the first is to register a Blueprint in Launchpad, which is good for planning sessions that will result in work items; the second is to propose a session directly in Summit, which is good for any kind of session.  Instructions for how to do both are available on the UDS Website.

There will be Track Leads available to help you get your session on the schedule, and to provide some technical support if you have trouble setting up your session’s hangout. When you propose your session (or create your Blueprint), try to pick the most appropriate track for it; that will help it get approved and scheduled faster.

Ubuntu Development

Many of the development-oriented tracks from UDS have been rolled into the Ubuntu Development track. So anything that would previously have been in Client, Core/Foundations or Cloud and Server will be in this one track now. The track leads come from all parts of Ubuntu development, so whatever your session’s topic, there will be a lead there who is familiar with it.

Track Leads:

  • Łukasz Zemczak
  • Steve Langasek
  • Leann Ogasawara
  • Antonio Rosales
  • Marc Deslauriers

Application Development

Introduced a few cycles back, the Application Development track will continue to focus on improving the Ubuntu SDK and the tools and documentation we provide for app developers.  We also want to introduce sessions focused on teaching app development using the SDK and the various platform services available, as well as taking a deeper dive into specific parts of the Ubuntu UI Toolkit.

Track Leads:

  • Michael Hall
  • David Planella
  • Alan Pope
  • Zsombor Egri
  • Nekhelesh Ramananthan

Cloud DevOps

This is the counterpart of the Application Development track for those with an interest in the cloud.  This track will have a dual focus: planning improvements to DevOps tools like Juju, and bringing DevOps practitioners up to speed with how to use them in their own cloud deployments.  Learn how to write charms, create bundles, and manage everything in a variety of public and private clouds.

Track Leads:

  • Jorge Castro
  • Marco Ceppi
  • Patricia Gaughen
  • Jose Antonio Rey

Community

The community track has been a staple of UDS for as long as I can remember, and it’s still here in the Ubuntu Online Summit.  However, just like the other tracks, we’re looking beyond just planning ways to improve the community structure and processes.  This time we also want to have sessions showing users how they can get involved in the Ubuntu community, what teams are available, and what tools they can use in the process.

Track Leads:

  • Daniel Holbach
  • Jose Antonio Rey
  • Laura Czajkowski
  • Svetlana Belkin
  • Pablo Rubianes

Users

This is a new track and one I’m very excited about. We are all users of Ubuntu, and whether we’ve been using it for a month or a decade, there are still things we can all learn about it. The focus of the Users track is to highlight ways to get the most out of Ubuntu, on your laptop, your phone or your server.  From detailed how-to sessions, to tips and tricks, and more, this track can provide something for everybody, regardless of skill level.

Track Leads:

  • Elizabeth Krumbach Joseph
  • Nicholas Skaggs
  • Valorie Zimmerman

So once again, it’s time to get those sessions in.  Visit this page to learn how, then start thinking of what you want to talk about during those three days.  Help the track leads out by finding more people to propose more sessions, and let’s get that schedule filled out. I look forward to seeing you all at our first ever Ubuntu Online Summit.

Read more
Antonio Rosales

Meeting Minutes: May 20th

Agenda

  • Review ACTION points from previous meeting
  • T Development
  • Server & Cloud Bugs (caribou)
  • Weekly Updates & Questions for the QA Team (psivaa)
  • Weekly Updates & Questions for the Kernel Team (smb, sforshee)
  • Ubuntu Server Team Events
  • Open Discussion
  • Announce next meeting date, time and chair

Minutes

Pretty straightforward meeting, given that the 14.04 release is still pretty fresh in our minds and ODS was last week. Great demo at UDS! Utopic Unicorn is underway (https://wiki.ubuntu.com/UtopicUnicorn/ReleaseSchedule), with the first Alpha release scheduled for June 26th and vUDS on June 12th. Server team blueprints are also in progress, with the topic blueprint (https://blueprints.launchpad.net/ubuntu/+spec/topic-u-server) already created and dependencies being posted to it.

The Bugs we covered were:

  • Launchpad bug 1319555 in ec2-api-tools (Ubuntu Utopic) “update out-dated ec2-api-tools for 12.04” [High,New]
  • Launchpad bug 1315052 in lxc (Ubuntu Utopic) “lxc-attach from a different login session fails” [High,Triaged]
  • Launchpad bug 1317587 in clamav (Ubuntu Utopic) “ClamAV 0.98.1 is Outdated” [High,In progress]
  • Launchpad bug 1317811 in linux (Ubuntu) “Dropped packets on EC2, “xen_netfront: xennet: skb rides the rocket: x slots”” [Medium,In progress]

Next Meeting

Next meeting will be on Tuesday, May 27th at 16:00 UTC in #ubuntu-meeting.

Additional logs @ https://wiki.ubuntu.com/MeetingLogs/Server/20140520

Read more
John Pugh

In June of last year, Leadwerks Software launched a Kickstarter campaign to bring their game development software to Linux. This week the company has released Leadwerks Game Engine on the Ubuntu Software Center, enabling Ubuntu users to build and play games without ever leaving Ubuntu.

Leadwerks is a visual editor for building game levels combined with a programming API for coding games. Development in Leadwerks occurs at three layers:

  • Beginners can use the visual editor to set up scenes and add game mechanics using the flowgraph system. The software includes eight example maps that demonstrate various aspects of gameplay including physics puzzles, weapons, and AI.
  • At the next level, users can open up Lua scripts and modify them with the built-in script editor. A debugger allows users to step through code as the game is running. Scripts can be modified, or new ones can be created to add new functionality to the flowgraph system.
  • At the lowest level, users can program the entire Leadwerks API directly in C++. Extensive documentation is included for every command in the Leadwerks API, with examples for both C++ and Lua.

Creating a New C++ Project

When starting the Leadwerks Editor in Ubuntu, it looks like this:


To create a new project, select the File > Project Manager menu item to open the project manager. Press the New button to open the project wizard. You can enter the name of your new game project, set the location if needed, and add an author and company name. Press OK to create the project.

You can then double-click on the project item to open the file directory. If you navigate to the “Projects/Linux” subfolder, you will find a new Code::Blocks project with the name of your game. Open this file in Code::Blocks and it will be ready to compile.

By default, a simple example is provided, but there are lots of code examples for all the commands in the Leadwerks API. The program documentation can be accessed from the editor by pressing F1 or selecting the Help > Help Contents menu item.

We’re excited about Leadwerks coming to Ubuntu! Head over to the Ubuntu Software Center and get your copy of Leadwerks today!

Read more
Michael Hall

I’ve just finished the last day of a week-long sprint for Ubuntu application development. There were many people here: designers, SDK developers, QA folks and, most exciting for me, several of the Core Apps developers from our community!

I haven’t been in attendance at many conferences over the past couple of years, and without an in-person UDS I haven’t had a chance to meet up and hang out with anybody outside of my own local community. So this was a very nice treat for me personally, to spend the week with such awesome and inspiring contributors.

It wasn’t a vacation though; sprints are lots of work, more work than UDS.  All of us were jumping back and forth between high-information-density discussions on how to implement things, and diving into long heads-down work to get as much implemented as we could. It was intense, and now we’re all quite tired, but we worked together well.

I was particularly pleased to see the community guys jumping right in and thriving in what could have very easily been an overwhelming event. Not only did they all accomplish a lot of work, fix a lot of bugs, and implement some new features, but they also gave invaluable feedback to the developers of the toolkit and tools. They never cease to amaze me with their talent and commitment.

It was a little bittersweet though, as this was also the last sprint with Jono at the head of the community team.  As most of you know, Jono is leaving Canonical to join the XPRIZE foundation.  It is an exciting opportunity to be sure, but his experience and his insights will be sorely missed by the rest of us. More importantly, though, he is a friend to so many of us, and while we are sad to see him leave, we wish him all the best and can’t wait to hear about the things he will be doing in the future.

Read more
Inayaili de León Persson

This post is part of the series ‘Making ubuntu.com responsive’.

If you’re transitioning a fixed-width website into a responsive one with several time and resource constraints, updating all your content to be mobile-friendly will likely not be an option.

It’s important to understand what your constraints are and work within them. This is what makes a good designer great — you could even say it’s the definition of our jobs.

This was certainly our case: very early in the process of converting ubuntu.com into a responsive site we knew we wouldn’t be able to edit the existing content. We did, however, follow a few ‘content rules’, and this is something you can define within your projects too.

Evergreen content

We created the Ubuntu Insights site to hold dated content like case studies, news and white papers, and to keep a constant influx of fresh content into ubuntu.com and other Ubuntu sites. Not only did creating Ubuntu Insights allow us to keep ubuntu.com fresh, it also gave us a place to move a lot of the detailed content that previously existed on the main site. We ended up with fewer and shorter pages, which addresses one of the challenges of converting a site to be responsive without content updates: the pages become too long.

The latest iteration of Ubuntu Insights.

We’ve been working on this project for a few months now, and will be releasing its final update soon, which will include a dedicated press area.

No content or information architecture updates

When you go back to work you completed some time ago, it’s natural to start seeing lots of things, big and small, that you want to improve. However, when the scope of a project is really tight (and which project isn’t?), it’s important not to fall into temptation.

Updating the structure and content of the website in preparation for making it responsive was not an option for us, as that would involve a fair number of people and time that were not at our disposal.

We decided to flag anything we’d want to look at again in the future, but moving things around was out of the question.

A couple of sections of the site were going to undergo changes that might impact content and information architecture, but those had been flagged at earlier stages, and we knew to only start reviewing and working on those later on in the responsive project.

No hidden content

A decision we made early on was that we weren’t going to hide any content from small screens.

We could still use common patterns like accordions and tabs to show content in a more digestible format, but all content should be available on small screens, just as it is on larger screens.

Accordions chunk the content nicely at smaller screen sizes.

Future plans

Improving the content on ubuntu.com will be a gradual process. As new pieces of content are added and updated, we’re now making sure that content is optimised for a smaller screen experience, being mindful of endless scrolling and keeping the message clear and focused on each page and section of the site.

Looking at mobile first has already pushed us towards simplifying our content. We’re trying to think about shorter, more carefully written text that relies less on images and animations. This includes paring down charts, cutting out text that is really there to support images, and questioning the reason for existence of any new fourth-level pages.

In the future, we’ll likely want to do a content revamp of the entire site, but that’s a huge project on its own and probably one that deserves its own series of posts.

We’d love to hear about your experiences and tips on improving content for a responsive iteration of your sites: add your thoughts in the comments section!

Read the next post in this series: “Making ubuntu.com responsive: making our grid responsive”

Reading list

Read more
facundo


In this post I detailed everything I'd want in the ideal test runner, with the idea of doing some work on that at the last PyCamp.

And so it went. Martín Gaitán and I sat down for a good while and started looking at whether nosetests could give us part or all of what we want.

One thing we didn't get into much is integrating nosetests with frameworks that provide a reactor (or main loop). Searching around a bit, I saw there is something to integrate it with Twisted, quite simple, but I found nothing for GTK or Qt... I don't know whether that's because it can't be done, or because it's automatic :p

So, let's get down to business... what do we need to have the ideal test runner?

Keep testing


The components

The first step, obviously, is to install the nosetests base. With this we get the first couple of points of what we wanted in my original post: support for receiving a directory and searching from there downwards, and, when given a file, running the tests in that file.

The first plugin we need to get where we want is "nose-progressive". This plugin considerably changes how we see test results. For example, there's no need to show every line of every executed test, in hierarchy, because on an error it gives us a small summary where we can see the info of the test that failed.

It also gives us a nice green OK if everything went well... and if there were problems we'll see those colourful summaries with a ton of great info.

From the info in that summary we can also extract the path to run just that test, or the whole class, the whole file, etc. But we also have another plugin, "nosecomplete", which lets us type out the path to a test, autocompleting in a very neat way and discovering what there is to run.

To run more than one test, a subset matching a regex, we have "nose-selecttests", which gives us a --select-tests= option where we can pass whatever we want to match.

Finally, we have several details. We can tell it to stop at the first failing test, and go no further, with -x. We can also ask it not to hide our prints or what we log, with --nocapture and --nologcapture. And we can ask it for a good summary of which tests take longest with --with-timer (we need the "nose-timer" plugin).
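Putting it all together, a typical invocation using everything above might look like this (the regex and the path are just examples):

    nosetests -x --nocapture --nologcapture --with-timer --select-tests=serializer myproject/tests/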


Setting up the environment

The first thing, obviously inside your project's virtualenv, is to install nose and all its plugins:

    pip install nose nose-progressive nose-selecttests nose-timer nosecomplete

For the autocomplete plugin, since we actually autocomplete from the shell, we have to hook the shell up to the nose plugin. It's just a matter of copying and pasting something; the instructions are here.

Finally, we have to tell nose, when we run it, to use this or that plugin, and in what way. Here things get a bit mixed... some configs can go in $HOME/.noserc...

    [nosetests]
    with-progressive=1
    nologcapture=1
    verbose=1

..., but others have to be specified when running it:

    nosetests --progressive-bar-filled=2 --progressive-function-color=1 --progressive-dim-color=5

This last bit could be turned into a bash alias, or simply wrapped in a 'test' script in the project (together with some pyflakes or pylint, etc.).

In short, the important thing is: Keep Testing :)

Read more
Inayaili de León Persson

Latest from the web team — May 2014

We’re fast approaching the summer, and the first few sunny days have already arrived in London. The web team cannot slow its pace though…

In the last few weeks we’ve worked on:

  • Responsive ubuntu.com: we’ve had a sprint to clean up our processes and CSS files after the big responsive release last month
  • Ubuntu.com: we’ve updated our Jumpstart service to include the exciting new Orange Box Micro-cluster and Your cloud product pages in preparation for the OpenStack Developer Summit
  • Juju GUI: we’ve finished creating new personas
  • Ubuntu OpenStack Interoperability Lab: we’ve completed the report design
  • Ubuntu OpenStack Installer: the installer was presented at the OpenStack Developer Summit last week, and we’ve done iterations on the designs based on recent user research
  • Fenchurch: we’ve moved Fenchurch into a proper Django project, nearly completed the first phase of a new asset server with a new Juju charm, and set up a new Fenchurch instance for the new legal website
  • Ubuntu Insights: we’ve made the move from Ubuntu Resources to Ubuntu Insights, and launched the desktop version of the site
  • Las Vegas sprint: we worked on updated, mobile-first bundle and charm details pages and started planning for the next cycle
  • Partners: we’ve completed the final UX and copy for this new Ubuntu website

And we’re currently working on:

  • Responsive ubuntu.com: we’re now in the process of updating our web style guide documents before the public release of the new styles
  • Ubuntu Insights: we’re adding the final touches before launching the press centre in the next few weeks
  • Juju GUI: we’re planning the work for the next cycle
  • Fenchurch: we’re working on getting the Juju charms in production for the new legal site, finishing up the asset server and planning the development of our new partners website
  • Partners: we’re currently building the new partners website
  • Legal pages: we’re now in the process of building the new hub that will hold all our legal information
  • Chinese website: we’ve finalised UX and copy for this upcoming Ubuntu site

If you’d like to join the web team, we are currently looking for experienced user experience and web designers!

The design team getting ready to move desks, at the end of April.

Have you got any questions or suggestions for us? Would you like to hear about any of these projects and tasks in more detail? Let us know your thoughts in the comments.

Read more
bmichaelsen

Train Stops

And the sons of pullman porters and the sons of engineers
Ride their father’s magic carpets made of steel
Mothers with their babes asleep are rockin’ to the gentle beat
And the rhythm of the rails is all they feel

– The City of New Orleans, Willie Nelson interpreting Steve Goodman

So, LibreOffice does its releases on a train release schedule, and since we recently modified the schedule a bit (by putting out the alpha1 release earlier), I took the opportunity to take a closer look and explain a bit of what we are doing. With this, every 6 months of LibreOffice development currently look roughly like this:
week after x.y.0  development     release candidates            finalized releases
                                  fresh         stable          fresh    stable
 0                                                              x.y.0
 1                                x.y.1~rc1     x.(y-1).4~rc1
 2
 3                                x.y.1~rc2     x.(y-1).4~rc2
 4                                                              x.y.1    x.(y-1).4
 5
 6                                x.y.2~rc1
 7
 8                                x.y.2~rc2     x.(y-1).5~rc1
 9                                                              x.y.2
10                                              x.(y-1).5~rc2
11                                x.y.3~rc1                              x.(y-1).5
12
13                x.(y+1)~alpha1  x.y.3~rc2
14                                                              x.y.3
15
16
17
18                x.(y+1)~beta1
19
20                x.(y+1)~beta2                 x.(y-1).6~rc1
21
22                x.(y+1)~rc1                   x.(y-1).6~rc2
23                                                                       x.(y-1).6
24                x.(y+1)~rc2
25                x.(y+1)~rc3

The last two column groups are the most visible to most visitors of the LibreOffice website: those are the versions found on the LibreOffice Fresh and LibreOffice Stable download pages. We are roughly at week 18 after the 4.2.0 release now, and the versions available are 4.2.4 fresh and 4.1.6 stable. A careful reader will note that according to that schedule we should be at 4.2.3 and 4.1.5 — that is true, but the 4.2 series had an extra 4.2.1 intermediate release to adjust the schedule of 4.2 in the direction of the current plan. This is not expected for future releases (also note that there is always some flexibility in the plan to allow for holidays etc.)

If you count all the prereleases, release candidates and releases, you will find that we do 25 of those in 26 weeks. Besides the fact that this is a lot of work for release engineers, one might wonder if anyone can keep up with that, and if so — how? The answer depends on how you are using LibreOffice.

self deployment on LibreOffice fresh

If you are a user or a small business installing LibreOffice yourself, you will probably run LibreOffice fresh, and the table above simplifies for you as follows:

week after x.y.0  development     release candidates   finalized releases
 0                                                     x.y.0
 1                                x.y.1~rc1
 4                                                     x.y.1
 6                                x.y.2~rc1
 9                                                     x.y.2
11                                x.y.3~rc1
14                                                     x.y.3
18                x.(y+1)~beta1
20                x.(y+1)~beta2
22                x.(y+1)~rc1
24                x.(y+1)~rc2
25                x.(y+1)~rc3

The last column shows the releases you are running. If you are a member of the LibreOffice community, it would be very helpful if you also spent some time during this 6-month period on three actions:

  • running at least one of the release candidates in the table (available for download here) before the final is released (see the sketch after this list for installing one side by side).
  • running at least one of the beta releases in the table. Note that there will be a bug hunting session on the 4.3.0 beta release this week, which will help you get started.
  • running a nightly build once anywhere in weeks 1-18. Note that if you are getting excited about seeing the latest and greatest builds while they are still steaming, there are tools that can help you with this on Linux and Windows.
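
Prereleases do not have to displace your day-to-day office suite: the alpha and beta builds from the download server install side by side with the distribution packages. As a minimal sketch for a Debian-based system, assuming you have downloaded the current prerelease tarball (the exact file name depends on the version and architecture):

$ tar xzf LibreOffice_4.3.0.0.beta1_Linux_x86-64_deb.tar.gz
$ cd LibreOffice_4.3.0.0.beta1_Linux_x86-64_deb/DEBS
$ sudo dpkg -i *.deb

The packages from such a build should land under /opt with a libreofficedev prefix, leaving your regular installation alone.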

If you do each of these three things once in this six-month timeframe and report any issues you find, you are already helping LibreOffice a lot, and you are making sure that the finalized releases of the fresh series not only contain all the latest features, but are also free of severe regressions.

bigger deployments on LibreOffice stable

If you are not installing LibreOffice yourself, but instead have a major deployment administrated centrally, things are a bit different. You might be more conservative and interested in the releases from LibreOffice stable. And you probably have professional support from a certified developer or a company employing certified developers.

week after x.y.0   development release   candidates      finalized releases
 1                                       x.(y-1).4~rc1
 4                                                       x.(y-1).4
 8                                       x.(y-1).5~rc1
11                                                       x.(y-1).5
13   x.(y+1)~alpha1
18   x.(y+1)~beta1
20                                       x.(y-1).6~rc1
23                                                       x.(y-1).6

If you intend to deploy one series of LibreOffice (e.g. 4.3), there are two things I highly recommend doing:

  • make the alpha and beta releases available to interested volunteers in your deployment early: they might find bugs or regressions that are specific to your use of the software.
  • make the release candidates of the versions that you intend to deploy available to your users early.

Of these two actions, the first is by far the more important: it identifies issues early in the life cycle and gives both your support provider and the LibreOffice developer community at large time to resolve them. In fact, I would argue that if you have a major deployment, the only excuse for not making prereleases available is that you made nightly builds available instead.

Ubuntu

So, Ubuntu qualifies as a “bigger deployment”, and I have to take care of LibreOffice on it. In addition, people want to be able to run the latest and greatest LibreOffice releases from the LibreOffice fresh series. Do I follow my own recommendations here? Yes, mostly I do:

  • both the LibreOffice fresh and LibreOffice stable series are available from PPAs for Ubuntu and are updated regularly and quickly once an rc2 is available.
  • prereleases are made available as bibisect repositories rather quickly (built on Ubuntu 12.04 LTS). In addition, fully packaged versions of LibreOffice are built in the prereleases PPA as early as beta1.

So, you are invited to run or test builds from these PPAs, or download the bibisect repositories, to keep LibreOffice releases coming in the steady and stable fashion they do. Finally, there is a bug hunting session for LibreOffice this week, and as said above, no matter if you are running a huge deployment or installing on your own, you are helping LibreOffice (and yourself, as a user of LibreOffice) a lot by testing the prereleases.
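
If you want to follow one of these series on Ubuntu, adding the PPA is all it takes. As a minimal sketch for the fresh series, assuming ppa:libreoffice/ppa is the archive carrying fresh for your Ubuntu release (check the PPA description to be sure):

$ sudo add-apt-repository ppa:libreoffice/ppa
$ sudo apt-get update
$ sudo apt-get dist-upgrade

After that, updated LibreOffice packages arrive through the normal update channels like any other package.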


Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140520 Meeting Agenda


ARM Status

Nothing new to report this week


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

http://people.canonical.com/~kernel/reports/kt-meeting.txt


Milestone Targeted Work Items

Here’s our todo list until we formulate a better plan for tracking work
next week at our team sprint:

   apw    core-1405-kernel    2 work items   
   ogasawara    core-1405-kernel    2 work items   


Status: Utopic Development Kernel

We have uploaded our first v3.15 based kernel, 3.15.0-1.5, to the Utopic
archive. It is currently based on the v3.15-rc5 upstream kernel.
—–
Important upcoming dates:
Mon-Wed June 9 – 11, vUDS (~3 weeks away)
Thurs Jun 26 – Alpha 1 (~5 weeks away)
Fri Jun 27 – Kernel Freeze for 12.04.5 and 14.04.1 (~5 weeks away)

  • NOTE: The PrecisePangolin/ReleaseSchedule notes Kernel Freeze as Aug 9. I believe this should be amended to Jun 27.


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Status for the main kernels, until today (May 20):

  • Lucid – Prep week
  • Precise – Prep week
  • Quantal – Prep week
  • Saucy – Prep week
  • Trusty – Prep week

    Details of the currently open tracking bugs:

  • http://people.canonical.com/~kernel/reports/kernel-sru-workflow.html

    For SRUs, the SRU report is a good source of information:

  • http://people.canonical.com/~kernel/reports/sru-report.html

    Schedule:

    cycle: 18-May through 07-Jun
    ====================================================================
    16-May Last day for kernel commits for this cycle
    18-May – 24-May Kernel prep week.
    25-May – 31-May Bug verification & Regression testing.
    01-Jun – 07-Jun Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more
Louis

A few years ago, I started to participate in the packaging of makedumpfile and kdump-tools for Debian and Ubuntu. I am currently applying for the formal status of Debian Maintainer to continue that task.

For a while now, I have been noticing that our version of the kernel dump mechanism lacked a functionality that has been available on RHEL and SLES for a long time: remote kernel crash dumps. On those distributions, it is possible to define a remote server to be the receptacle of the kernel dumps of other systems. This can be useful for centralisation, or to capture dumps on systems with limited or no local disk space.

So I am proud to announce the first functional beta release of kdump-tools with remote kernel crash dump functionality for Debian and Ubuntu!

For those of you eager to test, or not interested in the details, you can find a packaged version of this work in a Personal Package Archive (PPA) here:

https://launchpad.net/~louis-bouchard/+archive/networked-kdump

New functionality : remote SSH and NFS

In the version currently available in Debian and Ubuntu, the kernel crash dumps are stored on local filesystems. Starting with version 1.5.1, they are stored in a timestamped directory under /var/crash. The new functionality allows you to define either a remote host accessible through SSH, or an NFS mount point, to be the receptacle for the kernel crash dumps.

A new section has been added to the /etc/default/kdump-tools file:

# ---------------------------------------------------------------------------
# Remote dump facilities:
# SSH - username and hostname of the remote server that will receive the dump
# and dmesg files.
# SSH_KEY - Full path of the ssh private key to be used to login to the remote
# server. use kdump-config propagate to send the public key to the
# remote server
# HOSTTAG - Select whether the hostname or the IP address will be used as a
# prefix to the timestamped directory when sending files to the
# remote server. 'ip' is the default.
# NFS - Hostname and mount point of the NFS server configured to receive
# the crash dump. The syntax must be {HOSTNAME}:{MOUNTPOINT} 
# (e.g. remote:/var/crash)
#
# SSH="<user@server>"
#
# SSH_KEY="<path>"
#
# HOSTTAG="hostname|[ip]"
# 
# NFS="<nfs mount>"
#

The kdump-config command also gains a new option, propagate, which is used to send a public ssh key to the remote server so that passwordless ssh commands can be issued to the remote SSH host.

These options and commands are nothing new: I simply based my work on existing functionality from RHEL and SLES, so if you are well acquainted with the RHEL remote kernel crash dump mechanism, you will not be lost on Debian and Ubuntu. I want to thank those who built this functionality on those distributions; it was a great help in getting it ported to Debian.

Testing on Debian

First of all, you must enable the kernel crash dump mechanism at the kernel level. I will not go into details, as it is slightly off topic, but you should (see the sketch after this list):

  1. Add crashkernel=128M to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub
  2. Run update-grub
  3. Reboot
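
As an illustration, the end result would look roughly like this (the quiet option stands in for whatever is already on the line in your /etc/default/grub):

GRUB_CMDLINE_LINUX_DEFAULT="quiet crashkernel=128M"

$ update-grub
$ reboot

# after the reboot, the reservation shows up on the kernel command line:
$ grep -o crashkernel=128M /proc/cmdline
crashkernel=128M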

Install the beta packages

The packages in the PPA can be installed on Debian with add-apt-repository. This command is in the software-properties-common package, so you will have to install it first:

$ apt-get install software-properties-common
$ add-apt-repository ppa:louis-bouchard/networked-kdump

Since you are on Debian, the result of the last command will be wrong, as the series defined in the PPA is for Utopic. Just use the following commands to fix that:

$ sed -i -e 's/sid/utopic/g' /etc/apt/sources.list.d/louis-bouchard-networked-kdump-sid.list 
$ apt-get update
$ apt-get install kdump-tools makedumpfile

Configure kdump-tools for remote SSH capture

Edit the file /etc/default/kdump-tools and enable the kdump mechanism by setting USE_KDUMP to 1. Then set the SSH variable to the remote hostname and credentials that you want to use to send the kernel crash dump. Here is an example:

USE_KDUMP=1
...
SSH="ubuntu@TrustyS-netcrash"

You will need to propagate the ssh key to the remote SSH host, so make sure that you have the password of the remote server’s user you defined (ubuntu in my case) for this command:

root@sid:~# kdump-config propagate
Need to generate a new ssh key...
The authenticity of host 'trustys-netcrash (192.168.122.70)' can't be established.
ECDSA key fingerprint is 04:eb:54:de:20:7f:e4:6a:cc:66:77:d0:7c:3b:90:7c.
Are you sure you want to continue connecting (yes/no)? yes
ubuntu@trustys-netcrash's password: 
propagated ssh key /root/.ssh/kdump_id_rsa to server ubuntu@TrustyS-netcrash

If you have an existing ssh key that you want to use, you can use the SSH_KEY option in /etc/default/kdump-tools to point to your own key:

SSH_KEY="/root/.ssh/mykey_id_rsa"

Then run the propagate command as previously:

root@sid:~/.ssh# kdump-config propagate
Using existing key /root/.ssh/mykey_id_rsa
ubuntu@trustys-netcrash's password: 
propagated ssh key /root/.ssh/mykey_id_rsa to server ubuntu@TrustyS-netcrash

It is good practice to verify that the remote SSH host can be accessed without a password. You can use the following command to test it (with your own remote server as defined in the SSH variable in /etc/default/kdump-tools):

root@sid:~/.ssh# ssh -i /root/.ssh/mykey_id_rsa ubuntu@TrustyS-netcrash pwd
/home/ubuntu

If the passwordless connection works, then everything should be all set. You can proceed with a real crash dump test if your setup allows for it (not on a production environment, for instance).
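
For completeness, here is the usual way to force a crash for such a test. This is a sketch for a disposable test machine only: the last command panics the kernel on the spot.

root@sid:~# kdump-config show | grep state    # should report: ready to kdump
root@sid:~# echo 1 > /proc/sys/kernel/sysrq   # make sure the magic SysRq interface is enabled
root@sid:~# echo c > /proc/sysrq-trigger      # panics the kernel and triggers the dump

Once the machine has rebooted, the dump and dmesg files should be waiting in the timestamped directory on the remote server.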

Configure kdump-tools for remote NFS capture

Edit the /etc/default/kdump-tools file and set the NFS variable to the NFS mount point that will be used to transfer the crash dump:

NFS="TrustyS-netcrash:/var/crash"

The format needs to be the same syntax that would normally be used to mount the NFS filesystem. You should test that your NFS filesystem is indeed accessible by mounting it manually:

root@sid:~/.ssh# mount -t nfs TrustyS-netcrash:/var/crash /mnt
root@sid:~/.ssh# df /mnt
Filesystem 1K-blocks Used Available Use% Mounted on
TrustyS-netcrash:/var/crash 6815488 1167360 5278848 19% /mnt
root@sid:~/.ssh# umount /mnt

Once you are sure that your NFS setup is correct, you can proceed with a real crash dump test.

Testing on Ubuntu

As you would expect, setting things on Ubuntu is quite similar to Debian.

Install the beta packages

The packages in the PPA can be installed on Ubuntu with add-apt-repository (as on Debian, the command comes from the software-properties-common package):

$ sudo add-apt-repository ppa:louis-bouchard/networked-kdump

Packages are available for Trusty and Utopic.

$ sudo apt-get update
$ sudo apt-get -y install linux-crashdump

Configure kdump-tools for remote SSH capture

Edit the file /etc/default/kdump-tools and enable the kdump mechanism by setting USE_KDUMP to 1. Then set the SSH variable to the remote hostname and credentials that you want to use to send the kernel crash dump. Here is an example:

USE_KDUMP=1
...
SSH="ubuntu@TrustyS-netcrash"

You will need to propagate the ssh key to the remote SSH host, so make sure that you have the password of the remote server’s user you defined (ubuntu in my case) for this command:

ubuntu@TrustyS-testdump:~$ sudo kdump-config propagate
[sudo] password for ubuntu: 
Need to generate a new ssh key...
The authenticity of host 'trustys-netcrash (192.168.122.70)' can't be established.
ECDSA key fingerprint is 04:eb:54:de:20:7f:e4:6a:cc:66:77:d0:7c:3b:90:7c.
Are you sure you want to continue connecting (yes/no)? yes
ubuntu@trustys-netcrash's password: 
propagated ssh key /root/.ssh/kdump_id_rsa to server ubuntu@TrustyS-netcrash
ubuntu@TrustyS-testdump:~$

If you have an existing ssh key that you want to use, you can use the SSH_KEY option in /etc/default/kdump-tools to point to your own key:

SSH_KEY="/root/.ssh/mykey_id_rsa"

Then run the propagate command as previously:

ubuntu@TrustyS-testdump:~$ kdump-config propagate
Using existing key /root/.ssh/mykey_id_rsa
ubuntu@trustys-netcrash's password: 
propagated ssh key /root/.ssh/mykey_id_rsa to server ubuntu@TrustyS-netcrash

It is good practice to verify that the remote SSH host can be accessed without a password. You can use the following command to test it (with your own remote server as defined in the SSH variable in /etc/default/kdump-tools):

ubuntu@TrustyS-testdump:~$ sudo ssh -i /root/.ssh/mykey_id_rsa ubuntu@TrustyS-netcrash pwd
/home/ubuntu

If the passwordless connection works, then everything should be all set.

Configure kdump-tools for remote NFS capture

Edit the /etc/default/kdump-tools file and set the NFS variable to the NFS mount point that will be used to transfer the crash dump:

NFS="TrustyS-netcrash:/var/crash"

The format needs to be the same syntax that would normally be used to mount the NFS filesystem. You should test that your NFS filesystem is indeed accessible by mounting it manually (you might need to install the nfs-common package):

ubuntu@TrustyS-testdump:~$ sudo mount -t nfs TrustyS-netcrash:/var/crash /mnt 
ubuntu@TrustyS-testdump:~$ df /mnt
Filesystem 1K-blocks Used Available Use% Mounted on
TrustyS-netcrash:/var/crash 6815488 1167488 5278720 19% /mnt
ubuntu@TrustyS-testdump:~$ sudo umount /mnt

Once you are sure that your NFS setup is correct, you can proceed with a real crash dump test.

Miscellaneous commands and options

A few other things are under the control of the administrator.

The HOSTTAG modifier

When sending the kernel crash dump, kdump-config will use the IP address of the server as a prefix to the timestamped directory on the remote host. You can use the HOSTTAG variable to change that default. Simply define this in /etc/default/kdump-tools:

HOSTTAG="hostname"

The hostname of the server will then be used as a prefix instead of the IP address.

Currently, this is only implemented for the SSH method, but it will be available for NFS as well in the final version.
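
To illustrate the difference, a dump sent from my test machine would end up in remote directories looking something like this (both the IP address and the timestamp format here are purely illustrative):

# with HOSTTAG="ip" (the default), the client's IP address is the prefix:
/var/crash/192.168.122.42-201405201030/
# with HOSTTAG="hostname", the client's hostname is the prefix:
/var/crash/sid-201405201030/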

kdump-config show

To verify the configuration that you have defined in /etc/default/kdump-tools, you can use kdump-config’s show command to review your options.

ubuntu@TrustyS-testdump:~$ sudo kdump-config show
USE_KDUMP: 1
KDUMP_SYSCTL: kernel.panic_on_oops=1
KDUMP_COREDIR: /var/crash
crashkernel addr: 0x2d000000
SSH: ubuntu@TrustyS-netcrash
SSH_KEY: /root/.ssh/kdump_id_rsa
HOSTTAG: ip
current state: ready to kdump
kexec command:
 /sbin/kexec -p --command-line="BOOT_IMAGE=/vmlinuz-3.13.0-24-generic root=/dev/mapper/TrustyS--vg-root ro console=ttyS0,115200 irqpoll maxcpus=1 nousb" --initrd=/boot/initrd.img-3.13.0-24-generic /boot/vmlinuz-3.13.0-24-generic

If the remote kernel crash dump functionality is set up, you will see the corresponding options listed in the output of the command.

Conclusion

As outlined at the beginning, this is the first functional beta version of this work. If you are curious, you can find the code I am working on here:

http://anonscm.debian.org/gitweb/?p=collab-maint/makedumpfile.git;a=shortlog;h=refs/heads/networked_kdump_beta1

Don’t hesitate to test and let me know if you find issues!

Read more