Canonical Voices

Victor Palau

In my previous post, Jorge Castro commented that a new super wordpress charm is in the works. I want to keep working on my blog site configuration (theme and plug-ins) without missing out on any updates, which means I needed to stop forking the wordpress charm and instead find a way to use the one in the charm store and then roll out my configuration onto the same instance.

I mentioned that I might try splitting my configuration out into a Subordinate service, and that is what I have done :) It was actually pretty easy.

I created a new charm called wordpress-conf and set its metadata.yaml file to contain:

name: wordpress-conf
summary: "WordPress configuration"
subordinate: true
description: |
  Provides configuration for wordpress blogs
  Plugins:
  - WordPress importer
  - super cache plug-in
requires:
  logging:
    interface: logging-directory
    scope: container
  juju-info:
    interface: juju-info
    scope: container

As you can see, it has a line calling out that this charm is a subordinate, and it has two requirements. The two requirements are really there for testing purposes. The “logging” requirement is an explicit requirement that the charm you are “subordinating” to must have defined, while “juju-info” is an implicit requirement that is defined for all charms. What this means is that, using “juju-info”, I can deploy my charm against any service. The key is to define the scope as container.

The magic happens not when you deploy a subordinate charm, but when you add a relationship to another service. For example, the following commands result in a wordpress instance set up using the WP charm in the charm store, but with my plugin and theme configured:

juju bootstrap
juju deploy wordpress
juju deploy mysql
juju add-relation wordpress mysql
juju expose wordpress
juju deploy --repository=~/mycharm local:precise/wordpress-conf
juju add-relation wordpress-conf wordpress

Pretty cool, eh!? I should now be able to upgrade the two charms independently.
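Since the theme and plug-ins now live in their own charm, each one can in principle be upgraded on its own. A minimal sketch of what that might look like with juju's upgrade-charm command (repository path as above):

```shell
# Pick up the latest wordpress charm from the charm store,
# leaving wordpress-conf untouched
juju upgrade-charm wordpress

# Later, roll out changes to the configuration charm only
juju upgrade-charm --repository=~/mycharm wordpress-conf
```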

Read more
Victor Palau

Reposting here the blog entry that I uploaded to

Have you been wondering if your Web application will work with the new generation of Hyperdense ARM Servers? Now you can easily find out by using Ubuntu and Amazon Web Services. Canonical has made available in Amazon Web Services an AMI image for developers wishing to experiment with Ubuntu ARM Server. Dann Frazier is the engineer behind this initiative, and I took some of his time today to ask him a few questions:

How did this come about?
We wanted to do some internal functional testing of the 12.04 release across our global team without shipping hardware around. We had a QEMU model, and using cloud systems to host it seemed like an excellent way to grow our (emulated) machine count.

Can you give me some examples of what I could do with it?
Basically, anything you can do with Ubuntu Server. You can install packages, deploy Juju charms, test your web applications, etc. However, I would strongly suggest not using it for any production work or performance testing – being an emulated environment, you will notice some overhead.

Who do you expect will use this new AMI?
Developers looking to test their applications on ARM, people wanting to test Juju charm deployments in a multi-architecture environment, and anyone just looking to kick the tires.

This is all great. How do I get my hands on it?
Canonical has published an AMI on Amazon EC2. You will need an Amazon Web Services account; then just go into your Management Console for EC2 and launch a new instance. Select “Community AMIs” and look for AMI ID ‘ami-aef328c7’. (We’ll keep the latest AMI ID posted.)

Are there any limitations compared to a real hardware box?
The AMI provides an Ubuntu 12.04 (‘armhf’) system running on an emulated hardware system. Performance is limited due to the emulation overhead. This AMI requires the use of an m1.large instance type due to memory requirements.
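If you prefer the command line to the Management Console, the same thing can likely be done with the classic EC2 API tools (the key pair name here is an assumption):

```shell
# Launch the Ubuntu ARM AMI; m1.large is required due to memory needs
ec2-run-instances ami-aef328c7 --instance-type m1.large --key my-keypair

# Find the instance's public hostname, then log in as the ubuntu user
ec2-describe-instances
ssh ubuntu@ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com
```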

Once again, thanks to Dann and the Canonical team for sharing this neat tool with the community. It sounds great and easy to set up. So, what are you waiting for?

Read more
Victor Palau

For a while now, I have been toying with the idea of moving away from a hosted site for this blog. The main reason: I am not really happy having to pay for every single simple thing… adding an additional URL to the site, taking away advertising, editing a CSS… it really stops me from playing :)

What had prevented me from doing this in the past was that I hadn’t really got much experience setting up WordPress or MySQL. What I really needed was a safe (and free) sandbox to try changes to my site until I was happy with it, and then easily deploy it live… Can you say Juju?

With Juju you can do all your playing locally using LXC, save all the changes into a Charm and then just deploy them into a public cloud. Perfect!


The first thing I did was to set-up an Amazon AWS account and configure my juju environment to deploy to the public cloud. My rationale here was: if I can’t get vanilla WordPress in a live public site, then there is little point continuing with the experiment.

This was actually pretty easy: I just followed the Getting Started guide. The only stumbling block was that I was using my travel laptop at the time, which didn’t have my Launchpad ssh keys. You need to create ssh keys to use Juju, but apparently you also need to publish them to your Launchpad account. Once this was done, I had a public WordPress instance in just 5 commands.
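For the record, the five commands were essentially the same ones the Getting Started guide walks you through:

```shell
juju bootstrap              # start the environment on EC2
juju deploy wordpress       # the blog itself
juju deploy mysql           # its database
juju add-relation wordpress mysql
juju expose wordpress       # make it publicly reachable
```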

Next step: destroy the environment and stop paying :) Now I needed to bring up my cheap sandbox.

Again, this is pretty easy to set up; just follow the Getting Started guide. I hit another road block when my deployment instances seemed not to be doing much: “juju status -e local” showed them in pending state and the logs did not display any activity… A bit of Googling later, I found that Jorge Castro had hit the same problem and found the solution on Ask Ubuntu.

With my local WordPress instance now fully up and running, I needed to upload my own content. To do this, I just needed to install the wordpress importer plugin. This is fairly trivial to do by hand, thanks to the very useful “juju scp” and “juju ssh” commands, but how to do it via a charm? I wanted to make sure that the next time I deployed wordpress, it would already have this plugin installed. Crudely, this is what I did:

  • Using the charm-tools I got the wordpress charm locally (charm get wordpress)
  • I then edited the install file under hooks/ to include:
    apt-get -y install wordpress pwgen wget unzip
    sudo unzip wordpress-importer.0.6 -d /usr/share/wordpress/wp-content/plugins/
  • redeploy using the locally stored charm: juju deploy --repository=~/charms local:precise/wordpress -e local

Guess what, it worked. I did get some warnings (WARNING Charm ‘.mrconfig’ has an error) that I am yet to iron out, but when the wordpress instance came up the new plugin was there:

That is all for today. Before I go, one last useful hint courtesy of James Page: add “default: {name of your env}” if you have multiple environments but normally always use one. It saves me having to type “-e local” all the time.
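For reference, this is the kind of change James is describing in ~/.juju/environments.yaml (the environment name matches the one used in this post):

```yaml
# ~/.juju/environments.yaml
default: local   # saves typing "-e local" on every command
environments:
  local:
    type: local
```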

Read more
Victor Palau

I had a CR-48 Chromebook for a while, which has recently fallen into disuse. While I have never been totally convinced about Chrome OS being a polished, well designed interface that simplifies the “always connected” user journey that Google was envisioning, I liked the concept.

Now I am reading in Ars Technica that Chrome OS is getting a brand new look that is… basically… well, not new. While I am sure there are many technical advantages to a fully hardware-accelerated window manager, my issue is with the [lack of] concept.

Google has spent much energy convincing users that they do not need local apps, that they can do everything in the cloud, and that the portal to this experience is Chrome. Having an OS in which the only application that could possibly run, at full screen, was the browser was a controversial but bold move. Moreover, it really hit home the user experience they were targeting.

This new UI seems to be sending the opposite message. It seems to be saying: “OK, we were wrong… but maybe if we make Chrome OS look more like Windows you will like it better?”. Is that really the message? Well, if you give me an app launcher on a desktop, I am bound to ask for local apps. If you give me off-line sync for Google apps, I am bound to ask for local apps.

I fear Google is paving the road to [Windows Vista] hell with good window manager intentions. I am primarily an Ubuntu user, and what I like about it is that every single release over the last few years has continued to build on a design concept. Every new release is closely wrapped around a consistent user message. Take as an example the HUD introduced in 12.04: it is new and different, but somehow it feels like it always belonged in Unity.

I am bought into the Ubuntu user experience, and I am excited to see what a new release will bring. If I had bought into the Chrome OS experience, I think I would be asking for a refund.

Anyway, I am looking forward to the new Chrome OS UI being available for the CR-48. Maybe I will change my mind once I get my hands on it.

Read more
Victor Palau

I have been using Scrum for a while. Back in my previous role, we tried using Scrum within the integration team that was creating the nightly builds and our bi-weekly releases. It brought good results; the team especially liked the visibility of the task board and the daily stand-ups.

We did find it a bit artificial to have a cadence. We were supposed to put out a release every two weeks, but we ended up doing it as often as we could (or as often as made sense), as we were not in control of when new software was landing on our plate.

Since then, I’ve had this nagging thought that Scrum might not be appropriate for service teams or teams with a large portion of maintenance/customer support work. I have found that iterations shorter than 2 weeks can be overburdened by the demo, planning and sizing overheads. On the other hand, two weeks is too much time for teams with Service Level Agreements of days or hours. It also seems a bit cumbersome for short projects (~1 month), where you end up with 1 or 2 iterations… What to do!?

At Canonical, several teams have used Kanban to improve their development processes, so I started reading up on it, which is when I stumbled on this excellent article on Kanban vs Scrum.

The author won me over straight away by not trying to decide which of the two practices is best but instead doing a great job at remaining impartial.

Looking back at the Symbian Foundation’s integration team, it seems that Kanban would have been better suited. It retains the focus on making information visible while concentrating on reducing work in progress (WIP). It seems better suited to a “specialist” team, where most members share the same skills and work on similar tasks; Scrum seems to work better for cross-discipline project teams.

Also, the emphasis on managing a constant flow of work is one that resonates with teams whose work “currency” is measured in days of effort (bugs?) rather than in large projects lasting months at a time.

While Scrum has been very successfully adopted by the Certification team at Canonical, my previous experience with the integration team had stopped me from cheering on Scrum for teams that have a constant flow of work. Now, we are thinking of going Kanban! Don’t get me wrong, we are going to continue using Scrum. It is just a case of using the right tool for each job. I will keep you posted on how it goes.

If you have any advice, tips or gotchas that you could share with us, I would be most grateful if you could drop your comments here!

Time to try something new (by theonlyanla)


Read more
Victor Palau

The Ubuntu Certification website has just got better. We have rolled out improvements to how we list systems and provided a powerful search feature. We want to ensure that you get to the information you need as quickly as possible.

As part of the Certification website, we provide a feedback mechanism through Launchpad Answers. Over the last year, we have seen a trend of questions around:

  • Most models are sold with different graphics cards, processors… so which one is the one listed as certified?
  • Does a system listed as certified work with a version of Ubuntu that I can download, or only with the one that the manufacturer sells?
  • What release is this model certified for?

To address these questions, we have introduced some changes to the website. We now display which components are included in the certified system in the search results. We’ve also added an icon to indicate whether the system is certified only with a vendor image or with the standard Ubuntu image.

The new and simpler search interface eliminates confusion about what data is presented. A small filter box has been added to the website, allowing users to select the device type, Ubuntu release and image type that they are interested in.

If you have any comments on the new website design, I would really like to hear from you!

Read more
Victor Palau

Coinciding with the 2011 Ubuntu Hardware Summit, we are launching a new portal aimed at helping engineers at device manufacturers shipping Ubuntu systems:

The Ubuntu community is great. It provides users and developers with lots and lots of useful information. This means that sometimes finding the right information for you can take a bit longer than expected.

The portal content is a selection of the best articles from the Ubuntu community sites that are relevant to engineers at device manufacturers (OEMs and ODMs). The content has been selected by the Canonical Hardware Enablement team and builds on the good work of the Ubuntu Kernel team.

We will continue to add and improve the content of the portal over the coming months, including news on tools and techniques to help you better integrate Ubuntu with your hardware. Please let us know if there is specific content you would like to see there.

Read more
Victor Palau

Asus and Ubuntu in Portugal

Read more
Victor Palau

I have previously complained about the amount of gadgets that seem to be piling up by my bedside table, charging quietly every night… laptops, tablets, phones, kindles (yes, the plural is not a typo).

On top of that, I am growing frustrated with my DVR. Last week the new series of “The Mentalist” was broadcast in the UK. I set it to record in advance, but somehow it clashed and did not get recorded. Even with the missed show only one click away on the TV channel’s website, it turns out that my only options were to wait 4 days for a repeat on TV or go upstairs and watch it on the office desktop. Why is it so complicated!?

All of this frustration got me thinking, and I have come to some conclusions about what the future of my home computing is going to look like.

Centralised Content &  Specialised Consumption Devices

So it turns out that I am not going to give up my E-Ink screen for reading books. Why? Because it doesn’t hurt my eyes like a tablet screen does. Nor am I going to convince my son that watching Peppa Pig on the iPad is not any better than watching it on TV. Why? Dunno, he isn’t talking yet.

The future for me looks like it is going to involve a lot of different devices, and I am fine with that as long as:

  1. I don’t have to charge them too often – once a month would be about right,
  2. They are flexible and powerful enough to get them to do what I need them to do when I need it done,
  3. I can get all my content in all of them!

The good news is that the technology to allow all of this to happen is already being designed. Point number 3 is the easy one: you just need an Ubuntu One account. Points 1 & 2 I had considered incompatible for a long time, until I heard about big.LITTLE.

big.LITTLE is going to be BIG

How do you make a low powered device that gives you plenty of battery life, yet is capable of processing complex tasks? ARM seems to have a pretty good answer: big.LITTLE.

big.LITTLE is a System-on-a-Chip (SoC) design that pairs up to four top-notch Cortex-A15 cores with up to four very low-powered Cortex-A7 cores. The beauty is that they have very similar feature sets and the same architecture. ARM expects to be able to switch between them, depending on the tasks being performed, without the operating system noticing the difference.

In a nutshell, it’s like being able to choose between a Prius or a Ferrari engine without having to change cars! Just choose the one that suits your needs better for today’s journey.

This is one of the technologies that is going to ignite the next personal computing revolution. I’ll tell you all about the other ones soon ;)

Read more
Victor Palau

Ubuntu on ARM(Techcon)

Here is Ronald, doing a great job at explaining why Ubuntu on ARM is AWESOME!!!

Read more
Victor Palau

If you follow the Canonical blog, you will have seen that a new white paper has been published on how to implement UEFI Secure Boot in a manner that can be used by all users, including Linux users. The paper is signed and authored by Matthew Garrett from Red Hat, Jeremy Kerr from Canonical and James Bottomley, Linux kernel developer.

Since Microsoft talked about their plans for Secure Boot at /Build2011, there have been lots of things said on the matter. With more than 16,000 people signing the Free Software Foundation statement on “Secure Boot vs Restricted Boot”, it is clear that this is an issue that needed some attention.

It is great to see companies like Red Hat and Canonical getting together and coming up with recommendations that benefit the whole industry. The paper is well worth a read. Enjoy!

Padlocks of love by Wlodi

Read more
Victor Palau

Ubuntu 11.10 on ARM

I have been using Ubuntu 11.10 on ARM now for a couple of days and I have to say: it rocks! Ubuntu has a long history of supporting ARM Systems-on-a-Chip (SoCs), going back to 2008, but Ubuntu 11.10 is a significant milestone.

Introducing.. Ubuntu Server on ARM – Technology Preview

Canonical announced back in August that Ubuntu Server 11.10 would include the first ARM version of the product, and here it is. While this is just the first step on an exciting journey, it is worth celebrating that the voyage has started. I look forward to seeing what 12.04 LTS brings us in this space!

Playing with Ubuntu on ARM (Toshiba AC100)

It is hard to really grasp the full experience of Ubuntu on ARM when you are playing with a development board. For this reason, we have released a demo image for the Tegra2-based (Nvidia) Toshiba AC100.

Running Unity 2D, it shows off that Ubuntu on ARM is a great platform for computing, in a very compact design and with a very long battery life. For all these reasons, this is my system of choice to take to UDS-P.

If you have a Toshiba AC100, I encourage you to install Ubuntu 11.10 on it!

TI OMAP4 Panda Board

Powered by the Texas Instruments OMAP4430 processor, the Panda Board packs in “a dual-core 1 GHz ARM Cortex-A9 MPCore CPU, a PowerVR SGX540 GPU, a C64x DSP, and 1 GB of DDR2 SDRAM”, providing an affordable and competitive design tool for the embedded mobile space.

Ubuntu 11.10 on ARM is available in headless and full images for the Panda board. You can find download links and installation instructions here. You can also find Ubuntu 11.10 for OMAP3 (Beagle Board) there.

Freescale IMX53 QuickStart Board

The IMX53 Family is oriented towards automotive solutions. Ubuntu 11.10 on ARM is the first release of Ubuntu to provide support for the IMX53 QuickStart Board. You can find download links and installation instructions here.

Linaro and Ubuntu

Both the TI OMAP4 and Freescale images are based on the Linaro outputs for those SoCs. This has greatly increased our capacity to support ARM development boards.

Read more
Victor Palau

What Phone To Buy Next?

I don’t believe I am saying this, but I am no longer interested in the phone industry… The thing is, I have been paying attention to the gadget news all this year and I am pretty interested in the new kindles; however, I have not been interested in phones for a long time now.

Android has managed to make the phone industry boring. All the phones look the same, they run the same apps, they run the same services… YAWN! Do you feel the same way? The problem for me is that 1-2 years ago a phone was the only tech item that you really needed to access all services and do anything you could possibly want.

Since then, thanks to tablets and e-readers amongst others, the phone is no longer the ultimate convergence device. I am back to carrying multiple gadgets and a never-ending battery charging nightmare. Can someone invent the next evolution in computing devices, please?

So, this week Apple is launching the iPhone 5 – we will see…

Read more
Victor Palau

Ubuntu Friendly Needs You!

Are you running an up-to-date version of Oneiric? Do you have 15 minutes spare? YOU can help Ubuntu Friendly today! Read on…

The Ubuntu Friendly program is now in its test phase. One thing that we could really do with is some more real user data to test the website views. Ubuntu Friendly feeds on test submissions from Launchpad.

So what do I need to do?

You need to run the recently improved System Test tool. This tool is in the default Oneiric image, and the run-time has been reduced to under 15 minutes (disclaimer: this depends on how powerful your system is!).

If you are not sure how to find this tool, just go to the Unity search lens, type “System Testing” and click on the icon that looks like a computer screen with a tick mark.

Just follow the instructions and, if you don’t mind, ping me a comment back on this post with how long it took you to run it and any other feedback you might have!

Go on, it is Friday don’t you know…

Read more
Victor Palau

We frequently get asked what we test in the certification program. While we do have a simple page covering this topic, sometimes we are asked for further details. We have now updated the certification program guide with a more comprehensive description of the test cases. We review, and update if necessary, the list of test cases for each release:

Note that these test cases only apply to hardware that actually supports the functionality. For example, we do not run the bluetooth tests on a laptop that does not list bluetooth on its specifications.

Here is what the program guide says for Oneiric:

We use three different lists:

  • Whitelist, or features that are required for certification. If any of the tests in the whitelist fails, the certification will fail.
  • Greylist, or features that are tested, but that don’t block certification. If any of the tests under the greylist fail, a note will be added to the certificate to warn the potential customer or user.
  • Blacklist, or features that are not currently tested. We will consider adding more tests as needed.


Processors:
  •  ia32 (x86), x86_64 and ARM processors are tested to ensure proper functionality.
  •  Stress tests are performed to ensure that they work during high utilization as well.

Memory:
  • Proper detection
  • General usage
  • Stress testing

Hard drive(s) tests are conducted to validate proper operation:

  • Performance
  • High load

Optical drives (CD/DVD):

  • Read
  • Write

Video:
  • Primary display (laptop panels or primary video port on desktops)
  • Multiple-Monitor (where supported, we test multi-head display (2 heads))
  • External video connections (HDMI, DisplayPort, VGA, RGB, etc.)
  • Multiple resolutions


Audio:
  • Speakers and Headphones
  • Microphone (Built-in, External)
  • USB Mic, USB Headphones

Network:
  • Cable
  • Wireless

USB controllers. Several USB devices are used to ensure all USB ports operate as expected:

  • Keyboard
  • Mouse
  • Storage

Bluetooth controllers. Several bluetooth devices are used to ensure they work:

  • Mouse
  • Keyboard
  • File transfer

Built-in Web cams
Lid sensors

  • Lid open
  • Lid close

Input devices:

  • Internal keyboard
  • Touchpad
  • Touchpoint
  • Touch screens (single touch)

Primary special keys (volume, mute)
Suspend/Resume (30 iterations)
Tested after resume:

  • Wireless
  • Audio
  • Bluetooth
  • Display resolutions
  • USB controllers

External Expansion Port

  • PCExpress

Firewire external storage devices
Data Card ports

  • SD
  • SDHC

Hibernate/Resume (30 iterations)
Data cards that are not SD or SDHC (for example MMC)

  • Hybrid graphics: if UMA or discrete graphics work out of the box (all ports working), we will note which card is the one that is certified.
  • Whether proprietary drivers are necessary to enable 3D graphics.


  • Wi-fi slider: if the slider to turn the wi-fi on/off is not working, but the wi-fi can be disconnected through the UI controls, this failure is accepted (and noted).

Secondary special keys:

  • Brightness
  • Media Control
  • Wireless
  • Sleep


Blacklist (not currently tested):
  • Fingerprint readers
  • HDMI/DisplayPort audio
  • Surround audio
  • Multitouch touchpads
  • Multitouch screens
  • Accelerometer
  • Specific USB 3.0 devices
  • 3G connections

Read more
Victor Palau

After spending some time last week locked in a room thinking about how to better display hardware information to consumers for Ubuntu Friendly, I started to wonder if we could apply some of the ideas to the certification site.

We collect lots of feedback, either through our blogs or through Launchpad Answers. I would classify 90% of the comments into the following categories:

  • I’ve looked at your website and I am confused about which release of Ubuntu works with my system
  • I’ve looked at your website and I am confused about whether my system is certified with standard Ubuntu or only with a pre-installed image
  • Your website says that my system is certified (pre-install only), but I cannot find the “pre-install” image anywhere.
  • I have looked at your website and it says my system is certified, but my system does not work with Ubuntu. What components are included in the system that you tested?

Following some discussions on my previous blog post, I have come up with a wire-frame design that I hope will address these points:

For me the main improvements are:

  • Only listing one release at a time, defaulting to the latest. The user has to select which release they are looking for, and only the relevant data is displayed.
  • Defaulting to only listing systems certified with a standard image, giving users the option to choose “vendor image only” certified systems.
  • Displaying SKUs rather than systems as entries in the results list. For example, the Vostro 3300 is listed twice in the mock-up. It displays the make of the 3 components that most often differentiate a SKU – network, graphics and chipset. Hovering over the icons would produce a call-out with the detailed component name.

What do you think, will this help? Does this address users’ concerns?

Read more
Victor Palau

I was reading the Ubuntu Forums when I saw a thread called “Ubuntu-certified hardware is not accurate!” This grabbed my attention.

The main issue seemed to be that the user who started the thread wanted to know whether he should buy the Lenovo X220 or not. He had looked around and seen that the system is Certified (pre-install only) for 10.10, but found several user comments on the web pointing at problems with stock Ubuntu.

I was planning to reply with an explanation when I found this great reply from williumbillium:

First of all, the X220 works well with Ubuntu. I bought one last week and for the most part the laptop is well supported and IMO the current issues are either minor (probably wouldn’t cause the laptop to fail certification) or will likely be fixed soon. I’m documenting my experience on the wiki.

I believe that the “special image of Ubuntu” referenced on the certification page must be a business only deal. I’ve contacted Lenovo about it and been told that it’s not available.

That said, I saw a number of bugs fixed by Canonical employees before the laptop was even released so I believe that us consumers are benefiting from the fact that it’s certified.

Finally, I would not recommend installing 10.10 on this machine unless you have a particular reason to. Since it’s using brand new hardware (Sandy Bridge) it really needs the latest kernel to work well. I don’t have most of the issues mentioned on this ThinkWiki page for example.

The reason why williumbillium “saw a number of bugs fixed by Canonical employees” is that Canonical has commercial engagements with companies like Lenovo to make Ubuntu work well on their systems. These engagements result in:

  • A custom image delivered to the manufacturer with all major problems fixed. The manufacturer then chooses in which cases to distribute this image with their systems. This is why it is certified as pre-install only.
  • Stock Ubuntu certification in a future release. Canonical continues to work after we deliver the custom image to include all the fixes in the latest development release. We do this until all issues blocking certification have been resolved.

Following this process, the Canonical team has successfully certified with standard Ubuntu over fifty systems for 11.04 that previously did not work well with Ubuntu. And more are in the pipeline for 11.10…

Read more
Victor Palau

Good news for embedded device developers trying to bring up a Linux software stack on their systems: Ubuntu Core is getting ready for Oneiric.

The first thing you are going to ask me is: what is Ubuntu Core? Well, here is what the Ubuntu wiki says:

Ubuntu Core is a minimal rootfs for use in the creation of custom images for specific needs. Ubuntu Core strives to create a suitable minimal environment for use in Board Support Packages, constrained or integrated environments, or as the basis for application demonstration images.

Ubuntu Core delivers a functional user-space environment, with full support for installation of additional software from the Ubuntu repositories, through the use of the apt-get command.

So what does it all mean? Ubuntu Core is all about making it easy to get started with a functional software stack that needs to fit into a tiny space.

I have seen the pain of many Symbian hackers bringing up new hardware with only a massive system configuration to work with. Where do you start debugging?

Undoubtedly the best way to work is to start with a minimal system configuration, which you can use in the early stages of board support software development, and slowly add to it only what you need. Keeping the software from bloating is a cornerstone of Bill of Materials (BOM) management.

A good example of this is the Ubuntu IVI Remix, which is built up from Ubuntu Core and has recently achieved GENIVI compliance. You can also check the Canonical site for more details on the benefits of Ubuntu Core.

Well, how small is SMALL? It is around 100MB, although it compresses to a download of 32MB. So pretty small!

OK, OK – but when is it getting released? Ubuntu Core is currently being built daily, and the first officially supported release will be Oneiric in October 2011.
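To make this concrete, here is a rough sketch of how you might start from an Ubuntu Core rootfs and grow it with apt-get; the tarball name and package choice are illustrative, not from the original post:

```shell
# Unpack the Ubuntu Core rootfs tarball into a working directory
sudo mkdir -p /srv/ubuntu-core
sudo tar -xzf ubuntu-core-oneiric.tar.gz -C /srv/ubuntu-core

# Chroot in and install only what your device actually needs
sudo chroot /srv/ubuntu-core apt-get update
sudo chroot /srv/ubuntu-core apt-get install -y openssh-server
```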

Read more
Victor Palau

Undeniably, a Long Term Support release is all about the maintenance. In the certification team, we will be focusing our efforts in the next release cycle on improving our Stable Release Updates (SRU) testing for certified hardware.

While we get a fair amount of feedback on regressions introduced by proposed SRUs on client systems, we do not hear very often from the server community, and therefore have less idea of where to improve our testing. I would like to assume that this is because we are catching all regressions ;) but what are the chances of that?

If you are running Ubuntu Server, I would like to hear from you about your experience with SRUs and any serious hardware-specific regressions that you may have encountered. Is there anything that regularly goes wrong with SRUs that we should be looking for?

Thanks, and I look forward to discussing this more with you at UDS-P!

Read more
Victor Palau

As reported previously, the DELL Vostro 3300 has been plagued by continuous problems with external monitors. I am happy to report that as of this week I am running Natty (11.04) with a dual-monitor set-up and a perfect image on both.

The downside is that I am currently using a custom kernel. It was created by Seth Forshee to fix the “Intel Core i3 External Monitor Wavy Output” bug. Thanks Seth! You are my hero!

This has been a long-standing issue, with over 40 users reported as affected in Launchpad. That doesn’t seem so high, but if you go through the comments, you will see the variety of hardware impacted by this.

Seth has provided several custom kernels:

If you give them a try, please add your feedback to the bug! In case you are not too sure on how to install them, here is what I do:

  • Download all files for your architecture, plus the generic file
  • In a terminal type: sudo dpkg -i [name of file]
    1. linux-headers(generic)
    2. linux-headers(all)
    3. linux-image
  • Reboot your system and select the new kernel
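Put together, the install sequence might look like this (the .deb file names below are placeholders for the actual files you downloaded):

```shell
# Install the headers packages first, then the kernel image
sudo dpkg -i linux-headers-VERSION_all.deb linux-headers-VERSION-generic_amd64.deb
sudo dpkg -i linux-image-VERSION-generic_amd64.deb

# Reboot and pick the new kernel from the boot menu
sudo reboot
```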

I hope that this fix makes it upstream and through the SRU process soon, so I can keep installing kernel updates!

Read more