I have previously complained about the number of gadgets that seem to be piling up by my bedside table, charging quietly every night: laptops, tablets, phones, Kindles (yes, the plural is not a typo).
On top of that, I am growing frustrated with my DVR. Last week the new series of “The Mentalist” was broadcast in the UK. I set it to record in advance, but somehow it clashed with another recording and was missed. Even with the missed show only one click away on the TV channel’s website, it turned out that my only options were to wait four days for a repeat on TV or go upstairs and watch it on the office desktop. Why is it so complicated!?
Centralised Content & Specialised Consumption Devices
So it turns out that I am not going to give up my E-Ink screen for reading books. Why? Because it doesn’t hurt my eyes like a tablet screen does. Nor am I going to convince my son that watching Peppa Pig on the iPad is no better than watching it on TV. Why? Dunno, he isn’t talking yet.
The future for me looks like it is going to involve a lot of different devices, and I am fine with that as long as:
The good news is that the technology to allow all of this to happen is already being designed. Point number 3 is the easy one: you just need an Ubuntu One account. Points 1 and 2 I had considered incompatible for a long time, until I heard about big.LITTLE.
big.LITTLE is going to be BIG
big.LITTLE is a System-on-a-Chip (SoC) design that pairs up to four top-notch Cortex-A15 cores with up to four very low-powered Cortex-A7 cores. The beauty is that they share a very similar feature set and architecture. ARM expects to be able to switch between them depending on the task being performed, without the operating system noticing the difference.
In a nutshell, it’s like being able to choose between a Prius engine and a Ferrari engine without having to change cars! Just pick the one that better suits today’s journey.
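In scheduler terms, the idea boils down to something like this toy sketch (purely illustrative — the threshold and the load model are my own assumptions, and the real big.LITTLE switcher is far more sophisticated):

```python
# Toy model: pick a core cluster for a task based on its demand,
# while the "operating system" only ever sees a generic CPU.
BIG_THRESHOLD = 0.5  # hypothetical load fraction above which the big cores wake up

def choose_core(load):
    """Return which cluster would run a task of the given load (0.0 to 1.0)."""
    return "Cortex-A15 (big)" if load > BIG_THRESHOLD else "Cortex-A7 (LITTLE)"

for task, load in [("video call", 0.9), ("email sync", 0.1)]:
    print(f"{task}: {choose_core(load)}")
```

The point is the transparency: the task never asks for a cluster, the platform decides, which is what lets the same software run in a Ferrari or a Prius mode.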
This is one of the technologies that is going to ignite the next personal computing revolution. I’ll tell you all about the other ones soon.
Here is Ronald, doing a great job at explaining why Ubuntu on ARM is AWESOME!!!
If you follow the Canonical blog, you will have seen that a new white paper has been published on how to implement UEFI Secure Boot in a manner that can be used by all users, including Linux users. The paper is signed and authored by Matthew Garrett from Red Hat, Jeremy Kerr from Canonical and James Bottomley, Linux kernel developer.
Since Microsoft talked about its plans for Secure Boot at /Build2011, a lot has been said on the matter. With more than 16,000 people signing the Free Software Foundation statement on “Secure Boot vs Restricted Boot”, it is clear that this is an issue that needed some attention.
It is great to see companies like Red Hat and Canonical getting together and coming up with recommendations that benefit the whole industry. The paper is well worth the read. Enjoy!
I have been using Ubuntu 11.10 on ARM for a couple of days now and I have to say: it rocks! Ubuntu has a long history of supporting ARM Systems-on-a-Chip (SoCs), going back to 2008, but Ubuntu 11.10 is a significant milestone.
Canonical announced back in August that Ubuntu Server 11.10 would include the first ARM version of the product, and here it is. While this is just the first step on an exciting journey, it is worth celebrating that the voyage has started. I look forward to seeing what 12.04 LTS brings us in this space!
It is hard to really grasp the full experience of Ubuntu on ARM when you are playing with a development board. For this reason, we have released a demo image for the Tegra 2-based (Nvidia) Toshiba AC100.
Running Unity 2D, it shows off that Ubuntu on ARM is a great platform for computing, in a very compact design and with very long battery life. For all these reasons, this is my system of choice to take to UDS-P.
If you have a Toshiba AC100, I encourage you to install Ubuntu 11.10 in it!
Powered by the Texas Instruments OMAP4430 processor, the Panda Board packs in “a dual-core 1 GHz ARM Cortex-A9 MPCore CPU, a PowerVR SGX540 GPU, a C64x DSP, and 1 GB of DDR2 SDRAM”, providing an affordable and competitive design tool for the embedded mobile space.
Ubuntu 11.10 on ARM is available as headless and full images for the Panda Board. You can find download links and installation instructions here, along with Ubuntu 11.10 for OMAP3 (Beagle Board).
The i.MX53 family is oriented towards automotive solutions. Ubuntu 11.10 on ARM is the first release of Ubuntu to provide support for the i.MX53 Quick Start Board. You can find download links and installation instructions here.
Both the TI OMAP4 and Freescale images are based on the Linaro outputs for those SoCs. This has greatly increased our capacity to support ARM development boards.
I can’t believe I am saying this, but I am no longer interested in the phone industry. The thing is that I have been paying attention to gadget news all this year and I am pretty interested in the new Kindles; however, I have not been interested in phones for a long time now.
Android has managed to make the phone industry boring. All the phones look the same, they run the same apps, they run the same services… YAWN! Do you feel the same way? The problem for me is that 1-2 years ago a phone was the only tech item that you really needed to access all services and do anything you could possibly want.
Since then, thanks to tablets and e-readers amongst others, the phone is no longer the ultimate convergence device. I am back to carrying multiple gadgets and a never-ending battery-charging nightmare. Can someone invent the next evolution in computing devices, please?
So, this week Apple is launching the iPhone 5 – we will see…
Are you running an up-to-date version of Oneiric? Do you have 15 minutes to spare? YOU can help Ubuntu Friendly today! Read on…
The Ubuntu Friendly program is now in its test phase. One thing that we could really do with is some more real user data to test website views. Ubuntu Friendly feeds on test submissions from Launchpad.
So what do I need to do?
You need to run the recently improved System Testing tool. This tool is in the default Oneiric image, and its run-time has been reduced to under 15 minutes (disclaimer: this depends on how powerful your system is!)
If you are not sure how to find this tool, just go to the Unity search lens, type “System Testing” and click on the icon that looks like a computer screen with a tick mark.
Just follow the instructions and, if you don’t mind, ping me a comment back on this post with how long it took you to run it and any other feedback you might have!
Go on, it is Friday don’t you know…
After spending some time last week locked in a room thinking about how to better display hardware information to consumers for Ubuntu Friendly, I started to wonder if we could apply some of the ideas to the certification site.
Following some discussions on my previous blog post, I have come up with a wire-frame design that I hope will address these points:
What do you think, will this help? Does this address users concerns?
Good news for embedded device developers trying to bring up a Linux software stack on their systems: Ubuntu Core is getting ready for Oneiric.
The first thing you are going to ask me is: what is Ubuntu Core? Well, here is what the Ubuntu wiki says:
Ubuntu Core is a minimal rootfs for use in the creation of custom images for specific needs. Ubuntu Core strives to create a suitable minimal environment for use in Board Support Packages, constrained or integrated environments, or as the basis for application demonstration images.
Ubuntu Core delivers a functional user-space environment, with full support for installation of additional software from the Ubuntu repositories, through the use of the apt-get command.
So what does it all mean? Ubuntu Core is all about making it easy to get started with a functional software stack that needs to fit into a tiny space.
I have seen the pain of many Symbian hackers bringing up new hardware with only a massive system configuration to work with. Where do you start debugging?
Undoubtedly the best way to work is to start with a minimal system configuration, which you can use in the early stages of board support software development, and slowly add to it only what you need. Keeping the software from bloating is a cornerstone of Bill of Materials (BOM) management.
A good example of this is the Ubuntu IVI Remix, which is built up from Ubuntu Core and has recently achieved GENIVI compliance. You can also check the Canonical site for more details on the benefits of Ubuntu Core.
So how small is SMALL? It is around 100MB, although it compresses to a download of 32MB. Pretty small!
As reported previously, the DELL Vostro 3300 has been plagued by continuous problems with external monitors. I am happy to report that since this week I am running Natty (11.04) with a dual-monitor set-up and a perfect image on both.
The downside is that I am currently using a custom Kernel. It was created by Seth Forshee to fix the “Intel Core i3 External Monitor Wavy Output” bug. Thanks Seth! You are my hero!
This bug has been a long-standing issue, with over 40 users reported as affected in Launchpad. That might not seem so high, but if you go through the comments, you will see the variety of hardware impacted by this.
Seth has provided several custom kernels:
If you give them a try, please add your feedback to the bug! In case you are not too sure how to install them, here is what I do:
I hope that this fix makes it upstream and through the SRU process soon, so I can keep installing kernel updates!
The Ubuntu Certification team is fully distributed and has now been running Scrum for over 9 months. The team has members in Canada, the US, Europe and Asia. I have been blogging about several parts of our Scrum experience; now it is time to piece it all together!
We run in two-week iteration cycles within a larger 6-month release cadence. Here is what those two weeks look like:
Day 1 (Thursday) – Planning session
We run the planning session (30 minutes) just after the previous iteration’s demo session – no room to breathe! The reason for doing this is simply down to timezones and trying to get as many people as possible into these sessions.
We host the planning session on Mumble, and we review the backlog for the next iteration. We found it a bit dull for the Product Owner just to explain what each story was about. Instead, we ensure everyone’s participation by agreeing on the definition of done for the stories. This eliminates any misunderstanding of what needs to be delivered and ensures that everyone is paying attention.
Just after the planning session, the scrum team gets together to flesh out the task-board for the iteration. At this point the stories are re-sized via IRC planning poker: on the count of three by the Scrum Master, everyone pastes a t-shirt size into the IRC channel.
Following the poker planning, the team discusses possible implementations and writes down tasks in the IRC channel, to be later translated by the Scrum Master into the backlog.
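The t-shirt-size vote can be sketched as a tiny script (a toy illustration only — the size-to-points mapping and the "discuss if the spread is large" rule are my own assumptions, not our actual tooling):

```python
from collections import Counter

# Hypothetical mapping from t-shirt sizes to rough story points
SIZES = {"S": 1, "M": 2, "L": 3, "XL": 5}

def tally_votes(votes):
    """Return the most common size and whether the spread warrants discussion.

    votes: the list of t-shirt sizes pasted into the IRC channel.
    """
    counts = Counter(votes)
    consensus, _ = counts.most_common(1)[0]
    points = sorted(SIZES[v] for v in votes)
    # A wide spread between the smallest and largest estimate
    # suggests the team doesn't share an understanding of the story.
    needs_discussion = points[-1] - points[0] > 1
    return consensus, needs_discussion

print(tally_votes(["M", "M", "L", "M"]))   # tight estimates
print(tally_votes(["S", "XL", "M", "M"]))  # wide spread, talk it over
```

In practice the value of the exercise is exactly the second return value: a wide spread is the trigger to stop and discuss.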
We run two scrums (no longer than 15 minutes each) a day: a reduced one at 9.30 UK time with Europe and Asia, and a larger one at 15.00 UK time including the UK and US. We run both using Mumble, but Google+ is also a good option.
Day 5 (Wednesday) – Backlog review with the Scrum Master
On Wednesday, Ara and I review the progress of the backlog and discuss any stories that might need to be refocused, unblocked or delayed to a later iteration.
Day 6 (Thursday) – Discussing impediments and new ideas
At this point, we have reached the midpoint of the iteration. We host a 45-minute meeting following the main scrum to talk about any issues the team wants to raise. This mainly focuses on problems facing our work, or on new ideas for future iterations or releases.
Also, the Scrum Master sends a mid-iteration status email. This ensures that nothing is falling through the cracks and everyone knows the overall iteration progress. We find that scrums tend to focus on what people are working on and not on what is left in the backlog; this can lead to lower-priority user stories being worked on while higher-priority ones remain overlooked.
Day 9 (Tuesday) – Backlog review for next iteration
The Scrum Master, Product Owner and I get together to review which stories are likely not to be completed. This estimate is normally 80% accurate and gives us a better idea of how many new stories can be added for the next iteration. Then we discuss story priorities and create a draft backlog for the next iteration. Although there are always changes during the planning session, this gives us a solid draft to start from.
Day 11/Next Day 1 (Thursday) – Demo
We have come full circle and are back at the demo and planning meeting, where a demo lead shows via Spreed (a screen-sharing tool) what has been achieved.
In my team, we spend most of our time working with system manufacturers to improve hardware support in Ubuntu. Apart from allowing users to install Ubuntu after they purchase their laptops, we also like to increase the number of computers that you can buy from the shops with Ubuntu pre-installed.
If you have ever had the chance to work within the logistics of a manufacturing line, you will understand the level of complexity and how far removed software developers are from the shop floor. As feedback is the best way to learn and improve, here is my request: please share your Ubuntu pre-installed story with us!
Have you ever bought a system with Ubuntu pre-installed? Where did you get it? What system was it? How did it go? Could you then upgrade to the next Ubuntu release?
I look forward to hearing your story!
(this blog post has been reproduced from goingagile.org)
When working with Agile, make sure to define a long-term strategy that gives direction to your product backlog.
The Ubuntu Certification programme follows the beat of the 6-monthly release cadence. In the certification team we run a two-week iteration cadence. It is a continuous delivery machine! The danger is that your ambitions get stuck in the quick rhythm.
Regardless of whether I am working with a product or a service team, I have found it important to set a clear vision to aim for. The constant cadence of Agile is normally riddled with changes in priorities. While this enables the team to remain flexible, I have found that it can be confusing for the individual: “Tell me again, why are we doing this?”
Having a clear vision or product road map doesn’t only benefit your team, but also your stakeholders. I often find that the lack of a shared vision creates mistrust – “This iteration could be the last one. Quick, I’d better ask for everything I need at once! Everything is high priority!” Sound familiar?
Sharing a common set of principles and the aspiration to deliver great value is sometimes confused with the need to have a committed two-year plan. To remain competitive, I would rather stop second-guessing the future and build working practices that allow for change and make people comfortable working with the unknown.
The Certification team at Canonical has been Going Agile for the last 9 months. Oneiric is the first release in which we are running full Scrum practices. We are a bit unique in that we are spread all over the world: we have 2 people in Montreal (Canada), 1 in Boston (USA), 1 in Raleigh (USA), 3 scattered over the United Kingdom, our Scrum Master in Germany, and our latest team member in Taipei (Taiwan). Running Scrum in this type of environment needs constant innovation. I am keeping track of our progress on my blog at victorpalau.net/tag/scrum/
Roughly every three months, we get together somewhere in the world. We just got back from the Ubuntu Rally in Dublin, where we decided to give our backlog some love!
We largely build our backlog at the Ubuntu Developer Summits and then we continue to add and remove items as we go.
Halfway through the project and with over 100 items to complete before the end of October, we needed to step back and make sure that we were working on the right priorities and that nothing had fallen through the cracks. What better way to do this than a full poker planning session? Here is how it worked:
I was aware that data centers around the world were starting to be talked about as an environmental problem, but the statistic that data centers have the same carbon footprint as the aviation industry (about 2% of the global carbon footprint pie) really put things in perspective for me.
The Open Data Center Alliance “Carbon Footprint Values” document starts its executive summary with:
According to market research and consulting firm Pike Research, data centers around the world consumed 201.8 terawatt hours (TWh) in 2010 and energy expenditures reached $23.3 billion. That’s enough electricity to power 19 million average U.S. households. The good news is that, according to Pike Research, the adoption of cloud computing could lead to a 38% reduction in worldwide data center energy expenditures by 2020.
The prediction that cloud computing will lead to large savings in energy consumption can be justified by economies of scale. Today’s enterprise data centers average 20-30% computing power utilisation. The same data center serving Infrastructure as a Service (IaaS) is expected to run at 80-90% occupancy. This, plus the opportunity for enterprises to transform a fixed cost of ownership into a flexible service subscription, will lead to consolidation of data centers.
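To put the quoted numbers into rough perspective, here is a back-of-the-envelope calculation using only the figures from the Pike Research quote above (nothing else is assumed):

```python
# Figures quoted from the Pike Research report
total_twh_2010 = 201.8     # worldwide data center consumption in 2010 (TWh)
households_powered = 19e6  # average U.S. households that energy could power
reduction_by_2020 = 0.38   # predicted reduction from cloud adoption

# Energy per average U.S. household implied by the quote (MWh/year)
mwh_per_household = total_twh_2010 * 1e6 / households_powered
print(f"~{mwh_per_household:.1f} MWh per household per year")

# Potential savings if the 38% reduction materialises
saved_twh = total_twh_2010 * reduction_by_2020
saved_households = saved_twh * 1e6 / mwh_per_household / 1e6
print(f"~{saved_twh:.1f} TWh saved, enough for ~{saved_households:.1f} million households")
```

In other words, the predicted reduction alone is worth the yearly electricity of roughly seven million U.S. households.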
Economies of scale will also allow large-scale data center providers to invest in purpose-built buildings that are more sustainable and cheaper to run. Good examples of this are the server and data center specifications shared by Facebook via the Open Compute Project, or Google’s water-powered and water-cooled at-sea data centers.
As discussions of hefty fines for London by the European Union are currently taking place, sustainability is becoming less a matter of corporate responsibility and more one of legal compliance.
However, cloud computing is bringing applications to individuals that were only available to enterprises a few years ago. This will multiply the need for data centers across the globe beyond current demand. We need to go beyond finding cheaper ways to cool and power servers and start tackling the real problem: servers themselves need to be exponentially more efficient.
Certification defines a generic level of functionality to be expected from hardware running an Ubuntu release. Part of the challenge is to identify which hardware components should be included in the test.
The aim is to cover all widely accepted components, while excluding fringe ones that may only be of interest to a small set of the user base.
In the past it has been a bit hard to understand which components were tested for certification, and this has led to questions like “Why is the fingerprint reader not working on this certified system?”, when the answer is simply that certification does not test fingerprint-reading functionality.
You can now see at a high level what certification includes for both servers and clients (click on the image to see the full list):
Excellent video on why working for Canonical is great! Enjoy.
Some of the work done to enable Sandy Bridge suspend (S3) and hibernate (S4) showed how painful it can be to get hardware to do what it ought to do! The problem arises when you find yourself without many tools to debug what is going on, since your console and half of the OS functionality have already gone to sleep.
BIOS vendors rely on the use of expensive JTAG debugging tools. While this is OK, it does not really allow the community to participate, and it considerably increases the cost of enabling a system to work with Ubuntu.
Faced with this problem, the Hardware Enablement team at Canonical has set itself the goal in Oneiric of creating a “tool to analyze and suggest where suspend/resume is failing, to help guide people through the debug phase” – i.e. an automated version of Colin King.
The basic idea goes back to debugging basics: “Did you hit that print statement before dying?”. The problem is that you are trying to instrument a fairly complex part of the system, and you do not have a screen to print to.
For the first problem, the team is trying a readily available open source solution: SystemTap. For the second problem, they are going old school: audio and light signals. Today most systems have speakers and a few LEDs to let you know a thousand irrelevant things that you can do with your keyboard. So why not put them to good use?
The blueprint goes beyond a simple “BEEP” when you hit your breakpoint:
We would need some hardware to record the lights/sound at a sensible speed:
- Have another PC record the audio and interpret it
- Leverage ham radio code already done to interpret sound
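As a rough illustration of the idea (my own sketch, not the team’s actual prototype), a checkpoint ID could be encoded as a sequence of short and long beep durations that a second machine records and decodes, morse-style:

```python
# Encode a numeric checkpoint ID as short/long beep durations (seconds),
# so a second PC recording the audio can decode which probe point fired.
SHORT, LONG = 0.1, 0.3  # hypothetical durations

def encode_checkpoint(checkpoint_id, bits=8):
    """Turn a checkpoint ID into a list of beep durations (binary, MSB first)."""
    return [LONG if (checkpoint_id >> i) & 1 else SHORT
            for i in reversed(range(bits))]

def decode_checkpoint(durations):
    """Reverse the encoding on the recording side."""
    value = 0
    for d in durations:
        value = (value << 1) | (1 if d >= LONG else 0)
    return value

beeps = encode_checkpoint(42)
assert decode_checkpoint(beeps) == 42  # round-trips through "audio"
```

The real prototypes would of course have to cope with noisy recordings, which is exactly where the existing ham radio decoding code comes in.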
With some initial prototypes already floating around, I can’t wait to see what they deliver!
As part of looking at how we ensure that Ubuntu Certification delivers a great experience to consumers, we are revising our coverage of suspend/resume and boot testing. This is just a proposal at the moment, so please feel free to comment on the suggested tests, or on additional ones that you would find worth including.
Here is the list of proposed tests that a certified system would need to pass for 11.10 with respect to suspend and resume; this includes both existing and new tests:
| Test | Description |
| --- | --- |
| suspend_once | Main sleep test; all “after” tests depend on this one. Triggered manually, auto-wakes after 30-60 seconds (not all systems support automatic trigger and wake-up). |
| hibernate_once | Triggered manually, auto-wakes after 5 minutes. |
| CPU check before/after suspend | Check in /proc/interrupts before and after suspend how many cores are online. Fail the “after” test if different. |
| Memory before/after suspend | Check in /proc/meminfo that all system memory is still available. |
| USB before/after suspend | One per port: write to a USB storage device (thumb drive) and verify. |
| Display | Ensure that the display is working after resume (N/A for servers). |
| resolution_*_suspend | Ensure that the resolution is the same before/after suspend (N/A for servers). |
| cycle_resolutions_after_suspend | Can cycle through resolutions after a suspend (N/A for servers). |
| *wireless_*_suspend | Checks that the wireless network is still available and can be connected to. |
| network_*_suspend | Checks that the wired network is still available and can be connected to. |
| Wake-on-LAN | Put the system to sleep, then send an IP packet from a remote system to wake it (only applicable to servers). |
| audio_*_suspend | Audio device is still detected and works after suspend (N/A for servers). |
| record_playback_after_suspend | Records and plays back audio after suspend (N/A for servers). |
| bluetooth_obex_*_suspend | A Bluetooth OBEX object can be sent to another device after suspend (N/A for servers). |
| stress/suspend | Does 30 S3/resume cycles; can only fail once. |
| 30 soft reboots | Restart during testing 30 times. |
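To make one of these concrete: the CPU check essentially boils down to comparing the CPU columns in the /proc/interrupts header before and after the sleep cycle. A minimal sketch (the function names and sample data are my own, not the actual test suite’s):

```python
def online_cpus(interrupts_text):
    """Parse the header row of /proc/interrupts and return the CPU labels."""
    header = interrupts_text.splitlines()[0]
    return [col for col in header.split() if col.startswith("CPU")]

def cpus_unchanged(before_text, after_text):
    """The 'after' test fails if the set of online CPUs differs."""
    return online_cpus(before_text) == online_cpus(after_text)

# Hypothetical snapshots captured before and after a suspend cycle
before = "           CPU0       CPU1\n  0:   45   12   IO-APIC   timer\n"
after  = "           CPU0\n  0:   45   IO-APIC   timer\n"
print(cpus_unchanged(before, after))  # → False: a core went missing after resume
```

The memory check in the table works the same way, just diffing the MemTotal line of /proc/meminfo instead.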
Just a small note to let you know that we have added, to the system pages at ubuntu.com/certification, download links to the images that correspond to the actual certificate.
For example, the DELL Inspiron One 2205 page lists 2 images to download. These correspond to the certified 11.04 and 10.10 releases. Note that 10.04 LTS was certified as “Pre-install Only” and we are not linking to the corresponding image, since it requires a custom ISO to work correctly.
The next improvement to the website, currently under implementation, is a clearer listing of Certified and Ready systems.