Canonical Voices

Posts tagged with 'canonical'

Michael Hall

Ubuntu API Website

For much of the past year I’ve been working on the Ubuntu API Website, a Django project for hosting all of the API documentation for the Ubuntu SDK, covering a variety of languages, toolkits and libraries.  It’s been a lot of work for just one person, and to make it really awesome I’m going to need help from you guys and gals in the community.

To help smooth the onramp to getting started, here is a breakdown of the different components in the site and how they all fit together.  You should grab a copy of the branch from Launchpad so you can follow along by running: bzr branch lp:ubuntu-api-website

Django

First off, let’s talk about the framework.  The API website uses Django, a very popular Python webapp framework that’s also used by other community-run Ubuntu websites, such as Summit and the LoCo Team Portal, which makes it a good fit. A Django project consists of one or more Django “apps”, which I will cover below.  Each app consists of “models”, which use the Django ORM (Object-Relational Mapping) to handle all of the database interactions for us, so we can stick to just Python and not worry about SQL.  Apps also have “views”, which are classes or functions that are called when a URL is requested.  Finally, Django provides a default templating engine that views can use to produce HTML.
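
To give a feel for how those three pieces fit together, here’s a tiny, hypothetical example (not code from the API website itself): a model handled by the ORM, and a view that renders it with a template.

# models.py -- a hypothetical example app, not code from the API website
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()

# views.py
from django.shortcuts import render
from .models import Article

def article_list(request):
    # The ORM turns this query into SQL for us; we only deal with Python objects
    articles = Article.objects.order_by('title')
    # The default template engine fills in articles/list.html with this context
    return render(request, 'articles/list.html', {'articles': articles})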

If you’re not familiar with Django already, you should take the online Tutorial.  It only takes about an hour to go through it all, and by the end you’ll have learned all of the fundamental things about building a Django site.

Branch Root

When you first get the branch you’ll see one folder and a handful of files.  The folder, developer_network, is the Django project root; inside it is all of the source code for the website.  Most of your time is going to be spent in there.

Also in the branch root you’ll find some files that are used for managing the project itself. Most important of these is the README file, which gives step-by-step instructions for getting it running on your machine. You will want to follow these instructions before you start changing code. Among the instructions is using the requirements.txt file, also in the branch root, to set up a virtualenv environment.  Virtualenv lets you create a Python runtime specifically for this project, without it conflicting with your system-wide Python installation.

You can ignore the other files for now; they’re used for packaging and deploying the site, and you won’t need them during development.

./developer_network/

As I mentioned above, this folder is the Django project root.  It has sub-folders for each of the Django apps used by this project. I will go into more detail on each of these apps below.

This folder also contains three important files for Django: manage.py, urls.py and settings.py.

manage.py is used for a number of commands you can give to Django.  In the README you’ll have seen it used to call syncdb, migrate and initdb.  These create the database tables, apply any table schema changes, and load them with initial data. These commands only need to be run once.  It also has you run collectstatic and runserver. The first collects static files (images, css, javascript, etc.) from all of the apps and puts them all into a single ./static/ folder in the project root; you’ll need to run that whenever you change one of those files in an app.  The second, runserver, runs a local HTTP server for your app, which is very handy during development when you don’t want to be bothered with a full Apache server. You can run it anytime you want to see your site “live”.

settings.py contains all of the Django configuration for the project.  There’s too much to go into detail on here, and you’ll rarely need to touch it anyway.

urls.py is the file that maps URLs to an application’s views: it’s basically a list of regular expressions that try to match the requested URL, each paired with a Python function or class to call on a match. If you took the Django tutorial I recommended above, you should have a pretty good understanding of what it does. If you ever add a new view, you’ll need to add a corresponding line to this file in order for Django to know about it. If you want to know what view handles a given URL, you can just look it up here.
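
A typical entry looks something like this (purely illustrative, in the regex style Django used at the time; the real urls.py will differ):

# urls.py -- illustrative only, not the site's actual URL configuration
from django.conf.urls import url
from web import views  # hypothetical import; most views live in the web app

urlpatterns = [
    # The named group in the regex is passed to the view as a keyword argument
    url(r'^$', views.index, name='index'),
    url(r'^(?P<topic_name>[\w\.]+)/$', views.topic_view, name='topic'),
]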

./developer_network/ubuntu_website/

If you followed the README in the branch root, the first thing it has you do is grab another bzr branch and put it in ./developer_network/ubuntu_website.  This is a Django app that does nothing more than provide a base template for all of your project’s pages. It’s generic enough to be used by other Django-powered websites, so it’s kept in a separate branch that each one can pull from.  It’s rare that you’ll need to make changes in here, but if you do, just remember that you need to push your changes back to the ubuntu-community-webthemes project on Launchpad.

./developer_network/rest_framework/

This is a 3rd party Django app that provides the RESTful JSON API for the site. You should not make changes to this app, since that would put us out of sync with the upstream code and make it difficult to pull in updates from them in the future.  All of the code specific to the Ubuntu API Website’s services is in the developer_network/service/ app.
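
For a rough idea of how a service built on rest_framework is usually wired up (this is the generic django-rest-framework pattern, not the site’s actual code), a read-only endpoint boils down to a serializer plus a viewset:

# Generic django-rest-framework pattern, for illustration only
from rest_framework import serializers, viewsets
from apidocs.models import Element  # assumed import path

class ElementSerializer(serializers.ModelSerializer):
    class Meta:
        model = Element
        fields = ('name', 'namespace', 'description')  # illustrative field names

class ElementViewSet(viewsets.ReadOnlyModelViewSet):
    # Exposes list and detail endpoints for Element objects as JSON
    queryset = Element.objects.all()
    serializer_class = ElementSerializer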

./developer_network/search/

This app isn’t being used yet, but it is intended to provide better search functionality for the site. There are some models here already, but nothing that is being used.  So if searching is your thing, this is the app you’ll want to work in.

./developer_network/related/

This is another app that isn’t being used yet, but is intended to allow users to link additional content to the API documentation. This is one of the major goals of the site, and a relatively easy area to get started contributing. There are already models defined for Snippets, Images and Links. Snippets and Links should be relatively straightforward to implement. Images will be a little harder, because the site runs on multiple instances in the cloud, and each instance will need access to the image, so we can’t just use the Django default of saving them to local files. This is the best place for you to make an impact on the site.
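
As a sketch of what those models might look like (illustrative only; the real definitions are already in related/models.py), a Link is essentially a URL attached to a documentation Element, while an Image would likely point at shared cloud storage rather than a local file, for the reason given above:

# Illustrative sketch only -- see related/models.py for the real definitions
from django.db import models
from apidocs.models import Element  # assumed import path

class Link(models.Model):
    element = models.ForeignKey(Element)
    url = models.URLField()
    description = models.TextField(blank=True)

class Image(models.Model):
    element = models.ForeignKey(Element)
    # A URL into shared (cloud) storage, since every instance of the site
    # needs to reach the file, not just the one that received the upload
    source_url = models.URLField()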

./developer_network/common/

The common app provides views for logging in and out of the app, as well as views for handling 404 and 500 errors when they arise.  It also provides some base models for the site’s page hierarchy. This starts with a Topic at the top, which would be qml or html5 in our site, followed by a Version which lets us host different sets of docs for the different supported releases of Ubuntu. Finally each set of docs is placed within a Section, such as Graphical Interface or Platform Service, to help the user browse them based on use.
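
In Django terms that hierarchy is just a chain of foreign keys, roughly like this (a sketch with made-up field names, not the actual models):

# Rough sketch of the hierarchy described above; not the actual models
from django.db import models

class Topic(models.Model):
    name = models.CharField(max_length=64)     # e.g. "qml" or "html5"

class Version(models.Model):
    topic = models.ForeignKey(Topic)
    name = models.CharField(max_length=32)     # one per supported Ubuntu release

class Section(models.Model):
    version = models.ForeignKey(Version)
    name = models.CharField(max_length=128)    # e.g. "Graphical Interface"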

./developer_network/apidocs/

This app provides models that correspond directly to pieces of documentation that are being imported.  Documentation can be imported either as an Element that represents a specific part of the API, such as a class or function, or as a Page that represents long-form text on how to use the Elements themselves.  Each one of these may also have a given Namespace attached to it, if the imported language supports it, to further categorize them.
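
Conceptually that gives you something like the following (again just a sketch with made-up field names and relationships, not the real models):

# Sketch only; the real models live in apidocs/models.py
from django.db import models
from common.models import Section  # assumed import path

class Namespace(models.Model):
    name = models.CharField(max_length=128)

class Element(models.Model):                   # a specific class, function, etc.
    section = models.ForeignKey(Section)
    namespace = models.ForeignKey(Namespace, null=True, blank=True)
    name = models.CharField(max_length=128)

class Page(models.Model):                      # long-form documentation
    section = models.ForeignKey(Section)
    namespace = models.ForeignKey(Namespace, null=True, blank=True)
    title = models.CharField(max_length=256)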

./developer_network/web/

Finally we get to the app that actually generates the pages.  This app has no models, but uses the ones defined in the common and apidocs apps.  This app defines all of the views and templates used by the website’s pages, so no matter what you are working on there’s a good chance you’ll need to make changes in here too. The templates defined here use the ones in ubuntu_website as a base, and then add site- and page-specific markup for each.

Getting Started

If you’re still reading this far down, congratulations! You have all the information you need to dive in and start turning a boring but functional website into a dynamic, collaborative information hub for Ubuntu app developers. But you don’t need to go it alone: I’m on IRC all the time, so come find me (mhall119) in #ubuntu-website or #ubuntu-app-devel on Freenode and let me know where you want to start. If you don’t do IRC, leave a comment below and I’ll respond to it. And of course you can find the project, file bugs (or pick bugs to fix) and get the code, all from the Launchpad project.

Read more
John

I’ll be in Barcelona next week, for Mobile World Congress. I’m going to be based on Canonical’s stand. It’s been a few years since I went to this show, and it will be my first time in Barcelona. I’m looking forward to some new experiences.

See you there?

Read more
Michael Hall

It may surprise some of you (not really) to learn that in addition to being a software geek, I’m also a sci-fi nerd. One of my current guilty pleasures is the British Sci-Fi hit Doctor Who. I’m not alone in this, I know many of you reading this are fans of the show too.  Many of my friends from outside the floss-o-sphere are, and some of them record a weekly podcast on the subject.

Tonight one of them was over at my house for dinner, and I was reminded of Stuart Langridge’s post about making a Bad Voltage app and how he had a GenericPodcastApp component that provided common functionality with a clean separation from the rest of his app. So I decided to see how easy it would be to make a DWO Whocast app with it.  Turns out, it was incredibly easy.

Here are the steps I took:

  1. Create a new project in QtCreator
  2. Download Stuart’s GenericPodcastApp.qml into my project’s ./components/ folder
  3. Replace the template’s Page components with GenericPodcastApp
  4. Customize the necessary fields
  5. Add a nice icon and Suru-style gradients for good measure

That’s it! All told it took me less than 10 minutes to put the app together, test it, show it off, and submit my Click package to the store.  And the app doesn’t look half bad either.  Think about that: 10 minutes to get from an idea to the store.  It would have been available to download too if automatic reviews were working in the store (coming soon).

That’s the power of the Ubuntu SDK. What can you do with it in 10 minutes?

Update: Before this was even published this morning the app was reviewed, approved, and available in the store.  You can download it now on your Ubuntu phone or tablet.

Read more
Michael Hall

Yesterday, in a conference call with the press followed immediately by a public Town Hall with the community, Canonical announced the first two hardware manufacturers who are going to ship Ubuntu on smartphones!

Now many have speculated on why we think we can succeed where so many giants have failed.  It’s a question we see quite a bit, actually.  If Microsoft, RIM/Blackberry and HP all failed, what makes us think we can succeed?  It’s simple math, really.  We’re small.  Yeah, that’s it, we’re just small.

Unlike those giants who tried and failed, we don’t need to dominate the market to be successful. Even just 1% of the market would be enough to sustain and continue the development of Ubuntu for phones, and probably help cover the cost of developing it for desktops too.  The server side is already paying for itself.  Because we’re small and diversified, we don’t need to win big in order to win at all.  And 1%, that’s a very reachable target.

 

Read more
Michael Hall

Today I reached another milestone in my open source journey: I got my first package uploaded into Debian’s archives.  I’ve managed to get packages uploaded into Ubuntu before, and I’ve attempted to get one into Debian, but this is the first time I’ve actually gotten a contribution in that would benefit Debian users.

I couldn’t have done it without the help and mentorship of Paul Tagliamonte, but I was also helped by a number of others in the Debian community, so a big thank you to everybody who answered my questions and walked me through getting set up with things like Alioth and re-learning how to use SVN.

One last bit of fun: I was invited to join the Linux Unplugged podcast today to talk about yesterday’s post. You can listen to it (and watch IRC comments scroll by) here: http://www.jupiterbroadcasting.com/51842/neckbeard-entitlement-factor-lup-28/

Read more
olli

With UDS just around the corner (11-13 March 2014, 2pm-8pm UTC) we wanted to give you a quick update where things stand in the 14.04 cycle. The teams have been busy working towards the goals for the 14.04 cycle, with a focus on the Ubuntu 14.04 LTS release, the App Developer Program and of course Ubuntu for Phones. […]

Read more
Michael Hall

Today was a distracting day for me.  My homeowner’s insurance is requiring that I get my house re-roofed[1], so I’ve had contractors coming and going all day to give me estimates. Beyond just the cost, we’ve been checking on state licensing, insurance, etc.  I’ve been most shocked at the differences in the level of professionalism from them, you can really tell the ones for whom it is a business, and not just a job.

But I still managed to get some work done today.  After a call with Francis Ginther about the API website importers, we should soon be getting regular updates to the current API docs as soon as their source branch is updated.  I will of course make a big announcement when that happens.

I didn’t have much time to work on my Debian contributions today, though I did join the DPMT (Debian Python Modules Team) so that I could upload my new python-model-mommy package with the DPMT as the Maintainer, rather than trying to maintain this package on my own.  Big thanks to Paul Tagliamonte for walking me through all of these steps while I learn.

I’m now into my second week of UbBloPoMo posts, with 8 posts so far.  This is the point where the obligation of posting every day starts to overtake the excitement of it, but I’m going to persevere and try to make it to the end of the month.  I would love to hear what you readers, especially those coming from Planet Ubuntu, think of this effort.

[1] Re-roofing, for those who don’t know, involves removing and replacing the shingles and water-proofing paper, but leaving the plywood itself.  In my case, they’re also going to have to re-nail all of the plywood to the rafters and some other things to bring it up to date with new building codes.  Can’t be too safe in hurricane-prone Florida.

Read more
Michael Hall

Quick overview post today, because it’s late and I don’t have anything particular to talk about today.

First of all, the next vUDS was announced today. We’re a bit late in starting it off, but we wanted to have another one early enough to still be useful to the Trusty release cycle.  Read the linked mailing list post for details about where to find the schedule and how to propose sessions.

I pushed another update to the API website today that does a better job balancing the 2-column view of namespaces and fixes the sub-nav text to match the WordPress side of things. This was the first deployment in a while to go off without a problem, thanks to the new staging environment created last time.  I’m hoping my deployment problems on this are now far behind me.

I took a task during my weekly Core Apps update call to look more into the Terminal app’s problem with enter and backspace keys, so I may be pinging some of you in the coming week about it to get some help.  You have been warned.

Finally, I decided a few weeks ago to spread out my after-hours community activity beyond Ubuntu, and I’ve settled on the Debian new maintainers Django website as somewhere I can easily start.  I’ve got a git repo where I’ve started writing the first unit tests for that website, and as part of that I’m also working on Debian packaging for the Python model-mommy library, which we use extensively in Ubuntu’s Django website. I’m having to learn (or learn more about) Debian packaging, Git workflows and Debian’s processes and community, all of which are going to be good for me, and I’m looking forward to the challenge.

Read more
Michael Hall

We wrapped up the last day of our sprint with a new team photo.  I can honestly say I couldn’t think of a better group of people to be working with.  Even the funny looking guy in the middle.

I mentioned that earlier in the week we decided on naming SDK releases after distro releases, and with that information in hand I spent my last day getting the latest API docs uploaded, so if you’re writing apps for the latest device images, you’ll want to use these: http://developer.ubuntu.com/api/qml/sdk-14.04/

In the coming week I’ll be working to get the documentation publishing scripts added to the automated build and testing process, so those docs will be continuously updated until the release of Ubuntu 14.04, at which point we’ll freeze those doc pages and start publishing daily updates for 14.10.  Being able to publish all of those docs in a matter of minutes was a particular thrill for me, after working for so long to get that feature into production.  It certainly proves that it was the right approach.

Read more
Michael Hall

Second to last day of the sprint, and we’ve been shifting gears from presenting ideas and brainstorming to making solid plans and bringing all the disparate pieces together.  The result is looking very, very promising.

I started out this morning by updating my Nexus 4 to build 166, which brings some improvements to the Unity 8 and system apps.  I’m still poking around to discover what’s new.

I had a handful of great conversations with Jamie (security) and Ken (content-hub) about how to deliver creative content via click packages in the new store.  It looks like wallpapers will be relatively easy to support, and Ken and I (mostly Ken) will be working on adding that to the Click installer and System Settings.  Theme support is unfortunately going to be more difficult, since our QML themes are full QML themselves, and can run their own code, which makes them a security concern. We’re going to try and support a safe subset of styling to be delivered via Click packages, but that’s not likely to happen this cycle.

After lunch we had another set of presentations, this time from Florian Boucault on the SDK team about app performance.  After briefly covering the performance goals we need to meet to make our UI as smooth and responsive as iOS, he stunned us all by showing off live performance graphs overlaid on top of one of the Core Apps (sadly I didn’t get a picture of that), so you can see the CPU and GPU usage while interacting with the app.  This wonderful little piece of magic should be landing in device images in the next couple of weeks, and I for one can not wait to try it out. In the meantime, he was nice enough to sit down with me and walk me through using QtCreator’s Analyse tab to see what parts of my own app might be using more resources than they should.

Among the sessions I wasn’t able to attend today: more HTML5 device APIs are coming online, contacts syncing via the Online Accounts provider for Google got its first cut, the SDK’s StateSaver component got some finishing work done, and work continued on AppArmor optimizations that will speed up boot times.

Read more
Michael Hall

Today we had a lot of good discussions around app development, starting off with an update on the state of GoLang support and what was needed to get the Go/QML bridge packaged and available for people to start using.

From there we moved on to the future of Content Hub, which is really set to reach its full potential now, and we will hopefully see a wide range of system, core and 3rd party apps providing it with content.

After lunch Nick gave us all a quick lesson in how to properly use Autopilot, something I think we’re all going to become more familiar with in the coming months.  The key takeaway: Don’t Sleep.

Then we discussed QtCreator itself, and our various plugins for it.  We identified some easy fixes, and did a lot of brainstorming on how to attack the harder ones.  We saw the new packaging and cross-compilation support that’s being added to it now. Zoltan topped it all off by giving us a very short demonstration, going from the creation of a new project all the way through creating a package, running package verification tests on it, copying it onto a phone and installing it, all in about 30 seconds!

We also discovered that the current SDK packages in the PPA were broken for Saucy and older releases (Trusty was okay).  Daniel, Zoltan and David Barth spent much of the day intensely debugging the problem, providing a fix, and shepherding those fixes through Launchpad and into the PPAs so that we could get it all working by the end of the day.  We then set aside time for a new session where we discussed what happened and what we can do to prevent it from happening again.  I’m pleased to say that some of those steps have already been implemented, and the rest will soon follow.

Finally we wrapped up the evening with chicken wings and beer, plus another fantastically entertaining card game courtesy of Alan Pope’s deranged humor.

Read more
Michael Hall

Another day packed with meetings and discussions today.  Here are some of the highlights:

We decided that SDK version numbering should mirror distro numbering, so instead of Ubuntu SDK 2.0 we will have Ubuntu SDK 14.04.

We worked out more details on the next App Developer Showdown, including what additions and changes to the SDK and store will be ready for the contest, and what prizes we will try to get for it.

After reviewing the current documentation on developer.ubuntu.com, we identified some areas where we need to improve it before the App Showdown.

Alan Pope and I guest starred in Jono’s weekly Q&A session, from the hotel bar, which was loads of fun.  Watch the full video to hear more about what we’ve been discussing here and maybe find answers to some of your own questions.

Read more
Michael Hall

As I mentioned in my last post, I’m with the rest of my team in Orlando this week for a sprint. We are joined by many other groups from Canonical, and unfortunately we didn’t have enough meeting rooms for all of the breakout sessions, so the Community team was forced (forced I tell you) to meet on the patio by the pool.

We have had a lot of good discussions already, and we have four days left.  You’ll start to see some of the new ideas and changes going into effect next week.  Until then, stay tuned.

Read more
Michael Hall

Last week I posted on G+ about a couple of new sets of QML API docs that were published.  Well, that was only part of the actual story of what’s been going on with the Ubuntu API website lately.

Over the last month I’ve been working on implementing and deploying a RESTful JSON service on top of the Ubuntu API website, and last week is when all of that work finally found its way into production.  That means we now have a public, open API for accessing all of the information available on the API website itself!  This opens up many interesting opportunities for integration and mashups, from integration with QtCreator in the Ubuntu SDK, to mobile reference apps to run on the Ubuntu phone, or anything else your imagination can come up with.

But what does this have to do with the new published docs?  Well the RESTful service also gives us the ability to push documentation up to the production server, which is how those docs got there.  I’ve been converting the old Django manage.py scripts that would import docs directly into the database, to instead push them to the website via the new service, and the QtMultimedia and QtFeedback API docs were the first ones to use it.

Best of all, the scripts are all automated, which means we can start integrating them with the continuous integration infrastructure that the rest of Ubuntu Engineering has been building around our projects.  So in the near future, whenever there is a new daily build of the Ubuntu SDK, it will also push the new documentation up, so we will have both the stable release documentation as well as the daily development release documentation available online.

I don’t have any docs yet on how to use the new service, but you can go to http://developer.ubuntu.com/api/service/ to see what URLs are available for the different data types.  You can also append ?<field>=<value> keyword filters to your URL to narrow the results.  For example, if you wanted all of the Elements in the Ubuntu.Components namespace, you can use http://developer.ubuntu.com/api/service/elements/?namespace__name=Ubuntu.Components to do that.
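
From Python you could poke at the service with something like this (a quick sketch using the third-party requests library; the exact shape of the response depends on the service’s serializers):

# Quick sketch: querying the service from Python with the requests library
import requests

url = 'http://developer.ubuntu.com/api/service/elements/'
# The keyword filter narrows the results to a single namespace
resp = requests.get(url, params={'namespace__name': 'Ubuntu.Components'})
resp.raise_for_status()
print(resp.json())  # structure depends on the service's serializers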

That’s it for today, the first day of my UbBloPoMo posts.  The rest of this week I will be driving to and fro for a work sprint with the rest of my team, the Ubuntu SDK team, and many others involved in building the phone and app developer pieces for Ubuntu.  So the rest of this week’s posts may be much shorter.  We’ll see.

Happy Hacking.

Read more
Michael Hall

So it’s not February first yet, but what the heck, I’ll go ahead and get started early.  I tried to do the whole NaBloPoMo thing a year or so ago, but didn’t make it more than a week.  I hope to do better this time, and with that in mind I’ve decided to put together some kind of a plan.

First things first, I’m going to cheat and only plan on having a post published every week day of the month, since it seems that’s when most people are reading my blog (and/or Planet Ubuntu) anyway, and it means I don’t have to worry about it over the weekends.  If you really, really want to read a new post from me on Saturday… you should get a hobby.  Then blog about it, on Planet Ubuntu.

To try and keep me from forgetting to blog during the days I am committing to, I’ve scheduled a recurring 30 minute slot on my calendar.  UbBloPoMo posts should be something you can write up in 30 minutes or less, I think, so that should suffice.  I’ve also scheduled it for the end of my work day, so I can talk about things that are still fresh in my mind, to make it even easier.

Finally, because Europe is off work by the end of my day, I’m going to schedule all of my posts to publish the following morning at 9am UTC (posts written Friday will publish on Monday morning).  I’ve been doing this for a while with my previous posts, and it seems to get more views when I do. For example, this post was written yesterday, but posted while I was still sound asleep this morning.  The internet is a magical place.

So, today being Friday, I will be writing my first actual UbBloPoMo entry this evening, and it will post on Monday February 3rd.  What will it be about I wonder?  The suspense is killing me.

 

Read more
mandel

You might have had the following error in your dbus daemon at some point and said to yourself WTF???

process 1288: arguments to dbus_message_set_error_name() were incorrect, assertion "error_name == NULL
   || _dbus_check_is_valid_error_name (error_name)" failed in file dbus-message.c line 2900.

Well, you are not the only one, and I might be able to point you in the right direction: your code is probably returning a QDBusError that you created using the QDBusError::Other enum. The problem here is that the enum value you are using only indicates that the error name is not known and therefore cannot be matched to a value in the QDBusError enum. When you use that enumerator, the message that gets created has an incorrect name, as follows:

QDBusMessage(type=Error, service="", error name="other", error message="msg", signature="", contents=() ) 

And “other” is, of course, not a valid DBus name, and therefore the app crashes. The easiest way to solve it: create a correct DBusError with a valid error name ;)

Read more
Dustin Kirkland

and bought 3 more with the i5-3427u CPU!



A couple of weeks ago, I waxed glowingly about Ubuntu running on a handful of Intel NUCs that I picked up on Amazon, replacing some aging PCs serving various purposes around the house.  I have since returned all three of those...and upgraded to the i5 version!!!  Read on to find out why...
Whenever I publish an article here, the Blogger/G+ integration immediately posts a link to my G+ feed.  In that thread, Mark Shuttleworth asked if these NUCs supported IPMI or a similar technology, such that they could be enabled in MAAS.  I responded in kind, that, sadly, no, they only support tried-and-trusty-but-dumb-old-Wake-on-LAN.

Alas, an old friend, fellow homebrewer, and new Canonicaler, Ryan Harper, noted that the i5-3427u version of the NUC (performance specs here) actually supports Intel AMT, which is similar to IPMI.  Actually, it's an implementation of WBEM, which itself is fundamentally an implementation of the CIM standard.

That’s a healthy dose of alphabet soup for you.  MAAS, NUC, AMT, IPMI, WBEM, CIM.  What does all of this mean?

Let's do a quick round of introductions for the uninitiated!
  • NUC - Intel's Next Unit of Computing.  It's a palm-sized computer, probably intended to be a desktop, but it actually functions quite well as a Linux server too.  Drawing about 10W, it has roughly the same power as an AWS m1.xlarge, and costs about as much as 45 days of an m1.xlarge's EC2 bill.
  •  MAAS - Metal as a Service.  Installing Ubuntu servers (or desktops, for that matter), one by one, with a CD/DVD/USB-key is so 2004.  MAAS is your PXE/DHCP/TFTP/DNS (shit, more alphabet soup...) solution, all-in-one, ready to install Ubuntu onto lots of systems at scale!  Oh, and good news...  Juju supports MAAS as one of its environments, which is cool, in that you can deploy any charmed Juju workload to bare metal, in addition to AWS and OpenStack clouds.
  • AMT - Intel's Active Management Technology.  This is a feature found on some Intel platforms (specifically, those whose CPU and motherboard support vPro technology), which enables remote management of the system.  Specifically, if you can authenticate successfully to the system, you can retrieve detailed information about the hardware, power cycle it on and off, and modify the boot sequence.  These are the essential functions that MAAS requires to support a system.
  • IPMI - Intelligent Platform Management Interface.  Also pioneered by Intel, this is a more server focused remote network management of systems, providing power on/off and other capabilities.
  • WBEM - Web Based Enterprise Management.  Remote system management technology available through a web browser, based on some internet standards, including CIM.
  • CIM - Common Information Model.  An open standard that defines how systems in an IT environment are represented and managed.  Does that sound meta to you?  Well, yes, yes it is.
Okay, we have our vocabulary...now what?

So I actually returned all 3 of my Intel NUCs, which had the i3 processor, in favor of the more powerful (and slightly more expensive) i5 versions.  Note that I specifically bought the i5 Ivy Bridge versions, rather than the newer i5 Haswell, because only the Ivy Bridge actually supports AMT (for reasons that I cannot explain).  In fact, in comparison to Haswell, the Ivy Bridge systems:
  1. have AMT
  2. are less expensive
  3. have a higher maximum clock speed
  4. support a higher maximum memory
The only advantage I can see of the newer Haswells is a slightly lower energy footprint, and a slightly better video processor.

When 3 of my shiny new NUCs arrived, I was quite excited to try out this fancy new AMT feature.  In fact, I had already enabled it and experimented with it on a couple of my development i7 Thinkpads, so I more or less knew what to expect.

At this point, I split this post in two.  You're welcome to read on, to learn what you need to know about Intel AMT + Ubuntu + the i5-3427u NUC...

:-Dustin

Read more
John

Merry Christmas, from both of us here in London:


Read more
Dustin Kirkland


A couple of weeks ago, I waxed glowingly about Ubuntu running on a handful of Intel NUCs that I picked up on Amazon, replacing some aging PCs serving various purposes around the house.  I have since returned all three of those, and upgraded to the i5-3427u version, since it supports Intel AMT.  Why would I do that?  Read on...
When my shiny new NUCs arrived, I was quite excited to try out this fancy new AMT feature.  In fact, I had already enabled it and experimented with it on a couple of my development i7 Thinkpads, so I more or less knew what to expect.

But what followed was 6 straight hours of complete and utter frustration :-(  Like slam your fist into the keyboard and shout obscenities into cheese.
Actually, on that last point, I find it useful, when I'm mad, to open up cheese on my desktop and get visibly angry.  Once I realize how dumb I look when I'm angry, it's a bit easier to stop being angry.  Seriously, try it sometime.
Okay, so I posted a couple of support requests on Intel's community forums.

Basically, I found it nearly impossible (like a 1 in 100 chance) to actually get into the AMT configuration menu using the required Ctrl-P.  And in the 2 or 3 times I did get in there, the default password, "admin", did not work.

After putting the kids to bed, downing a few pints of homebrewed beer, and attempting sleep (with a 2-week-old in the house), I lay in bed, awake in the middle of the night and it crossed my mind that...
No, no.  No way.  That couldn't be it.  Surely not.  That's really, really dumb.  Is it possible that the NUC's BIOS...  Nah.  Maybe, though.  It's worth a try at this point?  Maybe, just maybe, the NumLock key is enabled at boot???  It can't be.  The NumLock key is effin retarded, and almost as dumb as its braindead cousin, the CapsLock key.  OMFG!!!
Yep, that was it.  Unbelievable.  The system boots with the NumLock key toggled on.  My keyboard doesn't have an LED indicator that tells me such inane nonsense is the case.  And the BIOS doesn't expose a setting to toggle this behavior.  The "P" key is one of the keys that is NumLocked to "*".


So there must be some incredibly unlikely race condition, which I could win 1 in 100 times, where pressing Ctrl-P frantically enough actually sneaks me into the AMT configuration.  Seriously, Intel peeps, please make this an F-key, like the rest of the BIOS and early boot options...

And once I was there, the default password, "admin", includes two more keys that are NumLocked.  For security reasons, these look like "*****" no matter what I'm typing.  When I thought I was typing "admin", I was actually typing "ad05n".  And of course, there's no scratch pad where I can test my keyboard and see that this is the case.  In fact, I'm not the only person hitting similar issues.  It seems that most people using keyboards other than US-English are quite confused when they type "admin" over and over and over again, to their frustration.

Okay, rant over.  I posted my solution back to my own questions on the forum.  And finally started playing with AMT!

The synopsis: AMT is really, really impressive!

First, you need to enter the BIOS and ensure that AMT is enabled.  Then, you need to do whatever it takes to enter Intel's MEBx interface, using Ctrl-P (NumLock notwithstanding).  You'll be prompted for a password, and on your first login, this should be "admin" (NumLock notwithstanding).  Then you'll need to choose your own strong password.  Once in there, you'll need to enable a couple of settings, including networking/dhcp auto setup.  You can, at your option, also install some TLS certificates and secure your communications with your device.

AMT has a very simple, intuitive web interface.  Here is a comprehensive set of screenshots of all of the individual pages.

Once AMT is enabled on the target system, point a browser to port 16992, and click "Log On..."

The username is always "admin".  You'll set this password in the MEBx interface, using Ctrl-P just after BIOS post.

Here's the basic system status/overview.

The System Information page contains basic information about the system itself, including some of its capabilities.

The processor information page gives you the low down on your CPU.  Search ark.intel.com for your Intel CPU type to see all of its capabilities.

Check your memory capacity, type, speed, etc.

And your disk type, size, and serial number.

NUCs don't have battery information, but my Thinkpad does.

An event log has some interesting early boot and debug information here.

Arguably the most useful page, here you can power a system on, off, or hard reboot it.

If you have wireless capability, you choose whether you want that enabled/disabled when the system is off, suspended, or hibernated.

Here you can configure the network settings.  Unlike a BMC (Board Management Controller) on most server class hardware, which has its own dedicated interface, Intel AMT actually shares the network interface with the Operating System.

AMT actually supports IPv6 networking as well, though I haven't played with it yet.

Configure the hostname and Dynamic DNS here.

You can set up independent user accounts, if necessary.

And with a BIOS update, you can actually use Intel AMT over a wireless connection (if you have an Intel wireless card).
So this pointy/clicky web interface is nice, but not terribly scriptable (without some nasty screenscraping).  What about the command line interface?

The amttool command (provided by the amtterm package in Ubuntu) offers a nice command line interface into some of the functionality exposed by AMT.  You need to export an environment variable, AMT_PASSWORD, and then you can get some remote information about the system:

kirkland@x230:~⟫ amttool 10.0.0.14 info
### AMT info on machine '10.0.0.14' ###
AMT version: 7.1.20
Hostname: nuc1.
Powerstate: S0
Remote Control Capabilities:
IanaOemNumber 0
OemDefinedCapabilities IDER SOL BiosSetup BiosPause
SpecialCommandsSupported PXE-boot HD-boot cd-boot
SystemCapabilitiesSupported powercycle powerdown powerup reset
SystemFirmwareCapabilities f800

You can also retrieve the networking information:

kirkland@x230:~⟫ amttool 10.0.0.14 netinfo
Network Interface 0:
DhcpEnabled true
HardwareAddressDescription Wired0
InterfaceMode SHARED_MAC_ADDRESS
LinkPolicy 31
MACAddress 00-aa-bb-cc-dd-ee
DefaultGatewayAddress 10.0.0.1
LocalAddress 10.0.0.14
PrimaryDnsAddress 10.0.0.1
SecondaryDnsAddress 0.0.0.0
SubnetMask 255.255.255.0
Network Interface 1:
DhcpEnabled true
HardwareAddressDescription Wireless1
InterfaceMode SHARED_MAC_ADDRESS
LinkPolicy 0
MACAddress ee-ff-aa-bb-cc-dd
DefaultGatewayAddress 0.0.0.0
LocalAddress 0.0.0.0
PrimaryDnsAddress 0.0.0.0
SecondaryDnsAddress 0.0.0.0
SubnetMask 0.0.0.0

Far more handy than WoL alone, you can power up, power down, and power cycle the system.

kirkland@x230:~⟫ amttool 10.0.0.14 powerdown
host x220., powerdown [y/N] ? y
execute: powerdown
result: pt_status: success

kirkland@x230:~⟫ amttool 10.0.0.14 powerup
host x220., powerup [y/N] ? y
execute: powerup
result: pt_status: success

kirkland@x230:~⟫ amttool 10.0.0.14 powercycle
host x220., powercycle [y/N] ? y
execute: powercycle
result: pt_status: success

I was a little disappointed that amttool's info command didn't provide nearly as much information as the web interface.  However, I did find a fork of Gerd Hoffman's original Perl script on SourceForge here.  I don't know the upstream-ability of this code, but it worked very well for me, and I'm considering sponsoring/merging it into Ubuntu for 14.04.  Anyone have further experience with these enhancements?

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data BIOS
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'BIOS' (1 item):
(data struct.ver. 1.0)
Vendor: 'Intel Corp.'
Version: 'RKPPT10H.86A.0028.2013.1016.1429'
Release date: '10/16/2013'
BIOS characteristics: 'PCI' 'BIOS upgradeable' 'BIOS shadowing
allowed' 'Boot from CD' 'Selectable boot' 'EDD spec' 'int13h 5.25 in
1.2 mb floppy' 'int13h 3.5 in 720 kb floppy' 'int13h 3.5 in 2.88 mb
floppy' 'int5h print screen services' 'int14h serial services'
'int17h printer services'

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data ComputerSystem
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'ComputerSystem' (1 item):
(data struct.ver. 1.0)
Manufacturer: ' '
Product: ' '
Version: ' '
Serial numb.: ' '
UUID: 7ae34e30-44ab-41b7-988f-d98c74ab383d

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data Baseboard
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'Baseboard' (1 item):
(data struct.ver. 1.0)
Manufacturer: 'Intel Corporation'
Product: 'D53427RKE'
Version: 'G87971-403'
Serial numb.: '27XC63723G4'
Asset tag: 'To be filled by O.E.M.'
Replaceable: yes

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data Processor
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'Processor' (1 item):
(data struct.ver. 1.0)
ID: 0x4529f9eaac0f
Max Socket Speed: 2800 MHz
Current Speed: 1800 MHz
Processor Status: Enabled
Processor Type: Central
Socket Populated: yes
Processor family: 'Intel(R) Core(TM) i5 processor'
Upgrade Information: [0x22]
Socket Designation: 'CPU 1'
Manufacturer: 'Intel(R) Corporation'
Version: 'Intel(R) Core(TM) i5-3427U CPU @ 1.80GHz'

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data MemoryModule
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'MemoryModule' (2 items):
(* No memory device in the socket *)
(data struct.ver. 1.0)
Size: 8192 Mb
Form Factor: 'SODIMM'
Memory Type: 'DDR3'
Memory Type Details:, 'Synchronous'
Speed: 1333 MHz
Manufacturer: '029E'
Serial numb.: '123456789'
Asset Tag: '9876543210'
Part Number: 'GE86sTBF5emdppj '

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data VproVerificationTable
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'VproVerificationTable' (1 item):
(data struct.ver. 1.0)
CPU: VMX=Enabled SMX=Enabled LT/TXT=Enabled VT-x=Enabled
MCH: PCI Bus 0x00 / Dev 0x08 / Func 0x00
Dev Identification Number (DID): 0x0000
Capabilities: VT-d=NOT_Capable TXT=NOT_Capable Bit_50=Enabled
Bit_52=Enabled Bit_56=Enabled
ICH: PCI Bus 0x00 / Dev 0xf8 / Func 0x00
Dev Identification Number (DID): 0x1e56
ME: Enabled
Intel_QST_FW=NOT_Supported Intel_ASF_FW=NOT_Supported
Intel_AMT_FW=Supported Bit_13=Enabled Bit_14=Enabled Bit_15=Enabled
ME FW ver. 8.1 hotfix 40 build 1416
TPM: Disabled
TPM on board = NOT_Supported
Network Devices:
Wired NIC - PCI Bus 0x00 / Dev 0xc8 / Func 0x00 / DID 0x1502
BIOS supports setup screen for (can be editable): VT-d TXT
supports VA extensions (ACPI Op region) with maximum ver. 2.6
SPI Flash has Platform Data region reserved.

On a different note, I recently sponsored a package, wsmancli, into Ubuntu Universe for Trusty, at the request of Kent Baxley (Canonical) and Jared Dominguez (Dell), which provides the wsman command.  Jared writes more about it here in this Dell technical post.  With Kent's help, I did manage to get wsman to remotely power on a system.  I must say that it's a bit less user friendly than the equivalent amttool functionality above...

kirkland@x230:~⟫  wsman invoke -a RequestPowerStateChange -J request.xml http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_PowerManagementService?SystemCreationClassName="CIM_ComputerSystem",SystemName="Intel(r)AMT",CreationClassName="CIM_PowerManagementService",Name="Intel(r) AMT Power Management Service" --port 16992 -h 10.0.0.14 --username admin -p "ABC123abc123#" -V -v

I'm really enjoying the ability to remotely administer these systems.  And I'm really, really looking forward to the day when I can use MAAS to provision these systems!

:-Dustin

Read more
Kyle Nitzsche

Cordova 3.3 adds Ubuntu

Upstream Cordova 3.3.0 is released just in time for the holidays with a gift we can all appreciate: built-in Ubuntu support!

Cordova: multi-platform HTML5 apps

Apache Cordova is a framework for HTML5 app development that simplifies building and distributing HTML5 apps across multiple platforms, like Android and iOS. With Cordova 3.3.0, Ubuntu is an official platform!

The cool idea Cordova starts with is a single www/ app source directory tree that is built to different platforms for distribution. Behind the scenes, the app is built as needed for each target platform. You can develop your HTML5 app once and build it for many mobile platforms, with a single command.

With Cordova 3.3.0, one simply adds the Ubuntu platform, builds the app, and runs the Ubuntu app. This is done for Ubuntu with the same Cordova commands as for other platforms. Yes, it is as simple as:

$ cordova create myapp REVERSEDOMAINNAME.myapp myapp
$ cd myapp
(Optionally modify www/*)
$ cordova build [ ubuntu ]
$ cordova run ubuntu

Plugins

Cordova is a lot more than an HTML5 cross-platform web framework though.
It provides JavaScript APIs that enable HTML5 apps to make use of platform-specific back-end code to access a common set of devices and capabilities. For example, you can access device Events (battery status, physical button clicks, etc.), Geolocation, and a lot more. This is the Cordova "plugin" feature.

You can add Cordova standard plugins to an app easily with commands like this:

$ cordova plugin add org.apache.cordova.battery-status
(Optionally modify www/* to listen to the batterystatus event )
$ cordova build [ ubuntu ]
$ cordova run ubuntu

Keep an eye out for news about how Ubuntu click package cross-compilation capabilities will soon weave together with Cordova to enable deployment of plugins that are compiled for a specified target architecture, like the armhf architecture used in Ubuntu touch images (for phones, tablets, etc.).

Docs

As a side note, I'm happy to report that my documentation of initial Ubuntu platform support has landed and has been published in the Cordova 3.3.0 docs.


Read more