Canonical Voices

Posts tagged with 'canonical'

alex

In both git and bzr, each branch you clone is a full copy of the project’s history. Once you have downloaded the source control objects from the remote location (e.g. GitHub or Launchpad), you can then use your local copy of the repo to quickly create more local branches.

What if another user has code in their branch that you want to inspect or use?

In git, since it’s common to have many logical git branches in the same physical filesystem directory, the operation is conceptually a simple extension of the default workflow, where you use “git checkout” to switch between logical branches.

The extension is simply to add the location of the remote repo to your local repo and download any new objects you don’t already have.

Now you have access to the new branches, and can switch between them with “git checkout”.

In command sequences:

git remote add alice https://github.com/alice/project.git
git remote update
git checkout alice/new_branch
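If you want to build on Alice’s branch rather than just inspect it, creating a local branch avoids a detached HEAD; a one-liner using the example names above:

git checkout -b new_branch alice/new_branch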

This workflow is great if project.git is very large, and you have a slow network. The remote update will only download Alice’s objects that you don’t already have, which should be minimal, comparatively speaking.

In bzr, the default workflow is to have a separate physical filesystem directory for each logical branch. It is possible to make different branches share the same physical directory with the ‘colo’ plugin, but my impression is that most people don’t use it and opt for the default.

Since different bzr branches will have different directories by default, getting them to share source control objects can be trickier, especially when a remote repo is involved.

Again, the use case here is to avoid having to re-download a gigantic remote branch, especially when perhaps 98% of the objects are the same.

I read and re-read the `bzr branch` man page multiple times, wondering if some combination of --stacked, --use-existing-dir, --hardlink, or --bind could do this, but I ended up baffled. After some good clues from the friendly folks in the #bzr IRC channel, I found this answer:

Can I take a bazaar branch and make it my main shared repository?

I used a variation of the second (non-accepted) answer:

bzr init-repo ../
bzr reconfigure --use-shared

I was then able to:

cd ..
bzr branch lp:~alice/project/new_branch
cd new_branch

The operation was very fast, as bzr downloaded only the new objects from Alice that I was missing, and that was exactly what I wanted. \o/

###

Additional notes:

  1. When you issue “bzr init-repo ../”, be sure that your parent directory does not already contain a .bzr/ directory, or you might be unhappy
  2. Another method to accomplish something similar during “git clone” is to use the --reference switch (see the sketch after this list)
  3. I don’t know what would have happened if you just issued “bzr pull lp:~alice/project/new_branch” inside your existing branch, but my intuition tells me “probably not what you want”, as “bzr pull” tends to want to change the working state of your tree with merge commits.
  4. Again, contrast with git, which has a “git fetch” concept that only downloads the remote objects without applying them, leaving it up to the user to decide what to do with them.
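A minimal sketch of the --reference approach from note 2 (assuming an existing local clone at ../project whose objects can be borrowed):

git clone --reference ../project https://github.com/alice/project.git

Keep in mind that the new clone continues to borrow objects from the reference repository, so don’t delete the reference out from under it.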

Read more
Michael Hall

We wrapped up the last day of our sprint with a new team photo.  I can honestly say I couldn’t think of a better group of people to be working with.  Even the funny looking guy in the middle.

I mentioned that earlier in the week we decided on naming SDK releases after distro releases, and with that information in hand I spent my last day getting the latest API docs uploaded, so if you’re writing apps for the latest device images, you’ll want to use these: http://developer.ubuntu.com/api/qml/sdk-14.04/

In the coming week I’ll be working to get the documentation publishing scripts added to the automated build and testing process, so those docs will be continuously updated until the release of Ubuntu 14.04, at which point we’ll freeze those doc pages and start publishing daily updates for 14.10.  Being able to publish all of those docs in a matter of minutes was a particular thrill for me, after working for so long to get that feature into production.  It certainly proves that it was the right approach.

Read more
Michael Hall

Second to last day of the sprint, and we’ve been shifting gears from presenting ideas and brainstorming to making solid plans and bringing all the disparate pieces together.  The result is looking very, very promising.

I started out this morning by updating my Nexus 4 to build 166, which brings some improvements to the Unity 8 and system apps.  I’m still poking around to discover what’s new.

I had a handful of great conversations with Jamie (security) and Ken (content-hub) about how to deliver creative content via click packages in the new store.  It looks like wallpapers will be relatively easy to support, and Ken and I (mostly Ken) will be working on adding that to the Click installer and System Settings.  Theme support is unfortunately going to be more difficult, since our QML themes are full QML themselves, and can run their own code, which makes them a security concern.  We’re going to try and support a safe subset of styling to be delivered via Click packages, but that’s not likely to happen this cycle.

After lunch we had another set of presentations, this time from Florian Boucault on the SDK team about app performance.  After briefly covering the performance goals we need to meet to make our UI as smooth and responsive as iOS, he stunned us all by showing off live performance graphs overlaid on top of one of the Core Apps (sadly I didn’t get a picture of that), so you can see the CPU and GPU usage while interacting with the app.  This wonderful little piece of magic should be landing in device images in the next couple of weeks, and I for one cannot wait to try it out.  In the meantime, he was nice enough to sit down with me and walk me through using QtCreator’s Analyse tab to see what parts of my own app might be using more resources than they should.

Among the sessions I wasn’t able to attend today: more HTML5 device APIs are coming online, contacts syncing via the Online Accounts provider for Google got its first cut, the SDK’s StateSaver component got some finishing work done, and AppArmor optimizations landed that will speed up boot times.

Read more
Michael Hall

Today we had a lot of good discussions around app development, starting off with an update on the state of GoLang support and what was needed to get the Go/QML bridge packaged and available for people to start using.

From there we moved on to the future of Content Hub, which is really set to reach its full potential now, and we will hopefully see a wide range of system, core and 3rd party apps providing it with content.

After lunch Nick gave us all a quick lesson in how to properly use Autopilot, something I think we’re all going to become more familiar with in the coming months.  The key takeaway: Don’t Sleep.

Then we discussed QtCreator itself, and our various plugins for it.  We identified some easy fixes, and did a lot of brainstorming on how to attack the harder ones.  We saw the new packaging and cross-compilation support that’s being added to it now.  Zoltan topped it all off by giving us a very short demonstration, going from the creation of a new project all the way through creating a package, running package verification tests on it, copying it onto a phone and installing it, all in about 30 seconds!

We also discovered that the current SDK packages in the PPA were broken for Saucy and older releases (Trusty was okay).  Daniel, Zoltan and David Barth spent much of the day intensely debugging the problem, providing fixes, and shepherding those fixes through Launchpad and into the PPAs so that we could get it all working by the end of the day.  We then set aside time for a new session where we discussed what happened and what we can do to prevent it from happening again.  I’m pleased to say that some of those steps have already been implemented, and the rest will soon follow.

Finally we wrapped up the evening with chicken wings and beer, plus another fantastically entertaining card game courtesy of Alan Pope’s deranged humor.

Read more
Michael Hall

Another day packed with meetings and discussions today.  Here’s some of the highlights:

We decided that SDK version numbering should mirror distro numbering, so instead of Ubuntu SDK 2.0 we will have Ubuntu SDK 14.04.

We worked out more details on the next App Developer Showdown, including what additions and changes to the SDK and store will be ready for the contest, and what prizes we will try to get for it.

After reviewing the current documentation on developer.ubuntu.com, we identified some areas where we need to improve it before the App Showdown.

Alan Pope and I guest starred in Jono’s weekly Q&A session, from the hotel bar, which was loads of fun.  Watch the full video to hear more about what we’ve been discussing here and maybe find answers to some of your own questions.

Read more
Michael Hall

As I mentioned in my last post, I’m with the rest of my team in Orlando this week for a sprint. We are joined by many other groups from Canonical, and unfortunately we didn’t have enough meeting rooms for all of the breakout sessions, so the Community team was forced (forced I tell you) to meet on the patio by the pool.

We have had a lot of good discussions already, and we have four days left.  You’ll start to see some of the new ideas and changes going into effect next week.  Until then, stay tuned.

Read more
Michael Hall

Last week I posted on G+ about a couple of new sets of QML API docs that were published.  Well, that was only part of the actual story of what’s been going on with the Ubuntu API website lately.

Over the last month I’ve been working on implementing and deploying a RESTful JSON service on top of the Ubuntu API website, and last week is when all of that work finally found its way into production.  That means we now have a public, open API for accessing all of the information available on the API website itself!  This opens up many interesting opportunities for integration and mashups, from integration with QtCreator in the Ubuntu SDK, to mobile reference apps to run on the Ubuntu phone, or anything else your imagination can come up with.

But what does this have to do with the new published docs?  Well, the RESTful service also gives us the ability to push documentation up to the production server, which is how those docs got there.  I’ve been converting the old Django manage.py scripts that imported docs directly into the database to instead push them to the website via the new service, and the QtMultimedia and QtFeedback API docs were the first ones to use it.

Best of all, the scripts are all automated, which means we can start integrating them with the continuous integration infrastructure that the rest of Ubuntu Engineering has been building around our projects.  So in the near future, whenever there is a new daily build of the Ubuntu SDK, it will also push the new documentation up, so we will have both the stable release documentation as well as the daily development release documentation available online.

I don’t have any docs yet on how to use the new service, but you can go to http://developer.ubuntu.com/api/service/ to see what URLs are available for the different data types.  You can also append ?<field>=<value> keyword filters to your URL to narrow the results.  For example, if you wanted all of the Elements in the Ubuntu.Components namespace, you can use http://developer.ubuntu.com/api/service/elements/?namespace__name=Ubuntu.Components to do that.
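From a terminal, that query is a one-liner; since the service speaks JSON, the output pipes nicely into other tools:

curl "http://developer.ubuntu.com/api/service/elements/?namespace__name=Ubuntu.Components"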

That’s it for today, the first day of my UbBloPoMo posts.  The rest of this week I will be driving to and fro for a work sprint with the rest of my team, the Ubuntu SDK team, and many others involved in building the phone and app developer pieces for Ubuntu.  So the rest of this week’s posts may be much shorter.  We’ll see.

Happy Hacking.

Read more
Michael Hall

So it’s not February first yet, but what the heck I’ll go ahead and get started early.  I tried to do the whole NaBloPoMo thing a year or so ago, but didn’t make it more than a week.  I hope to do better this time, and with that in mind I’ve decided to put together some kind of a plan.

First things first, I’m going to cheat and only plan on having a post published every weekday of the month, since it seems that’s when most people are reading my blog (and/or Planet Ubuntu) anyway, and it means I don’t have to worry about it over the weekends.  If you really, really want to read a new post from me on Saturday… you should get a hobby.  Then blog about it, on Planet Ubuntu.

To try and keep me from forgetting to blog during the days I am committing to, I’ve scheduled a recurring 30 minute slot on my calendar.  UbBloPoMo posts should be something you can write up in 30 minutes or less, I think, so that should suffice.  I’ve also scheduled it for the end of my work day, so I can talk about things that are still fresh in my mind, to make it even easier.

Finally, because Europe is off work by the end of my day, I’m going to schedule all of my posts to publish the following morning at 9am UTC (posts written Friday will publish on Monday morning).  I’ve been doing this for a while with my previous posts, and it seems to get more views when I do. For example, this post was written yesterday, but posted while I was still sound asleep this morning.  The internet is a magical place.

So, today being Friday, I will be writing my first actual UbBloPoMo entry this evening, and it will post on Monday February 3rd.  What will it be about I wonder?  The suspense is killing me.

 

Read more
alex

MLS/SFO - before

When you use GPS on your mobile device, it is almost certainly using some form of assistance to find your location faster. Attempting to only use pure GPS satellites can take as long as 15 or 20 minutes.

Therefore, modern mobile devices use other ambient wireless signals such as cell towers and wifi access points to speed up your location lookup. There’s lots of technology behind this, but we simplify by calling it all AGPS (assisted GPS).

The thing is, the large databases that contain this ambient wireless information are almost all proprietary. Some data collectors will sell you commercial access to their database. Others, such as Google, provide throttled, restricted, TOS-protected access. No one I am aware of provides access to the raw data at all.

Why are these proprietary databases an issue? Consider — wireless signals such as cell towers and wifi are ambient. They are just part of the environment. Since this information exists in the public domain, it should remain in the public domain, and free for all to access and build upon.

To be clear, collecting this public knowledge, aggregating it, and cleaning it up requires material effort. From a moral standpoint, I do think that if a company or organization goes through the immense effort to collect the data, it is reasonable and legitimate to monetarily profit from it. I have no moral issue there[1].

At the same time, this is the type of infrastructural problem that an open source, crowd sourced approach is perfectly designed to fix. Once. And for all of humanity.

Which is why the Mozilla Location Service is such an interesting and important project. Giving API access to their database is fantastic[2].

If you look at the map though, you’ll see lots of dark areas. And this is where you can help.

If you’re comfortable with early stage software with rough edges, you should install their Android app and help the project by going around and collecting this ambient wireless data.

Note: the only way to install the app right now is to put your Android phone in developer mode, physically connect a USB cable, and use the ‘adb’ tool to manually install it. Easy if you already know how; not so easy if you don’t. Hopefully they add the app to the Play store soon…
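For those who don’t already know how, the sequence is roughly the following (the APK filename is a stand-in for whatever the project’s download provides):

adb devices                  # confirm the phone is visible over USB
adb install MozStumbler.apk  # hypothetical filename for the downloaded app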

The app will upload the collected data to their database, and you can watch the map fill in (updated once a day). If you need more instant gratification, the leaderboard is updated in near realtime.

You might not want to spend time proofreading articles on Wikipedia, but running an app on your Android device and then moving around is pretty darn easy in comparison.

So that’s what I did today — rode my bike around for open source ideals. Here’s the map of my ride in Strava:

strava ride

I think I collected 4000+ data points on that ride. And now the map in San Francisco looks like this:

MLS/SFO - after

Pretty neat! You can obviously collect data however you like: walking around, driving your car, or taking public transportation.

For more reading:

Happy mapping!

[1]: Well, I might quibble with the vast amount of resources spent to collect this data, repeated across each vendor. None of them are collaborating with each other, so they all have to individually re-visit each GPS coordinate on the planet, which is incredibly wasteful.

[2]: You can’t download the raw database yet as they’re still working out the legal issues, but the Mozilla organization has a good track record of upholding open access ideals. This is addressed in their FAQ.

Read more
mandel

You might have had the following error in your dbus daemon at some point and said to yourself, WTF???

process 1288: arguments to dbus_message_set_error_name() were incorrect, assertion "error_name == NULL
   || _dbus_check_is_valid_error_name (error_name)" failed in file dbus-message.c line 2900.

Well, you are not the only one, and I might be able to point you in the right direction: your code is probably returning a QDBusError that you created using the QDBusError::Other enum value. The problem here is that this enum value only indicates that the error name is not known and therefore cannot be matched to a value in the QDBusError enum. When you use that enumerator, the message created has an incorrect name, as follows:

QDBusMessage(type=Error, service="", error name="other", error message="msg", signature="", contents=() ) 

And “other” is, of course, not a valid DBus error name, and therefore the app crashes. The easiest way to solve it: create a correct QDBusError ;)
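A minimal sketch of the fix (the class, method, and error name are hypothetical; the point is that a valid DBus error name is dot-separated, like an interface name):

void MyService::replyWithError(const QDBusMessage &call)
{
    // A fully qualified name passes DBus validation, unlike the
    // "other" name produced by QDBusError::Other
    QDBusMessage reply = call.createErrorReply(
        "com.example.MyService.Error.Failed", "msg");
    QDBusConnection::sessionBus().send(reply);
}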

Read more
alex

terminal

A little something I worked on before the holiday break was figuring out how to make it easy to target Ubuntu Touch if you run OS X.

Michael Hall wrote a blurb about it and the wiki instructions are here.

There are quite a number of dependencies that must be resolved before you can actually write and deploy an Ubuntu Touch app from OS X, but for now, simply installing Ubuntu onto a device is a good start.

Combined with our recently announced dual boot instructions, we’re trying to remove as many barriers to entry as possible.

Happy new year!

Read more
Dustin Kirkland

and bought 3 more with the i5-3427u CPU!



A couple of weeks ago, I waxed glowingly about Ubuntu running on a handful of Intel NUCs that I picked up on Amazon, replacing some aging PCs serving various purposes around the house.  I have since returned all three of those...and upgraded to the i5 version!!!  Read on to find out why...
Whenever I publish an article here, the Blogger/G+ integration immediately posts a link to my G+ feed.  In that thread, Mark Shuttleworth asked if these NUCs supported IPMI or a similar technology, such that they could be enabled in MAAS.  I responded in kind, that, sadly, no, they only support tried-and-trusty-but-dumb-old-Wake-on-LAN.

Alas, an old friend, fellow homebrewer, and new Canonicaler, Ryan Harper, noted that the i5-3427u version of the NUC (performance specs here) actually supports Intel AMT, which is similar to IPMI.  Actually, it's an implementation of WBEM, which itself is fundamentally an implementation of the CIM standard.

That's a healthy dose of alphabet soup for you.  MAAS, NUC, AMT, IPMI, WBEM, CIM.  What does all of this mean?

Let's do a quick round of introductions for the uninitiated!
  • NUC - Intel's Next Unit of Computing.  It's a palm-sized computer, probably intended to be a desktop, but actually functions quite well as a Linux server too.  Drawing about 10W, it has roughly the same power as an AWS m1.xlarge, and costs about as much as 45 days of an m1.xlarge's EC2 bill.
  • MAAS - Metal as a Service.  Installing Ubuntu servers (or desktops, for that matter), one by one, with a CD/DVD/USB-key is so 2004.  MAAS is your PXE/DHCP/TFTP/DNS (shit, more alphabet soup...) solution, all-in-one, ready to install Ubuntu onto lots of systems at scale!  Oh, and good news...  Juju supports MAAS as one of its environments, which is cool, in that you can deploy any charmed Juju workload to bare metal, in addition to AWS and OpenStack clouds.
  • AMT - Intel's Active Management Technology.  This is a feature found on some Intel platforms (specifically, those whose CPU and motherboard support vPro technology), which enables remote management of the system.  Specifically, if you can authenticate successfully to the system, you can retrieve detailed information about the hardware, power cycle it on and off, and modify the boot sequence.  These are the essential functions that MAAS requires to support a system.
  • IPMI - Intelligent Platform Management Interface.  Also pioneered by Intel, this is a more server-focused remote management technology, providing power on/off and other capabilities over the network.
  • WBEM - Web Based Enterprise Management.  Remote system management technology available through a web browser, based on some internet standards, including CIM.
  • CIM - Common Information Model.  An open standard that defines how systems in an IT environment are represented and managed.  Does that sound meta to you?  Well, yes, yes it is.
Okay, we have our vocabulary...now what?

So I actually returned all 3 of my Intel NUCs, which had the i3 processor, in favor of the more powerful (and slightly more expensive) i5 versions.  Note that I specifically bought the i5 Ivy Bridge versions, rather than the newer i5 Haswell, because only the Ivy Bridge actually supports AMT (for reasons that I cannot explain).  In fact, in comparison to Haswell, the Ivy Bridge systems:
  1. have AMT
  2. are less expensive
  3. have a higher maximum clock speed
  4. support a higher maximum memory
The only advantage I can see of the newer Haswells is a slightly lower energy footprint, and a slightly better video processor.

When 3 of my shiny new NUCs arrived, I was quite excited to try out this fancy new AMT feature.  In fact, I had already enabled it and experimented with it on a couple of my development i7 Thinkpads, so I more or less knew what to expect.

At this point, I split this post in two.  You're welcome to read on, to learn what you need to know about Intel AMT + Ubuntu + the i5-3427u NUC...

:-Dustin

Read more
John

Merry Christmas, from both of us here in London:

20131221-164821.jpg

Read more
Dustin Kirkland


A couple of weeks ago, I waxed glowingly about Ubuntu running on a handful of Intel NUCs that I picked up on Amazon, replacing some aging PCs serving various purposes around the house.  I have since returned all three of those, and upgraded to the i5-3427u version, since it supports Intel AMT.  Why would I do that?  Read on...
When my shiny new NUCs arrived, I was quite excited to try out this fancy new AMT feature.  In fact, I had already enabled it and experimented with it on a couple of my development i7 Thinkpads, so I more or less knew what to expect.

But what followed was 6 straight hours of complete and utter frustration :-(  Like slam your fist into the keyboard and shout obscenities into Cheese.
Actually, on that last point, I find it useful, when I'm mad, to open up Cheese on my desktop and get visibly angry.  Once I realize how dumb I look when I'm angry, it's a bit easier to stop being angry.  Seriously, try it sometime.
Okay, so I posted a couple of support requests on Intel's community forums.

Basically, I found it nearly impossible (like a 1-in-100 chance) to actually get into the AMT configuration menu using the required Ctrl-P.  And in the 2 or 3 times I did get in there, the default password, "admin", did not work.

After putting the kids to bed, downing a few pints of homebrewed beer, and attempting sleep (with a 2-week-old in the house), I lay in bed, awake in the middle of the night and it crossed my mind that...
No, no.  No way.  That couldn't be it.  Surely not.  That's really, really dumb.  Is it possible that the NUC's BIOS...  Nah.  Maybe, though.  It's worth a try at this point?  Maybe, just maybe, the NumLock key is enabled at boot???  It can't be.  The NumLock key is effin retarded, and almost as dumb as its braindead cousin, the CapsLock key.  OMFG!!!
Yep, that was it.  Unbelievable.  The system boots with the NumLock key toggled on.  My keyboard doesn't have an LED indicator that tells me such inane nonsense is the case.  And the BIOS doesn't expose a setting to toggle this behavior.  The "P" key is one of the keys that is NumLocked to "*".


So there must be some incredibly unlikely race condition that I could win 1 in 100 times, where pressing Ctrl-P frantically enough actually sneaks me into the AMT configuration.  Seriously, Intel peeps, please make this an F-key, like the rest of the BIOS and early boot options...

And once I was there, the default password, "admin", includes two more keys that are NumLocked.  For security reasons, these look like "*****" no matter what I'm typing.  When I thought I was typing "admin", I was actually typing "ad05n".  And of course, there's no scratch pad where I can test my keyboard and see that this is the case.  In fact, I'm not the only person hitting similar issues.  It seems that most people using keyboards other than US-English are quite confused when they type "admin" over and over and over again, to their frustration.

Okay, rant over.  I posted my solution back to my own questions on the forum.  And finally started playing with AMT!

The synopsis: AMT is really, really impressive!

First, you need to enter the BIOS and ensure that AMT is enabled.  Then, you need to do whatever it takes to enter Intel's MEBx interface, using Ctrl-P (NumLock notwithstanding).  You'll be prompted for a password, and on your first login, this should be "admin" (NumLock notwithstanding).  Then you'll need to choose your own strong password.  Once in there, you'll need to enable a couple of settings, including networking/DHCP auto setup.  You can, at your option, also install some TLS certificates and secure your communications with your device.

AMT has a very simple, intuitive web interface.  Here are a comprehensive set of screen shots of all of the individual pages.

Once AMT is enabled on the target system, point a browser to port 16992, and click "Log On..."

The username is always "admin".  You'll set this password in the MEBx interface, using Ctrl-P just after BIOS post.

Here's the basic system status/overview.

The System Information page contains basic information about the system itself, including some of its capabilities.

The processor information page gives you the low down on your CPU.  Search ark.intel.com for your Intel CPU type to see all of its capabilities.

Check your memory capacity, type, speed, etc.

And your disk type, size, and serial number.

NUCs don't have battery information, but my Thinkpad does.

An event log has some interesting early boot and debug information here.

Arguably the most useful page, here you can power a system on, off, or hard reboot it.

If you have wireless capability, you choose whether you want that enabled/disabled when the system is off, suspended, or hibernated.

Here you can configure the network settings.  Unlike a BMC (Baseboard Management Controller) on most server class hardware, which has its own dedicated interface, Intel AMT actually shares the network interface with the Operating System.

AMT actually supports IPv6 networking as well, though I haven't played with it yet.

Configure the hostname and Dynamic DNS here.

You can set up independent user accounts, if necessary.

And with a BIOS update, you can actually use Intel AMT over a wireless connection (if you have an Intel wireless card).
So this pointy/clicky web interface is nice, but not terribly scriptable (without some nasty screenscraping).  What about the command line interface?

The amttool command (provided by the amtterm package in Ubuntu) offers a nice command line interface into some of the functionality exposed by AMT.  You first need to export an environment variable, AMT_PASSWORD:
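export AMT_PASSWORD='YourStrongP@ssw0rd'    # placeholder; use the MEBx password you chose

With that in place, you can get some remote information about the system: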

kirkland@x230:~⟫ amttool 10.0.0.14 info
### AMT info on machine '10.0.0.14' ###
AMT version: 7.1.20
Hostname: nuc1.
Powerstate: S0
Remote Control Capabilities:
IanaOemNumber 0
OemDefinedCapabilities IDER SOL BiosSetup BiosPause
SpecialCommandsSupported PXE-boot HD-boot cd-boot
SystemCapabilitiesSupported powercycle powerdown powerup reset
SystemFirmwareCapabilities f800

You can also retrieve the networking information:

kirkland@x230:~⟫ amttool 10.0.0.14 netinfo
Network Interface 0:
DhcpEnabled true
HardwareAddressDescription Wired0
InterfaceMode SHARED_MAC_ADDRESS
LinkPolicy 31
MACAddress 00-aa-bb-cc-dd-ee
DefaultGatewayAddress 10.0.0.1
LocalAddress 10.0.0.14
PrimaryDnsAddress 10.0.0.1
SecondaryDnsAddress 0.0.0.0
SubnetMask 255.255.255.0
Network Interface 1:
DhcpEnabled true
HardwareAddressDescription Wireless1
InterfaceMode SHARED_MAC_ADDRESS
LinkPolicy 0
MACAddress ee-ff-aa-bb-cc-dd
DefaultGatewayAddress 0.0.0.0
LocalAddress 0.0.0.0
PrimaryDnsAddress 0.0.0.0
SecondaryDnsAddress 0.0.0.0
SubnetMask 0.0.0.0

Far more handy than WoL alone, you can power up, power down, and power cycle the system.

kirkland@x230:~⟫ amttool 10.0.0.14 powerdown
host x220., powerdown [y/N] ? y
execute: powerdown
result: pt_status: success

kirkland@x230:~⟫ amttool 10.0.0.14 powerup
host x220., powerup [y/N] ? y
execute: powerup
result: pt_status: success

kirkland@x230:~⟫ amttool 10.0.0.14 powercycle
host x220., powercycle [y/N] ? y
execute: powercycle
result: pt_status: success

I was a little disappointed that amttool's info command didn't provide nearly as much information as the web interface.  However, I did find a fork of Gerd Hoffman's original Perl script on Sourceforge here.  I don't know the upstream-ability of this code, but it worked very well for me, and I'm considering sponsoring/merging it into Ubuntu for 14.04.  Anyone have further experience with these enhancements?

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data BIOS
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'BIOS' (1 item):
(data struct.ver. 1.0)
Vendor: 'Intel Corp.'
Version: 'RKPPT10H.86A.0028.2013.1016.1429'
Release date: '10/16/2013'
BIOS characteristics: 'PCI' 'BIOS upgradeable' 'BIOS shadowing
allowed' 'Boot from CD' 'Selectable boot' 'EDD spec' 'int13h 5.25 in
1.2 mb floppy' 'int13h 3.5 in 720 kb floppy' 'int13h 3.5 in 2.88 mb
floppy' 'int5h print screen services' 'int14h serial services'
'int17h printer services'

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data ComputerSystem
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'ComputerSystem' (1 item):
(data struct.ver. 1.0)
Manufacturer: ' '
Product: ' '
Version: ' '
Serial numb.: ' '
UUID: 7ae34e30-44ab-41b7-988f-d98c74ab383d

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data Baseboard
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'Baseboard' (1 item):
(data struct.ver. 1.0)
Manufacturer: 'Intel Corporation'
Product: 'D53427RKE'
Version: 'G87971-403'
Serial numb.: '27XC63723G4'
Asset tag: 'To be filled by O.E.M.'
Replaceable: yes

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data Processor
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'Processor' (1 item):
(data struct.ver. 1.0)
ID: 0x4529f9eaac0f
Max Socket Speed: 2800 MHz
Current Speed: 1800 MHz
Processor Status: Enabled
Processor Type: Central
Socket Populated: yes
Processor family: 'Intel(R) Core(TM) i5 processor'
Upgrade Information: [0x22]
Socket Designation: 'CPU 1'
Manufacturer: 'Intel(R) Corporation'
Version: 'Intel(R) Core(TM) i5-3427U CPU @ 1.80GHz'

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data MemoryModule
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'MemoryModule' (2 items):
(* No memory device in the socket *)
(data struct.ver. 1.0)
Size: 8192 Mb
Form Factor: 'SODIMM'
Memory Type: 'DDR3'
Memory Type Details:, 'Synchronous'
Speed: 1333 MHz
Manufacturer: '029E'
Serial numb.: '123456789'
Asset Tag: '9876543210'
Part Number: 'GE86sTBF5emdppj '

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data VproVerificationTable
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'VproVerificationTable' (1 item):
(data struct.ver. 1.0)
CPU: VMX=Enabled SMX=Enabled LT/TXT=Enabled VT-x=Enabled
MCH: PCI Bus 0x00 / Dev 0x08 / Func 0x00
Dev Identification Number (DID): 0x0000
Capabilities: VT-d=NOT_Capable TXT=NOT_Capable Bit_50=Enabled
Bit_52=Enabled Bit_56=Enabled
ICH: PCI Bus 0x00 / Dev 0xf8 / Func 0x00
Dev Identification Number (DID): 0x1e56
ME: Enabled
Intel_QST_FW=NOT_Supported Intel_ASF_FW=NOT_Supported
Intel_AMT_FW=Supported Bit_13=Enabled Bit_14=Enabled Bit_15=Enabled
ME FW ver. 8.1 hotfix 40 build 1416
TPM: Disabled
TPM on board = NOT_Supported
Network Devices:
Wired NIC - PCI Bus 0x00 / Dev 0xc8 / Func 0x00 / DID 0x1502
BIOS supports setup screen for (can be editable): VT-d TXT
supports VA extensions (ACPI Op region) with maximum ver. 2.6
SPI Flash has Platform Data region reserved.

On a different note, I recently sponsored a package, wsmancli, into Ubuntu Universe for Trusty, at the request of Kent Baxley (Canonical) and Jared Dominguez (Dell), which provides the wsman command.  Jared writes more about it in this Dell technical post.  With Kent's help, I did manage to get wsman to remotely power on a system.  I must say that it's a bit less user friendly than the equivalent amttool functionality above...

kirkland@x230:~⟫  wsman invoke -a RequestPowerStateChange -J request.xml http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_PowerManagementService?SystemCreationClassName="CIM_ComputerSystem",SystemName="Intel(r)AMT",CreationClassName="CIM_PowerManagementService",Name="Intel(r) AMT Power Management Service" --port 16992 -h 10.0.0.14 --username admin -p "ABC123abc123#" -V -v

I'm really enjoying the ability to remotely administer these systems.  And I'm really, really looking forward to the day when I can use MAAS to provision these systems!

:-Dustin

Read more
Kyle Nitzsche

Cordova 3.3 adds Ubuntu

Upstream Cordova 3.3.0 was released just in time for the holidays with a gift we can all appreciate: built-in Ubuntu support!

Cordova: multi-platform HTML5 apps

Apache Cordova is a framework for HTML5 app development that simplifies building and distributing HTML5 apps across multiple platforms, like Android and iOS. With Cordova 3.3.0, Ubuntu is an official platform!

The cool idea Cordova starts with is a single www/ app source directory tree that is built to different platforms for distribution. Behind the scenes, the app is built as needed for each target platform. You can develop your HTML5 app once and build it for many mobile platforms, with a single command.

With Cordova 3.3.0, one simply adds the Ubuntu platform, builds the app, and runs the Ubuntu app. This is done for Ubuntu with the same Cordova commands as for other platforms. Yes, it is as simple as:

$ cordova create myapp REVERSEDOMAINNAME.myapp myapp
$ cd myapp
(Optionally modify www/*)
$ cordova build [ ubuntu ]
$ cordova run ubuntu

Plugins

Cordova is a lot more than an HTML5 cross-platform web framework though.
It provides JavaScript APIs that enable HTML5 apps to use platform-specific back-end code to access a common set of devices and capabilities. For example, you can access device Events (battery status, physical button clicks, etc.), Geolocation, and a lot more. This is the Cordova "plugin" feature.

You can add Cordova standard plugins to an app easily with commands like this:

$ cordova plugin add org.apache.cordova.battery-status
(Optionally modify www/* to listen to the batterystatus event )
$ cordova build [ ubuntu ]
$ cordova run ubuntu
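That www/ modification can be just a few lines of JavaScript; here is a sketch using the event and fields documented for the battery-status plugin:

window.addEventListener("batterystatus", function (status) {
    // status.level is the charge percentage; status.isPlugged is a boolean
    console.log("Battery: " + status.level + "%, plugged: " + status.isPlugged);
}, false);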

Keep an eye out for news about how Ubuntu click package cross-compilation capabilities will soon weave together with Cordova to enable deployment of plugins that are compiled for a specified target architecture, like the armhf architecture used in Ubuntu Touch images (for phones, tablets, etc.).

Docs

As a side note, I'm happy to report that my documentation of initial Ubuntu platform support has landed and has been published in the Cordova 3.3.0 docs.


Read more
jdstrand

Excellent blog post by my colleague Marc Deslauriers discussing how we are working to provide a safe and usable experience in the Ubuntu app store: http://mdeslaur.blogspot.com/2013/12/ubuntu-touch-and-user-privacy.html


Filed under: canonical, security

Read more
alex

I wanted somewhere easy to dump technical notes that weren’t really suitable for this blog. I wanted a static HTML generator type of blog because the place to dump my notes (people.canonical.com) isn’t really set up to run anything complex for a multitude of reasons, such as security.

I also didn’t want to just do it 1990s style and throw up plain ASCII README files (the way I used to) because I envision embedding images and possibly movies in my notes here. At the same time, the closer I can get to a README the better, and so that seems to imply markdown.

After a brief fling with blacksmith, where absolutely nothing worked because of a magical web 2.0 fix-everything-but-the-zillions-of-pages-of-existing-docs rewrite, I wiped the blood and puke from my mouth and settled on Octopress.

Octopress was much better, but it was still a struggle. It’s a strange state of affairs that deploying wordpress on a hosted site is actually *less* difficult than configuring what *should* be a simple static HTML generator. Oh well.

Here are some notes to make life easier for the next person to come along.

Deploying to a subdir, fully explained
One wrinkle of hosting on a shared server using Apache conventions is that your filesystem path for hosting files will probably get rewritten by the web server and displayed differently.

That is:

    unix filesystem path                 =>  address displayed in url bar
    /home/achiang/public_html/technotes  =>  http://people.canonical.com/~achiang/technotes

The subdir deployment docs talk about how to do this, but the only way I could get it to work is by issuing: rake set_root_dir[~achiang/technotes] first. So the proper sequence is:

rake set_root_dir[~achiang/technotes]

vi Rakefile	# and change:
	-ssh_user       = "user@domain.com"
	+ssh_user       = "achiang@people.canonical.com"
	-document_root  = "~/website.com/"
	+document_root  = "~/public_html/technotes"

vi _config.yml	# and change:
	-url: http://yoursite.com
	+url: http://people.canonical.com/~achiang/technotes

rake install
rake generate
rake deploy	# assuming you've setup rsync deploy properly

Once you’ve tested that this is working, you can optionally set rsync_delete = true. But don’t make the same mistake I made and set that option too soon, or else you will delete files you didn’t want to delete.

Finally, once you have this working, the test address for your local machine using the `rake preview` command is http://localhost:4000/~achiang/technotes.

Video tag gotchas
One nice feature of Octopress is the video plugin it uses to allow embeddable H.264 movies. I discovered that unlike the image tag which apparently allows for local paths to images, the video tag seems to require an actual URL starting with http://.

Therefore:

    {% video /images/movie.mp4 %}	# BROKEN!

However, this works:

    {% video http://people.canonical.com/~achiang/images/movie.mp4 %}

I’ll work up a patch for this at some point.

Misc gotchas
The final thing I tripped over was https://github.com/imathis/octopress/pull/1438.

I’ll update here if upstream takes the patch, but if not, then you’ll want the one-liner in the pull request above.

Summary
After the initial fiddly bits, Octopress is good enough. I can efficiently write technical content using $EDITOR, the output looks modern and stylish, and it all works on a fairly constrained, bog-standard Apache install without opening any security holes in my company’s infrastructure.

Read more
mandel

Ok, imagine that you are working with Qt 5 and using the new way to connect signals. Let's say, for example, that we are working with QNetworkReply and we want to have a slot for the QNetworkReply::error signal that takes a QNetworkReply::NetworkError. The way to do it is the following:

connect(_reply, static_cast<void(QNetworkReply::*)
    (QNetworkReply::NetworkError)>(&QNetworkReply::error),
        this, &MyClass::onNetworkError);

The static_cast helps the compiler know which method (the signal or the actual error() method) you are talking about. I know, it is not nice at all, but it works, and a compile-time error beats getting a qWarning at runtime.

The problem is that without the help the compiler does not know which error method you are talking about :-/

Read more
Corey Goldberg

TLDR: I made a cool version control visualization of all the Ubuntu Touch Core Apps.

The video: https://www.youtube.com/watch?v=nAmKAgRS0tw

* Warning: abrasive techno music
* To be watched in HD, preferably at maximum volume


Making Gource visualizations of complex software projects is awesome. I love seeing a VCS commit log come to life as blooming trees and swarming workers. Normally, I do a visualization video of a single repository. But in this case, I used a bash script to create a visualization of multiple source code repositories. I wanted to see the progress of the entire stack of Ubuntu Touch Core Apps (17 projects). Ubuntu Touch Core Apps is an umbrella project for all 17 of the core apps that are available in Ubuntu on mobile devices.

The Ubuntu Touch Core Apps:

  • Dropping Letters
  • Evernote Online Accounts plugin
  • QtDeclarative bindings for the Grilo media scanner
  • Stock Ticker App
  • Sudoku App
  • Ubuntu Calculator App
  • Ubuntu Calendar App
  • Ubuntu Clock App
  • Ubuntu Document Viewer App
  • Ubuntu E-mail App
  • Ubuntu Facebook App
  • Ubuntu File Manager App
  • Ubuntu Music App
  • Ubuntu Phone Commons
  • Ubuntu RSS Feed Reader App
  • Ubuntu Terminal App
  • Ubuntu Weather App

Making the visualization:

Assuming you have a bunch of source code repositories already branched/cloned locally, here is a general version of the script to generate visualization videos of multiple projects/repositories: https://gist.github.com/cgoldberg/7488521
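In outline, the trick looks like this (the repo directory names are placeholders; see the gists for the real scripts):

#!/bin/bash
# Turn each repo's history into Gource's custom log format, then merge.
logs=""
for repo in dropping-letters sudoku-app ubuntu-clock-app; do
    gource --output-custom-log "${repo}.log" "$repo"
    # Prefix file paths with the repo name so each project grows its own tree
    sed -i "s#|/#|/${repo}/#" "${repo}.log"
    logs="$logs ${repo}.log"
done
sort -n $logs > combined.log    # merge all histories in timestamp order
gource --log-format custom combined.log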

The script I used to create the Ubuntu Touch Core Apps video: https://gist.github.com/cgoldberg/7516510

Read more