Canonical Voices

Posts tagged with 'canonical'

John

Merry Christmas, from both of us here in London:

[holiday photo]

Read more
Dustin Kirkland


A couple of weeks ago, I waxed glowingly about Ubuntu running on a handful of Intel NUCs that I picked up on Amazon, replacing some aging PCs serving various purposes around the house.  I have since returned all three of those, and upgraded to the i5-3427u version, since it supports Intel AMT.  Why would I do that?  Read on...
When my shiny new NUCs arrived, I was quite excited to try out this fancy new AMT feature.  In fact, I had already enabled it and experimented with it on a couple of my development i7 Thinkpads, so I more or less knew what to expect.

But what followed was 6 straight hours of complete and utter frustration :-(  Like slam-your-fist-into-the-keyboard, shout-obscenities-into-Cheese frustration.
Actually, on that last point, I find it useful, when I'm mad, to open up Cheese on my desktop and watch myself get visibly angry.  Once I realize how dumb I look when I'm angry, it's a bit easier to stop being angry.  Seriously, try it sometime.
Okay, so I posted a couple of support requests on Intel's community forums.

Basically, I found it nearly impossible (maybe a 1-in-100 chance) to actually get into the AMT configuration menu using the required Ctrl-P.  And in the 2 or 3 times I did get in there, the default password, "admin", did not work.

After putting the kids to bed, downing a few pints of homebrewed beer, and attempting sleep (with a 2-week-old in the house), I lay in bed, awake in the middle of the night and it crossed my mind that...
No, no.  No way.  That couldn't be it.  Surely not.  That's really, really dumb.  Is it possible that the NUC's BIOS...  Nah.  Maybe, though.  It's worth a try at this point?  Maybe, just maybe, the NumLock key is enabled at boot???  It can't be.  The NumLock key is utterly infuriating, and almost as dumb as its braindead cousin, the CapsLock key.  OMFG!!!
Yep, that was it.  Unbelievable.  The system boots with the NumLock key toggled on.  My keyboard doesn't have an LED indicator that tells me such inane nonsense is the case.  And the BIOS doesn't expose a setting to toggle this behavior.  The "P" key is one of the keys that is NumLocked to "*".


So there must be some incredibly unlikely race condition, which I could win about 1 time in 100, where pressing Ctrl-P frantically enough actually sneaks me into the AMT configuration.  Seriously, Intel peeps, please make this an F-key, like the rest of the BIOS and early boot options...

And once I was there, the default password, "admin", includes two more keys that are NumLocked.  For security reasons, these look like "*****" no matter what I'm typing.  When I thought I was typing "admin", I was actually typing "ad05n".  And of course, there's no scratch pad where I can test my keyboard and see that this is the case.  In fact, I'm not the only person hitting similar issues.  It seems that most people using keyboards other than US-English are quite confused when they type "admin" over and over and over again, to their frustration.

Okay, rant over.  I posted my solution back to my own questions on the forum.  And finally started playing with AMT!

The synopsis: AMT is really, really impressive!

First, you need to enter the BIOS and ensure that AMT is enabled.  Then, you need to do whatever it takes to enter Intel's MEBx interface, using Ctrl-P (NumLock notwithstanding).  You'll be prompted for a password, which on your first login should be "admin" (NumLock notwithstanding).  Then you'll need to choose your own strong password.  Once in there, you'll need to enable a couple of settings, including networking/DHCP auto setup.  You can, optionally, also install some TLS certificates to secure your communications with the device.

AMT has a very simple, intuitive web interface.  Here is a comprehensive set of screenshots of all of the individual pages.

Once AMT is enabled on the target system, point a browser at port 16992 on that system (e.g., http://10.0.0.14:16992/ for the host used in the examples below), and click "Log On..."

The username is always "admin".  The password is the one you set in the MEBx interface, using Ctrl-P just after the BIOS POST.

Here's the basic system status/overview.

The System Information page contains basic information about the system itself, including some of its capabilities.

The processor information page gives you the lowdown on your CPU.  Search ark.intel.com for your Intel CPU type to see all of its capabilities.

Check your memory capacity, type, speed, etc.

And your disk type, size, and serial number.

NUCs don't have battery information, but my Thinkpad does.

The event log captures some interesting early boot and debug information.

Arguably the most useful page: here you can power a system on, power it off, or hard-reboot it.

If you have wireless capability, you can choose whether you want it enabled or disabled when the system is off, suspended, or hibernated.

Here you can configure the network settings.  Unlike a BMC (Baseboard Management Controller) on most server class hardware, which has its own dedicated interface, Intel AMT actually shares the network interface with the operating system.

AMT actually supports IPv6 networking as well, though I haven't played with it yet.

Configure the hostname and Dynamic DNS here.

You can set up independent user accounts, if necessary.

And with a BIOS update, you can actually use Intel AMT over a wireless connection (if you have an Intel wireless card).
So this pointy/clicky web interface is nice, but not terribly scriptable (without some nasty screen-scraping).  What about the command line interface?

The amttool command (provided by the amtterm package in Ubuntu) offers a nice command line interface into some of the functionality exposed by AMT.  You need to export an environment variable, AMT_PASSWORD, and then you can get some remote information about the system:
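
A quick note on that: you export the password before calling the tool, something like this (the value here is just a placeholder):

export AMT_PASSWORD='YourStrongAMTPasswordHere1!'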

kirkland@x230:~⟫ amttool 10.0.0.14 info
### AMT info on machine '10.0.0.14' ###
AMT version: 7.1.20
Hostname: nuc1.
Powerstate: S0
Remote Control Capabilities:
IanaOemNumber 0
OemDefinedCapabilities IDER SOL BiosSetup BiosPause
SpecialCommandsSupported PXE-boot HD-boot cd-boot
SystemCapabilitiesSupported powercycle powerdown powerup reset
SystemFirmwareCapabilities f800

You can also retrieve the networking information:

kirkland@x230:~⟫ amttool 10.0.0.14 netinfo
Network Interface 0:
DhcpEnabled true
HardwareAddressDescription Wired0
InterfaceMode SHARED_MAC_ADDRESS
LinkPolicy 31
MACAddress 00-aa-bb-cc-dd-ee
DefaultGatewayAddress 10.0.0.1
LocalAddress 10.0.0.14
PrimaryDnsAddress 10.0.0.1
SecondaryDnsAddress 0.0.0.0
SubnetMask 255.255.255.0
Network Interface 1:
DhcpEnabled true
HardwareAddressDescription Wireless1
InterfaceMode SHARED_MAC_ADDRESS
LinkPolicy 0
MACAddress ee-ff-aa-bb-cc-dd
DefaultGatewayAddress 0.0.0.0
LocalAddress 0.0.0.0
PrimaryDnsAddress 0.0.0.0
SecondaryDnsAddress 0.0.0.0
SubnetMask 0.0.0.0

Far more handy than WoL alone, you can power up, power down, and power cycle the system.

kirkland@x230:~⟫ amttool 10.0.0.14 powerdown
host x220., powerdown [y/N] ? y
execute: powerdown
result: pt_status: success

kirkland@x230:~⟫ amttool 10.0.0.14 powerup
host x220., powerup [y/N] ? y
execute: powerup
result: pt_status: success

kirkland@x230:~⟫ amttool 10.0.0.14 powercycle
host x220., powercycle [y/N] ? y
execute: powercycle
result: pt_status: success

I was a little disappointed that amttool's info command didn't provide nearly as much information as the web interface.  However, I did find a fork of Gerd Hoffmann's original Perl script on SourceForge here.  I don't know the upstream-ability of this code, but it worked very well for me, and I'm considering sponsoring/merging it into Ubuntu for 14.04.  Anyone have further experience with these enhancements?

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data BIOS
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'BIOS' (1 item):
(data struct.ver. 1.0)
Vendor: 'Intel Corp.'
Version: 'RKPPT10H.86A.0028.2013.1016.1429'
Release date: '10/16/2013'
BIOS characteristics: 'PCI' 'BIOS upgradeable' 'BIOS shadowing
allowed' 'Boot from CD' 'Selectable boot' 'EDD spec' 'int13h 5.25 in
1.2 mb floppy' 'int13h 3.5 in 720 kb floppy' 'int13h 3.5 in 2.88 mb
floppy' 'int5h print screen services' 'int14h serial services'
'int17h printer services'

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data ComputerSystem
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'ComputerSystem' (1 item):
(data struct.ver. 1.0)
Manufacturer: ' '
Product: ' '
Version: ' '
Serial numb.: ' '
UUID: 7ae34e30-44ab-41b7-988f-d98c74ab383d

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data Baseboard
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'Baseboard' (1 item):
(data struct.ver. 1.0)
Manufacturer: 'Intel Corporation'
Product: 'D53427RKE'
Version: 'G87971-403'
Serial numb.: '27XC63723G4'
Asset tag: 'To be filled by O.E.M.'
Replaceable: yes

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data Processor
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'Processor' (1 item):
(data struct.ver. 1.0)
ID: 0x4529f9eaac0f
Max Socket Speed: 2800 MHz
Current Speed: 1800 MHz
Processor Status: Enabled
Processor Type: Central
Socket Populated: yes
Processor family: 'Intel(R) Core(TM) i5 processor'
Upgrade Information: [0x22]
Socket Designation: 'CPU 1'
Manufacturer: 'Intel(R) Corporation'
Version: 'Intel(R) Core(TM) i5-3427U CPU @ 1.80GHz'

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data MemoryModule
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'MemoryModule' (2 items):
(* No memory device in the socket *)
(data struct.ver. 1.0)
Size: 8192 Mb
Form Factor: 'SODIMM'
Memory Type: 'DDR3'
Memory Type Details:, 'Synchronous'
Speed: 1333 MHz
Manufacturer: '029E'
Serial numb.: '123456789'
Asset Tag: '9876543210'
Part Number: 'GE86sTBF5emdppj '

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data VproVerificationTable
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'VproVerificationTable' (1 item):
(data struct.ver. 1.0)
CPU: VMX=Enabled SMX=Enabled LT/TXT=Enabled VT-x=Enabled
MCH: PCI Bus 0x00 / Dev 0x08 / Func 0x00
Dev Identification Number (DID): 0x0000
Capabilities: VT-d=NOT_Capable TXT=NOT_Capable Bit_50=Enabled
Bit_52=Enabled Bit_56=Enabled
ICH: PCI Bus 0x00 / Dev 0xf8 / Func 0x00
Dev Identification Number (DID): 0x1e56
ME: Enabled
Intel_QST_FW=NOT_Supported Intel_ASF_FW=NOT_Supported
Intel_AMT_FW=Supported Bit_13=Enabled Bit_14=Enabled Bit_15=Enabled
ME FW ver. 8.1 hotfix 40 build 1416
TPM: Disabled
TPM on board = NOT_Supported
Network Devices:
Wired NIC - PCI Bus 0x00 / Dev 0xc8 / Func 0x00 / DID 0x1502
BIOS supports setup screen for (can be editable): VT-d TXT
supports VA extensions (ACPI Op region) with maximum ver. 2.6
SPI Flash has Platform Data region reserved.

On a different note, I recently sponsored a package, wsmancli, into Ubuntu Universe for Trusty, at the request of Kent Baxley (Canonical) and Jared Dominguez (Dell), which provides the wsman command.  Jared writes more about it here in this Dell technical post.  With Kent's help, I did manage to get wsman to remotely power on a system.  I must say that it's a bit less user-friendly than the equivalent amttool functionality above...

kirkland@x230:~⟫  wsman invoke -a RequestPowerStateChange -J request.xml http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_PowerManagementService?SystemCreationClassName="CIM_ComputerSystem",SystemName="Intel(r)AMT",CreationClassName="CIM_PowerManagementService",Name="Intel(r) AMT Power Management Service" --port 16992 -h 10.0.0.14 --username admin -p "ABC123abc123#" -V -v

I'm really enjoying the ability to remotely administer these systems.  And I'm really, really looking forward to the day when I can use MAAS to provision these systems!

:-Dustin

Read more
Kyle Nitzsche

Cordova 3.3 adds Ubuntu

Upstream Cordova 3.3.0 has been released just in time for the holidays, with a gift we can all appreciate: built-in Ubuntu support!

Cordova: multi-platform HTML5 apps

Apache Cordova is a framework for HTML5 app development that simplifies building and distributing HTML5 apps across multiple platforms, like Android and iOS. With Cordova 3.3.0, Ubuntu is an official platform!

The cool idea Cordova starts with is a single www/ app source directory tree that is built to different platforms for distribution. Behind the scenes, the app is built as needed for each target platform. You can develop your HTML5 app once and build it for many mobile platforms, with a single command.

With Cordova 3.3.0, one simply adds the Ubuntu platform, builds the app, and runs the Ubuntu app. This is done for Ubuntu with the same Cordova commands as for other platforms. Yes, it is as simple as:

$ cordova create myapp REVERSEDOMAINNAME.myapp myapp
$ cd myapp
$ cordova platform add ubuntu
(Optionally modify www/*)
$ cordova build [ ubuntu ]
$ cordova run ubuntu

Plugins

Cordova is a lot more than an HTML5 cross-platform web framework though.
It provides JavaScript APIs that enable HTML5 apps to use platform-specific back-end code to access a common set of devices and capabilities. For example, you can access device events (battery status, physical button clicks, etc.), geolocation, and a lot more. This is the Cordova "plugin" feature.

You can add Cordova standard plugins to an app easily with commands like this:

$ cordova plugin add org.apache.cordova.battery-status
(Optionally modify www/* to listen to the batterystatus event )
$ cordova build [ ubuntu ]
$ cordova run ubuntu

Keep an eye out for news about how Ubuntu click package cross-compilation capabilities will soon weave together with Cordova to enable deployment of plugins that are compiled for a specified target architecture, like the armhf architecture used in Ubuntu Touch images (for phones, tablets, etc.).

Docs

As a side note, I'm pleased to report that my documentation of initial Ubuntu platform support has landed and is published in the Cordova 3.3.0 docs.


Read more
jdstrand

Excellent blog post by my colleague Marc Deslauriers, discussing how we are working to provide a safe and usable experience in the Ubuntu app store: http://mdeslaur.blogspot.com/2013/12/ubuntu-touch-and-user-privacy.html


Filed under: canonical, security

Read more
mandel

OK, imagine that you are working with Qt 5 and using the new way to connect signals. Let's say, for example, that we are working with QNetworkReply and we want a slot for the QNetworkReply::error signal that takes a QNetworkReply::NetworkError. The way to do it is the following:

connect(_reply, static_cast<void(QNetworkReply::*)
    (QNetworkReply::NetworkError)>(&QNetworkReply::error),
        this, &MyClass::onNetworkError);

The static_cast tells the compiler which overload you mean (the signal, as opposed to the error() method that returns the last error). I know, it is not nice at all, but failing at compile time beats getting a qWarning at runtime.

The problem is that, without that help, the compiler cannot tell which error overload you are talking about :-/

Read more
Dustin Kirkland

Last week, I posed a question on Google+, looking for suggestions for a minimal-footprint x86 machine.  I was looking for something like a Raspberry Pi (of which I already have one), but it really had to be x86.

I was aware of a few options out there, but I was very fortunately introduced to one spectacular little box...the Intel NUC!

The unboxing experience is nothing short of pure marketing genius!



The "NUC" stands for Intel's Next Unit of Computing.  It's a compact little device, that ships barebones.  You need to add DDR3 memory (up to 16GB), an mSATA hard drive (if you want to boot locally), and an mSATA WiFi card (if you want wireless networking).

The physical form factor of all models is identical:

  • 4.6" x 4.4" x 1.6"
  • 11.7cm x 11.2cm x 4.1cm

There are 3 different processor options:


And there are three different peripheral setups:

  • HDMI 1.4a (x2) + USB 2.0 (x3) + Gigabit ethernet
  • HDMI 1.4a (x1) + Thunderbolt supporting DisplayPort 1.1a (x1) + USB 2.0 (x3)
  • HDMI 1.4a (x1) + Mini DisplayPort 1.1a (x2) + USB 2.0 (x2); USB 3.0 (x1)
I ended up buying 3 of these last week, and reworked my audio/video and baby monitoring setup in the house.  I bought 2 of these (i3 + Ethernet), and 1 of these (i3 + Thunderbolt).

Quite simply, I couldn't be happier with these little devices!

I used one of these to replace the dedicated audio/video PC (an x201 Thinkpad) hooked up in my theater.  The x201 was a beefy machine, with plenty of CPU and video capability.  But it was pretty bulky, rather noisy, and drew too much power.

And the other two are Baby-buntu baby monitors, as previously blogged here, replacing a real piece-of-crap Lenovo Q100 (Atom + SiS307DV, and all the horror associated with that sick chipset).

All 3 are now running Ubuntu 13.10, spectacularly I might add!  All of the hardware cooperated perfectly.




Here are the two views that I really wanted Amazon to show me, as I was buying the device...what the inside looks like!  You can see two mSATA ports and red/black WiFi antenna leads on the left, and two DDR3 slots on the right.


On the left, you can now see a 24GB mSATA SSD, and beneath it (not visible) is an Intel Centrino Advanced-N 6235 WiFi adapter.  On the right, I have two 8GB DDR3 memory modules.

Note, to get wireless working properly I did have to:

echo "options iwlwifi 11n_disable=1" | sudo tee -a /etc/modprobe.d/iwlwifi.conf


The BIOS is really super fancy :-)  There's a mouse and everything.  I made a few minor tweaks: adjusted the boot order, assigned 512MB of memory to the display adapter, and configured it to power itself back on after any power loss.


Speaking of power, it draws about 10 watts at idle, which costs me about $11/year in electricity.
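
As a rough sanity check of that figure (assuming an electricity rate of about $0.12/kWh, which is an assumption, not a number from the post): 10 W × 24 h × 365 days ≈ 88 kWh/year, or roughly $10-11/year.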


Some of you might be interested in some rough disk IO statistics...

kirkland@living:~⟫ sudo hdparm -Tt /dev/sda
/dev/sda:
Timing cached reads: 11306 MB in 2.00 seconds = 5657.65 MB/sec
Timing buffered disk reads: 1478 MB in 3.00 seconds = 492.32 MB/sec

And the lshw output...

    description: Desktop Computer
product: (To be filled by O.E.M.)
width: 64 bits
capabilities: smbios-2.7 dmi-2.7 vsyscall32
configuration: boot=normal chassis=desktop family=To be filled by O.E.M. sku=To be filled by O.E.M. uuid=[redacted]
*-core
description: Motherboard
product: D33217CK
vendor: Intel Corporation
physical id: 0
version: G76541-300
serial: [redacted]
*-firmware
description: BIOS
vendor: Intel Corp.
physical id: 0
version: GKPPT10H.86A.0025.2012.1011.1534
date: 10/11/2012
size: 64KiB
capacity: 6336KiB
capabilities: pci upgrade shadowing cdboot bootselect socketedrom edd int13floppy1200 int13floppy720 int13floppy2880 int5printscreen int14serial int17printer acpi usb biosbootspecification uefi
*-cache:0
width: 32 bits
clock: 66MHz
capabilities: storage msi pm ahci_1.0 bus_master cap_list
configuration: driver=ahci latency=0
resources: irq:40 ioport:f0b0(size=8) ioport:f0a0(size=4) ioport:f090(size=8) ioport:f080(size=4) ioport:f060(size=32) memory:f6906000-f69067ff
*-serial UNCLAIMED
description: SMBus
product: 7 Series/C210 Series Chipset Family SMBus Controller
vendor: Intel Corporation
physical id: 1f.3
bus info: pci@0000:00:1f.3
version: 04
width: 64 bits
clock: 33MHz
configuration: latency=0
resources: memory:f6905000-f69050ff ioport:f040(size=32)
*-scsi
physical id: 1
logical name: scsi0
capabilities: emulated
*-disk
description: ATA Disk
product: BP4 mSATA SSD
physical id: 0.0.0
bus info: scsi@0:0.0.0
logical name: /dev/sda
version: S8FM
serial: [redacted]
size: 29GiB (32GB)
capabilities: gpt-1.00 partitioned partitioned:gpt
configuration: ansiversion=5 guid=be0ab026-45c1-4bd5-a023-1182fe75194e sectorsize=512
*-volume:0
description: Windows FAT volume
vendor: mkdosfs
physical id: 1
bus info: scsi@0:0.0.0,1
logical name: /dev/sda1
logical name: /boot/efi
version: FAT32
serial: 2252-bc3f
size: 486MiB
capacity: 486MiB
capabilities: boot fat initialized
configuration: FATs=2 filesystem=fat mount.fstype=vfat mount.options=rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro state=mounted
*-volume:1
description: EXT4 volume
vendor: Linux
physical id: 2
bus info: scsi@0:0.0.0,2
logical name: /dev/sda2
logical name: /
version: 1.0
serial: [redacted]
size: 25GiB
capabilities: journaled extended_attributes large_files huge_files dir_nlink recover extents ext4 ext2 initialized
configuration: created=2013-11-06 13:01:57 filesystem=ext4 lastmountpoint=/ modified=2013-11-12 15:38:33 mount.fstype=ext4 mount.options=rw,relatime,errors=remount-ro,data=ordered mounted=2013-11-12 15:38:33 state=mounted
*-volume:2
description: Linux swap volume
vendor: Linux
physical id: 3
bus info: scsi@0:0.0.0,3
logical name: /dev/sda3
version: 1
serial: [redacted]
size: 3994MiB
capacity: 3994MiB
capabilities: nofs swap initialized
configuration: filesystem=swap pagesize=4095

It also supports: virtualization technology, S3/S4/S5 sleep states, Wake-on-LAN, and PXE boot.  Sadly, it does not support IPMI :-(

Finally, it's worth noting that I bought the i3 models for a specific purpose...  These three machines all have full virtualization capabilities (KVM).  Which means these little boxes, with their dual-core, hyper-threaded CPUs and 16GB of RAM, are about to become Nova compute nodes in my local OpenStack cluster ;-)  But that will be a separate blog post ;-)

Dustin

Read more
Michael Hall

A funny thing happened on the way to the forums, I was elected to serve on the Ubuntu Community Council. First of all I would like to thank those who voted for me, your support is a tremendous morale booster, and I look forward to representing your interests in the council.  I’d also like to congratulate the other council members on their election or re-election, I can’t imagine a better group of people to be working with.

That’s it, short and sweet.  Thanks again and let’s all get back to building awesome things!

Read more
Kyle Nitzsche

Ubuntu HTML5 API docs

HTML5 API docs published

I'm pleased to note that the Ubuntu HTML5 API docs I wrote are now done and published on developer.ubuntu.com. These cover the complete set of JavaScript objects that are involved in the UbuntuUI framework for HTML5 apps (at this time). For each object, the docs show how the corresponding HTML is declared and, of course, all public methods are documented.

A couple notes:
  • I wrote an html5APIexerciser app that implements every available public method in the framework. This was helpful to ensure that what I wrote matched reality ;) It may be useful to folks exploring development of  Ubuntu HTML5 apps. The app can be run directly in a browser by opening its index.html, but it is also an Ubuntu SDK project, so it can be opened and run from the Ubuntu SDK, locally and on an attached device.
  • The html5APIexerciser app does not demonstrate the full set of Ubuntu CSS styles available. For example, the styles provide gorgeous toggle buttons and progress spinners, but since they have no JavaScript objects and methods they are not included in the API docs. So be sure to explore the Gallery by installing the ubuntu-html5-theme-examples package and then checking out /usr/share/ubuntu-html5-theme/0.1/examples/
  • I decided to use yuidoc as the framework for adding source code comments as the basis for auto-generated web docs.  After you install yuidoc using npm you can build the docs from source as follows (collected into a single shell session after this list):
  1. Get the ubuntu-html5-theme branch: bzr branch lp:ubuntu-html5-theme
  2. Move to the JavaScript directory: cd ubuntu-html5-theme/0.1/ambiance/js/
  3. Build the docs: yuidoc -c yuidoc.json . (note that the trailing dot, the source directory, is part of the command). This creates the ./build directory.
  4. Launch the docs by opening build/index.html in your browser. They should look something like this 
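Collected as a single shell session (assuming yuidoc was already installed via npm, e.g. npm install -g yuidocjs), the steps look roughly like this:

bzr branch lp:ubuntu-html5-theme
cd ubuntu-html5-theme/0.1/ambiance/js/
yuidoc -c yuidoc.json .
xdg-open build/index.html
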
Thanks to +Adnane Belmadiaf for some theme work and his always helpful consultation, to +Daniel Beck for his initial writeup of the Ubuntu HTML5 framework, and of course to the developer.ubuntu.com team for their always awesome work!




Read more
mandel

In the last few months I have been working on the Ubuntu Download Manager, one of The Big Rocks of August. The u-d-m provides a D-Bus service that allows applications to request downloads to be performed, and it adds to those downloads some nice features that a user on a mobile phone, and probably a desktop, wants to have. Some of those features are:

  • AppArmor isolation per download. That means that only your application can interact with its downloads.
  • Pause/Resume downloads
  • Autodetect network connection.
  • WiFi-only downloads.
  • Hash check support.
  • Allow downloads to be performed while an application has been paused or killed.
  • Group downloads, where a bunch of files are provided and the different downloads are performed as a single atomic operation.

A download might seem like a simple action to perform, right? Well, as soon as you start supporting all of the above, a single download operation becomes a fairly complicated matter. The following state machine identifies the states a download goes through to support such features:

[Download state machine diagram]

As you can see, it is a complicated matter, and all of this has to be tested and checked by the QA team. By providing u-d-m (and later a client library to use it from C and from the Ubuntu SDK; I’m terribly sorry, but I did not have the time to finish it on time for the release) we are helping developers to perform simple downloads with robust code and not worry about all the corner cases. Performing a download is as simple as requesting it and listening to the different signals. This kind of service is also provided by Firefox OS, webOS and Tizen (but not in iOS or Sailfish), but I believe we are doing a better job at exposing a richer API. Of course, all of this is open source, so at least our friends at Jolla can use it (and I really mean friends; I think they are doing awesome work, and competition + collaboration is great).

In the following days I’ll be posting on how to use the API via C, C++ and DBus.

Read more
mandel

Long time no posts!

I have not been updating this page lately for a very simple reason: TOO MUCH TO CODE.

After a crazy amount of work to push the Ubuntu Download Manager to be the centralized daemon used to perform downloads in Ubuntu Touch, I have some more time to write. In the following weeks I’m going to focus on explaining some issues and patterns I have found using Qt while developing u-d-m (short for Ubuntu Download Manager from now on).

PS: It’s quite embarrassing that my last post was the ‘She got me dancing’ video clip… you should take a look nevertheless :)

Read more
jdstrand

Last time I discussed AppArmor, I gave an overview of how AppArmor is used in Ubuntu. With the release of Ubuntu 13.10, a number of features have been added:

  • Support for fine-grained DBus mediation for bus, binding name, object path, interface and member/method
  • The return of named AF_UNIX socket mediation
  • Integration with several services as part of the ApplicationConfinement work in support of click packages and the Ubuntu appstore
  • Better support for policy generation via the aa-easyprof tool and apparmor-easyprof-ubuntu policy
  • Native AppArmor support in Upstart

DBus mediation

 
Prior to Ubuntu 13.10, access to the DBus system bus was on/off and there was no mediation of the session bus or any other DBus buses, such as the accessibility bus. 13.10 introduces fine-grained DBus mediation. In a nutshell, you define ‘dbus’ rules in your AppArmor policy just like any other rules. When an application that is confined by AppArmor uses DBus, the dbus-daemon queries the kernel as to whether the application is allowed to perform this action. If it is, DBus proceeds normally; if not, DBus denies the access and logs it to syslog. An example denial is:
 
Oct 18 16:02:50 localhost dbus[3626]: apparmor="DENIED" operation="dbus_method_call" bus="session" path="/ca/desrt/dconf/Writer/user" interface="ca.desrt.dconf.Writer" member="Change" mask="send" name="ca.desrt.dconf" pid=30538 profile="/usr/lib/firefox/firefox{,*[^s][^h]}" peer_pid=3927 peer_profile="unconfined"

We can see that firefox tried to access gsettings (dconf) but was denied.
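
To find denials like this on a running system, a simple grep of syslog does the job (a minimal sketch; adjust the log path if your system logs elsewhere):

grep 'apparmor="DENIED"' /var/log/syslog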

DBus rules are a bit more involved than most other AppArmor rules, but they are still quite readable and understandable. For example, consider the following rule:
 
dbus (send)
   bus=session
   path=/org/freedesktop/DBus
   interface=org.freedesktop.DBus
   member=Hello
   peer=(name=org.freedesktop.DBus),

This rule says that the application is allowed to use the ‘Hello’ method on the ‘org.freedesktop.DBus’ interface of the ‘/org/freedesktop/DBus’ object for the process bound to the ‘org.freedesktop.DBus’ name on the ‘session’ bus. That is fine-grained indeed!

However, rules don’t have to be that fine-grained. For example, all of the following are valid rules:
 
dbus,
dbus bus=accessibility,
dbus (send) bus=session peer=(name=org.a11y.Bus),

A couple of things to keep in mind:

  • Because dbus-daemon is the one performing the mediation, DBus denials are logged to syslog and not kern.log. Recent versions of Ubuntu log kernel messages to /var/log/syslog, so I’ve gotten in the habit of just looking there for everything
  • The message content of DBus traffic is not examined
  • The userspace tools don’t understand DBus rules yet. That means aa-genprof, aa-logprof and aa-notify don’t work with these new rules. The userspace tools are being rewritten and support will be added in a future release.
  • The less fine-grained the rule, the more access is permitted. So ‘dbus,’ allows unrestricted access to DBus.
  • Responses to messages are implicitly allowed, so if you allow an application to send a message to a service, the service is allowed to respond without needing a corresponding rule.
  • dbus-daemon is considered a trusted helper (it integrates with AppArmor to enforce the mediation) and is not confined by default.

As a transitional step, existing policy for packages in the Ubuntu archive that use DBus will continue to have full access to DBus, but future Ubuntu releases may provide fine-grained DBus rules for this software. See ‘man 5 apparmor.d’ for more information on DBus mediation and AppArmor.

Application confinement

 
Ubuntu will support an app store model where software that has not gone through the traditional Ubuntu archive process is made available to users. While this greatly expands the quantity of quality software available to Ubuntu users, it also introduces new security risks. An important part of addressing these risks is to run applications under confinement. In this manner, apps are isolated from each other and are limited in what they can do on the system. AppArmor is at the heart of the Ubuntu ApplicationConfinement story and is already working on Ubuntu 13.10 for phones in the appstore. A nice introduction for developers on what the Ubuntu trust model is and how apps work within it can be found at http://developer.ubuntu.com.

In essence, a developer will design software with the Ubuntu SDK, then declare what type of application it is (which determines the AppArmor template to use), then declare any additional policy groups that the app needs. The templates and policy groups define the AppArmor file, network, DBus and any other rules that are needed. The software is packaged as a lightweight click package and when it is installed, an AppArmor click hook is run which creates a versioned profile for the application based on the templates and policy groups. On Unity 8, the application lifecycle makes sure that the app is launched under confinement via an upstart job. For other desktop environments, a desktop file is generated in ~/.local/share/applications that prepends ‘aa-exec-click’ to the Exec line. The upstart job and ‘aa-exec-click’ not only launch the app under confinement, but also set up the environment (eg, set TMPDIR to an application specific directory). Various APIs have been implemented so apps can access files (eg, Pictures via the gallery app), connect to services (eg, location and online accounts) and work within Unity (eg, the HUD) safely and in a controlled and isolated manner.

The work is not done, of course, and several important features need to be implemented and bugs fixed, but application confinement has already added a very significant security improvement to Ubuntu 13.10 for phones.

14.04

As mentioned, work remains. Some of the things we’d like to do for 14.04 include:

  • Finishing IPC mediation for things like signals, networking and abstract sockets
  • Work on APIs and AppArmor integration of services to work better on the converged device (ie, with traditional desktop applications)
  • Work with the upstream kernel on kdbus so we are ready for when that is available
  • Finish the LXC stacking work to allow different host and container policy for the same binary at the same time
  • While Mir already handles keyboard and mouse sniffing, we’d like to integrate with Mir in other ways where applicable (note, X mediation for keyboard/mouse sniffing, clipboard, screen grabs, drag and drop, and xsettings is not currently scheduled nor is wayland support. Both are things we’d like to have though, so if you’d like to help out here, join us on #apparmor on OFTC to discuss how to contribute)

Until next time, enjoy!


Filed under: canonical, security, ubuntu

Read more
ssweeny

Smarter and Faster

This is a very exciting release for me, not least because it’s the first official release of Ubuntu for Phones, which was the big focus for my team at Canonical this cycle. We worked on making it easy to spin up your own custom build of Ubuntu and helped out with fixing bugs wherever we could.

If you’re comfortable flashing your phone you can install Ubuntu with these instructions.

Of course, Ubuntu still rocks your socks on your desktop or laptop, so take the tour or go grab it!

Read more
John

My MacBook currently hosts Ubuntu, and there is no copy of OS X on it.

I keep a USB stick with OS X installed on it, and today I got to test whether I could use it to install a firmware update.

Short version: It just works. Slightly longer version: I just rebooted a few times, holding down alt/option whenever I needed to boot OS X from the USB stick. The OS X software updater took care of the rest.

Read more
Matt Fischer

There are a myriad of ways to do cross-compiles, and a smaller myriad that can do chrooted Debian package builds. One of my favorite tools for this is pbuilder, and I’d like to explain how (and why) I use it.

A pbuilder environment is a chrooted environment which can have a different distroseries or architecture than your host system. This is very useful, for example, when your laptop is running raring x64 and you need to build binaries for saucy armhf to run on Ubuntu Touch. Typically pbuilders are used to build Debian packages, but they can also provide you with a shell in which you can do non-package compilations. When you exit a pbuilder, (typically) any packages you've installed or changes you've made are dropped. This makes it the perfect testing ground when building packages to ensure that you've defined all your dependencies correctly. pbuilder is also smart enough to install deps for you for package builds, which makes your life easier and also avoids polluting your development system with lots of random -dev packages. So if you're curious, I recommend that you follow along below and try a pbuilder out; it's pretty simple to get started.

Getting Setup

First install pbuilder and pbuilder-scripts. The scripts add-on really simplifies setup and usage and I highly recommend it. This guide makes heavy use of these scripts, although you can use pbuilder without them.

sudo apt-get install pbuilder pbuilder-scripts

Second, you need to set up your ~/.pbuilderrc file. This file defines a few things, mainly a set of extra default packages that your pbuilder will install and which directories are bind-mounted into your pbuilder. By default pbuilder-scripts looks in ~/Projects, so make that directory at this point as well and set it in the .pbuilderrc file.

Add the following to .pbuilderrc, substituting your username for user:

BINDMOUNTS="${BINDMOUNTS} /home/user/Projects"
EXTRAPACKAGES="${EXTRAPACKAGES} pbuilder devscripts gnupg patchutils vim-tiny openssh-client"

I like having openssh-client in my pbuilder so I can more easily copy stuff out to target boxes, but it's not strictly necessary. A full manpage for ~/.pbuilderrc is also available if you want to read about more advanced settings.

Don’t forget to make the folder:
mkdir ~/Projects

Making your First Pbuilder

Now that you’re setup, it’s time to make your first pbuilder. You need to select a distroseries (saucy, raring, etc) and an architecture. I’m going to make one for the raring i386. To do this we use pcreate. I use a naming scheme here so that when I see the 10 builders I have, I can keep some sanity, I recommend you do the same, but if you want to call your pbuilder “bob” that’s fine too.

cd ~/Projects
pcreate -a i386 -d raring raring-i386

Running this will drop you into an editor. Here you can add extra sources, for example, if you need packages from a PPA. Any sources list you add here will be permanent anytime you use this pbuilder. If you have no idea what I mean by PPA, then just exit your editor here.

At this point pcreate will be downloading packages and setting up the chroot. This may take 10-30 minutes depending on your connection speed.

This is a good time to make coffee or play video games


Using your pbuilder

pbuilders have two main use cases that I will cover here:

Package Builds

pbuilder for package builds is dead simple. If you place the package code inside ~/Projects/raring-i386, pbuilder will automagically guess the right pbuilder to use. Elsewhere, and you'll need to specify it.

Aside: To avoid polluting the root folder, I generally lay the folders out like this:

~/Projects/raring-i386/project/project-0.52

Then I just do this


cd ~/Projects/raring-i386/project/project-0.52
pbuild

This will unpack the pbuilder, install all the deps for “project” and then attempt to build it. It will exit the pbuilder (and repack it) whether it succeeds or fails. Any debs built will be up one level.

Other – via a Shell

The above method works great for building a package, but if you are building over and over to iterate on changes, it's inefficient. This is because every time it runs, it needs to unpack the chroot and install dependencies (it is at least smart enough to cache the deps). In this case, it's faster to drop into a shell and stay there after the build.

cd ~/Projects/raring-i386
ptest

This drops you into a shell inside the chroot, so you’ll need to manually install build-deps.

apt-get build-dep project
dpkg-buildpackage
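
Since your GPG key normally isn't available inside the chroot (see the Signing caveat below), you will likely want to skip signing. These are standard dpkg-buildpackage flags, not anything pbuilder-specific:

dpkg-buildpackage -us -uc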

ptest also works great when you need to do non-package builds; for example, I build all my armhf test code in a pbuilder shell that I'll leave open for weeks at a time.

Updating your pbuilder

Over time the packages in your pbuilder may get out of date. You can update it simply by running:

pupdate -p raring-i386

This is the equivalent of running apt-get upgrade on your system.

Caveats

A few caveats for starting with pbuilder.

  • Ownership – files built by pbuilder will end up owned as root, if you want to manipulate them later, you’ll need to chown them back or deal with using sudo
  • Signing – unless you bind mount your key into your pbuilder you cannot sign packages in the pbuilder. I think the wiki page may cover other solutions.
  • Segfaults – I use pbuilders on top of qemu a lot so that I can build for ARM devices, however, it seems that the more complex the compile (perhaps the more memory intensive?) the more likely it is to segfault qemu, thereby killing the pbuilder. This happened to a colleague this week when trying to pbuild Unity8 for armhf. It’s happened to me in the past. The only solution I know for this issue is to build on real hardware.
  • Speed – For emulated builds, like armhf on top of x86_64 hardware (which I do all the time), pbuilds can be slow. Even for non-emulated builds, the pbuilder needs to uncompress itself and install deps every time. For this reason if you plan on doing multiple builds, I’d start with ptest.
  • Cleanup – When you tire of your pbuilder, you need to remove it from /var/cache/pbuilder. It also caches debs in here and some other goodies. You may need to clean those up manually depending on disk space constraints.

Summary

I’ve really only scratched the surface here on what you can do with pbuilder. Hopefully you can use it for package builds or non-native builds. The Ubuntu wiki page for pbuilder has lots more details, tips, and info. If you have any favorite tips, please leave them as a comment.

Read more
Michael Hall

Last month I announced a contest to win a new OPPO Find 5 by porting Ubuntu Touch to it.  Today I’m pleased to announce that we have a winner!

Below is a picture tour of Ubuntu Touch running on the device, along with descriptions of what works and what doesn’t. If you’re impatient, you can find links to download the images and instructions for flashing them here.

First, a disclaimer: these aren’t professional pictures.  They were taken with my Nexus 4, also running Ubuntu Touch, and the colors are slightly shifted horizontally for some reason.  I didn’t notice it until I had already gone through and taken 58 pictures and downloaded them to my laptop.  Apologies for that.  But you can still get a feel for it, so let’s carry on!

Edge Swiping

The touch screen and edge swiping worked perfectly, as was neatly demonstrated by going through the new introduction tour.

Dash & Launcher

The Dash also works exactly as expected.  This build has a low enough pixel/grid-unit, and high enough resolution, that it fits 4 icons per row, the same as you get on the Nexus 4. The icons on the Launcher felt a little small, but everything there worked perfectly too.

Indicators

The indicators were missing some functionality, which I assume is a result of Ubuntu Touch not working with all of the Find 5’s hardware.  Specifically, the WiFi isn’t working, so you don’t see anything for it in the Network indicator, and the screen brightness slider was non-functional in the Battery indicator.  Sound, however, worked perfectly.

Apps

Not having WiFi limited the number of apps I could play with, but most of the ones I could try worked fine.  Sudoku and Dropping Letters don’t work for some reason, but the Core Apps (except Weather, which requires network access) all ran well.

 

Hardware

As I already mentioned, WiFi doesn’t work on this build, nor does screen brightness.  The camera, however, is a different story. Both the front and back cameras worked, including the flash on the back.

Final Thoughts

While this build didn’t meet all the criteria I had initially set out, it did so much more than any other image I had received up until now that I am happy to call it the winner.  The developer who built it has also committed to continuing his porting work, and to getting the remaining items working.  I hope that having this Find 5 will help him in that work, so that all Find 5 owners will have the chance to run Ubuntu Touch on their device.

Read more
Dustin Kirkland


Necessity is truly the mother of invention.  I was working from the Isle of Man recently, and really, really enjoyed my stay!  There's no better description for the Isle of Man than "quaint":
quaint /kwānt/
adjective: 1. attractively unusual or old-fashioned. "quaint country cottages"
synonyms: picturesque, charming, sweet, attractive, old-fashioned, old-world
Though that description applies to the Internet connectivity, as well :-)  Truth be told, most hotel WiFi is pretty bad.  But nestle a lovely little old hotel on a forgotten little Viking/Celtic island and you will really see the problem exacerbated.

I worked around most of my downstream issues with a couple of new extensions to the run-one project, and I'm delighted as always to share these with you in Ubuntu's package!

As a reminder, the run-one package already provides:
  • run-one COMMAND [ARGS]
    • This is a wrapper script that runs no more than one unique instance of some command with a unique set of arguments.
    • This is often useful with cronjobs, when you want no more than one copy running at a time.
  • run-this-one COMMAND [ARGS]
    • This is exactly like run-one, except that it will use pgrep and kill to find and kill any running processes owned by the user and matching the target commands and arguments.
    • Note that run-this-one will block while trying to kill matching processes, until all matching processes are dead.
    • This is often useful when you want to kill any previous copies of the process you want to run (like VPN, SSL, and SSH tunnels).
  • keep-one-running COMMAND [ARGS]
    • This command operates exactly like run-one except that it respawns the command with its arguments if it exits for any reason (zero or non-zero).
    • This is useful when you want to ensure that you always have a copy of a command or process running, in case it dies or exits for any reason.
Newly added, you can now:
  • run-one-constantly COMMAND [ARGS]
    • This is simply an alias for keep-one-running.
    • I've never liked the fact that this command started with "keep-" instead of "run-one-", from a namespace and discoverability perspective.
  • run-one-until-success COMMAND [ARGS]
    • This command operates exactly like run-one-constantly except that it respawns "COMMAND [ARGS]" until COMMAND exits successfully (ie, exits zero).
    • This is useful when downloading something, perhaps using wget --continue or rsync, over a crappy quaint hotel WiFi connection (see the example just after this list).
  • run-one-until-failure COMMAND [ARGS]
    •  This command operates exactly like run-one-constantly except that it respawns "COMMAND [ARGS]" until COMMAND exits with failure (ie, exits non-zero).
    • This is useful when you want to run something until something goes wrong.
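For instance, retrying a large download over that quaint hotel WiFi until it completes might look like this (the URL is hypothetical):

run-one-until-success wget --continue http://example.com/big-file.iso
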
I am occasionally asked about the difference between these tools and the nohup command...
  1. First, the "one" part of run-one-constantly is important, in that it uses run-one to protect you from running more than one instance of the specified command. This is handy for something like an SSH tunnel, of which you only really want/need one.
  2. Second, nohup doesn't rerun the specified command if it exits cleanly or gets forcibly killed; nohup only ignores the hangup signal.
So you might say that the run-one tools are a bit more resilient than nohup.

You can use all of these as of Ubuntu 13.10 (Saucy), by simply:

sudo apt-get install run-one

Or, for older Ubuntu releases:

sudo apt-add-repository ppa:run-one/ppa
sudo apt-get update
sudo apt-get install run-one

I was also asked about the difference between these tools and upstart...

Upstart is Ubuntu's event-driven replacement for sysvinit.  It's typically used to start daemons and other scripts, utilities, and "jobs" at boot time.  It has a really cool feature/command/option called respawn, which can be used to provide a very similar effect to run-one-constantly.  In fact, I've used respawn in several of the upstart jobs I've written for the Ubuntu server, so I'm happy to credit upstart's respawn for the idea.

That said, I think the differences between upstart and run-one are certainly different enough to merit both tools, at least on my servers.

  1. An upstart job is defined by its own script-like syntax.  You can see many examples in Ubuntu's /etc/init/*.conf.  On my system the average upstart job is 25 lines long.  The run-one commands are simply prepended onto the beginning of any command line program and arguments you want to run.  You can certainly use run-one and friends inside of a script, but they're typically used in an interactive shell command line.
  2. An upstart job typically runs at boot time, or when "started" using the start command, and these jobs live in the root-writable /etc/init/.  Can a non-root user write their own upstart job, and start and stop it?  Not that I can tell (and I'm happy to be corrected here)...  It turns out I was wrong about that: per a set of recently added features to Upstart (thanks, James and Stuart, for pointing that out!), non-root users can now write and run their own upstart jobs.  Still, any user on the system can launch run-one jobs, and their own command+arguments namespace is unique to them.
  3. run-one is easily usable on systems that do not have upstart available; the only hard dependency is on the flock(1) utility.
Hope that helps!


Happy running,
:-Dustin

Read more
John

(One of the nice things about my new job is working ‘in public’. I’m tagging posts like these with ‘Canonical’, if you want to filter them)

Hybris (aka libHybris) is a piece of enabling technology that lets an OS distribution like Ubuntu use parts of Android software in binary form, without needing a recompile of those binaries. Ubuntu is using it for its ARM based Ubuntu Touch distributions, which re-use the Android BSP for the underlying hardware platform.

libHybris is two things: a dynamic linker, which provides the generic functionality, and a set of wrapper libraries that expose particular Android libraries to the other OS. Whilst the code is clearly organised, it might be helpful to have a separate sample which demonstrates how to use the core Hybris features. That is where Bionic JPEG comes in.

Bionic is the name of Android’s C runtime library. On Ubuntu the conventional C runtime is glibc. So another way of looking at Hybris is to regard it as a way to use Bionic-based binaries in a glibc-based OS. Hence the name of this project.

Bionic JPEG aims to demonstrate how to call a library compiled for use on Android (in this case the IJG JPEG library) from an Ubuntu binary. To do this, I’ve divided the IJG code into its library and client components, then compiled the library with the Android NDK, and the client with the Ubuntu Touch toolchain. In order for the client to use the Android library, a small ‘bridge’ library that calls libHybris glues the two together. Because this bridge library presents the same API as the IJG library, no changes are needed to any of the IJG source code.

The core of Hybris is clearly derived from the Bionic source, and is a port of the Android dynamic linker to Ubuntu. This knows how to load an Android ELF32 binary into an Ubuntu process, which is a trick the standard Ubuntu dynamic linker can’t do. From the look of the code, Android can effectively link/prelink binaries in several ways, and it is loading these binaries that is the key Hybris feature. In addition, as it resolves the symbols present in a binary it loads, it can hook them to point somewhere else. Hybris uses this to hook all the Bionic entrypoints and redirect them to glibc. This isn’t always a simple symbol substitution – there are differences between the C libraries that mean Hybris has some implementations within it, which then call on to glibc.

Whilst the IJG code is unmodified, I have changed the name of the library produced. It turned out that the Android images I was using (derived from CyanogenMod 10.1 images for the Nexus 4) already have a copy of libJPEG, from an earlier version of the IJG code. In order to avoid a collision, Bionic JPEG names its library libjpeg2.

For more details, see the README in the source tarball (available in the project downloads). Note that I don’t expect Bionic JPEG to demonstrate something Ubuntu SDK users will commonly do. Over time (several release cycles), I think the Ubuntu community hope to phase out our dependence on Android, in favour of the common Linux upstreams that Ubuntu and Android share. That will take time, and will need us to be successful enough for hardware manufacturers to offer direct support. For now, using Android gets us that support for the cost of maintaining Hybris.

That being said, even while libHybris exists, I don’t think it will be a common thing for people to extend it: for most cases, the Android libraries on a given device will already have a complete libHybris bridge.

For the teams that need to maintain those Android parts, there may be a need to extend libHybris, or even just to understand it a little better. It is with that use case in mind that I created this example.

Bionic JPEG’s homepage on Launchpad.

I intend to refine the sample with clearer instructions on how to add it to the system image, as Ubuntu’s tools here finalise in the run-up to the 13.10 release.

Read more
Dustin Kirkland

tl;dr? 
From within byobu, just run:
byobu-enable-prompt

Still reading?

I've helped bring a touch of aubergine to the Ubuntu server before.  Along those lines, it has long bothered me that Ubuntu's bash package, out of the box, leaves full color command prompts disabled in almost all cases.

Of course I carry around my own, highly customized ~/.bashrc on my desktop, but whenever I start new instances of the Ubuntu server in the cloud, without fail, I end up back at a colorless, drab command prompt, like this:


You can, however, manually override this by setting color_prompt=yes at the top of your ~/.bashrc, or your administrator can set that system-wide in /etc/bash.bashrc.  After that, your plain white prompt will show two new colors: bright green and blue.
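
If you go the manual route, it's a one-liner near the top of ~/.bashrc (on a stock Ubuntu ~/.bashrc, uncommenting the existing force_color_prompt=yes line should achieve much the same thing):

color_prompt=yes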


That's a decent start, but there are two things I don't like about this prompt:
  1. There are 3 disparate pieces of information, but only two color distinctions:
    • a user name
    • a host name
    • a current working directory
  2. The colors themselves are
    • a little plain
    • 8-color
    • and non-communicative
Both of these problems are quite easy to solve.  Within Ubuntu, our top notch design team has invested countless hours defining a spectacular color palette and extensive guidelines on their usage.  Quoting our palette guidelines:


"Colour is an effective, powerful and instantly recognisable medium for visual communications. To convey the brand personality and brand values, there is a sophisticated colour palette. We have introduced a palette which includes both a fresh, lively orange, and a rich, mature aubergine. The use of aubergine indicates commercial involvement, while orange is a signal of community engagement. These colours are used widely in the brand communications, to convey the precise, reliable and free personality."
With this inspiration, I set out to apply these rules to a beautiful, precise Ubuntu server command prompt within Byobu.

First, I needed to do a bit of research.  I would really need a 256-color palette to accomplish anything reasonable, since the 8-color and 16-color palettes are really just atrocious.

The 256-color palette is actually reasonable.  I would have the following color palette to choose from:


That's not quite how these colors are rendered on a modern Ubuntu system, but it's close enough to get started.
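
If you'd like to see exactly how your own terminal renders those 256 entries, a quick loop like this (plain bash, nothing Byobu-specific) prints each index in its own color, 16 per row:

for i in $(seq 0 255); do
    printf '\033[38;5;%dm%3d ' "$i" "$i"
    [ $(( (i + 1) % 16 )) -eq 0 ] && printf '\033[0m\n'
done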

I then spent quite a bit of time trying to match Ubuntu color tints against this chart and narrowed down the color choices that would actually fit within the Ubuntu design team's color guidelines.


This is the color balance choice that seemed most appropriate to me:


A majority of white text, on a darker aubergine background.  In fact, if you open gnome-terminal on an Ubuntu desktop, this is exactly what you're presented with.  White text on a dark aubergine background.  But we're missing the orange, grey, and lighter purple highlights!


The 3 distinct elements I cited above (user, host, and directory) are quite important now, as they map exactly to our 3 supporting colors.

Against our 256-color mapping above, I chose:
  • Username: 245 (grey)
  • Hostname: 5 (light aubergine)
  • Working directory: 5 (orange)
  • Separators: 256 (white)
And in the interest of being just a little more "precise", I actually replaced the trailing $ character with the UTF-8 symbol ❭.  This is Unicode's U+276D character, "MEDIUM RIGHT-POINTING ANGLE BRACKET ORNAMENT".  This is a very pointed, attention-grabbing character.  It directs your eye straight to the flashing cursor, or the command at your fingertips.
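
If you're curious what such a prompt looks like expressed directly in bash, here's a rough sketch.  It is not the exact string byobu-enable-prompt installs: the \e[38;5;Nm escapes select 256-color foregrounds, and the 172 used here for the working directory is just my guess at an orange-ish index.

PS1='\[\e[38;5;245m\]\u\[\e[0m\]@\[\e[38;5;5m\]\h\[\e[0m\]:\[\e[38;5;172m\]\w\[\e[0m\]❭ '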


Gnome-terminal is, by default, set to use the system's default color scheme, but you can easily change that to several other settings.  I often use the higher-contrast white-on-black or white-on-light-yellow color schemes when I'm in a very bright location, like outdoors.


I took great care in choosing those 3 colors, so that they remain readable across each of the stock schemes shipped by gnome-terminal.



I also tested it in Terminator and Konsole, where it seemed to work well enough, while xterm and putty aren't as pretty.

Currently, this functionality is easy to enable from within your Byobu environment.  If you're on the latest Byobu release (currently 5.57), which you can install from ppa:byobu/ppa, simply run the command:

byobu-enable-prompt

Of course, this prompt most certainly won't be for everyone :-)  You can easily disable the behavior at any time with:

byobu-disable-prompt

New installations of Byobu (where there is no ~/.byobu directory) will automatically see the new prompt starting in Ubuntu 13.10 (unless you've modified your $PS1 in your ~/.bashrc), but existing, upgraded Byobu users will need to run byobu-enable-prompt to add this into their environment.

As will undoubtedly be noted in the comments below, your mileage may vary on non-Ubuntu systems.  However, if /etc/issue does not start with the string "Ubuntu", byobu-enable-prompt will still provide a tri-color prompt, but it employs hopefully-less-opinionated primary colors: green, light blue, and red:



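The detection is simply based on the contents of /etc/issue; conceptually it's something like this (a sketch, not Byobu's actual code):

if grep -q "^Ubuntu" /etc/issue; then
    echo "use the aubergine/orange/grey palette"
else
    echo "fall back to green/light blue/red"
fi
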
If you want to run this outside of Byobu, well that's quite doable too :-)  I'll leave it as an exercise for motivated users to ferret out the one-liner you need from lp:byobu and paste it into your ~/.bashrc ;-)
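
A reasonable starting point for that bit of spelunking (bzr is the version control system behind Launchpad, and the branch lands in a local directory named byobu):

bzr branch lp:byobu
grep -rn 'PS1=' byobu/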

Cheers,
:-Dustin

Read more