Canonical Voices

Posts tagged with 'ubuntu-server'

Dustin Kirkland


A couple of weeks ago, I waxed glowingly about Ubuntu running on a handful of Intel NUCs that I picked up on Amazon, replacing some aging PCs serving various purposes around the house.  I have since returned all three of those, and upgraded to the i5-3427u version, since it supports Intel AMT.  Why would I do that?  Read on...
When my shiny new NUCs arrived, I was quite excited to try out this fancy new AMT feature.  In fact, I had already enabled it and experimented with it on a couple of my development i7 Thinkpads, so I more or less knew what to expect.

But what followed was 6 straight hours of complete and utter frustration :-(  Like slam your fist into the keyboard and shout obscenities into cheese.
Actually, on that last point, I find it useful, when I'm mad, to open up cheese on my desktop and get visibly angry.  Once I realize how dumb I look when I'm angry, it's a bit easier to stop being angry.  Seriously, try it sometime.
Okay, so I posted a couple of support requests on Intel's community forums.

Basically, I found it nearly impossible (maybe a 1-in-100 chance) to actually get into the AMT configuration menu using the required Ctrl-P.  And in the 2 or 3 times I did get in there, the default password, "admin", did not work.

After putting the kids to bed, downing a few pints of homebrewed beer, and attempting sleep (with a 2-week-old in the house), I lay in bed, awake in the middle of the night and it crossed my mind that...
No, no.  No way.  That couldn't be it.  Surely not.  That's really, really dumb.  Is it possible that the NUC's BIOS...  Nah.  Maybe, though.  It's worth a try at this point?  Maybe, just maybe, the NumLock key is enabled at boot???  It can't be.  The NumLock key is effin' useless, and almost as dumb as its braindead cousin, the CapsLock key.  OMFG!!!
Yep, that was it.  Unbelievable.  The system boots with the NumLock key toggled on.  My keyboard doesn't have an LED indicator that tells me such inane nonsense is the case.  And the BIOS doesn't expose a setting to toggle this behavior.  The "P" key is one of the keys that is NumLocked to "*".


So there must be some incredibly unlikely race condition that I could win 1 in 100 times where me pressing Ctrl-P frantically enough actually sneaks me into the AMT configuration.  Seriously, Intel peeps, please make this an F-key, like the rest of the BIOS and early boot options...

And once I was there, the default password, "admin", includes two more keys that are NumLocked.  For security reasons, these look like "*****" no matter what I'm typing.  When I thought I was typing "admin", I was actually typing "ad05n".  And of course, there's no scratch pad where I can test my keyboard and see that this is the case.  In fact, I'm not the only person hitting similar issues.  It seems that most people using keyboards other than US-English are quite confused when they type "admin" over and over and over again, to their frustration.

Okay, rant over.  I posted my solution back to my own questions on the forum.  And finally started playing with AMT!

The synopsis: AMT is really, really impressive!

First, you need to enter the BIOS and ensure that AMT is enabled.  Then, you need to do whatever it takes to enter Intel's MEBx interface, using Ctrl-P (NumLock notwithstanding).  You'll be prompted for a password, and on your first login, this should be "admin" (NumLock notwithstanding).  Then you'll need to choose your own strong password.  Once in there, you'll need to enable a couple of settings, including networking/dhcp auto setup.  You can, at your option, also install some TLS certificates and secure your communications with your device.

AMT has a very simple, intuitive web interface.  Here is a comprehensive set of screenshots of all of the individual pages.

Once AMT is enabled on the target system, point a browser to port 16992 (e.g., http://10.0.0.14:16992), and click "Log On..."

The username is always "admin".  You'll set this password in the MEBx interface, using Ctrl-P just after BIOS post.

Here's the basic system status/overview.

The System Information page contains basic information about the system itself, including some of its capabilities.

The processor information page gives you the low down on your CPU.  Search ark.intel.com for your Intel CPU type to see all of its capabilities.

Check your memory capacity, type, speed, etc.

And your disk type, size, and serial number.

NUCs don't have battery information, but my Thinkpad does.

An event log has some interesting early boot and debug information here.

Arguably the most useful page, here you can power a system on, off, or hard reboot it.

If you have wireless capability, you choose whether you want that enabled/disabled when the system is off, suspended, or hibernated.

Here you can configure the network settings.  Unlike a BMC (Baseboard Management Controller) on most server-class hardware, which has its own dedicated interface, Intel AMT actually shares the network interface with the Operating System.

AMT actually supports IPv6 networking as well, though I haven't played with it yet.

Configure the hostname and Dynamic DNS here.

You can set up independent user accounts, if necessary.

And with a BIOS update, you can actually use Intel AMT over a wireless connection (if you have an Intel wireless card).
So this pointy/clicky web interface is nice, but not terribly scriptable (without some nasty screenscraping).  What about the command line interface?

The amttool command (provided by the amtterm package in Ubuntu) offers a nice command-line interface to some of the functionality exposed by AMT.  You first need to export an environment variable, AMT_PASSWORD, containing the password you set in MEBx.
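For example (the password value here is just a placeholder, not a real credential):

export AMT_PASSWORD='MyStr0ngP@ss'

With that exported, you can get some remote information about the system: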

kirkland@x230:~⟫ amttool 10.0.0.14 info
### AMT info on machine '10.0.0.14' ###
AMT version: 7.1.20
Hostname: nuc1.
Powerstate: S0
Remote Control Capabilities:
IanaOemNumber 0
OemDefinedCapabilities IDER SOL BiosSetup BiosPause
SpecialCommandsSupported PXE-boot HD-boot cd-boot
SystemCapabilitiesSupported powercycle powerdown powerup reset
SystemFirmwareCapabilities f800

You can also retrieve the networking information:

kirkland@x230:~⟫ amttool 10.0.0.14 netinfo
Network Interface 0:
DhcpEnabled true
HardwareAddressDescription Wired0
InterfaceMode SHARED_MAC_ADDRESS
LinkPolicy 31
MACAddress 00-aa-bb-cc-dd-ee
DefaultGatewayAddress 10.0.0.1
LocalAddress 10.0.0.14
PrimaryDnsAddress 10.0.0.1
SecondaryDnsAddress 0.0.0.0
SubnetMask 255.255.255.0
Network Interface 1:
DhcpEnabled true
HardwareAddressDescription Wireless1
InterfaceMode SHARED_MAC_ADDRESS
LinkPolicy 0
MACAddress ee-ff-aa-bb-cc-dd
DefaultGatewayAddress 0.0.0.0
LocalAddress 0.0.0.0
PrimaryDnsAddress 0.0.0.0
SecondaryDnsAddress 0.0.0.0
SubnetMask 0.0.0.0

Far more handy than WoL alone, you can power up, power down, and power cycle the system.

kirkland@x230:~⟫ amttool 10.0.0.14 powerdown
host x220., powerdown [y/N] ? y
execute: powerdown
result: pt_status: success

kirkland@x230:~⟫ amttool 10.0.0.14 powerup
host x220., powerup [y/N] ? y
execute: powerup
result: pt_status: success

kirkland@x230:~⟫ amttool 10.0.0.14 powercycle
host x220., powercycle [y/N] ? y
execute: powercycle
result: pt_status: success

I was a little disappointed that amttool's info command didn't provide nearly as much information as the web interface.  However, I did find a fork of Gerd Hoffmann's original Perl script on SourceForge here.  I don't know the upstream-ability of this code, but it worked very well for my purposes, and I'm considering sponsoring/merging it into Ubuntu for 14.04.  Anyone have further experience with these enhancements?

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data BIOS
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'BIOS' (1 item):
(data struct.ver. 1.0)
Vendor: 'Intel Corp.'
Version: 'RKPPT10H.86A.0028.2013.1016.1429'
Release date: '10/16/2013'
BIOS characteristics: 'PCI' 'BIOS upgradeable' 'BIOS shadowing
allowed' 'Boot from CD' 'Selectable boot' 'EDD spec' 'int13h 5.25 in
1.2 mb floppy' 'int13h 3.5 in 720 kb floppy' 'int13h 3.5 in 2.88 mb
floppy' 'int5h print screen services' 'int14h serial services'
'int17h printer services'

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data ComputerSystem
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'ComputerSystem' (1 item):
(data struct.ver. 1.0)
Manufacturer: ' '
Product: ' '
Version: ' '
Serial numb.: ' '
UUID: 7ae34e30-44ab-41b7-988f-d98c74ab383d

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data Baseboard
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'Baseboard' (1 item):
(data struct.ver. 1.0)
Manufacturer: 'Intel Corporation'
Product: 'D53427RKE'
Version: 'G87971-403'
Serial numb.: '27XC63723G4'
Asset tag: 'To be filled by O.E.M.'
Replaceable: yes

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data Processor
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'Processor' (1 item):
(data struct.ver. 1.0)
ID: 0x4529f9eaac0f
Max Socket Speed: 2800 MHz
Current Speed: 1800 MHz
Processor Status: Enabled
Processor Type: Central
Socket Populated: yes
Processor family: 'Intel(R) Core(TM) i5 processor'
Upgrade Information: [0x22]
Socket Designation: 'CPU 1'
Manufacturer: 'Intel(R) Corporation'
Version: 'Intel(R) Core(TM) i5-3427U CPU @ 1.80GHz'

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data MemoryModule
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'MemoryModule' (2 items):
(* No memory device in the socket *)
(data struct.ver. 1.0)
Size: 8192 Mb
Form Factor: 'SODIMM'
Memory Type: 'DDR3'
Memory Type Details:, 'Synchronous'
Speed: 1333 MHz
Manufacturer: '029E'
Serial numb.: '123456789'
Asset Tag: '9876543210'
Part Number: 'GE86sTBF5emdppj '

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data VproVerificationTable
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'VproVerificationTable' (1 item):
(data struct.ver. 1.0)
CPU: VMX=Enabled SMX=Enabled LT/TXT=Enabled VT-x=Enabled
MCH: PCI Bus 0x00 / Dev 0x08 / Func 0x00
Dev Identification Number (DID): 0x0000
Capabilities: VT-d=NOT_Capable TXT=NOT_Capable Bit_50=Enabled
Bit_52=Enabled Bit_56=Enabled
ICH: PCI Bus 0x00 / Dev 0xf8 / Func 0x00
Dev Identification Number (DID): 0x1e56
ME: Enabled
Intel_QST_FW=NOT_Supported Intel_ASF_FW=NOT_Supported
Intel_AMT_FW=Supported Bit_13=Enabled Bit_14=Enabled Bit_15=Enabled
ME FW ver. 8.1 hotfix 40 build 1416
TPM: Disabled
TPM on board = NOT_Supported
Network Devices:
Wired NIC - PCI Bus 0x00 / Dev 0xc8 / Func 0x00 / DID 0x1502
BIOS supports setup screen for (can be editable): VT-d TXT
supports VA extensions (ACPI Op region) with maximum ver. 2.6
SPI Flash has Platform Data region reserved.

On a different note, I recently sponsored a package, wsmancli, into Ubuntu Universe for Trusty, at the request of Kent Baxley (Canonical) and Jared Dominguez (Dell), which provides the wsman command.  Jared writes more about it here in this Dell technical post.  With Kent's help, I did manage to get wsman to remotely power on a system.  I must say that it's a bit less user-friendly than the equivalent amttool functionality above...

kirkland@x230:~⟫  wsman invoke -a RequestPowerStateChange -J request.xml http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_PowerManagementService?SystemCreationClassName="CIM_ComputerSystem",SystemName="Intel(r)AMT",CreationClassName="CIM_PowerManagementService",Name="Intel(r) AMT Power Management Service" --port 16992 -h 10.0.0.14 --username admin -p "ABC123abc123#" -V -v

I'm really enjoying the ability to remotely administer these systems.  And I'm really, really looking forward to the day when I can use MAAS to provision these systems!

:-Dustin

Read more
Dustin Kirkland


Necessity is truly the mother of invention.  I was working from the Isle of Man recently, and really, really enjoyed my stay!  There's no better description for the Isle of Man than "quaint":
quaint
/kwānt/
adjective: 1. attractively unusual or old-fashioned.
"quaint country cottages"
synonyms: picturesque, charming, sweet, attractive, old-fashioned, old-world
Though that description applies to the Internet connectivity, as well :-)  Truth be told, most hotel WiFi is pretty bad.  But nestle a lovely little old hotel on a forgotten little Viking/Celtic island and you will really see the problem exacerbated.

I worked around most of my downstream issues with a couple of new extensions to the run-one project, and I'm delighted as always to share these with you in Ubuntu's package!

As a reminder, the run-one package already provides:
  • run-one COMMAND [ARGS]
    • This is a wrapper script that runs no more than one unique instance of some command with a unique set of arguments.
    • This is often useful with cronjobs, when you want no more than one copy running at a time.
  • run-this-one COMMAND [ARGS]
    • This is exactly like run-one, except that it will use pgrep and kill to find and kill any running processes owned by the user and matching the target commands and arguments.
    • Note that run-this-one will block while trying to kill matching processes, until all matching processes are dead.
    • This is often useful when you want to kill any previous copies of the process you want to run (like VPN, SSL, and SSH tunnels).
  • keep-one-running COMMAND [ARGS]
    • This command operates exactly like run-one except that it respawns the command with its arguments if it exits for any reason (zero or non-zero).
    • This is useful when you want to ensure that you always have a copy of a command or process running, in case it dies or exits for any reason.
Newly added, you can now:
  • run-one-constantly COMMAND [ARGS]
    • This is simply an alias for keep-one-running.
    • I've never liked the fact that this command started with "keep-" instead of "run-one-", from a namespace and discoverability perspective.
  • run-one-until-success COMMAND [ARGS]
    • This command operates exactly like run-one-constantly except that it respawns "COMMAND [ARGS]" until COMMAND exits successfully (ie, exits zero).
    • This is useful when downloading something, perhaps using wget --continue or rsync, over a crappy, quaint hotel WiFi connection (see the example just after this list).
  • run-one-until-failure COMMAND [ARGS]
    •  This command operates exactly like run-one-constantly except that it respawns "COMMAND [ARGS]" until COMMAND exits with failure (ie, exits non-zero).
    • This is useful when you want to run something until something goes wrong.
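For instance, here's a sketch of the flaky-hotel-WiFi use case mentioned above (the URL is a placeholder):

run-one-until-success wget --continue http://example.com/big-download.iso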
I am occasionally asked about the difference between these tools and the nohup command...
  1. First, the "one" part of run-one-constantly is important, in that it uses run-one to protect you from running more than one instances of the specified command. This is handy for something like an ssh tunnel, that you only really want/need one of.
  2. Second, nohup doesn't rerun the specified command if it exits cleanly, or forcibly gets killed. nohup only ignores the hangup signal.
So you might say that the run-one tools are a bit more resilient than nohup.

You can use all of these as of Ubuntu 13.10 (Saucy), by simply:

sudo apt-get install run-one

Or, for older Ubuntu releases:

sudo apt-add-repository ppa:run-one/ppa
sudo apt-get update
sudo apt-get install run-one

I was also asked about the difference between these tools and upstart...

Upstart is Ubuntu's event driven replacement for sysvinit.  It's typically used to start daemons and other scripts, utilities, and "jobs" at boot time.  It has a really cool feature/command/option called respawn, which can be used to provide a very similar effect as run-one-constantly.  In fact, I've used respawn in several of the upstart jobs I've written for the Ubuntu server, so I'm happy to credit upstart's respawn for the idea.
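For reference, a minimal, hypothetical upstart job using respawn might look like this (the job name and tunnel command are purely illustrative):

# /etc/init/ssh-tunnel.conf -- hypothetical example
description "keep an SSH tunnel alive"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec ssh -N -L 8080:localhost:80 user@example.com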

That said, I think upstart and run-one are certainly different enough to merit both tools, at least on my servers.

  1. An upstart job is defined by its own script-like syntax.  You can see many examples in Ubuntu's /etc/init/*.conf.  On my system the average upstart job is 25 lines long.  The run-one commands are simply prepended onto the beginning of any command line program and arguments you want to run.  You can certainly use run-one and friends inside of a script, but they're typically used in an interactive shell command line.
  2. An upstart job typically runs at boot time, or when "started" using the start command, and these jobs are located in the root-writable /etc/init/.  Can a non-root user write their own upstart job, and start and stop it?  I originally thought not, but it turns out I was wrong, per a set of recently added features to Upstart (thanks, James and Stuart, for pointing this out!): non-root users can now write and run their own upstart jobs.  Still, any user on the system can launch run-one jobs, and their own command+arguments namespace is unique to them.
  3. run-one is easily usable on systems that do not have upstart available; the only hard dependency is on the flock(1) utility.
Hope that helps!


Happy running,
:-Dustin

Read more
Dustin Kirkland

tl;dr? 
From within byobu, just run:
byobu-enable-prompt

Still reading?

I've helped bring a touch of aubergine to the Ubuntu server before.  Along those lines, it has long bothered me that Ubuntu's bash package, out of the box, creates a situation where full color command prompts are almost always disabled.

Of course I carry around my own, highly customized ~/.bashrc on my desktop, but whenever I start new instances of the Ubuntu server in the cloud, without fail, I end up back at a colorless, drab command prompt, like this:


You can, however, manually override this by setting color_prompt=yes at the top of your ~/.bashrc, or your administrator can set that system-wide in /etc/bash.bashrc.  After that, your plain white prompt will show two new colors, bright green and blue.
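In other words, a one-line change (shown here for your own user; an administrator would make the equivalent edit in /etc/bash.bashrc):

# at the top of ~/.bashrc
color_prompt=yes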


That's a decent start, but there are two things I don't like about this prompt:
  1. There are 3 disparate pieces of information, but only two color distinctions:
    • a user name
    • a host name
    • a current working directory
  2. The colors themselves are
    • a little plain
    • 8-color
    • and non-communicative
Both of these problems are quite easy to solve.  Within Ubuntu, our top notch design team has invested countless hours defining a spectacular color palette and extensive guidelines on their usage.  Quoting our palette guidelines:


"Colour is an effective, powerful and instantly recognisable medium for visual communications. To convey the brand personality and brand values, there is a sophisticated colour palette. We have introduced a palette which includes both a fresh, lively orange, and a rich, mature aubergine. The use of aubergine indicates commercial involvement, while orange is a signal of community engagement. These colours are used widely in the brand communications, to convey the precise, reliable and free personality."
With this inspiration, I set out to apply these rules to a beautiful, precise Ubuntu server command prompt within Byobu.

First, I needed to do a bit of research: I would really need a 256-color palette to accomplish anything reasonable, since the 8-color and 16-color palettes are really just atrocious.

The 256-color palette is actually reasonable.  I would have the following color palette to choose from:


That's not quite how these colors are rendered on a modern Ubuntu system, but it's close enough to get started.

I then spent quite a bit of time trying to match Ubuntu color tints against this chart and narrowed down the color choices that would actually fit within the Ubuntu design team's color guidelines.


This is the color balance choice that seemed most appropriate to me:


A majority of white text, on a darker aubergine background.  In fact, if you open gnome-terminal on an Ubuntu desktop, this is exactly what you're presented with.  White text on a dark aubergine background.  But we're missing the orange, grey, and lighter purple highlights!


Those 3 distinct elements I cited above -- user, host, and directory -- are quite important now, as they map exactly to our 3 supporting colors.

Against our 256-color mapping above, I chose:
  • Username: 245 (grey)
  • Hostname: 5 (light aubergine)
  • Working directory: 5 (orange)
  • Separators: 256 (white)
And in the interest of being just a little more "precise", I actually replaced the trailing $ character with the UTF-8 symbol ❭.  This is Unicode's U+276D character, "MEDIUM RIGHT-POINTING ANGLE BRACKET ORNAMENT".  This is a very pointed, attention-grabbing character.  It directs your eye straight to the flashing cursor, or the command at your fingertips.
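For the curious, a prompt along these lines can be expressed with 256-color escape sequences.  The following is an illustrative sketch, not Byobu's exact string (index 214 stands in here for the orange tint):

PS1='\[\e[38;5;245m\]\u\[\e[0m\]@\[\e[38;5;5m\]\h\[\e[0m\]:\[\e[38;5;214m\]\w\[\e[0m\]❭ '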


Gnome-terminal is, by default, set to use the system's default color scheme, but you can easily change that to several other settings.  I often use the higher-contrast white-on-black or white-on-light-yellow color schemes when I'm in a very bright location, like outdoors.


I took great care in choosing those 3 colors so that they remain readable across each of the stock schemes shipped by gnome-terminal.



I also tested it in Terminator and Konsole, where it seemed to work well enough, while xterm and putty aren't as pretty.

Currently, this functionality is easy to enable from within your Byobu environment.  If you're on the latest Byobu release (currently 5.57), which you can install from ppa:byobu/ppa, simply run the command:

byobu-enable-prompt

Of course, this prompt most certainly won't be for everyone :-)  You can easily disable the behavior at any time with:

byobu-disable-prompt

New installations of Byobu (where there is no ~/.byobu directory) will automatically see the new prompt starting in Ubuntu 13.10 (unless you've modified your $PS1 in your ~/.bashrc), but existing, upgraded Byobu users will need to run byobu-enable-prompt to add this into their environment.

As will undoubtedly be noted in the comments below, your mileage may vary on non-Ubuntu systems.  However, if /etc/issue does not start with the string "Ubuntu", byobu-enable-prompt will still provide a tri-color prompt, but employs hopefully-less-opinionated primary colors: green, light blue, and red:



If you want to run this outside of Byobu, well that's quite doable too :-)  I'll leave it as an exercise for motivated users to ferret out the one-liner you need from lp:byobu and paste into your ~/.bashrc ;-)

Cheers,
:-Dustin

Read more
Dustin Kirkland


Relaxation.  Simply put, I am really bad at it.  My wife, Kim, a veritable expert, has come to understand that, while she can, I can't sit still.  For better or worse, I cannot lay on a beach, sip a cerveza, and watch the waves splash at my feet for hours.  10 minutes, tops.  You'd find me instead going for a run in the sand or kayaking or testing the limits of my SCUBA dive table.  Oh, and I can't take naps.  I stay up late and wake up early.  I spend my nights and weekends seeking adventure, practicing any one of my countless hobbies.  Or picking up a new one.


So here I am on my first-ever 5-week sabbatical, wide awake late tonight at the spectacular Prince of Wales Hotel in Waterton-Glacier International Peace Park.  Kimi, Cami, and I are on an ambitious, month-long, 5,000+ mile road trip from Austin, Texas to Banff, Alberta, Canada, visiting nearly every National Park in between.  Most of our accommodations are far more modest than this chalet -- we're usually in motels, cabins, or cottages.  In any case, this place is incredible.  Truly awe-inspiring, and very much befitting of the entire experience of grandeur which is Glacier and Waterton National Parks.


But this is only one night's stop of 30 amazing days with my loving wife and beautiful daughter.  30 days, covering over a dozen national parks, monuments, and forests.


And with that, I am most poignantly reminded of Ralph Waldo Emerson's sage advice, that, "Life's a journey, not a destination."

And speaking of, this brings me back to said sabbatical...

July 8th, 2013 marks my first day back at Canonical, after a 19 month hiatus for "An Unexpected Journey", and I couldn't be more excited about it!

I spent the last year-and-a-half on an intriguing, educational, enlightening journey with a fast-growing, fun startup, called Gazzang.  Presented with a once-in-a-lifetime opportunity, I took a chance and joined a venture-funded startup, based in my hometown of Austin, Texas, and built on top of an open source project, eCryptfs, that I have co-authored and co-maintained.

I joined the team very early, as the Chief Architect, and was eventually promoted to Chief Technical Officer.  It was an incredibly difficult decision to leave a job I loved at Canonical, but the nature of the opportunity at Gazzang was just too unique to pass up.

By introducing this team to many of the engineering processes we have long practiced within Ubuntu (time-based release cycles, bzr, Launchpad, IRC, Google+ hangouts, etc.), we drastically improved our engineering effectiveness and efficiency.  We took Gazzang's first product -- zNcrypt: an encrypted filesystem utilizing eCryptfs (and eventually dm-crypt) -- to the enterprise with encryption for Cloud and Big Data.  We also designed and implemented, from scratch, a purely software (and thus, cloud-ready), innovative key management system, called zTrustee, that is now rivaling the best hardware security modules (HSMs) in the business.  As CTO, I wrote thousands of lines of code, architected multiple products, assisted scores of sales calls as a sales engineer, spoke at a number of conferences, assisted our CEO with investor pitches, and provided detailed strategic and product advice to our leadership team.

Gazzang was a special journey, and I'll maintain many of the relationships I forged there for a lifetime.




I am quite proud of the team and products that we built, and I will continue to support Gazzang in an advisory capacity, as a Technical Advisor and a shareholder.  Austin has a very healthy startup scene, and I feel quite fortunate to have finally participated actively in it.  With this experience, I have earned an MBA-compatible understanding of venture-funded startups that, otherwise, might have cost 3 years and $60K+ of graduate school.

Of all of the hats I wore at Gazzang, I think the role where I felt most alive, where I thrived at my fullest, was in the product innovation and strategy capacity.  And so I'm absolutely thrilled to re-join Canonical on the product strategy team, and help extend Ubuntu's industry leadership and creativity across both existing and new expressions of Cloud platforms.

In 1932, Waterton-Glacier became the world's first jointly administered national park.  This international endeavor reminds me how much I have missed the global nature of the work we do within Ubuntu.  The elegance in engineering of this Prince of Wales Hotel and the Glacier Lodge rekindles my appreciation of the precision and quality of Ubuntu.  And the scale of the glacial magnificence here recalls the size of the challenge before Ubuntu and the long-term effect of persistence, perseverance, and precision.

I am grateful to Mark and all of Canonical for giving me this chance once again.  And I'm looking forward to extending Ubuntu's tradition of excellence as platform and guest in cloud computing!

Please excuse me, as I struggle to relax for another 3 weeks...


:-Dustin

Read more
Dustin Kirkland


Servers in Concert!

Ubuntu Orchestra is one of the most exciting features of the Ubuntu 11.10 Server release, and we're already improving upon it for the big 12.04 LTS!

I've previously given an architectural introduction to the design of Orchestra.  Now, let's take a practical look at it in this how-to guide.

Prerequisites

To follow this particular guide, you'll need at least two physical systems and administrative access rights on your local DHCP server (perhaps on your network's router).  With a little ingenuity, you can probably use two virtual machines and work around the router configuration.  I'll follow this guide with another one using entirely virtual machines.

To build this demonstration, I'm using two older ASUS (P1AH2) desktop systems.  They're both dual-core 2.4GHz AMD processors and 2GB of RAM each.  I'm also using a Linksys WRT310n router flashed with DD-WRT.  Most importantly, at least one of the systems must be able to boot over the network using PXE.

Orchestra Installation

You will need to manually install Ubuntu 11.10 Server on one of the systems, using an ISO or a USB flash disk.  I used the 64-bit Ubuntu 11.10 Server ISO, and my no-questions-asked uquick installation method.  This took me a little less than 10 minutes.

After this system reboots, update and upgrade all packages on the system, and then install the ubuntu-orchestra-server package.

sudo apt-get update
sudo apt-get dist-upgrade -y
sudo apt-get install -y ubuntu-orchestra-server

You'll be prompted to enter a couple of configuration parameters, such as setting the cobbler user's password.  It's important to read and understand each question.  The default values are probably acceptable, except for one, which you'll want to be very careful about...the one that asks about DHCP/DNS management.

In this post, I selected "No", as I want my DD-WRT router to continue handling DHCP/DNS.  However, in a production environment (and if you want to use Orchestra with Juju), you might need to select "Yes" here.


And about five minutes later, you should have an Ubuntu Orchestra Server up and running!

Target System Setup

Once your Orchestra Server is installed, you're ready to prepare your target system for installation.  You will need to enter your target system's BIOS settings, and ensure that the system is set to first boot from PXE (netboot), and then to local disk (hdd).  Orchestra uses Cobbler (a project maintained by our friends at Fedora) to prepare the network installation using PXE and TFTP, and thus your machine needs to boot from the network.  While you're in your BIOS configuration, you might also ensure that Wake on LAN (WoL) is also enabled.

Next, you'll need to obtain the MAC address of the network card in your target system.  One of many ways to obtain this is by booting that Ubuntu ISO, pressing ctrl-alt-F2, and running ip addr show.

Now, you should add the system to Cobbler.  Ubuntu 11.10 ships a feature called cobbler-enlist that automates this, however, for this guide, we'll use the Cobbler web interface.  Give the system a hostname (e.g., asus1), select its profile (e.g., oneiric-x86_64), IP address (e.g. 192.168.1.70), and MAC address (e.g., 00:1a:92:88:b7:d9).  In the case of this system, I needed to tweak the Kernel Options, since this machine has more than one attached hard drive, and I want to ensure that Ubuntu installs onto /dev/sdc, so I set the Kernel Options to partman-auto/disk=/dev/sdc.  You might have other tweaks on a system-by-system basis that you need or want to adjust here (like IPMI configuration).
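For reference, the same registration can also be scripted.  Here's a sketch of a roughly equivalent cobbler invocation using the example values above (flag spellings vary a bit across Cobbler versions, so treat this as illustrative):

sudo cobbler system add --name=asus1 --profile=oneiric-x86_64 \
    --ip-address=192.168.1.70 --mac=00:1a:92:88:b7:d9 \
    --kopts="partman-auto/disk=/dev/sdc"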


Finally, I adjusted my DD-WRT router to add a static lease for my target system, and point dnsmasq to PXE boot against the Orchestra Server.  You'll need to do something similar-but-different here, depending on how your network handles DHCP.


NOTE: As of October 27, 2011, Bug #882726 must be manually worked around, though this should be fixed in oneiric-updates any day now.  To work around this bug, login to the Orchestra Server and run:

RELEASES=$(distro-info --supported)
ARCHES="x86_64 i386"
KSDIR="/var/lib/orchestra/kickstarts"
for r in $RELEASES; do
    for a in $ARCHES; do
        sudo cobbler profile edit --name="$r-$a" \
            --kickstart="$KSDIR/orchestra.preseed"
    done
done

Target Installation

All set!  Now, let's trigger the installation.  In the web interface, enable the machine for netbooting.


If you have WoL working for this system, you can even use the web interface to power the system on.  If not, you'll need to press the power button yourself.

Now, we can watch the installation remotely, from an SSH session into our Orchestra Server!  For extra bling, install these two packages:

sudo apt-get install -y tmux ccze

Now launch byobu-tmux (which handles splits much better than byobu-screen).  In the current window, run:

tail -f /var/log/syslog | ccze

Now, split the screen vertically with ctrl-F2.  In the new split, run:

sudo tail -f /var/log/squid/access.log | ccze

Move back and forth between splits with shift-F3 and shift-F4.  The ccze command colorizes log files.

In the left split, you'll see the syslog progress of your installation scrolling by.  In the right split, you'll see your squid logs, as your Orchestra server caches the binary deb files it downloads.  On your first installation, you'll see a lot of TCP_MISS messages.  But if you try this installation a second time, subsequent installs will roll along much faster and you should see lots of TCP_HIT messages.


It takes me about 5 minutes to install these machines with a warm squid cache (and maybe 10 minutes to do that first installation, downloading all of those debs over the Internet).  More importantly, I have installed as many as 30 machines simultaneously in a little over 5 minutes with a warm cache!  I'd love to try more, but that's as much hardware as I've had concurrent access to, at this point.

Post Installation

Most of what you've seen above is the provisioning aspect of Orchestra -- how to get the Ubuntu Server installed to bare metal, over the network, and at scale.  Cobbler does much of the hard work there,  but remarkably, that's only the first pillar of Orchestra.

What you can do after the system is installed is even more exciting!  Each system installed by Orchestra automatically uses rsyslog to push logs back to the Orchestra server.  To keep the logs of multiple clients in sync, NTP is installed and running on every Orchestra managed system.  The Orchestra Server also includes the Nagios web front end, and each installed client runs a Nagios client.  We're working on improving the out-of-the-box Nagios experience for 12.04, but the fundamentals are already there.  Orchestra clients are running PowerNap in power-save mode, by default, so that Orchestra installed servers operate as energy efficiently as possible.

Perhaps most importantly, Orchestra can actually serve as a machine provider to Juju, which can then offer complete Service Orchestration to your physical servers.  I'll explain in another post soon how to point Juju to your Orchestra infrastructure, and deploy services directly to your bare metal servers.

Questions?  Comments?

I won't be able to offer support in the comments below, but if you have questions or comments, drop by the friendly #ubuntu-server IRC channel on irc.freenode.net, where we have at least a dozen Ubuntu Server developers with Orchestra expertise, hanging around and happy to help!

Cheers,
:-Dustin

Read more
Dustin Kirkland

I learned earlier this morning that Dennis Ritchie, one of the fathers of the C programming language and UNIX as we know it, passed away.  Thank you so much, Mr. Ritchie, for the immeasurable contributions you've made to the modern world of computing!  That I'm gainfully employed, and love computer technology the way I do, is in no small way indebted to your innovation and open contributions to that world.

Sadly, I've never met "dmr", but I did have a very small conversation with him, via a mutual friend -- Jon "maddog" Hall (who wrote his own farewell in this heartfelt article).

A couple of years ago, I created the update-motd utility for Ubuntu systems, whereby the "message of the day", traditionally located at /etc/motd could be dynamically generated, rather than a static message composed by the system's administrator.  The initial driver for this was Canonical's Landscape project, but numerous others have found it useful, especially in Cloud environments.

A while back, a colleague of mine complimented the sheer simplicity of the idea of placing executable scripts in /etc/update-motd.d/ and collating the results at login into /etc/motd.  He asked if any Linux or UNIX distribution had ever provided a simple framework for dynamically generating the MOTD.  I've only been around Linux/UNIX for ~15 years, so I really had no idea.  This would take a bit of old-school research into the origins of the MOTD!
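To illustrate the simplicity he was praising: any executable dropped into that directory contributes its output to the generated MOTD.  A hypothetical example:

#!/bin/sh
# hypothetical /etc/update-motd.d/60-uptime (must be executable)
echo "Uptime: $(uptime)"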

I easily traced it back through every FHS release, back to the old fsstnd-1.0.  The earliest reference I could find in print that specifically referred to the path /etc/motd was Using the Unix System by Richard L. Gauthier (1981).

At this point, I reached out to colleagues Rusty Russell and Jon "maddog" Hall, and asked if they could help me a bit more with my search.  Rusty said that I would specifically need someone with a beard, and CC'd "maddog" (who I had also emailed :-)

Maddog did a bit of digging himself...if by "digging" you mean emailing the author of C and Unix!  I had a smile from ear to ear when this message appeared in my inbox:

Jon 'maddog' Hall to Dustin on Tue, Apr 20, 2010 at 10:08 PM: 

> A young friend of mine is investigating the origins of /etc/motd.  I
> think he is working on a mechanism to easily update that file.
>
> I think I can remember it in AT&T Unix of 1977, when I joined the labs,
> but we do not know how long it was in Unix before that, and if it was
> inspired by some other system.
>
> Can you help us out with this piece of trivia?


Ah, a softball!
MOTD is quite old.  The same thing was in CTSS and then
Multics, and doubtless in other systems.  I suspect
even the name is pretty old.  It came into Unix early on.


I haven't looked for the best  citation, but I bet it's easily
findable:  one of the startling things that happened
on CTSS was that someone was editing the password
file (at that time with no encryption) and managed
to save the password file as the MOTD.


Hope you're well,
 Regards,
 Dennis
Well sure enough, Dennis was (of course) right.  The "message of the day" does actually predate UNIX itself!  I would eventually find Time-sharing Computer Systems, by Maurice Wilkes (1968), which says:

"There is usually also a message of the day, a feature designed to keep users in touch with new facilities introduced and with other changes in the system"


As well as the Second National Symposium on Engineering Information, New York, October 27, 1965 proceedings:
"When a user sits down at his desk (console), he finds a "message of the day".  It is tailored to his specific interests, which are of course known by the system."

Brilliant!  So it wasn't so much that update-motd had introduced something that no one had ever thought of, but rather that it had re-introduced an old idea that had long since been forgotten in the annals of UNIX history.

I must express a belated "thank you" to Dennis (and maddog), for the nudges in the right direction.  Thank you for so many years of C and UNIX innovation.  Few complex technologies have stood the test of time as well as C, UNIX and the internal combustion engine.

RIP, Dennis.

-Dustin

Read more
Dustin Kirkland

eCryptfs in the Wild

Perhaps you're aware of my involvement in the eCryptfs project, as the maintainer of the ecryptfs-utils userspace tools...

This post is just a collection of some recent news and headlines about the project...

  1. I'm thrilled that eCryptfs' kernel maintainer, Tyler Hicks, joined Canonical's Ubuntu Security Team last month!  He'll be working on the usual Security Updates for stable Ubuntu releases, but he'll also be helping develop, triage and fix eCryptfs kernel bugs, both in the Upstream Linux Kernel, and in Ubuntu's downstream Linux kernel packages.  Welcome Tyler!
  2. More and more and more products seem to be landing in the market, using eCryptfs encryption!  This is, all at the same time, impressive/intimidating/frightening to me :-)
    • Google's ChromeOS uses eCryptfs to securely store browser cache locally (this feature was in fact modeled after Ubuntu's Encrypted Private Directory feature -- see the example just after this list -- and the guys over at Google even sent me a Cr-48 to play with!)
    • We've spotted several NAS solutions on the market running eCryptfs, such as this Synology DS1010+ and the BlackArmor NAS 220 from Seagate
    • Do you know of any others?
  3. I've had several conversations with Android developers recently, who are also quite interested in using eCryptfs to efficiently secure local storage on their devices.  As an avid Android user, I'd love to see this!
  4. There's a company here in Austin, called Gazzang, that's developing Cloud Storage solutions (mostly database backends) backed by eCryptfs.
  5. And there's a start-up in the Bay Area investigating eCryptfs + LXC + MongoDB for added security to their personal storage solution.
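Speaking of the Encrypted Private Directory feature mentioned above, trying eCryptfs on Ubuntu is about as easy as it gets:

sudo apt-get install ecryptfs-utils
ecryptfs-setup-private
# log out and back in; ~/Private is now an eCryptfs-encrypted mount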
Exciting times in eCryptfs-land, for sure!

Which brings me to the point of this post...  We could really use some more community interaction and developer involvement around eCryptfs!
  • Do you know anything about encryption?
  • What about Linux filesystems?
  • Perhaps you're a user who's interested in helping with some bug triage, or willing to help support some other users?
  • We have both kernel and userspace bug-fixing and new development to be done!
  • There's code in both C and Shell that needs some love.
  • Heck, even our documentation has plenty of room for improvement!
If you'd like to get involved, drop by #ecryptfs on irc.oftc.net, and poke kirkland or tyhicks.

Cheers,
:-Dustin

Read more
Dustin Kirkland

Ubuntu Monospace Font

At long last, we have a Beta of the Ubuntu Monospace font available!  (Request membership in the ubuntu-typeface-interest team in Launchpad for access.)

Here's a screenshot of some code open in Byobu in the new font!


It really has a light, modern feel to it.  I like the distinct differences between "0" and "O", and "1" and "l", which are often tricky with monospace fonts.  Cheers to the team working on this -- I really appreciate the efforts, and hope these land on the console/tty at some point too!

I've only encountered one bug so far, which looks to have been filed already, so I added a comment to https://bugs.launchpad.net/ubuntu-font-family/+bug/677134 -- I think the "i" and "l" are a little too similar; if-fi statements in shell are kind of hard to read.

Anyway, nice job -- looking forward to using this font more in the future!

Enjoy,
:-Dustin

Read more
Dustin Kirkland

ARM for Ubuntu Servers

I've long had a personal interest in the energy efficiency of the Ubuntu Server.  This interest has manifested in several ways.  From founding the PowerNap Project to using tiny netbooks and notebooks as servers, I'm just fascinated with the idea of making computing more energy efficient.

It wasn't so long ago, in December 2008 at UDS-Jaunty in Mountain View that I proposed just such a concept, and was nearly ridiculed out of the room.  (Surely no admin in his right mind would want enterprise infrastructure running on ARM processors!!! ...  Um, well, yeah, I do, actually....)  Just a little over two years ago, in July 2009, I longed for the day when Ubuntu ARM Servers might actually be a reality...

My friends, that day is here at last!  Ubuntu ARM Servers are now quite real!


The affable Chris Kenyon first introduced the world to Canonical's efforts in this space with his blog post, Ubuntu Server for ARM Processors a little over a week ago.  El Reg picked up the story quickly, and broke the news in Canonical ARMs Ubuntu for microserver wars.   And ZDNet wasn't far behind, running an article this week, Ubuntu Linux bets on the ARM server.  So the cat is now officially out of the bag -- Ubuntu Servers on ARM are here :-)

Looking for one?  This Geek.com article covers David Mandala's 42-core ARM cluster, based around TI Panda boards.  I also recently came across the ZT Systems R1801e Server, boasting 8 x Dual ARM Cortex A9 processors.  The most exciting part is that this is just the tip of the iceberg.  We've partnered with companies like Calxeda (here in Austin) and others to ensure that the ARM port of the Ubuntu Server will be the most attractive OS option for quite some time.

A huge round of kudos goes to the team of outstanding engineers at Canonical (and elsewhere) doing this work.  I'm sure I'm leaving off a ton of people (feel free to leave comments about who I've missed), but the work that's been most visible to me has been by:


So I'm looking forward to reducing my servers' energy footprint...are you?

:-Dustin

Read more
Dustin Kirkland


I was honored to speak at LinuxCon North America in beautiful Vancouver yesterday, about one of my favorite topics -- energy efficiency opportunities using Ubuntu Servers in the data center (something I've blogged before).

I'm pleased to share those slides with you today!  The talk is entitled PowerNap Your Data Center, and focused on Ubuntu's innovative PowerNap suite, from the system administrator's or data center manager's perspective.

We discussed the original, Cloud motivations for PowerNap, its evolution from the basic process monitoring and suspend/hibernate methods of PowerNap1, to our complete rewrite of PowerNap2 (thanks, Andres!) which added nearly a dozen monitors and the ubiquitously useful PowerSave mode.  PowerNap is now more useful and configurable than ever!

Flip through the presentation below, or download the PDF here.





Stay tuned for another PowerNap presentation I'm giving at Linux Plumbers next month in California.  That one should be a bit deeper dive into the technical implementation, and hopefully generate some plumbing layer discussion and improvement suggestions.

:-Dustin

Read more
Dustin Kirkland

I've seen Ensemble evolve from a series of design-level conversations (Brussels May 2010), through a year of fast-paced Canonical-style development, and participated in Ensemble sprints (Cape Town March 2011, and Dublin June 2011).  I've observed Ensemble at first as an outsider, then provided feedback as a stake-holder, and have now contributed code as a developer to Ensemble and authored Formulas.


Think about bzr or git circa 2004/2005, or apt circa 1998/1999, or even dpkg circa 1993/1994...  That's where we are today with Ensemble circa 2011. 

Ensemble is a radical, outside-of-the-box approach to a problem that the Cloud ecosystem is just starting to grok: Service Orchestration.  I'm quite confident that in a few years, we're going to look back at 2011 and the work we're doing with Ensemble and Ubuntu and see a clear inflection point in the efficiency of workload management in The Cloud.

From my perspective as the leader of Canonical's Systems Integration Team, Ensemble is now the most important tool in our software tool belt when building complex cloud solutions.

Period.

Juan, Marc, Brian, and I are using Ensemble to generate modern solutions around new service deployments to the cloud.  We have contributed many formulas already to Ensemble's collection, and continue to do so every day.

There's a number of novel ideas and unique approaches in Ensemble.  You can deep dive into the technical details here.  For me, there's one broad concept in Ensemble that just rocks my world...  Ensemble deals in individual service units, with the ability to replicate, associate, and scale those units quite dynamically.  Service units in practice are cloud instances (or if you're using Orchestra + Ensemble, bare metal systems!).  Service units are federated together to deliver a (perhaps large and complicated) user facing service.

Okay, that's a lot of words, and at a very high level.  Let me try to break that down into something a bit more digestible...

I've been around Red Hat and Debian packaging for many years now.  Debian packaging is particularly amazing at defining prerequisite packages and pre- and post-installation procedures, and is just phenomenal at rolling upgrades.  I've worked with hundreds (thousands?) of packages at this point, including some mind-bogglingly complex ones!

It's truly impressive how much can be accomplished within traditional Debian packaging.  But it has its limits.  These limits really start to bare their teeth when you need to install packages on multiple separate systems, and then federate those services together.  It's one thing if you need to install a web application on a single, local system:  depend on Apache, depend on MySQL, install, configure, restart the services...

sudo apt-get install your-web-app

...

Profit!

That's great.  But what if you need to install MySQL on two different nodes, set them up in a replicating configuration, install your web app and Apache on a third node, and put a caching reverse proxy on a fourth?  Oh, and maybe you want to do that a few times over.  And then scale them out.  Ummmm.....

sudo apt-get errrrrrr....yeah, not gonna work :-(

But these are exactly the type(s) of problems that Ensemble solves!  And quite elegantly in fact.

Once you've written your Formula, you'd simply:

ensemble bootstrap

ensemble deploy your-web-app
...
Profit!
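And because Ensemble deals in service units, federating and scaling those units is more of the same.  Just to sketch the shape of the four-node scenario above (formula names are illustrative; real examples are coming in the posts mentioned below):

ensemble deploy mysql
ensemble deploy your-web-app
ensemble add-relation mysql your-web-app
ensemble add-unit your-web-app    # scale out with another service unit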

Stay tuned here and I'll actually show some real Ensemble examples in a series of upcoming posts.  I'll also write a bit about how Ensemble and Orchestra work together.

In the mean time, get primed on the Ensemble design and usage details here, and definitely check out some of Juan's awesome Ensemble how-to posts!

After that, grab the nearest terminal and come help out!

We are quite literally at the edge of something amazing here, and we welcome your contributions!  All of Ensemble and our Formula Repository are entirely free software, building on years of best practice open source development on Ubuntu at Canonical.  Drop into the #ubuntu-ensemble channel in irc.freenode.net, introduce yourself, and catch one of the earliest waves of something big.  Really, really big.

:-Dustin

Read more
Dustin Kirkland



I recently gave an introduction to the CloudFoundry Client application (vmc),  which is already in Ubuntu 11.10's Universe archive.

Here, I'd like to introduce the far more interesting server piece -- how to run the CloudFoundry Server, on top of Ubuntu 11.10!  As far as I'm aware, this is the most complete PaaS solution we've made available on top of Ubuntu Servers, to date.

A big thanks to the VMWare CloudFoundry Team who has been helping us along with the deployment instructions.  Also, all of the packaging credit goes straight to Brian Thomason, Juan Negron, and Marc Cluet.

For testing purposes, I'm going to run this in Amazon's EC2 Cloud.  I'll need a somewhat larger instance to handle all the services and dependencies (ie, Java) required by the platform.  I find an m1.large seems to work pretty well, for $0.34/hour.  I'm using the Oneiric (Ubuntu 11.10) AMI's listed at http://uec-images.ubuntu.com/oneiric/current/.

Installation

To install CloudFoundry Server, add the PPA, update, and install:

sudo apt-add-repository ppa:cloudfoundry/ppa

sudo apt-get update
sudo apt-get install cloudfoundry-server


During the installation, there are a couple of debconf prompts, including:
  • a mysql password
    • required for configuration of the MySQL database (enter twice)
All in all, the install took me less than 7 minutes!

Next, install the client tools, either on your local system, or even on the server, so that we can test our server:

sudo apt-get install cloudfoundry-client


Configuration

Now, you'll need to target your vmc client against your installed server, rather than CloudFoundry.com, as I demonstrated in my last post.

In production, you're going to need access to a wildcard based DNS server, either your own, or a DynDNS service.  If you have a DynDNS.com standard account ($30/year), CloudFoundry actually supports dynamically adding DNS entries for your applications.  We've added debconf hooks in the cloudfoundry-server Ubuntu packaging to set this up for you.  So if you have a paid DynDNS account, just sudo dpkg-reconfigure cloudfoundry-server.

For this example, though, we're going to take the poor man's approach, and just edit our /etc/hosts file, BOTH locally on our laptop and on our CloudFoundry server.

First, look up your server's external IP address.  If you're running Byobu in EC2, it'll be the lower right corner.

Otherwise, grab your IPv4 address from the metadata service.

$ wget -q -O- http://169.254.169.254/latest/meta-data/public-ipv4

174.129.119.101

And you'll need to add an entry to your /etc/hosts for api.vcap.me, AND every application name you deploy.  Make sure you do this both on your laptop, and the server!  Our test application here will be called testing123. Don't forget to change my IP address to yours ;-)

echo "174.129.119.101  api.vcap.me testing123.vcap.me" | sudo tee -a /etc/hosts


Target

Now, let's target our vmc client at our vcap (CloudFoundry) server:

$ vmc target api.vcap.me

Succesfully targeted to [http://api.vcap.me]

Adding Users

And add a user.

$ vmc add-user 

Email: kirkland@example.com
Password: ********
Verify Password: ********
Creating New User: OK
Successfully logged into [http://api.vcap.me]

Logging In

Now we can login.

$ vmc login 

Email: kirkland@example.com
Password: ********
Successfully logged into [http://api.vcap.me]

Deploying an Application


At this point, you can jump over to my last post on the vmc client tool for a more comprehensive set of examples.  I'll just give one very simple one here: the Ruby/Sinatra helloworld + environment example.

Go to the examples directory, find an example, and push!

$ cd /usr/share/doc/ruby-vmc/examples/ruby/hello_env

$ vmc push
Would you like to deploy from the current directory? [Yn]: y
Application Name: testing123
Application Deployed URL: 'testing123.vcap.me'?
Detected a Sinatra Application, is this correct? [Yn]: y
Memory Reservation [Default:128M] (64M, 128M, 256M, 512M, 1G or 2G)
Creating Application: OK
Would you like to bind any services to 'testing123'? [yN]: n
Uploading Application:
  Checking for available resources: OK
  Packing application: OK
  Uploading (0K): OK
Push Status: OK
Staging Application: OK
Starting Application: OK

Again, make absolutely sure that you edit your local /etc/hosts and add the testing123.vcap.me to the right IP address, and then just point a browser to http://testing123.vcap.me/


And there you have it!  An application pushed, and running on your CloudFoundry Server  -- Ubuntu's first packaged PaaS!

What's Next?

So the above setup is a package-based, all-in-one PaaS.  That's perhaps useful for your first CloudFoundry Server, and your initial experimentation.  But a production PaaS will probably involve multiple, decoupled servers, with clustered databases, highly available storage, and enterprise grade networking.

The Team is hard at work breaking CloudFoundry down to its fundamental components and creating a set of Ensemble formulas for deploying CloudFoundry itself as a scalable service.  Look for more news on that front very soon!

In the meantime, try the packages at ppa:cloudfoundry/ppa (or even the daily builds at ppa:cloudfoundry/daily) and let us know what you think!

:-Dustin

Read more
jdstrand

So yesterday I rebooted a Lucid server I administer, and fsck ran. Ok, that's cool. Granted, it takes about 45 minutes on my RAID1 terabyte filesystem, but so be it. As in the past, I was slightly annoyed again that upstart/plymouth did not tell me that it was fscking my drive like my desktop does (maybe it's because this was an upgrade from Hardy and not a fresh install? It would be nice if I looked into why), but I knew that was what was happening, so I went about my business.

Until… there was a failure that had to be manually resolved by fsck. Looking at the path, it was no big deal (easily restorable), so I just needed to run 'fsck /dev/md2'. Hmmm, /dev/md2 is /var on this system, and mountall got stuck because /var couldn't be mounted. Getting slightly more annoyed, I tried to reboot with 'Ctrl+Alt+Del', but that didn't work, so I had to SysRq my way out (using Alt+SysRq+R, Alt+SysRq+S, Alt+SysRq+U, and finally Alt+SysRq+B) and boot into single-user mode. Surely I could reboot to runlevel 1 and get a prompt…. 45 minutes later (ie, after another failed fsck on /var) I was shown to be wrong. Thankfully I had an amd64 10.04 Server CD handy and booted into rescue mode, which in the end worked fine for fscking my drive manually.

Why was running fsck manually so hard? Why is plymouth/upstart so quiet on my server?

It turns out that because I removed 'splash quiet' from my kernel boot options, plymouth wouldn't show the message to 'S'kip or 'M'anually recover /var. It was still running, though, so I could press 'm' to get to a maintenance shell (you might need to change your tty for this).

For the plymouth/upstart silence, I came across the following workaround:

In short, the above has you add to /etc/modprobe.d/blacklist-custom.conf:
blacklist vga16fb

Then adjust your kernel boot line to have:
ro nosplash INIT_VERBOSE=yes init=/sbin/init noplymouth -v

This is not as pretty as the SysV bootup, but it is verbose enough to let you know what is happening. The problem is that because there is no plymouth, there is no way to enter a maintenance shell when you need to, so the above is hard to recommend. :(
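Note that for these changes to stick, you may need to rebuild the initramfs (so the vga16fb blacklist takes effect at boot) and edit the boot line wherever your bootloader keeps it (/boot/grub/menu.lst for GRUB legacy on upgraded systems; /etc/default/grub plus ‘sudo update-grub’ on GRUB 2 installs). A sketch for the initramfs part:

$ sudo update-initramfs -u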

To me, it boils down to the following two choices:

  • Boot with ‘splash’ but without ‘quiet’ and lose boot messages but gain fsck feedback
  • Boot without ‘splash quiet’ and lose fsck feedback and remember you can press ‘m’ to enter a maintenance shell when there is a problem

It would really be nice to have both fsck feedback and no splash, but this doesn’t seem possible at this time. If someone knows a way to do this on Lucid, please let me know. In the meantime, I have filed bug #613562.


Filed under: ubuntu, ubuntu-server

Read more
jdstrand

Upstream ClamAV pushed out an update via freshclam that crashed ClamAV versions 0.95 and earlier on 32-bit systems (Ubuntu 9.10 and earlier are affected). Upstream issued a fix via freshclam within 15 minutes, but affected users’ clamd daemon will not restart automatically. People running ClamAV should check that it is still running. For details see:

http://lurker.clamav.net/message/20100507.110656.573e90d7.en.html
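A quick way to check and recover (a sketch, assuming the Ubuntu clamav-daemon package):

$ pgrep clamd || sudo /etc/init.d/clamav-daemon start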


Filed under: security, ubuntu, ubuntu-server

Read more
jdstrand

Background
Ubuntu has been using libvirt as part of its recommended virtualization management toolkit since Ubuntu 8.04 LTS. In short, libvirt provides a set of tools and APIs for managing virtual machines (VMs) such as creating a VM, modifying its hardware configuration, starting and stopping VMs, and a whole lot more. For more information on using libvirt in Ubuntu, see http://doc.ubuntu.com/ubuntu/serverguide/C/libvirt.html.

Libvirt greatly eases the deployment and management of VMs, but because it has traditionally been limited to POSIX ACLs and sometimes needs to perform privileged actions, using libvirt (or any virtualization technology for that matter) can create a security risk, especially when the guest VM isn’t trusted. It is easy to imagine a bug in the hypervisor which would allow a compromised VM to modify other guests or files on the host. Considering that when using qemu:///system the guest VM process runs as root (this is configurable in 0.7.0 and later, but still the default in Fedora and Ubuntu 9.10), it is even more important to contain what a guest can do. To address these issues, SELinux developers created a fork of libvirt, called sVirt, which, when using kvm/qemu, allows libvirt to add SELinux labels to files required for the VM to run. This work was merged back into upstream libvirt in version 0.6.1, and its implementation, features and limitations can be seen in a blog post by Dan Walsh and an article on LWN.net. This is inspired work, and the sVirt folks did a great job implementing it with a plugin framework so that others could create different security drivers for libvirt.

AppArmor Security Driver for Libvirt
While Ubuntu has SELinux support, by default it uses AppArmor. AppArmor is different from SELinux and some other MAC systems available on Linux in that it is path-based, allows for mixing of enforcement and complain mode profiles, uses include files to ease development, and typically has a far lower barrier to entry than other popular MAC systems. It has been an important security feature of Ubuntu since 7.10, where CUPS was confined by AppArmor in the default installation (more profiles have been added with each new release).

Since virtualization is becoming more and more prevalent, improving the security stance for libvirt users is of primary concern. It was very natural to look at adding an AppArmor security driver to libvirt, and as of libvirt 0.7.2 and Ubuntu 9.10, users have just that. In terms of supported features, the AppArmor driver should be on par with the SELinux driver, where the vast majority of libvirt functionality is supported by both drivers out of the box.

Implementation
First, the libvirtd process is confined with a lenient profile that allows the libvirt daemon to launch VMs, change into another AppArmor profile and use virt-aa-helper to manipulate AppArmor profiles. virt-aa-helper is a helper application that can add, remove, modify, load and unload AppArmor profiles in a limited and restricted way. Specifically, libvirtd is not allowed to adjust anything in /sys/kernel/security directly, or modify the profiles for the virtual machines directly. Instead, libvirtd must use virt-aa-helper, which is itself run under a very restrictive AppArmor profile. Using this architecture helps prevent any opportunities for a subverted libvirtd to change its own profile (especially useful if the libvirtd profile is adjusted to be restrictive) or modify other AppArmor profiles on the system.

Next, there are several profiles that comprise the system:

  • /etc/apparmor.d/usr.sbin.libvirtd
  • /etc/apparmor.d/usr.bin.virt-aa-helper
  • /etc/apparmor.d/abstractions/libvirt-qemu
  • /etc/apparmor.d/libvirt/TEMPLATE
  • /etc/apparmor.d/libvirt/libvirt-<uuid>
  • /etc/apparmor.d/libvirt/libvirt-<uuid>.files

/etc/apparmor.d/usr.sbin.libvirtd and /etc/apparmor.d/usr.bin.virt-aa-helper define the profiles for libvirtd and virt-aa-helper (note that in libvirt 0.7.2, virt-aa-helper is located in /usr/lib/libvirt/virt-aa-helper). /etc/apparmor.d/libvirt/TEMPLATE is consulted when creating a new profile when one does not already exist. /etc/apparmor.d/abstractions/libvirt-qemu is the abstraction shared by all running VMs. /etc/apparmor.d/libvirt/libvirt-<uuid> is the unique base profile for an individual VM, and /etc/apparmor.d/libvirt/libvirt-<uuid>.files contains rules for the guest-specific files required to run this individual VM.

The confinement process is as follows (assume the VM has a libvirt UUID of ‘a22e3930-d87a-584e-22b2-1d8950212bac’):

  1. When libvirtd is started, it determines if it should use a security driver. If so, it checks which driver to use (eg SELinux or AppArmor). If libvirtd is confined by AppArmor, it will use the AppArmor security driver
  2. When a VM is started, libvirtd decides whether to ask virt-aa-helper to create a new profile or modify an existing one. If no profile exists, libvirtd asks virt-aa-helper to generate the new base profile, in this case /etc/apparmor.d/libvirt/libvirt-a22e3930-d87a-584e-22b2-1d8950212bac, which it does based on /etc/apparmor.d/libvirt/TEMPLATE. Notice that the new profile’s name is based on the guest’s UUID. Once the base profile is created, virt-aa-helper works the same for create and modify: it determines what files are required for the guest to run (eg kernel, initrd, disk, serial, etc), updates /etc/apparmor.d/libvirt/libvirt-a22e3930-d87a-584e-22b2-1d8950212bac.files, then loads the profile into the kernel
  3. libvirtd proceeds as normal from this point until just before it forks the qemu/kvm process, when it calls aa_change_profile() to transition into the profile ‘libvirt-a22e3930-d87a-584e-22b2-1d8950212bac’ (the one virt-aa-helper loaded into the kernel in the previous step)
  4. When the VM is shutdown, libvirtd asks virt-aa-helper to remove the profile, and virt-aa-helper unloads the profile from the kernel

It should be noted that due to current limitations of AppArmor, only qemu:///system is confined by AppArmor. In practice, this is fine because qemu:///session is run as a normal user and does not have privileged access to the system like qemu:///system does.

Basic Usage
By default in Ubuntu 9.10, both AppArmor and the AppArmor security driver for libvirt are enabled, so users benefit from the AppArmor protection right away. To see if libvirtd is using the AppArmor security driver, do:

$ virsh capabilities
Connecting to uri: qemu:///system
<capabilities>
 <host>
  ...
  <secmodel>
    <model>apparmor</model>
    <doi>0</doi>
  </secmodel>
 </host>
 ...
</capabilities>

Next, start a VM and see if it is confined:

$ virsh start testqemu
Connecting to uri: qemu:///system
Domain testqemu started

$ virsh domuuid testqemu
Connecting to uri: qemu:///system
a22e3930-d87a-584e-22b2-1d8950212bac

$ sudo aa-status
apparmor module is loaded.
16 profiles are loaded.
16 profiles are in enforce mode.
...
  /usr/bin/virt-aa-helper
  /usr/sbin/libvirtd
  libvirt-a22e3930-d87a-584e-22b2-1d8950212bac
...
0 profiles are in complain mode.
6 processes have profiles defined.
6 processes are in enforce mode :
...
  libvirt-a22e3930-d87a-584e-22b2-1d8950212bac (6089)
...
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

$ ps ww 6089
PID TTY STAT TIME COMMAND
6089 ? R 0:00 /usr/bin/qemu-system-x86_64 -S -M pc-0.11 -no-kvm -m 64 -smp 1 -name testqemu -uuid a22e3930-d87a-584e-22b2-1d8950212bac -monitor unix:/var/run/libvirt/qemu/testqemu.monitor,server,nowait -boot c -drive file=/var/lib/libvirt/images/testqemu.img,if=ide,index=0,boot=on -drive file=,if=ide,media=cdrom,index=2 -net nic,macaddr=52:54:00:86:5b:6e,vlan=0,model=virtio,name=virtio.0 -net tap,fd=17,vlan=0,name=tap.0 -serial none -parallel none -usb -vnc 127.0.0.1:1 -k en-us -vga cirrus

Here is the unique, restrictive profile for this VM:

$ cat /etc/apparmor.d/libvirt/libvirt-a22e3930-d87a-584e-22b2-1d8950212bac
#
# This profile is for the domain whose UUID
# matches this file.
#
 
#include <tunables/global>
 
profile libvirt-a22e3930-d87a-584e-22b2-1d8950212bac {
   #include <abstractions/libvirt-qemu>
   #include <libvirt/libvirt-a22e3930-d87a-584e-22b2-1d8950212bac.files>
}

$ cat /etc/apparmor.d/libvirt/libvirt-a22e3930-d87a-584e-22b2-1d8950212bac.files
# DO NOT EDIT THIS FILE DIRECTLY. IT IS MANAGED BY LIBVIRT.
  "/var/log/libvirt/**/testqemu.log" w,
  "/var/run/libvirt/**/testqemu.monitor" rw,
  "/var/run/libvirt/**/testqemu.pid" rwk,
  "/var/lib/libvirt/images/testqemu.img" rw,

Now shut it down:

$ virsh shutdown testqemu
Connecting to uri: qemu:///system
Domain testqemu is being shutdown

$ virsh domstate testqemu
Connecting to uri: qemu:///system
shut off

$ sudo aa-status | grep 'a22e3930-d87a-584e-22b2-1d8950212bac'
[1]

Advanced Usage
In general, you can forget about AppArmor confinement and just use libvirt like normal. The guests will be isolated from each other and user-space protection for the host is provided. However, the design allows for a lot of flexibility in the system. For example:

  • If you want to adjust the profile for all future, newly created VMs, adjust /etc/apparmor.d/libvirt/TEMPLATE
  • If you need to adjust access controls for all VMs, new or existing, adjust /etc/apparmor.d/abstractions/libvirt-qemu
  • If you need to adjust access controls for a single guest, adjust /etc/apparmor.d/libvirt/libvirt-<uuid>, where <uuid> is the UUID of the guest
  • To disable the driver, either adjust /etc/libvirt/qemu.conf to have ‘security_driver = “none”’ or remove the AppArmor profile for libvirtd from the kernel and restart libvirtd (a sketch follows this list)
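For that last option, a minimal sketch (the init script name assumes the Ubuntu 9.10 libvirt-bin package):

$ sudo ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/usr.sbin.libvirtd
$ sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd
$ sudo /etc/init.d/libvirt-bin restart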

Of course, you can also adjust the profiles for libvirtd and virt-aa-helper if desired. All the files are simple text files. See AppArmor for more information on using AppArmor in general.

Limitations and the Future
While the sVirt framework provides good guest isolation and user-space host protection, it does not provide protection against in-kernel attacks (eg, where a guest process is able to access host kernel memory). The AppArmor security driver as included in Ubuntu 9.10 also does not handle access to host devices as well as it could. Allowing a guest to access a local PCI device or USB disk is a potentially dangerous operation anyway, and the driver will block this access by default. Users can work around this by adjusting the base profile for the guest.

There are a few missing features in the sVirt model, such as labeling state files. The AppArmor driver also needs to better support host devices. Once AppArmor provides the ability for regular users to define profiles, qemu:///session can be properly supported. Finally, it will be great when distributions take advantage of libvirt’s recently added ability to run guests as non-root when using qemu:///system (while the sVirt framework largely mitigates this risk, it is good to have defense in depth).

Summary
While cloud computing is talked about seemingly everywhere and virtualization is becoming even more important in the data center, leveraging technologies like libvirt and AppArmor is a must. Virtualization removes the traditional barriers afforded to stand-alone computers, thus increasing the attack surface for hostile users and compromised guests. By using the sVirt framework in libvirt, and in particular AppArmor on Ubuntu 9.10, administrators can better defend themselves against virtualization-specific attacks. Have fun and be safe!

More Information
http://libvirt.org/
https://wiki.ubuntu.com/AppArmor
https://wiki.ubuntu.com/SecurityTeam/Specifications/AppArmorLibvirtProfile


Posted in security, ubuntu, ubuntu-server

Read more
jdstrand

Recently I decided to replace NFS on a small network with something more secure and more resistant to network failures (as in, Nautilus wouldn’t hang because of a symlink to a directory in an autofs-mounted NFS share). Most importantly, I wanted something secure, simple and robust. I naturally thought of SFTP, but there were at least two problems with a naive SFTP implementation, both of which I decided I must solve to meet the ‘secure’ criterion:

  • shell access to the file server
  • SFTP can’t restrict users to a particular directory

Of course, there are other alternatives that I could have pursued: sshfs, shfs, SFTP with scponly, patching OpenSSH to support chrooting (see ‘Update’ below), NFSv4, IPsec, etc. Adding all the infrastructure to properly support NFSv4 and IPsec did not meet the simplicity requirement, and neither did running a patched OpenSSH server. sshfs, shfs, and SFTP with scponly did not really fit the bill either.

What did I come up with? A combination of SFTP, a hardened sshd configuration and AppArmor on Ubuntu.

SFTP setup
This was easy because sftp is enabled by default on Ubuntu and Debian systems. This should be enough:

$ sudo apt-get install openssh-server

Just make sure you have the following set in /etc/ssh/sshd_config (see man sshd_config for details):

Subsystem sftp /usr/lib/openssh/sftp-server

With that in place, all I needed to do was add users with strong passwords:

$ sudo adduser sftpuser1
$ sudo adduser sftpuser2

Then you can test if it worked with:

$ sftp sftpuser1@server
sftp> ls /
/bin ...

Hardened sshd
Now that SFTP is working, we need to limit access. One way to do this is via a Match rule that uses a ForceCommand. Combined with AllowUsers, adding something like this to /etc/ssh/sshd_config is pretty powerful:

AllowUsers adminuser sftpuser1@192.168.0.10 sftpuser2
Match User sftpuser1,sftpuser2
    AllowTcpForwarding no
    X11Forwarding no
    ForceCommand /usr/lib/openssh/sftp-server -l INFO

Remember to restart ssh with ‘/etc/init.d/ssh restart’. The above allows normal shell access for adminuser, SFTP-only access for sftpuser1 from 192.168.0.10, and SFTP-only access for sftpuser2 from anywhere. One can imagine combining this with ‘PasswordAuthentication no’ or GSSAPI to enforce more stringent authentication so access is even more tightly controlled.
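For example, to require public-key authentication for the SFTP-only users, the Match block only needs one more directive (a sketch; it assumes the users’ public keys are already in place in ~/.ssh/authorized_keys):

Match User sftpuser1,sftpuser2
    AllowTcpForwarding no
    X11Forwarding no
    PasswordAuthentication no
    ForceCommand /usr/lib/openssh/sftp-server -l INFO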

AppArmor
The above does a lot to increase security over the standard NFS shares that I had before. Good encryption, strong authentication and reliable UID mappings for DAC (POSIX discretionary access controls) are all in place. However, it doesn’t have the ability to confine access to a certain directory like NFS does. A simple AppArmor profile can achieve this and give even more granularity than just DAC. Imagine the following directory structure:

  • /var/exports (the top-level ‘exported’ filesystem, where all the files you want to share live)
  • /var/exports/users (user-specific files, only to be accessed by the users themselves)
  • /var/exports/shared (a free-for-all shared directory where any user can put stuff; ‘shared’ has ‘2775’ permissions with group ‘shared’)

Now add to /etc/apparmor.d/usr.lib.openssh.sftp-server (and enable with ‘sudo apparmor_parser -r /etc/apparmor.d/usr.lib.openssh.sftp-server’):

#include <tunables/global>
/usr/lib/openssh/sftp-server {
  #include <abstractions/base>
  #include <abstractions/nameservice>
 
  # Served files
  # Need read access for every parent directory
  / r,
  /var/ r,
  /var/exports/ r,
 
  /var/exports/**/ r,
  owner /var/exports/** rwkl,
 
  # don't require ownership to read shared files
  /var/exports/shared/** rwkl,
}

This is a very simple profile for sftp-server itself, with access to files in /var/exports. Notice that by default the owner must match for any files in /var/exports, but in /var/exports/shared it does not. AppArmor works in conjunction with DAC, so that if DAC denies access, AppArmor is not consulted. If DAC permits access, then AppArmor is consulted and may deny access.
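To convince yourself the profile is doing its job, try fetching a file that DAC allows but the profile does not (output approximate):

$ sftp sftpuser1@server
sftp> get /etc/hostname
Permission denied

The corresponding denial for /usr/lib/openssh/sftp-server will show up in /var/log/kern.log.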

Summary
This is but one man’s implementation for a simple, secure and robust file service. There are limitations with the method as described, notably managing the sshd_config file and not supporting some traditional setups such as $HOME on NFS. That said, with a little creativity, a lot of possibilities exist for file serving with this technique. For my needs, the combination of standard OpenSSH and AppArmor on Ubuntu was very compelling. Enjoy!

Update
OpenSSH 4.8 and higher (available in Ubuntu 8.10 and later) contains the ChrootDirectory option, which may be enough for certain environments. It is simpler to set up (ie, AppArmor is not required), but doesn’t have the same granularity and sftp-server protection that the AppArmor method provides. See comment 32 and comment 34 for details. Combining ChrootDirectory and AppArmor would provide even more defense in depth. It’s great to have so many options for secure file sharing! :)


Posted in security, ubuntu, ubuntu-server

Read more
jdstrand

After a lot of hard work by John Johansen and the Ubuntu kernel team, bug #375422 is well on its way to being fixed. More than just being forward-ported for Ubuntu, AppArmor has been reworked to use the updated kernel infrastructure for LSMs. As seen in #apparmor on Freenode a couple of days ago:

11:24 < jjohansen> I am working to a point where I can try upstreaming again, base off of the security_path_XXX patches instead of the vfs patches
11:24 < jjohansen> so the module is mostly self contained again

These patches are in the latest 9.10 kernel, and help is needed testing AppArmor in Karmic. To get started, verify you have at least 2.6.31-3.19-generic:

$ cat /proc/version_signature
Ubuntu 2.6.31-3.19-generic

AppArmor will be enabled by default for Karmic just like in previous Ubuntu releases, but it is off for now until a few kinks are worked out. To test it right away, you’ll need to reboot with ‘security=apparmor’ added to the kernel command line (a sketch for making that persistent follows the output below). Then fire up ‘aa-status’ to see if it is enabled. A fresh install of 9.10 as of today should look something like:

$ sudo aa-status
apparmor module is loaded.
8 profiles are loaded.
8 profiles are in enforce mode.
/usr/lib/connman/scripts/dhclient-script
/usr/share/gdm/guest-session/Xsession
/usr/sbin/tcpdump
/usr/lib/cups/backend/cups-pdf
/sbin/dhclient3
/usr/sbin/cupsd
/sbin/dhclient-script
/usr/lib/NetworkManager/nm-dhcp-client.action
0 profiles are in complain mode.
2 processes have profiles defined.
2 processes are in enforce mode :
/sbin/dhclient3 (3271)
/usr/sbin/cupsd (2645)
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
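To keep ‘security=apparmor’ across reboots rather than editing the boot line each time, one option on a fresh 9.10 install (which uses GRUB 2) is to add it to /etc/default/grub and regenerate the config. A sketch:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash security=apparmor"

$ sudo update-grub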

Please throw all your crazy profiles at it, test the packages with existing profiles, and then file bugs:

  • For the kernel, add your comments (positive and negative) to bug #375422
  • AppArmor tools bugs should be filed with ‘ubuntu-bug apparmor’
  • Profile bugs should be filed against the individual source package with ‘ubuntu-bug <source package name>’. See DebuggingApparmor for details.

Thank you Ubuntu Kernel team and especially John for all the hard work on getting the “MAC system for human beings” (as I like to call it) not only working again, but upstreamable — this is really great stuff! :)


Posted in security, ubuntu, ubuntu-server

Read more
jdstrand

Short answer: Of course not!

Longer answer:
Ubuntu has had AppArmor1 on by default for a while now, and with each new release more and more profiles are added. The Ubuntu community has worked hard to make the installed profiles work well, and by and large, most people happily use their Ubuntu systems without noticing AppArmor is even there.

Of course, like with any software, there are bugs. I know these AppArmor profile bugs can be frustrating, but because AppArmor is a path-based system, diagnosing, fixing and even working around profile bugs is actually quite easy. AppArmor can disable specific profiles rather than simply being turned on or off wholesale, yet I’ve seen people in IRC and forums advise others to disable AppArmor completely. This is totally misguided and YOU SHOULD NEVER DISABLE APPARMOR ENTIRELY to work around a profiling problem. That is like trying to open your front door with dynamite: it will work, but it’ll leave a big hole and you’ll likely hurt yourself. Think about it: on my regular ol’ Jaunty laptop system, I have 4 profiles in place installed via Ubuntu packages (to see the profiles on your system, look in /etc/apparmor.d). Why would I want to disable all of AppArmor (and therefore all of those profiles) instead of dealing with just the one that is causing me problems? Obviously, the more software you install with AppArmor protection, the more you have to lose by disabling AppArmor completely.

So, when dealing with a profile bug, there are only a few things you need to know:

  1. AppArmor messages show up in /var/log/kern.log (by default)
  2. AppArmor profiles are located in /etc/apparmor.d
  3. The profile name is, by convention, <absolute path with ‘/’ replaced by ‘.’>. Eg, ‘/etc/apparmor.d/sbin.dhclient3’ is the profile for ‘/sbin/dhclient3’.
  4. Profiles are simple text files

With this in mind, let’s say tcpdump is misbehaving. You can check /var/log/kern.log for entries like:

Jul 7 12:21:15 laptop kernel: [272480.175323] type=1503 audit(1246987275.634:324): operation="inode_create" requested_mask="a::" denied_mask="a::" fsuid=0 name="/opt/foo.out" pid=24113 profile="/usr/sbin/tcpdump"

That looks complicated, but it isn’t really, and it tells you everything you need to know to file a bug and fix the problem yourself. Specifically, “/usr/sbin/tcpdump” was denied “a” (append) access to “/opt/foo.out”.

So now what?

If you are using the program with a default configuration or non-default but common configuration, then by all means, file a bug. If unsure, ask on IRC, on a mailing list or just file it anyway.

If you are a non-technical user, or just need to put debugging this issue on hold, you can disable this specific profile (there are other ways of doing this, but this method works best):

$ sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.tcpdump
$ sudo ln -s /etc/apparmor.d/usr.sbin.tcpdump /etc/apparmor.d/disable/usr.sbin.tcpdump

What that does is remove the tcpdump profile from the kernel, then disable the profile so that AppArmor won’t load it when it is started (eg, on reboot). Now you can use the application without AppArmor protection, while leaving all the other applications with profiles protected.
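When you are ready to turn the profile back on, reverse the two steps:

$ sudo rm /etc/apparmor.d/disable/usr.sbin.tcpdump
$ sudo apparmor_parser -a /etc/apparmor.d/usr.sbin.tcpdump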

If you are technically minded, dive into /etc/apparmor.d/usr.sbin.tcpdump and adjust the profile, then reload it:

$ sudo <your favorite editor> /etc/apparmor.d/usr.sbin.tcpdump
$ sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.tcpdump

This will likely be an iterative process, but you can base your new or updated rules on what is already in the profile; it is pretty straightforward. After a couple of times it will be second nature, and you might want to start contributing to developing new profiles. Once the profile is working for you, please add your proposed fix to the bug report you filed earlier.
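For the tcpdump denial shown earlier, the fix might be as small as one rule inside the profile’s braces (a sketch; ‘w’ would also work, since write access implies append):

  /opt/foo.out a,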

The DebuggingApparmor page has information on how to triage, fix and work-around AppArmor profile bugs. To learn more about AppArmor and the most frequently used access rules, install the apparmor-docs package, and read /usr/share/doc/apparmor-docs/techdoc.pdf.gz.

1. For those of you who don’t know, AppArmor is a path-based (as opposed to SELinux, which is inode-based) mandatory access control (MAC) system that limits the access a process has to a predefined set of files and operations. These access controls are known as ‘profiles’ in AppArmor parlance.


Posted in security, ubuntu, ubuntu-server

Read more
jdstrand

It all started back in the good ol’ days of the Jaunty development cycle when I heard this newfangled filesystem thingy called ext4 was going to be an option in Jaunty. It claimed to be faster, with much shorter fsck times. So, like any good Ubuntu developer, I tried it. It was indeed noticeably faster, and fsck times were much improved.

The honeymoon was over when I hit bug #317781. That was no fun, as it ate several virtual machines and quite a few other things (I had backups for all but the VMs). This machine is on a UPS, uses RAID1, and runs on modern hardware (a dual-core Intel system with 4GB of RAM). In other words, this is not some flaky system, but one that normally is only rebooted when there is a kernel upgrade (well, that is a white lie, but you get the point: it is a stable machine). According to Ted Ts’o, I shouldn’t be seeing this. Frustrated, I spent the better part of a weekend shuffling disks around to try to move my data off of ext4 and reformat the drive back to ext3. I was, how shall I put it, disappointed.

Some welcome patches were applied to Jaunty’s kernel soon after that to make ext4 behave more like ext3. By all accounts this stops ext4 from eating files under adverse conditions. So, now not only does the filesystem perform well, it doesn’t eat files. Life was good…

… until I noticed that under certain conditions I would get a total system freeze. Naturally, there was nothing in the logs (something I always appreciate ;). I thought it might have been several things, but I couldn’t prove any of them. Yesterday, however, I was able to reliably freeze my system. Basically, I was compiling a Jaunty kernel (2.6.28) in a schroot and using this command:

$ CONCURRENCY_LEVEL=2 AUTOBUILD=1 NOEXTRAS=1 skipabi=true fakeroot debian/rules binary-generic

Things were going along fine until I tried to delete a ton of files in a deep directory during the compile, at which point the system froze. I was able to reproduce this 3 times in a row. Finally, I shuffled some things around, put my /home partition back on ext3, and could not reproduce the freeze. There are several bugs in Launchpad talking about ext4 and system freezes, and after a bit more research I will add my comments, but for now I am simply hopeful that I will not see the freezes any more.

To be fair, ext4 is not the default filesystem on 9.04, and while it is supposed to be in 9.10, people I know running ext4 on 9.10 aren’t seeing these problems (yes, I’ve asked around). I do continue to use ext4 for ‘/’ on Jaunty systems, with a separate /home partition on ext3, because it really does perform better, and this seems to be a good compromise. Having been burned by ext4 a couple of times now, I think it’ll probably be a while before I trust ext4 with my important data, though. Time will tell. :)


Posted in ubuntu, ubuntu-server

Read more
jdstrand

While not exactly news, since it happened sometime last month, ufw is now in Debian and is even available in Squeeze. What is new is that the fine folks in Debian have started translating ufw’s debconf strings, and in the process the strings themselves have gotten much better. Thanks Debian!

In other news, ufw trunk now has support for filtering by interface. To use it, do something like:

$ sudo ufw allow in on eth0 to any port 80

See the man page for more information. This feature will be in ufw 0.28, which is targeted for Ubuntu Karmic. I also hope to add egress filtering this cycle; I haven’t started on it yet, but I have a good idea of how to proceed. Stay tuned!
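For completeness, interface rules can be scoped and removed like any other rule (a sketch, assuming the 0.28 syntax matches trunk; the second command deletes the rule from the example above):

$ sudo ufw allow in on eth0 from 192.168.0.0/24 to any port 22
$ sudo ufw delete allow in on eth0 to any port 80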


Posted in security, ubuntu, ubuntu-server

Read more