Canonical Voices

Louis

I have seen this setup documented in a few places, but not for Ubuntu, so here goes.

I have used this many times to verify or diagnose Device Mapper Multipath (DM-MPIO), since it is rather easy to fail a path by switching off one of the network interfaces. Nowadays, I use two KVM virtual machines with two NICs each.

These steps have been tested on Ubuntu 12.04 (Precise) and Ubuntu 14.04 (Trusty). The DM-MPIO section is mostly a cut and paste of the Ubuntu Server Guide.

The virtual machine that will act as the iSCSI target provider is called PreciseS-iscsitarget. The VM that will connect to the target is called PreciseS-iscsi. Each one is configured with two network interfaces (NICs) that get their IP addresses from DHCP. Here is an example of the network configuration file:

$ cat /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp
#
auto eth1
iface eth1 inet dhcp

The second NIC resolves to the same hostname with a "2" appended (i.e. PreciseS-iscsitarget2 and PreciseS-iscsi2).

Setting up the iSCSI Target VM

This is done by installing the following packages :

$ sudo apt-get install iscsitarget iscsitarget-dkms

Edit /etc/default/iscsitarget and change the following line to enable the service :

ISCSITARGET_ENABLE=true

We now proceed to create an iSCSI target (i.e. the disk). This is done by creating a 50 GB sparse file that will act as our disk:

$ sudo dd if=/dev/zero of=/home/ubuntu/iscsi_disk.img count=0 obs=1 seek=50G
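
Since the file is sparse, it only consumes disk blocks as data gets written to it. A quick way to confirm this is to compare the apparent size with the space actually allocated:

$ ls -lh /home/ubuntu/iscsi_disk.img    # apparent size: 50G
$ du -h /home/ubuntu/iscsi_disk.img     # allocated size: close to zero until data is written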

This container is used in the definition of the iSCSI target. Edit the file /etc/iet/ietd.conf. At the bottom, add :

Target iqn.2014-09.PreciseS-iscsitarget:storage.sys0
        Lun 0 Path=/home/ubuntu/iscsi_disk.img,Type=fileio,ScsiId=lun0,ScsiSN=lun0

The iSCSI target service must be restarted for the new target to be accessible:

$ sudo service iscsitarget restart


Setting up the iSCSI initiator

To be able to access the iSCSI target, only one package is required :

$ sudo apt-get install open-iscsi

Edit /etc/iscsi/iscsid.conf changing the following:

node.startup = automatic

This will ensure that the iSCSI targets that we discover are enabled automatically upon reboot.

Now we will proceed to discover and connect to the device that we set up in the previous section:

$ sudo iscsiadm -m discovery -t st -p PreciseS-iscsitarget
$ sudo iscsiadm -m node --login
$ dmesg | tail
[   68.461405] iscsid (1458): /proc/1458/oom_adj is deprecated, please use /proc/1458/oom_score_adj instead.
[  189.989399] scsi2 : iSCSI Initiator over TCP/IP
[  190.245529] scsi 2:0:0:0: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
[  190.245785] sd 2:0:0:0: Attached scsi generic sg0 type 0
[  190.249413] sd 2:0:0:0: [sda] 104857600 512-byte logical blocks: (53.6 GB/50.0 GiB)
[  190.250487] sd 2:0:0:0: [sda] Write Protect is off
[  190.250495] sd 2:0:0:0: [sda] Mode Sense: 77 00 00 08
[  190.251998] sd 2:0:0:0: [sda] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[  190.257341]  sda: unknown partition table
[  190.258535] sd 2:0:0:0: [sda] Attached SCSI disk

We can see in the dmesg output that the new device /dev/sda has been discovered. Partition the new disk and create a file system on it, then verify that everything is correct by mounting and unmounting the new file system:

$ sudo fdisk /dev/sda
n        # create a new partition
p        # primary
1        # partition number 1
<ret>    # accept the default first sector
<ret>    # accept the default last sector
w        # write the partition table and exit
$ sudo mkfs -t ext4 /dev/sda1
$ sudo mount /dev/sda1 /mnt
$ sudo umount /mnt
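
Optionally, the iSCSI session itself can be inspected from the initiator side; a quick check:

$ sudo iscsiadm -m session          # lists active sessions with portal and target IQN
$ sudo iscsiadm -m session -P 3     # also shows the attached SCSI devices for each session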

 

Setting up DM-MPIO

Since each of our virtual machines has been configured with two network interfaces, it is possible to reach the iSCSI target through the second interface:

$ sudo iscsiadm -m discovery -t st -p PreciseS-iscsitarget2
192.168.1.193:3260,1 iqn.2014-09.PreciseS-iscsitarget:storage.sys0
192.168.1.43:3260,1 iqn.2014-09.PreciseS-iscsitarget:storage.sys0
$ iscsiadm -m node -T iqn.2014-09.PreciseS-iscsitarget:storage.sys0 --login

Now that we have two paths toward our iSCSI target, we can proceed to setup DM-MPIO.

First of all, a /etc/multipath.conf file must exist.  Then we install the needed package :

$ sudo -s
# cat << EOF > /etc/multipath.conf
defaults {
        user_friendly_names yes
}
EOF
# exit
$ sudo apt-get -y install multipath-tools

Two paths to the iSCSI device created previously need to exist for the multipath device to be seen.

# multipath -ll
mpath0 (149455400000000006c756e30000000000000000000000000) dm-2 IET,VIRTUAL-DISK
size=50G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 4:0:0:0 sda 8:0   active ready  running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:0 sdb 8:16  active ready  running

The two paths are indeed visible. We can move forward and verify that the partition table created previously is accessible :

$ sudo fdisk -l /dev/mapper/mpath0

Disk /dev/mapper/mpath0: 53.7 GB, 53687091200 bytes
64 heads, 32 sectors/track, 51200 cylinders, total 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0e5e5db1

              Device Boot      Start         End      Blocks   Id  System
/dev/mapper/mpath0p1            2048   104857599    52427776   83  Linux

All that remains is to add an entry to the /etc/fstab file so the file system that we created is mounted automatically at boot. Notice the _netdev option: it is required, otherwise the iSCSI device will not be mounted.

$ sudo -s
# cat << EOF >> /etc/fstab
/dev/mapper/mpath0-part1        /mnt    ext4    defaults,_netdev        0 0
EOF
# exit
$ sudo mount -a
$ df /mnt
Filesystem               1K-blocks   Used Available Use% Mounted on
/dev/mapper/mpath0-part1  51605116 184136  48799592   1% /mnt
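
This also makes it easy to exercise the failover: bringing one of the initiator's interfaces down should fail the corresponding path while I/O keeps flowing through the other one. A minimal sketch of such a test (interface and device names match the example above; adjust them to your setup):

$ sudo ifdown eth1                 # drop the second path
$ sudo multipath -ll mpath0        # one path should now be reported as failed/faulty
$ sudo dd if=/dev/mapper/mpath0-part1 of=/dev/null bs=1M count=100   # I/O still goes through
$ sudo ifup eth1                   # restore the path
$ sudo multipath -ll mpath0        # both paths should eventually return to active/ready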

Read more
Louis

After three years of hard work, it is time to retire my 8440p and let the family enjoy its availability. For my new workhorse, I have chosen the HP Elitebook EVO 850, which fits my budget and performance requirements.

TL;DR : The Elitebook EVO 850 basic functionalities work fine with Ubuntu 14.04.1

Before hosing the Windows 7 installation, I thought of testing the basic functionalities. So after booting Win7, I checked that most of the things (sound, lights, webcam, etc.) did work as expected.

Never underestimate the power of the bug : if there is some hardware issue, then it is better to do a first diagnostic on Windows. The HP tech will love you for that (been there, done that). Otherwise, there will always be a doubt that Ubuntu is the culprit & they will not try to look any further.

After a successful Windows boot, I created a bootable USB stick with the latest Ubuntu release on it to verify that Ubuntu itself runs fine. No need to wipe out Windows and install Ubuntu on it only to find out that the hardware fails miserably. Here is the command I used to create the bootable USB stick, since the USB creator has been buggy for years on Ubuntu :

$ sudo dd if=ubuntu-14.04.1-desktop-amd64.iso of=/dev/sdc bs=4M
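
One word of caution: dd will overwrite whatever /dev/sdc happens to be, so double-check the device name before running it and flush the buffers afterwards. A quick sketch (the device name is only an example from my machine):

$ lsblk -o NAME,SIZE,MODEL    # the USB stick should show up with the expected size and model
$ sync                        # after dd completes, make sure everything is flushed before removing the stick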

One important note: this laptop comes factory installed with a Secure Boot configuration in the BIOS. I did not have to change anything to boot Ubuntu, so you should not have to either.

Since everything looked good, I went ahead, restarted the laptop and installed Ubuntu Trusty Tahr 14.04.1, using a full disk install with full disk encryption. Installation was flawless and completed in less than five minutes, thanks to the 250 GB SSD drive!

Read more
Louis

A couple of weeks ago I announced that I was working on a new remote functionality for kdump-tools, the kernel crash dump tool used on Debian and Ubuntu.

I am now done with the development of the new functionality, so the package is ready for testing. If you are interested, just read the previous post which has all the gory details on how to set it up & test it.

Read more
Louis

A few years ago, I started to participate in the packaging of makedumpfile and kdump-tools for Debian and Ubuntu. I am currently applying for the formal status of Debian Maintainer to continue that task.

For a while now, I have been noticing that our version of the kernel dump mechanism was lacking a functionality that has been available on RHEL & SLES for a long time: remote kernel crash dumps. On those distributions, it is possible to define a remote server to be the receptacle of the kernel dumps of other systems. This can be useful for centralization or to capture dumps on systems with limited or no local disk space.

So I am proud to announce the first functional beta-release of kdump-tools with remote kernel crash dump functionality for Debian and Ubuntu !

For those of you eager to test or not interested in the details, you can find a packaged version of this work in a Personal Package Archive (PPA) here :

https://launchpad.net/~louis-bouchard/+archive/networked-kdump

New functionality : remote SSH and NFS

In the current version available in Debian and Ubuntu, the kernel crash dumps are stored on local filesystems. Starting with version 1.5.1, they are stored in a timestamped directory under /var/crash. The new functionality allows you to define either a remote host accessible through SSH or an NFS mount point as the receptacle for the kernel crash dumps.

A new section of the /etc/default/kdump-tools file has been added :

# ---------------------------------------------------------------------------
# Remote dump facilities:
# SSH - username and hostname of the remote server that will receive the dump
# and dmesg files.
# SSH_KEY - Full path of the ssh private key to be used to login to the remote
# server. use kdump-config propagate to send the public key to the
# remote server
# HOSTTAG - Select if hostname or IP address will be used as a prefix to the
# timestamped directory when sending files to the remote server.
# 'ip' is the default.
# NFS - Hostname and mount point of the NFS server configured to receive
# the crash dump. The syntax must be {HOSTNAME}:{MOUNTPOINT} 
# (e.g. remote:/var/crash)
#
# SSH="<user@server>"
#
# SSH_KEY="<path>"
#
# HOSTTAG="hostname|[ip]"
# 
# NFS="<nfs mount>"
#

The kdump-config command also gains a new option, propagate, which is used to send a public ssh key to the remote server so passwordless ssh commands can be issued to the remote SSH host.

Those options and commands are nothing new: I simply based my work on existing functionality from RHEL & SLES. So if you are well acquainted with the RHEL remote kernel crash dump mechanism, you will not be lost on Debian and Ubuntu. I want to thank those who built the functionality on those distributions; it was a great help in getting it ported to Debian.

Testing on Debian

First of all, you must enable the kernel crash dump mechanism at the kernel level. I will not go into details as it is slightly off topic, but you should:

  1. Add crashkernel=128M to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub (see the sketch below)
  2. Run update-grub
  3. Reboot
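
For step 1, the relevant line in /etc/default/grub ends up looking something like this (a sketch; the other options on the line will vary from system to system), after which update-grub and a reboot take care of steps 2 and 3:

GRUB_CMDLINE_LINUX_DEFAULT="quiet crashkernel=128M"
$ sudo update-grub
$ sudo reboot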

Install the beta packages

The package in the PPA can be installed on Debian with add-apt-repository. This command is in the software-properties-common package so you will have to install it first :

$ apt-get install software-properties-common
$ add-apt-repository ppa:louis-bouchard/networked-kdump

Since you are on Debian, the result of the last command will be wrong, as the series defined in the PPA is for Utopic. Just use the following command to fix that:

$ sed -i -e 's/sid/utopic/g' /etc/apt/sources.list.d/louis-bouchard-networked-kdump-sid.list 
$ apt-get update
$ apt-get install kdump-tools makedumpfile

Configure kdump-tools for remote SSH capture

Edit the file /etc/default/kdump-tools and enable the kdump mechanism by setting USE_KDUMP to 1. Then set the SSH variable to the remote hostname & credentials that you want to use to send the kernel crash dump. Here is an example:

USE_KDUMP=1
...
SSH="ubuntu@TrustyS-netcrash"

You will need to propagate the ssh key to the remote SSH host, so make sure that you have the password of the remote server’s user you defined (ubuntu in my case) for this command :

root@sid:~# kdump-config propagate
Need to generate a new ssh key...
The authenticity of host 'trustys-netcrash (192.168.122.70)' can't be established.
ECDSA key fingerprint is 04:eb:54:de:20:7f:e4:6a:cc:66:77:d0:7c:3b:90:7c.
Are you sure you want to continue connecting (yes/no)? yes
ubuntu@trustys-netcrash's password: 
propagated ssh key /root/.ssh/kdump_id_rsa to server ubuntu@TrustyS-netcrash

If you have an existing ssh key that you want to use, you can use the SSH_KEY option to point to your own key in /etc/default/kdump-tools :

SSH_KEY="/root/.ssh/mykey_id_rsa"

Then run the propagate command as previously :

root@sid:~/.ssh# kdump-config propagate
Using existing key /root/.ssh/mykey_id_rsa
ubuntu@trustys-netcrash's password: 
propagated ssh key /root/.ssh/mykey_id_rsa to server ubuntu@TrustyS-netcrash

It is a safe practice to verify that the remote SSH host can be accessed without password. You can use the following command to test (with your own remote server as defined in the SSH variable in /etc/default/kdump-tools) :

root@sid:~/.ssh# ssh -i /root/.ssh/mykey_id_rsa ubuntu@TrustyS-netcrash pwd
/home/ubuntu

If the passwordless connection can be achieved, then everything should be all set. You can proceed with a real crash dump test if your setup allows for it (not a production environment for instance).
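
A common way to trigger such a test crash is the magic SysRq interface; a minimal sketch (only on a disposable test system, and assuming SysRq is not disabled):

root@sid:~# echo 1 > /proc/sys/kernel/sysrq
root@sid:~# echo c > /proc/sysrq-trigger

After the automatic reboot, the dump and dmesg files should show up in a timestamped directory on the remote SSH host.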

Configure kdump-tools for remote NFS capture

Edit the /etc/default/kdump-tools file and set the NFS variable with the NFS mount point that will be used to transfer the crash dump :

NFS="TrustyS-netcrash:/var/crash"

The format must be the syntax that would normally be used to mount the NFS filesystem. You should test that your NFS filesystem is indeed accessible by mounting it manually:

root@sid:~/.ssh# mount -t nfs TrustyS-netcrash:/var/crash /mnt
root@sid:~/.ssh# df /mnt
Filesystem 1K-blocks Used Available Use% Mounted on
TrustyS-netcrash:/var/crash 6815488 1167360 5278848 19% /mnt
root@sid:~/.ssh# umount /mnt
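
If the mount fails, the first thing to check is the export configuration on the receiving server. A minimal sketch of what /etc/exports could look like on the remote host (hostnames and options are assumptions; adjust them to your environment):

/var/crash    sid(rw,no_root_squash,sync,no_subtree_check)

root@TrustyS-netcrash:~# exportfs -ra     # re-read /etc/exports on the server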

Once you are sure that your NFS setup is correct, then you can proceed with a real crash dump test.

Testing on Ubuntu

As you would expect, setting things on Ubuntu is quite similar to Debian.

Install the beta packages

The packages in the PPA can be installed on Ubuntu with add-apt-repository. This command is in the software-properties-common package, so install it first if it is not already present:

$ sudo add-apt-repository ppa:louis-bouchard/networked-kdump

Packages are available for Trusty and Utopic.

$ sudo apt-get update
$ sudo apt-get -y install linux-crashdump

Configure kdump-tools for remote SSH capture

Edit the file /etc/default/kdump-tools and enable the kdump mechanism by setting USE_KDUMP to 1. Then set the SSH variable to the remote hostname & credentials that you want to use to send the kernel crash dump. Here is an example:

USE_KDUMP=1
...
SSH="ubuntu@TrustyS-netcrash"

You will need to propagate the ssh key to the remote SSH host, so make sure that you have the password of the remote server’s user you defined (ubuntu in my case) for this command :

ubuntu@TrustyS-testdump:~$ sudo kdump-config propagate
[sudo] password for ubuntu: 
Need to generate a new ssh key...
The authenticity of host 'trustys-netcrash (192.168.122.70)' can't be established.
ECDSA key fingerprint is 04:eb:54:de:20:7f:e4:6a:cc:66:77:d0:7c:3b:90:7c.
Are you sure you want to continue connecting (yes/no)? yes
ubuntu@trustys-netcrash's password: 
propagated ssh key /root/.ssh/kdump_id_rsa to server ubuntu@TrustyS-netcrash
ubuntu@TrustyS-testdump:~$

If you have an existing ssh key that you want to use, you can use the SSH_KEY option to point to your own key in /etc/default/kdump-tools :

SSH_KEY="/root/.ssh/mykey_id_rsa"

Then run the propagate command as previously :

ubuntu@TrustyS-testdump:~$ kdump-config propagate
Using existing key /root/.ssh/mykey_id_rsa
ubuntu@trustys-netcrash's password: 
propagated ssh key /root/.ssh/mykey_id_rsa to server ubuntu@TrustyS-netcrash

It is a safe practice to verify that the remote SSH host can be accessed without password. You can use the following command to test (with your own remote server as defined in the SSH variable in /etc/default/kdump-tools) :

ubuntu@TrustyS-testdump:~$ sudo ssh -i /root/.ssh/mykey_id_rsa ubuntu@TrustyS-netcrash pwd
/home/ubuntu

If the passwordless connection can be achieved, then everything should be all set.

Configure kdump-tools for remote NFS capture

Edit the /etc/default/kdump-tools file and set the NFS variable with the NFS mount point that will be used to transfer the crash dump :

NFS="TrustyS-netcrash:/var/crash"

The format must be the syntax that would normally be used to mount the NFS filesystem. You should test that your NFS filesystem is indeed accessible by mounting it manually (you might need to install the nfs-common package):

ubuntu@TrustyS-testdump:~$ sudo mount -t nfs TrustyS-netcrash:/var/crash /mnt 
ubuntu@TrustyS-testdump:~$ df /mnt
Filesystem 1K-blocks Used Available Use% Mounted on
TrustyS-netcrash:/var/crash 6815488 1167488 5278720 19% /mnt
ubuntu@TrustyS-testdump:~$ sudo umount /mnt

Once you are sure that your NFS setup is correct, then you can proceed with a real crash dump test.

Miscellaneous commands and options

A few other things are under the control of the administrator.

The HOSTTAG modifier

When sending the kernel crash dump, kdump-config will use the IP address of the server as a prefix to the timestamped directory on the remote host. You can use the HOSTTAG variable to change that default. Simply define it in /etc/default/kdump-tools:

HOSTTAG="hostname"

The hostname of the server will be used as a prefix instead of the IP address.

Currently, this is only implemented for the SSH method, but it will be available for NFS as well in the final version.

kdump-config show

To verify the configuration that you have defined in /etc/default/kdump-tools, you can use kdump-config’s show command to review your options.

ubuntu@TrustyS-testdump:~$ sudo kdump-config show
USE_KDUMP: 1
KDUMP_SYSCTL: kernel.panic_on_oops=1
KDUMP_COREDIR: /var/crash
crashkernel addr: 0x2d000000
SSH: ubuntu@TrustyS-netcrash
SSH_KEY: /root/.ssh/kdump_id_rsa
HOSTTAG: ip
current state: ready to kdump
kexec command:
 /sbin/kexec -p --command-line="BOOT_IMAGE=/vmlinuz-3.13.0-24-generic root=/dev/mapper/TrustyS--vg-root ro console=ttyS0,115200 irqpoll maxcpus=1 nousb" --initrd=/boot/initrd.img-3.13.0-24-generic /boot/vmlinuz-3.13.0-24-generic

If the remote kernel crash dump functionality is set up, you will see the corresponding options listed in the output of the command.

Conclusion

As outlined at the beginning, this is the first functional beta version of the code. If you are curious, you can find the code I am working on here :

http://anonscm.debian.org/gitweb/?p=collab-maint/makedumpfile.git;a=shortlog;h=refs/heads/networked_kdump_beta1

Don’t hesitate to test & let me know if you find issues.

Read more
caribou

Juju, maas and agent-name

While working on a procedure with colleagues, I ran into an issue that I prefer to document somewhere. Since my blog has been sleeping for close to a year, I thought I’d put it here.

I was running a set of tests on a version of juju that will resemble 1.16.4. This version, like every one after 1.16.2, implements the agent-name identifier that is, as I understand it, used to discriminate between multiple bootstrapped environments.

I had a bootstrapped environment on a maas 1.4 server (1.4+bzr1693+dfsg-0ubuntu2~ctools0) with a few services running on two machines. Many hours of testing led us to identify that even though juju was using agent-name, it appeared that maas was not.

After upgrading to the latest and greatest version of maas (1.4+bzr1693+dfsg-0ubuntu2.2~ctools0), I ran a "juju status" and to my dismay I got this:

$ juju status

Please check your credentials or use ‘juju bootstrap’ to create a new environment.

Error details:

no instances found

Using --show-log to get a bit more information gave me the following:

$ juju status --show-log

2013-11-28 15:33:05 ERROR juju supercommand.go:282 Unable to connect to environment "maas14".

Please check your credentials or use ‘juju bootstrap’ to create a new environment.

 

Error details:

no instances found

Going to the maas server to look at a few logs, I found that juju was indeed sending the agent-name to maas:

GET /MAAS/api/1.0/files/97fbdbaa-c727-4f19-8710-dbc6d361e8db-provider-state/ HTTP/1.1" 200 516 "-" "Go 1.1 package http"

192.168.124.1 - - [28/Nov/2013:16:33:32 +0100] "GET /MAAS/api/1.0/nodes/?agent_name=97fbdbaa-c727-4f19-8710-dbc6d361e8db&id=node-17ad8da0-5361-11e3-bd0a-525400fd96d7&op=list HTTP/1.1" 200 203 "-" "Go 1.1 package http"

Well, it turns out that after the upgrade of the maas packages, maas starts to honour the agent-name; but since the instances were created without any agent-name, it was no longer able to return them.

Raphael Badin, who works on the maas team, suggested using the following maas shell commands to fix things up:

$ sudo maas shell

>>> from maasserver.models import Node
>>> # This assumes one wants to update *all* the nodes present in this MAAS.
>>> Node.objects.all().update(agent_name="AGENT_NAME_FROM_JUJU")

Be very careful with such a command if your maas environment is in production as this will change all the nodes present in MAAS.

Read more
Louis

Planes, Trains & Automobiles

Ok, the title shows my age. But a Facebook post from a colleague (oh yeah, who happens to be my boss), writing his thoughts on a train to Québec City, made me think of my own experience of today, sitting in a plane for 8 hours on my way to Montréal.

Not because we were late leaving by one hour (that was only lightened by the pilot’s humour in describing, in almost real time, the reasons for us being late: a mixup in the baggage loading, then nobody to remove the boarding gate, then the guy who’s supposed to move the plane out of the boarding area just left without notice). But because this plane was taking me from the country I elected to live in to the country where I was born, to the city where I spent some of my best time.

Flying for me is rarely a burden, once I’m in the plane. Actually, once I’ve cleared security and am waiting to board. Then the calm reaches me; all I need to do is sit and be taken care of. Even when, while watching the beginning of "The Fight Club" on the onboard video system, I watch an "in flight collision", three days after dreaming of experiencing an airline crash on take off. I am not a frequent flyer by any measure, but air travelling is not a problem for me.

But every now and then, I get to fly back home from home. I get to return to Montréal, Québec, from Le Chesnay, France. This does often put me in curious positions. Like an hour ago when, in the hotel’s elevator, I meet two people from France. I recognise the accent, then automatically switch to my "France" accent and behave just like any other visiting person from France. But earlier, coming back from the airport, I talked to the taxi driver as any other québécois would have done. I do like this situation. It makes me think that I have taken the best out of both worlds. I also remember this big map of France that I put up on my apartment wall, back when going to France was not even a possibility yet. Back then it was a wish.

It also takes me away from my family, my two beloved daughters and Nathalie, the woman I love. And brings me back to my family, my two brothers and my parents who still live here. Even though I am briefly far away from the family I have helped build, I am also briefly closer to the family who saw me grow up. Airplanes get me to go from one to the other in only a few hours.

Automobiles will not see me much here. This may be a proof that I am only a visitor here nowadays.  I used to drive the streets of Montréal daily, without hesitation, knowing where to go.  I have this feeling driving in the Paris area nowadays.  I’m much more familiar with Versailles and Paris than I would be in Laval or the east end of Montréal.  But if I was to come back here, it would all come back very quickly.

Planes, trains & Automobiles take us to our lives at the speed at which they evolve, or at the distances where they happen.

Read more
caribou

For historical reasons, I have been installing my Ubuntu laptop on a fully encrypted disk for years.  Up until now, I needed to use the alternate CD since this was the only possibility to install with full disk encryption.

This is no longer the case. If you want to use full disk encryption with Quantal Quetzal, you can use the standard installation CD and you will be presented with the corresponding options during the disk setup.

You can then select the "Encrypt the new Ubuntu installation for security" option to request full disk encryption. Alternatively, you can elect to use LVM as I did, but this is not a requirement in order to get full disk encryption.

Kudos to the Ubuntu development team for making this option so simple now !

Read more
caribou

If you are part of those people who are reluctant to upgrade to newer kernels, here is an example of how this can make your life miserable roughly every 208 days.

There is a specific kernel bug in Lucid that will provoke a kernel panic after 208 days of uptime, which is not an unusual uptime for a server (or a cloud instance?). Here is the kernel GIT commit related to this:

http://kernel.ubuntu.com/git?p=ubuntu/ubuntu-lucid.git;a=commit;h=595378ac1dd449e5c379bf6caa9cdfab008974c8

This has been fixed in the Ubuntu kernel since 2.6.32-38, released months ago, but if you prefer not to upgrade to newer kernels on Lucid, you will be hit by this bug.
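
A quick way to check whether a given Lucid machine is exposed is to look at the running kernel version and its current uptime:

$ uname -r     # anything older than 2.6.32-38 is affected
$ uptime       # shows how close the machine is to the ~208 day mark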

Read more
caribou

One nice thing about Banshee was the seamless integration of the Amazon MP3 store. Since I reinstalled my laptop on Precise, I now have Rhythmbox instead of Banshee, which does not seem to offer the same kind of integration.

Luckily for me, I found a nice little hack that will help me get my favorite non-DRMed MP3 files on Ubuntu :

http://ubuntuforums.org/showthread.php?t=1955149

In short, install clamz, which will allow you to read & download the .amz files that the Amazon music store sends when you buy music.
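
Usage is straightforward; a short sketch (the .amz file name is only an example):

$ sudo apt-get install clamz
$ clamz ~/Downloads/AmazonMP3-download.amz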

Hope this helps

Read more
caribou

One year ago, I worked my last day after thirteen years with Digital Equipment Corp, which became Compaq, then HP. After starting on Digital Unix/Tru64, I had evolved to a second level support position in the Linux Global Competency Center.

In a few days, on the 18th, I will have completed my first full year as a Canonical employee. I think it is time to take a few minutes to look back at that year.

Coming from a RHEL/SLES environment with a bit of Debian, my main asset was the fact that I had been an Ubuntu user since 5.04, using it as my sole operating system on my corporate laptop. The first week in the new job was also a peculiar experience, as it brought me back to my native country and to Montréal, a city that I love and where I lived for three years. So I was not totally lost in my new environment. I also had the chance of ramping up my knowledge of Ubuntu Server, which was an easy task. What was more surprising, and became one of the most exciting parts of the new job, was working in a completely dedicated opensource environment from day one.

Rapidly, I became aware of the fact that participating in the Ubuntu community was not only possible, it was expected. That if I were to find a bug, I needed to report it and, if possible, find ways to fix it. In my previous job I was looking for existing solutions, or bringing enough elements to my L3 counterparts that they would be able to request a fix from Red Hat or Novell. Here, if I was able to identify the problem and suggest a solution, I was encouraged to propose it as the final fix. I also rapidly found out that the developers were no longer the remote engineers in some changelog file, but IRC nicks that I could chat with and eventually meet.

Then came OpenStack in the summer: a full week of work with colleagues aimed at getting to know the technology, trying to master concepts that were very vague back then and making things work. Getting the Swift object store up and running and trying to figure out how best this could be used. Here I was asked to do one of the things I like best: learning by getting things to work. This led to a better understanding of what a cloud architecture is all about and really made me understand how useful and interesting a cloud infrastructure can be. Oh, and I did get to build my first OpenStack cloud.

This was another of this past year’s great experiences: UDS-P. I had heard of UDS-O when I joined, but it was too early for me to attend. After six months around, it was time for UDS-P and, this time, I would be there. Not only did I have time to meet a good chunk of developers, but I also got a lot of work done. Like helping Michael Terry fix a bug in Deja-Dup that would only appear on localized systems, getting advice on fixing kdump from the kernel team and some of the Foundations engineers, and a whole lot more.

Then it was back to the normal work for our customers: fixing their issues, trying to help improve their support experience and getting better at what we do. And also seeing some of my fixes make it into our upcoming distribution, as well as back into the existing ones. This was a great thrill and an objective that I did not think would come by so fast.

Being part of the Ubuntu community has been a great addition to my career. This makes me want to do even more and get the best out of our collective efforts.

This was a great year. Sure hope that the next one will be even better.

Read more
caribou

Recently, I have realized a major difference in how customer support is done on Ubuntu.

As you know, Canonical provides official customer support for Ubuntu, both on server and desktop. This is the work I do: provide customers with the best level of support on the Ubuntu distribution. This is also what I was doing in my previous job, but for the Red Hat Enterprise Linux and SuSE Linux Enterprise Server distributions.

The major difference that I recently realized is that, unlike my previous work with RHEL & SLES, the result of my work is now available to the whole Ubuntu community, not just to the customers that pay for our support.

Here is an example. Recently, one of our customers identified a bug in vm-builder in a very specific case. The work that I did on this bug resulted in a patch that I submitted to the developers, who accepted its inclusion in the code. In my previous life, this fix would have been made available only to customers paying a subscription to the vendors, through their official update or service pack services.

With Ubuntu, through Launchpad and the regular community activity, this fix will become available to the whole community through the standard -updates channel of our public archives.

This is true for the vast majority of the fixes that are provided to our customers. As a matter of fact, the public archives are almost the only channel that we have to provide fixes to our customers, which makes them available to the whole Ubuntu community at the same time. This is a different way of working and something that makes me a bit prouder of the work I’m doing.

Read more
caribou

I’m not a big fan^c^c^c user of Twitter, but this morning, there was a retweet of an interesting blog post that I want to share here :

Remote worker vs distributed team

Being a remote worker myself working in a somewhat distributed team, I can very well relate to what the author is talking about. I thought that it was worth sharing in a different way.

Read more
caribou

Profession : support engineer

A while back, Mark talked about some of the professions in the computer industry. Unfortunately, I can’t seem to track back those articles. Instead of waiting for him to talk about the one I am part of, I thought of doing it myself.

In a train back from Paris, where I have spent the day doing an intensive session of troubleshooting for one of our customers, I am reminiscing about the work I have achieved today, the good decisions I took which helped us identify a potential cause for the suspend/resume issue that we were investigating.

First of all, this time I was not the one with the knowledge: Colin King was. I was there to assist and help in identifying why the laptop was not resuming from suspend. Since the customer plans to deploy 2000 units of this specific model, it is better to identify and fix those kinds of issues beforehand.

So we went in, him somewhere in England, me in Paris, both in a constant IRC chat, shooting ideas back and forth. Well, mostly him sending ideas and me shooting back the results and observations, things that I was seeing, important or not, trivial or what I thought was important. And in those situations where you are part of a chain and not necessarily the one with the most knowledge of the issue being investigated, it is very important to avoid judging the importance of the information that you provide.

Too often, the dreaded "oh, I thought that this wasn’t important so I didn’t bother to mention it" spell just brings tons of bad magic on the investigation efforts. I learned about it the hard way, being too often at the other end of the chain, trying to make sense of an investigation with the help of a more junior support colleague or a customer who, too many times, has more important things to do.

This time, one of the most important piece of information came about this way :

(12:08:29) caribou: it’s about lunch time, I may step out to grab a sandwich in the meantime
(12:12:08) cking: sure
(12:12:16) caribou: fyi, it doesn’t completely powers off
(12:12:30) cking: AH
(12:12:45) cking: this means it’s definitely a problem with the power management on the southbridge
(12:12:51) caribou: after the "shutdown", the screen stays on with [28. ....] Power down.
(12:13:19) cking: if it can’t S5 then we can do a S3 either, that’s the same power management register on the southbridge

By simply indicating that, after invoking the ‘shutdown’ command, the laptop did not power off but stayed powered on in some limbo, I brought to Colin’s attention a situation that for me was trivial, but for him was very important. I did not know that the suspend/resume functionality and the power off functionality use the same mechanism, the same power management register, and that if one did not work, the other couldn’t work either!

From that point on, we were able to target the whole power management subsystem.  After a few more tests on the operating system side, cking became convinced that it was a hardware issue. One easy way to find out is to swap the operating system and see if the problem persists. So here I go, installing Windows 7 on this laptop, installing the graphic driver to get the suspend functionality working to finally be able to test the same functionality on Windows 7.

One other important thing highlighted by cking was to make sure that Windows 7 was also using the ACPI functionalities and not the old APM, which would have rendered our test useless. A few minutes of searching with the support engineer’s best friend, the web search engine, pointed to many articles on Microsoft’s website talking about ACPI & Win7. That was sufficient to confirm that it was also using ACPI.

So I went ahead and actioned the "Suspend" functionality on Win7. The laptop’s screen went blank, but the power button kept blinking with a few more LEDs turned on, which was the main symptom on Ubuntu as well. This, and the fact that shutting down Windows 7 left the laptop in the same kind of half powered down limbo, convinced us that we were facing a hardware problem and not some bug in our operating system. The customer still has to find out if this is systematic on that model of Sandybridge laptop or if it is only happening on this single unit. But now they know where to look, especially since their backup plan was to deploy those laptops on Windows 7 instead of Ubuntu.

I wanted to write about today’s session because I think that it shows the kind of investigative work that is expected from a support engineer. Most of the time, we are not there to fix bugs, but to clearly identify them, to find ways to reproduce them as much as possible. Our job is to provide the data that will be necessary to identify the flaw and fix it. Sometimes, the first action is to find workarounds so the user can continue his work while we look for a more definitive solution. Some of us will even go further and suggest fixes to the developers, or at least help them in fixing those bugs.

A support engineer needs to be investigative, to keep an open mind, to avoid the mistake of being convinced that he already knows the answer before starting to look at the data.  He must be able to ask for help, to be humble about his knowledge and respect the knowledge of others.  And often, it is best to take a step back and look at a problem from different angles.  Sometimes the solution is easier to find from a few steps back.

I have been doing this job for almost fifteen years and I still get a kick out of identifying bugs, out of helping users with their issues, out of trying to make their computing experience as easy as it can be. I know that many of my colleagues feel the same about it. I also think that a support engineer is not just some sysadmin who doesn’t have the skills required to become a developer. I’ve seen a few too many devs digging in the source code, looking for answers to a problem, when simply opening the log files and searching for known issues was enough to identify and fix the problem.

And don’t get me wrong, I admire the skills of the developers and I’m still hoping to get more and more involved in work similar to theirs, but I also have great respect for all those support engineers that go after the next problem and find a fix for it.

Read more
caribou

Ambivalent

I have just discovered that I am now a tool, soon to be packaged in Oneiric. Looks like I might meet myself at the next UDS…

https://bugs.launchpad.net/ubuntu/+source/caribou/+bug/845300
:-)

Read more
caribou

While testing Oneiric on a separate disk, I wanted to get some files off my laptop’s hard drive, which hosts my normal Natty install. Keeping with a previous setup, I had installed my laptop with a fully encrypted hard disk, using the alternate CD, so I needed a procedure to do this manually.

Previously, I had tested booting the Natty LiveCD and, to my pleasant surprise, the Live CD did see the encrypted HD and proceeded to ask for the passphrase in order to mount it. But this time, I’m not running off the LiveCD, but from a complete install which is on a separate hard drive. Since it took me a while to locate the proper procedure, I thought that I would help google a bit so it is not so deep in the pagerank for others next time. But first, thanks to UbuntuGeek’s article "Rescue an encrypted LUKS LVM volume" for providing the solution.

Since creating an encrypted home directory is easily achieved with standard installation methods, there are many references on how to achieve it for an encrypted private directory. Dustin Kirkland’s blog is a very good source of information on those topics. But dealing with an encrypted partition requires a different approach. Here it is (at least for an encrypted partition done using the Ubuntu alternate DVD):

First of all, you need to make sure that the lvm2 and cryptsetup packages are installed. If not, go ahead and install them:

 # sudo aptitude install cryptsetup lvm2

Then verify that the dm-crypt module is loaded, and load it if it is not:

 # sudo modprobe dm-crypt

Once this is done, open the LUKS partition (using your own encrypted partition name):

 # sudo cryptsetup luksOpen /dev/sda3 crypt1

You will be asked to provide the passphrase that is used to unlock your encrypted partition here.

Once this is done, you must scan for the LVM volume groups :

 # sudo vgscan --mknodes
 # sudo vgchange -ay

There, you should get the name of the volume group that will be needed to mount the encrypted partition (which happens to be configured as an LVM volume). You can now proceed to mount your partition (replacing {volumegroup} with the name that you collected in the previous command):

# sudo mount /dev/{volumegroup}/root /mnt

Your encrypted data should now be available in the /mnt directory :-)
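
When you are done copying files, the same steps can be undone in reverse order so the disk is left as you found it; a short sketch:

# sudo umount /mnt
# sudo vgchange -an {volumegroup}
# sudo cryptsetup luksClose crypt1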

Read more
caribou

Oneiric first drive

Ok, I finally got a few minutes to start testing Oneiric Beta 1 on my HP Elitebook 8440p. Since it is the computer that I use for my daily job, I preferred to launch it using a USB key and not jeopardize my working Natty installation.

It is not for lack of faith toward my fellow developers, but rather because I would hate to be unable to serve our customers because of known bugs hitting my gear. So here are my first thoughts. I will follow up by visiting the “known bugs” section to see if the things that I encountered have already been fixed.

And since I’m using a live USB setup, my first test is on the packages found on the Beta1 DVD image. I didn’t get to run an update to get fresher packages yet.

Video

My first great pleasure is to realize that I no longer have to rely on the Nvidia proprietary drivers to get started. Of course, I only get access to Unity 2D, but at least I have an environment that is very similar to the 3D version (I really can’t see the difference). But I’m not a big user of animations and things like that. Another great thing is that I only had to go to the “Display” section, activate my second 19″ screen and set the appropriate resolution, and the second display started to work.

One thing though : the Dash menu which used to appear on the far left of the leftmost screen in Natty (normal behavior) now appears on the left of the laptop’s screen (my 2nd screen is at the left of the laptop) so the Dash selection area is in the middle of my workspace.

Peripherals

Wireless works out of the box, but this is nothing new. Same with the built-in webcam. As I already indicated, Dual-screen is also working out of the box, but apparently without the 3D acceleration.

User Interface

The first thing I noticed after hitting the “Super” key and typing “term” to get a CLI environment is that, after hitting <TAB> to select the terminal icon, I no longer get visual feedback (i.e. the icon doesn’t get highlighted). It still seems to work: if I hit <Return>, the terminal does get launched. Maybe a side effect of using Unity 2D.

As with the Dash menu, the lenses do appear on the right screen (the laptop screen), unlike with Natty. One thing that is no longer working is the workspace switcher. Clicking on it in the Dash does display all four workspaces, but when I select any workspace, it always brings me back to the first one. Might be some wrong setup.

Software

So far, I have had a few crashes (software library, Compiz manager) but most things seem to work. The software library crashes have forced me to drop back to the CLI to get things installed. UbuntuOne works flawlessly.

It is now time to go back to my daytime job, which is fixing our customers’ problems. Hopefully, I’ll have some more time to explore Oneiric before it ships out. But I really like my first impression.

Read more
caribou

Live from my phone

The fun of being a geek is to find silly things to do, like to try to blog from an Android phone.

Well, using the power of open source (wordpress & Android), it looks like it will be easier than expected (aside from the virtual keyboard that makes typing a bit tedious).

Kind of nice…

 

Read more
caribou

Remember that day ?

Not that it is a very original post, but it was 20 years ago.

Linus Benedict Torvalds
Aug 26 1991, 8:12 am

Hello everybody out there using minix -

I’m doing a (free) operating system (just a hobby, won’t be big and
professional like gnu) for 386(486) AT clones. This has been brewing
since april, and is starting to get ready. I’d like any feedback on
things people like/dislike in minix, as my OS resembles it somewhat
(same physical layout of the file-system (due to practical reasons)
among other things).

I’ve currently ported bash(1.08) and gcc(1.40), and things seem to work.
This implies that I’ll get something practical within a few months, and
I’d like to know what features most people would want. Any suggestions
are welcome, but I won’t promise I’ll implement them :-)

Linus (torva…@kruuna.helsinki.fi)

PS. Yes – it’s free of any minix code, and it has a multi-threaded fs.
It is NOT protable (uses 386 task switching etc), and it probably never
will support anything other than AT-harddisks, as that’s all I have :-( .

Read more
caribou

Having a Linux based distribution as the operating system on my laptop has many advantages. One of them is to be able to easily run an apache instance and use my laptop as a local webserver. This has allowed me to run my very own private mediawiki instance on my laptop and to use it as a note keeping engine; the landing page in my web browser is simply my wiki’s main page.
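
The setup itself is nothing fancy. A rough sketch of how such an instance can be installed from the archive (the exact steps depend on the mediawiki package version, so treat this as an outline rather than a recipe):

$ sudo apt-get install apache2 mediawiki
$ sudo sed -i 's/^#Alias /Alias /' /etc/mediawiki/apache.conf    # the packaged alias is commented out by default
$ sudo service apache2 restart

Then point the browser at http://localhost/mediawiki to run the initial configuration.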

The two main advantages of this are that, if I intend to publish some of my notes as a public wiki page, there is minimal effort involved in migrating the data to a public wiki, and that I can use the search function to find information on a specific topic, see if I have done things on that subject, etc.

It has been a very useful tool to me and I thought I’d share that experience.

Read more
caribou

In my job as a support engineer, I spend my days trying to fix problems. So it is refreshing when, sometimes, the technology that we’re providing simply works.

Like this morning, when I decided to test a totally autonomous laptop setup. By that I mean that I will be on the road tonight and might need internet access for my laptop. So I went ahead and configured my Android HTC Desire to act as a WiFi hotspot. Then I switched my laptop’s wireless connection to the SSID that was broadcast by my phone; it connected right away.

Now since I’ll be in a car, I need to have some kind of headset to avoid street noises. So I powered on my bluetooth earphone, which I had previously configured to be recognized by my laptop. I went to the notification area, selected the Bluetooth applet and saw that my earphone was detected. So I connected it. Now the only thing that doesn’t totally work as I would expect is that I needed to change the whole sound setup of my laptop to send all input and output to my earphone. I’m expecting to use Twinkle for phone calls, and it is the only way I found to have the earphone work properly. If someone knows how to be more selective and have Twinkle use the Bluetooth earphone, I’m a taker.

But this is not a show stopper.  So I fired up Twinkle, connected to my SIP account and dialed my home phone : Drriiiinngg!!!

So I’m there, with my Android phone connecting me to the Internet and my laptop acting as a telephone, while I still have full laptop functionality. I must say that I’m impressed. Don’t get me wrong: I’ve been using Ubuntu since Hoary, and Debian before that. But nowadays, with all the great work done on those tools, rather complex setups just work smoothly.

Bravo !

Read more