Canonical Voices

Posts tagged with 'ubuntu server'

Louis

I have seen this setup documented in a few places, but not for Ubuntu, so here it goes.

I have used this many times to verify or diagnose Device Mapper Multipath (DM-MPIO), since it is rather easy to fail a path by switching off one of the network interfaces. Nowadays, I use two KVM virtual machines with two NICs each.

Those steps have been tested on Ubuntu 12.04 (Precise) and Ubuntu 14.04 (Trusty). The DM-MPIO section is mostly a cut and paste of the Ubuntu Server Guide.

The virtual machine that will act as the iSCSI target provider is called PreciseS-iscsitarget. The VM that will connect to the target is called PreciseS-iscsi. Each one is configured with two network interfaces (NICs) that get their IP addresses from DHCP. Here is an example of the network configuration file:

$ cat /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp
#
auto eth1
iface eth1 inet dhcp

The second NIC resolves to the same hostname with a "2" appended (i.e. PreciseS-iscsitarget2 and PreciseS-iscsi2).
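
Since the addresses are assigned by DHCP, resolution of those hostnames has to come either from your DNS or from static /etc/hosts entries. Purely as an illustration (the target addresses below are the ones that appear in the discovery output later on; yours will differ), the initiator's /etc/hosts could contain:

$ cat /etc/hosts
127.0.0.1       localhost
192.168.1.43    PreciseS-iscsitarget
192.168.1.193   PreciseS-iscsitarget2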

Setting up the iSCSI Target VM

This is done by installing the following packages:

$ sudo apt-get install iscsitarget iscsitarget-dkms

Edit /etc/default/iscsitarget and change the following line to enable the service:

ISCSITARGET_ENABLE=true

We now proceed to create an iSCSI target (aka a disk). This is done by creating a 50 GB sparse file that will act as our disk:

$ sudo dd if=/dev/zero of=/home/ubuntu/iscsi_disk.img count=0 obs=1 seek=50G
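
Because the file is sparse, it consumes almost no disk space until data is actually written to it. A quick way to see this (not part of the original procedure) is to compare the apparent size with the real allocation:

$ du -h --apparent-size /home/ubuntu/iscsi_disk.img
50G     /home/ubuntu/iscsi_disk.img
$ du -h /home/ubuntu/iscsi_disk.img
0       /home/ubuntu/iscsi_disk.img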

This container is used in the definition of the iSCSI target. Edit the file /etc/iet/ietd.conf and add the following at the bottom:

Target iqn.2014-09.PreciseS-iscsitarget:storage.sys0
        Lun 0 Path=/home/ubuntu/iscsi_disk.img,Type=fileio,ScsiId=lun0,ScsiSN=lun0

The iSCSI target service must be restarted for the new target to become accessible:

$ sudo service iscsitarget restart
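
Though not required, you can check that the target is now exported by looking at IET's proc interface (a quick verification I find handy; the exact output format may differ between versions):

$ cat /proc/net/iet/volume
tid:1 name:iqn.2014-09.PreciseS-iscsitarget:storage.sys0
        lun:0 state:0 iotype:fileio path:/home/ubuntu/iscsi_disk.img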


Setting up the iSCSI initiator

To be able to access the iSCSI target, only one package is required:

$ sudo apt-get install open-iscsi

Edit /etc/iscsi/iscsid.conf changing the following:

node.startup = automatic

This will ensure that the iSCSI targets that we discover are enabled automatically upon reboot.
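
If you prefer a one-liner to hand-editing, the same change can be made with sed (a sketch, assuming the file still has the stock node.startup = manual default):

$ sudo sed -i 's/^node.startup = manual$/node.startup = automatic/' /etc/iscsi/iscsid.conf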

Now we will proceed to discover and connect to the device that we set up in the previous section:

$ sudo iscsiadm -m discovery -t st -p PreciseS-iscsitarget
$ sudo iscsiadm -m node --login
$ dmesg | tail
[   68.461405] iscsid (1458): /proc/1458/oom_adj is deprecated, please use /proc/1458/oom_score_adj instead.
[  189.989399] scsi2 : iSCSI Initiator over TCP/IP
[  190.245529] scsi 2:0:0:0: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
[  190.245785] sd 2:0:0:0: Attached scsi generic sg0 type 0
[  190.249413] sd 2:0:0:0: [sda] 104857600 512-byte logical blocks: (53.6 GB/50.0 GiB)
[  190.250487] sd 2:0:0:0: [sda] Write Protect is off
[  190.250495] sd 2:0:0:0: [sda] Mode Sense: 77 00 00 08
[  190.251998] sd 2:0:0:0: [sda] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[  190.257341]  sda: unknown partition table
[  190.258535] sd 2:0:0:0: [sda] Attached SCSI disk

We can see in the dmesg output that the new device /dev/sda has been discovered. Format the new disk and create a file system, then verify that everything is correct by mounting and unmounting the new file system:

$ sudo fdisk /dev/sda
n        # create a new partition
p        # make it a primary partition
1        # partition number 1
<ret>    # accept the default first sector
<ret>    # accept the default last sector
w        # write the partition table and exit
$ sudo mkfs -t ext4 /dev/sda1
$ sudo mount /dev/sda1 /mnt
$ sudo umount /mnt

Setting up DM-MPIO

Since each of our virtual machines has been configured with two network interfaces, it is possible to reach the iSCSI target through the second interface:

$ sudo iscsiadm -m discovery -t st -p PreciseS-iscsitarget2
192.168.1.193:3260,1 iqn.2014-09.PreciseS-iscsitarget:storage.sys0
192.168.1.43:3260,1 iqn.2014-09.PreciseS-iscsitarget:storage.sys0
$ sudo iscsiadm -m node -T iqn.2014-09.PreciseS-iscsitarget:storage.sys0 --login
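
At this point there should be two active sessions to the same target, one per portal. This can be confirmed in session mode (illustrative output; the session numbers will vary):

$ sudo iscsiadm -m session
tcp: [1] 192.168.1.43:3260,1 iqn.2014-09.PreciseS-iscsitarget:storage.sys0
tcp: [2] 192.168.1.193:3260,1 iqn.2014-09.PreciseS-iscsitarget:storage.sys0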

Now that we have two paths to our iSCSI target, we can proceed to set up DM-MPIO.

First of all, a /etc/multipath.conf file must exist; then we install the needed package:

$ sudo -s
# cat << EOF > /etc/multipath.conf
defaults {
        user_friendly_names yes
}
EOF
# exit
$ sudo apt-get -y install multipath-tools

Two paths to the iSCSI device created previously need to exist for the multipath device to be seen:

# multipath -ll
mpath0 (149455400000000006c756e30000000000000000000000000) dm-2 IET,VIRTUAL-DISK
size=50G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 4:0:0:0 sda 8:0   active ready  running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:0 sdb 8:16  active ready  running

The two paths are indeed visible. We can move forward and verify that the partition table created previously is accessible:

$ sudo fdisk -l /dev/mapper/mpath0

Disk /dev/mapper/mpath0: 53.7 GB, 53687091200 bytes
64 heads, 32 sectors/track, 51200 cylinders, total 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0e5e5db1

              Device Boot      Start         End      Blocks   Id  System
/dev/mapper/mpath0p1            2048   104857599    52427776   83  Linux

All that remains is to add an entry to the /etc/fstab file so that the file system we created is mounted automatically at boot. Notice the _netdev option: it is required, otherwise the iSCSI device will not be mounted.

$ sudo -s
# cat << EOF >> /etc/fstab
/dev/mapper/mpath0-part1        /mnt    ext4    defaults,_netdev        0 0
EOF
# exit
$ sudo mount -a
$ df /mnt
Filesystem               1K-blocks   Used Available Use% Mounted on
/dev/mapper/mpath0-part1  51605116 184136  48799592   1% /mnt

Read more
Louis

A couple of weeks ago I announced that I was working on a new remote functionality for kdump-tools, the kernel crash dump tool used on Debian and Ubuntu.

I am now done with the development of the new functionality, so the package is ready for testing. If you are interested, just read the previous post which has all the gory details on how to set it up & test it.

Read more
Louis

A few years ago, I started to participate in the packaging of makedumpfile and kdump-tools for Debian and Ubuntu. I am currently applying for the formal status of Debian Maintainer to continue that task.

For a while now, I have been noticing that our version of the kernel dump mechanism was lacking a functionality that has been available on RHEL & SLES for a long time: remote kernel crash dumps. On those distributions, it is possible to define a remote server to be the receptacle of the kernel dumps of other systems. This can be useful for centralization or to capture dumps on systems with limited or no local disk space.

So I am proud to announce the first functional beta release of kdump-tools with remote kernel crash dump functionality for Debian and Ubuntu!

For those of you eager to test, or not interested in the details, you can find a packaged version of this work in a Personal Package Archive (PPA) here:

https://launchpad.net/~louis-bouchard/+archive/networked-kdump

New functionality: remote SSH and NFS

In the current version available in Debian and Ubuntu, the kernel crash dumps are stored on local filesystems. Starting with version 1.5.1, they are stored in a timestamped directory under /var/crash. The new functionality allows either a remote host accessible through SSH, or an NFS mount point, to be defined as the receptacle for the kernel crash dumps.

A new section of the /etc/default/kdump-tools file has been added :

# ---------------------------------------------------------------------------
# Remote dump facilities:
# SSH - username and hostname of the remote server that will receive the dump
# and dmesg files.
# SSH_KEY - Full path of the ssh private key to be used to login to the remote
# server. use kdump-config propagate to send the public key to the
# remote server
# HOSTTAG - Select if hostname or IP address will be used as a prefix to the
# timestamped directory when sending files to the remote server.
# 'ip' is the default.
# NFS - Hostname and mount point of the NFS server configured to receive
# the crash dump. The syntax must be {HOSTNAME}:{MOUNTPOINT} 
# (e.g. remote:/var/crash)
#
# SSH="<user@server>"
#
# SSH_KEY="<path>"
#
# HOSTTAG="hostname|[ip]"
# 
# NFS="<nfs mount>"
#

The kdump-config command also gains a new option, propagate, which is used to send a public ssh key to the remote server so that passwordless ssh commands can be issued to the remote SSH host.

Those options and commands are nothing new: I simply based my work on the existing functionality in RHEL & SLES. So if you are well acquainted with the RHEL remote kernel crash dump mechanism, you will not be lost on Debian and Ubuntu. I want to thank those who built that functionality on those distributions; it was a great help in getting it ported to Debian.

Testing on Debian

First of all, you must enable the kernel crash dump mechanism at the kernel level. I will not go into details, as it is slightly off topic, but you should:

  1. Add crashkernel=128M to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub (see the sketch below)
  2. Run update-grub
  3. Reboot
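
For the first step, the relevant line in /etc/default/grub would end up looking something like this (a sketch; keep whatever options are already present), and after the reboot you can confirm that the parameter was applied:

GRUB_CMDLINE_LINUX_DEFAULT="quiet crashkernel=128M"

$ grep -o 'crashkernel=[0-9]*[MG]' /proc/cmdline
crashkernel=128M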

Install the beta packages

The packages in the PPA can be installed on Debian with add-apt-repository. This command is in the software-properties-common package, so you will have to install that first:

$ apt-get install software-properties-common
$ add-apt-repository ppa:louis-bouchard/networked-kdump

Since you are on Debian, the result of the last command will be wrong, as the series defined in the PPA is for Utopic. Just use the following command to fix that:

$ sed -i -e 's/sid/utopic/g' /etc/apt/sources.list.d/louis-bouchard-networked-kdump-sid.list 
$ apt-get update
$ apt-get install kdump-tools makedumpfile

Configure kdump-tools for remote SSH capture

Edit the file /etc/default/kdump-tools and enable the kdump mechanism by setting USE_KDUMP to 1. Then set the SSH variable to the remote hostname & credentials that you want to use to send the kernel crash dump. Here is an example:

USE_KDUMP=1
...
SSH="ubuntu@TrustyS-netcrash"

You will need to propagate the ssh key to the remote SSH host, so make sure that you have the password of the remote server’s user that you defined (ubuntu in my case) for this command:

root@sid:~# kdump-config propagate
Need to generate a new ssh key...
The authenticity of host 'trustys-netcrash (192.168.122.70)' can't be established.
ECDSA key fingerprint is 04:eb:54:de:20:7f:e4:6a:cc:66:77:d0:7c:3b:90:7c.
Are you sure you want to continue connecting (yes/no)? yes
ubuntu@trustys-netcrash's password: 
propagated ssh key /root/.ssh/kdump_id_rsa to server ubuntu@TrustyS-netcrash

If you have an existing ssh key that you want to use, you can use the SSH_KEY option to point to your own key in /etc/default/kdump-tools:

SSH_KEY="/root/.ssh/mykey_id_rsa"

Then run the propagate command as previously:

root@sid:~/.ssh# kdump-config propagate
Using existing key /root/.ssh/mykey_id_rsa
ubuntu@trustys-netcrash's password: 
propagated ssh key /root/.ssh/mykey_id_rsa to server ubuntu@TrustyS-netcrash

It is a safe practice to verify that the remote SSH host can be accessed without a password. You can use the following command to test (with your own remote server as defined in the SSH variable in /etc/default/kdump-tools):

root@sid:~/.ssh# ssh -i /root/.ssh/mykey_id_rsa ubuntu@TrustyS-netcrash pwd
/home/ubuntu

If the passwordless connection can be achieved, then everything should be all set. You can proceed with a real crash dump test if your setup allows for it (i.e. not a production environment).

Configure kdump-tools for remote NFS capture

Edit the /etc/default/kdump-tools file and set the NFS variable to the NFS mount point that will be used to transfer the crash dump:

NFS="TrustyS-netcrash:/var/crash"

The value must use the same syntax that would normally be used to mount the NFS filesystem. You should test that your NFS filesystem is indeed accessible by mounting it manually:

root@sid:~/.ssh# mount -t nfs TrustyS-netcrash:/var/crash /mnt
root@sid:~/.ssh# df /mnt
Filesystem 1K-blocks Used Available Use% Mounted on
TrustyS-netcrash:/var/crash 6815488 1167360 5278848 19% /mnt
root@sid:~/.ssh# umount /mnt

Once you are sure that your NFS setup is correct, you can proceed with a real crash dump test.

Testing on Ubuntu

As you would expect, setting things on Ubuntu is quite similar to Debian.

Install the beta packages

The packages in the PPA can be installed on Ubuntu with add-apt-repository. As on Debian, the command is in the software-properties-common package, so install that first if needed:

$ sudo add-apt-repository ppa:louis-bouchard/networked-kdump

Packages are available for Trusty and Utopic.

$ sudo apt-get update
$ sudo apt-get -y install linux-crashdump

Configure kdump-tools for remote SSH capture

Edit the file /etc/default/kdump-tools and enable the kdump mechanism by setting USE_KDUMP to 1. Then set the SSH variable to the remote hostname & credentials that you want to use to send the kernel crash dump. Here is an example:

USE_KDUMP=1
...
SSH="ubuntu@TrustyS-netcrash"

You will need to propagate the ssh key to the remote SSH host, so make sure that you have the password of the remote server’s user that you defined (ubuntu in my case) for this command:

ubuntu@TrustyS-testdump:~$ sudo kdump-config propagate
[sudo] password for ubuntu: 
Need to generate a new ssh key...
The authenticity of host 'trustys-netcrash (192.168.122.70)' can't be established.
ECDSA key fingerprint is 04:eb:54:de:20:7f:e4:6a:cc:66:77:d0:7c:3b:90:7c.
Are you sure you want to continue connecting (yes/no)? yes
ubuntu@trustys-netcrash's password: 
propagated ssh key /root/.ssh/kdump_id_rsa to server ubuntu@TrustyS-netcrash
If you have an existing ssh key that you want to use, you can use the SSH_KEY option to point to your own key in /etc/default/kdump-tools:

SSH_KEY="/root/.ssh/mykey_id_rsa"

Then run the propagate command as previously:

ubuntu@TrustyS-testdump:~$ kdump-config propagate
Using existing key /root/.ssh/mykey_id_rsa
ubuntu@trustys-netcrash's password: 
propagated ssh key /root/.ssh/mykey_id_rsa to server ubuntu@TrustyS-netcrash

It is a safe practice to verify that the remote SSH host can be accessed without a password. You can use the following command to test (with your own remote server as defined in the SSH variable in /etc/default/kdump-tools):

ubuntu@TrustyS-testdump:~$ sudo ssh -i /root/.ssh/mykey_id_rsa ubuntu@TrustyS-netcrash pwd
/home/ubuntu

If the passwordless connection can be achieved, then everything should be all set.

Configure kdump-tools for remote NFS capture

Edit the /etc/default/kdump-tools file and set the NFS variable to the NFS mount point that will be used to transfer the crash dump:

NFS="TrustyS-netcrash:/var/crash"

The value must use the same syntax that would normally be used to mount the NFS filesystem. You should test that your NFS filesystem is indeed accessible by mounting it manually (you might need to install the nfs-common package):

ubuntu@TrustyS-testdump:~$ sudo mount -t nfs TrustyS-netcrash:/var/crash /mnt 
ubuntu@TrustyS-testdump:~$ df /mnt
Filesystem 1K-blocks Used Available Use% Mounted on
TrustyS-netcrash:/var/crash 6815488 1167488 5278720 19% /mnt
ubuntu@TrustyS-testdump:~$ sudo umount /mnt

Once you are sure that your NFS setup is correct, you can proceed with a real crash dump test.

Miscellaneous commands and options

A few other things are under the control of the administrator.

The HOSTTAG modifier

When sending the kernel crash dump, kdump-config will use the IP address of the server as a prefix to the timestamped directory on the remote host. You can use the HOSTTAG variable to change that default. Simply define this in /etc/default/kdump-tools:

HOSTTAG="hostname"

The hostname of the server will be used as a prefix instead of the IP address.
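
To illustrate the effect (a hypothetical layout, with <timestamp> standing for the timestamped directory kdump-tools creates and the client being the crashing machine):

/var/crash/<client IP>-<timestamp>/         # HOSTTAG="ip" (the default)
/var/crash/<client hostname>-<timestamp>/   # HOSTTAG="hostname"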

Currently, this is only implemented for the SSH method, but it will be available for NFS as well in the final version.

kdump-config show

To verify the configuration that you have defined in /etc/default/kdump-tools, you can use kdump-config’s show command to review your options.

ubuntu@TrustyS-testdump:~$ sudo kdump-config show
USE_KDUMP: 1
KDUMP_SYSCTL: kernel.panic_on_oops=1
KDUMP_COREDIR: /var/crash
crashkernel addr: 0x2d000000
SSH: ubuntu@TrustyS-netcrash
SSH_KEY: /root/.ssh/kdump_id_rsa
HOSTTAG: ip
current state: ready to kdump
kexec command:
 /sbin/kexec -p --command-line="BOOT_IMAGE=/vmlinuz-3.13.0-24-generic root=/dev/mapper/TrustyS--vg-root ro console=ttyS0,115200 irqpoll maxcpus=1 nousb" --initrd=/boot/initrd.img-3.13.0-24-generic /boot/vmlinuz-3.13.0-24-generic

If the remote crash kernel dump functionality is set up, you will see its options listed in the output of the command.

Conclusion

As outlined at the beginning, this is the first functional beta version of the code. If you are curious, you can find the code I am working on here:

http://anonscm.debian.org/gitweb/?p=collab-maint/makedumpfile.git;a=shortlog;h=refs/heads/networked_kdump_beta1

Don’t hesitate to test & let me know if you find issues.

Read more
Pat Gaughen

We are looking for two fabulous Software Engineers to join the Ubuntu Server team. Check out the individual job listings for more details.

Think you’ve got what it takes? Apply!

Read more
Antonio Rosales

Meeting information

Agenda

  • Review ACTION points from previous meeting
  • T Development
  • Server & Cloud Bugs (caribou)
  • Weekly Updates & Questions for the QA Team (psivaa)
  • Weekly Updates & Questions for the Kernel Team (smb, sforshee)
  • Weekly Updates & Questions regarding Ubuntu ARM Server (rbasak)
  • Ubuntu Server Team Events
  • Open Discussion
  • Announce next meeting date, time and chair

Minutes

Summary

This week's meeting focused on addressing items needed before Feature Freeze on Feb 20. This included conversations around high/essential bugs, red high/essential blueprints, and test failures.

Specific bugs discussed in this week's meeting were:

  • 1248283 in juju-core (Ubuntu Trusty) “juju userdata should not restart networking” [High,Triaged] https://launchpad.net/bugs/1248283
  • 1278897 in dovecot (Ubuntu Trusty) “dovecot warns about moved ssl certs on upgrade” [High,Triaged] https://launchpad.net/bugs/1278897
  • 1259166 in horizon (Ubuntu Trusty) “Fix lintian error” [High,Triaged]
  • 1273877 in neutron (Ubuntu Trusty) “neutron-plugin-nicira should be renamed to neutron-plugin-vmware” [High,Triaged]

Specific Blueprints discussed:

  • curtin, openstack charms, ceph, mysql alt, cloud-init, openstack (general)

The meeting closed with the announcement that Marco and Jorge will be at SCALE12x giving a talk, so be sure to stop by if you are going to be at SCALE.

Review ACTION points from previous meeting

The discussion about “Review ACTION points from previous meeting” started at 16:04.

16:06 <arosales> gaughen follow up with jamespage on bug 1243076
16:06 <ubottu> bug 1243076 in mod-auth-mysql (Ubuntu Trusty) “libapache2-mod-auth-mysql is missing in 13.10 amd64” [High,Won't fix] https://launchpad.net/bugs/1243076
16:09 <jamespage> not got to that yet
16:10 <jamespage> working on a few pre-freeze items first
16:10 <arosales> ack I’ll take its appropriately on your radar :-) --thanks
16:10 <jamespage> it is

16:06 <arosales> gaughen follow up on dbus task for bug 1248283
16:06 <ubottu> bug 1248283 in juju-core (Ubuntu Trusty) “juju userdata should not restart networking” [High,Triaged] https://launchpad.net/bugs/1248283

16:07 <arosales> jamespage to follow up on bug 1278897 (policy compliant)
16:07 <ubottu> bug 1278897 in dovecot (Ubuntu Trusty) “dovecot warns about moved ssl certs on upgrade” [High,Triaged] https://launchpad.net/bugs/1278897

16:07 <arosales> smoser update servercloud-1311-curtin bp
16:07 <smoser> i updated it .
16:07 <smoser> i’ll file a ffe today

16:07 <arosales> hallyn follow up on 1248283 from an lxc pov, ping serue to coordinate
16:08 <serue> Done
16:08 <arosales> smoser update cloud-init BP
16:08 <smoser> we’ll say same there.

Trusty Development

The discussion about “Trusty Development” started at 16:10.

Weekly Updates & Questions for the QA Team (psivaa)

The discussion about “Weekly Updates & Questions for the QA Team (psivaa)” started at 16:27.

Ubuntu Server Team Events

The discussion about “Ubuntu Server Team Events” started at 16:35.

Action items, by person

  • gaughen
    • gaughen ensure BPs are updated
  • coreycb
    • follow up on bug 1273877

Announce next meeting date and time

Next meeting will be on Tuesday, February 25th at 16:00 UTC in #ubuntu-meeting.

People present (lines said)

  • arosales (77)
  • jamespage (19)
  • psivaa (12)
  • smoser (10)
  • ubottu (9)
  • meetingology (5)
  • serue (2)
  • zul (2)
  • sforshee (1)
  • rbasak (1)
  • gaughen (1)
  • smb (1)

Read more
Mark Baker

To paraphrase from Mark Shuttleworth’s keynote at the OpenStack Developer Summit last week in Hong Kong, building clouds is no longer exciting. It’s easy. That’s somewhat of an exaggeration, of course, as clouds are still a big choice for many enterprises, but there is still a lot of truth in Mark’s sentiment. The really interesting part about the cloud now is what you actually do with it, how you integrate it with existing systems, and how powerful it can be.

OpenStack has progressed tremendously in its first few years, and Ubuntu’s goal has been to show that it is just as stable, production-ready, easy-to-deploy and manage as any other cloud infrastructure. For our part, we feel we’ve done a good job, and the numbers certainly seem to support that. More than 3,000 people from 50 countries and 480 cities attended the OpenStack Summit in Hong Kong, a new record for the conference, and a recent IDG Connect survey found that 84 percent of enterprises plan to make OpenStack part of their future clouds.

Clearly OpenStack has proven itself. And, now, the OpenStack community’s aim is making it work even better with more technologies, more players and more platforms to do more complex things more easily. These themes were evident from a number of influential contributors at the event and require an increased focus amongst the OpenStack community:

Global Collaboration

OpenStack’s collaborative roots were exemplified early on with the opening address by Daniel Lai, Hong Kong’s CIO, who talked about how global the initially U.S.-founded project has become. There are now developers in more than 400 cities around the world with the highest concentration of developers located in Beijing.

Focus on the Core

One of the first to directly hit on the theme of needing more collaboration, though, was Mark Shuttleworth with a quote from Albert Einstein: “Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius — and a lot of courage — to move in the opposite direction.” OpenStack has grown fantastically, but we do, as a community, need to ensure we can support that growth rate. OpenStack should focus on the core services and beyond that, provide a mechanism to let many additional technologies plug in, or “let a thousand flowers bloom,” as Mark eloquently put it.

HP’s Monty Taylor also called for more collaboration between all of OpenStack’s players to really continue enhancing the core structure and principle of OpenStack. As he put it, “If your amazing plug-in works but the OpenStack core doesn’t, your plug-in is sitting on a pile of mud.” A bit blunt, but it gets to the point of needing to make sure that the core benefits of OpenStack – that an open and interoperable cloud is the only cloud for the future – are upheld.

Greasing the Wheels of Interoperability

And, that theme of interoperability was at the core of one of Ubuntu’s own announcements at the Hong Kong summit: the Ubuntu OpenStack Interoperability Lab, or Ubuntu OIL. Ubuntu has always been about giving companies choice, especially in the cloud. Our contributions to OpenStack so far have included new hypervisors, SDN stacks and the ability to run different workloads on multiple clouds.

We’ve also introduced Juju, which is one step up from a traditional configuration management tool and is able to distil functions into groups – we call them Charms – for rapid deployment of complex infrastructures and services.

With all the new capabilities being added to OpenStack, Ubuntu OIL will test all of these options, and other non-OpenStack-centric technologies, to ensure Ubuntu OpenStack offers the broadest set of validated and supported technology options compatible with user deployments.

Collaboration and interoperability testing like this will help ensure OpenStack only becomes easier to use for enterprises, and, thus, more enticing to adopt.

For more information on Ubuntu OIL, or to suggest components for testing in the lab, email us at oil@ubuntu.com or visit http://www.ubuntu.com/cloud/ecosystem/ubuntu-oil

Read more
James Page

Meeting summary

Review Previous Actions

Robie has a merge proposal nearly ready for landing for delta reporting:

  • ACTION: rbasak to land delta report to lp:ubuntu-reports, Daviey to deploy

Most server packages are now unblocked from migrating to the saucy release pocket aside from one last fix for the apache2.4 transition.

Saucy Development

James noted that the Debian import freeze and Alpha 2 for saucy are scheduled for next week.

Ubuntu Server Team Events

OSCON happening right now – Jorge and Mark Mims running a Charm School!

Weekly Updates & Questions for the QA Team (plars)

plars gave an update on a couple of kernel bugs currently causing issues in server automated testing.

These should be fixed up shortly.

Weekly Updates & Questions for the Kernel Team (smb)

Some issues with KVM guests with the latest saucy kernel – apw investigating.

Weekly Updates & Questions regarding Ubuntu ARM Server (rbasak)

Nothing to note.

Open Discussion

The discussion about “Open Discussion” started at 16:33.

Antonio took the opportunity to remind everyone that UDS is scheduled for the end of August.

Chuck noted that Quantum has now been renamed Neutron; removal of the old source package has been requested.

Announce next meeting date and time

Tuesday 30th July at 1600 GMT

Full meeting log can be found here: https://wiki.ubuntu.com/MeetingLogs/Server/20130723

Read more
Dave Walker

The Ubuntu Server Team is constantly working on some really exciting areas.  We try to collate the best of open source to deliver a distribution suitable for cloud, scale-out and traditional server workloads.  We try to provide agile granite foundations for users to build their workloads on.  Most of the work we do is at no cost to the user, maximising value.

On a weekly basis, we hold an IRC meeting, where we discuss blueprints and development, but I do think we could probably do better at sharing some of the great stuff we are doing.

To achieve this, I'm setting a target of trying to give a weekly insight into the highlights of the work that the Ubuntu Server team is doing.  So join me on my mission and prepare for your dunked digest into the giant cup of Server.

Juju is a key Ubuntu Server technology.  It is typically called a service orchestration tool, rather than a server management tool.  Many of the deliverables of the server team are either built upon Juju, or underpin Juju itself.  In return, Juju underpins greatness.

Juju supports writing a charm in any language (or even as a compiled binary!) that can be executed or interpreted by the machine.  I believe the most complex charms in the store are the OpenStack ones.  Some of the original charms were written in shell/bash, but it has become apparent that a richer, higher-level language such as Python can be massively useful.  Therefore, we decided to rewrite some of the earlier charms in Python.  The Cinder charm was rewritten by Adam in Python, and Andres has been doing the same for Glance.  The real key part of this is that deployments get a seamless upgrade, without realising that the underlying charm language has changed.

We’ve also found that many of the charms contain significant overlap, so we have been trying to push much of the common code into charm-helpers.  This is vital for any DRY (Don't Repeat Yourself) methodology, which helps with maintainability – but also allows us to be more effective.  James found that he could rework the Ceph charms to use charm-helpers and push some extra features back into charm-helpers.

The velocity of development means that quality is a constant concern.  The only way we can raise our capacity and have a good level of confidence in what we deliver is to have frequent testing.  To scale this, we've been putting significant work into automating areas where we can.  DEP-8 (autopkgtest) is a format for describing test requirements, setup and the actual test cases.  Adam implemented DEP-8 package testing into the OpenStack packaging, using Juju, Jenkins and a special internal OpenStack deployment we've codenamed ServerStack.
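
For readers unfamiliar with DEP-8, the tests for a package are declared in a debian/tests/control file that points at executable test scripts. A minimal, purely hypothetical example (not taken from the actual OpenStack packaging):

Tests: smoke
Depends: @

debian/tests/smoke is then any executable (shell, Python, ...) that exercises the installed package and exits non-zero on failure; the Depends: @ shorthand pulls in all binary packages built from the source.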

Adam also worked on some Ubuntu Cloud Archive tooling to make it easier to submit packages, and on cleaner package release reporting that makes it easier to identify workflow status.

Andres uploaded the latest version of MAAS to Saucy. Diogo, who is helping to drive quality in the server team, worked on resolving some MAAS jenkins test failures under Saucy and set up tarmac (a code lander) for juju-gui.

Chuck uploaded new versions of python-keystoneclient, python-ceilometer and python-swiftclient.  In addition, he backported Qemu 1.5.0 to the Ubuntu Cloud Archive, enabling the latest Qemu features on the stable base that 12.04 LTS provides.

Chuck has also been leading the way on python3 compatibility, including some work on making python-novaclient python3 compatible.  He also worked on a bunch of upstream OpenStack patches.

As part of the hardware enablement stack, the 3.10 kernel is being brought back to 12.04 Precise.  This means that a bunch of packages need to be made 3.10 compatible.  James worked on resolving a failure with iscsitarget, and pushed the fix upstream.

James also uploaded a new snapshot of OpenvSwitch 1.10.1 to Saucy, and is working on Nicira NVP support for the OpenStack charms.

Robie did some great work on enabling multiple tests (DEP8/autopkgtest) for LXC, which was discussed on the ubuntu-server mailing list.

Scott decided it would be a good idea to hike to the top of Mount Democrat, whilst doing some cloud-init enablement and simplestreams development.

Serge, who has been pushing LXC developments in Ubuntu, built a custom Saucy kernel with Dwight's xfs userns patchset (the final set needed before we can ask the kernel team for enablement!) and also investigated a signalfd/epoll/sigchld race which was reproducible with LXC.

Yolanda worked on writing a charm to deploy your own Gerrit code review tool, a pretty nasty ipxe assembly rebuild bug (upstream believe it to be a GCC bug!), and solving a bind9 issue.

Oh, and this week Andy Murray also won the tennis championship, Wimbledon – which I, for one, attribute to Week 28 of Ubuntu Server development.  The most interesting part of this is that he is the first British man to win who wasn't wearing full-length trousers.  I've heard – though it's yet to be confirmed – that he used juju during his training.

Read more
Antonio Rosales

LTS Enablement Stack

Jorge Castro had a good blog post regarding the LTS enablement stack and sysadmins. The TLDR, as Jorge puts it, is: “12.04.2 ISOs are NOT just rolled up updates, they’re 12.04 with newer kernels.” It is also good to note that the 12.04 stack will continue to be maintained for 5 years; thus, it will get SRUs and the kernel won’t change on you.  I think this is an important thing to note.  However, some folks may want a newer kernel within the LTS life span, and those folks can evaluate a point release.

Jorge calls out some good recommendations:

  • The 12.04 and 12.04.1 ISOs are at http://old-releases.ubuntu.com/ - you’ll likely want to keep a set for yourself if you want to roll out with the same exact kernel for your deployments – you’ll probably want to have all three ISO sets on hand depending on your hardware.
  • The original 12.04 stack will continue to be maintained for 5 years, if you don’t need the new kernel, you don’t need to use it.
  • In the past, if new hardware rolled out and didn’t work with the LTS, you were kind of stuck with either backporting a kernel, or (what I reluctantly did) deploying a non-LTS release until the next LTS came out, at which point you would rebase on the new LTS.

Be sure to give his blog post and the LTS Enablement Stack wiki a read. If you have any questions or comments, as always, feel free to write to the list (ubuntu-server@lists.ubuntu.com) or ping in IRC (#ubuntu-server on Freenode).

Read more
David Duffey

Today we announced a collaborative support and engineering agreement with Dell.  As part of this agreement, Canonical will add Dell 11G & 12G PowerEdge models to the Ubuntu Server 12.04 LTS Certification List, and Dell will add Ubuntu Server to its Linux OS Support Matrix.

In May 2012, Dell launched the OpenStack Cloud Reference Architecture using Ubuntu 12.04 LTS on select PowerEdge-C series servers. Today’s announcement expands upon that offering by combining the benefits of Ubuntu Server Certification, Ubuntu Advantage enterprise support, and Dell Hardware ProSupport across the PowerEdge line.

Dell customers can now deploy with confidence when purchasing Dell PowerEdge servers with Dell Hardware ProSupport and Ubuntu Advantage.  When these customers call into Dell, their service tag numbers will be entitled with ProSupport and Ubuntu Advantage, which will create a seamless support experience via the collaborative Dell and Canonical support and engineering relationship.

In preparation for this announcement, Canonical engineers worked with Dell to enable and validate Ubuntu Server running on Dell PowerEdge Servers.  This work resulted in improved Ubuntu Server on Dell PowerEdge support for PCIe SSD (solid state drives), 4K-block drives, EFI booting, Web Services Management, consistent network device naming, and PERC (PowerEdge RAID Controllers).

Dell hardware systems management can be done out-of-band via IPMI, iDRAC, and the Lifecycle Controller.  Dell OMSA Ubuntu packages are also available, but it is recommended to use the supported out-of-band systems management tools.  Dell TechCenter is a good resource for additional technical information about running Ubuntu Server on Dell PowerEdge servers.

If you are interested in purchasing Ubuntu Advantage for your Dell PowerEdge servers, please contact the Dell Solutions team at Canonical.  If your business is already using or thinking about using a supported Ubuntu Server infrastructure in your data-center then be sure to fill out the annual Ubuntu Server and Cloud Survey to provide additional feedback.

Read more
caribou

If you are one of those people who are reluctant to upgrade to newer kernels, here is an example of how this can make your life miserable every 209 days.

There is a specific kernel bug in Lucid that will provoke a kernel panic after 208 days of uptime, which is a regular occurrence for a server (and a cloud instance?). Here is the kernel GIT commit related to this:

http://kernel.ubuntu.com/git?p=ubuntu/ubuntu-lucid.git;a=commit;h=595378ac1dd449e5c379bf6caa9cdfab008974c8

This has been fixed in the Ubuntu kernel since 2.6.32-38, months ago, but if you prefer not to upgrade to newer kernels on Lucid, you will be hit by this bug.

Read more
Mark Baker

Today, Canonical and HP announced that Ubuntu Server 12.04 LTS is to be certified and supported by HP on its Proliant Systems:

http://www.canonical.com/content/ubuntu-1204-lts-server-be-certified-supported-hp-proliant-systems

This is a huge announcement for us at Canonical. It’s also a testament that HP sees real business benefits in offering certified and supported Proliant systems with Ubuntu Server. Arguably, however, the most significant aspect of the announcement is the implication that the next generation of computing requires a different model.

Big data and cloud computing are at the forefront of a move towards hyperscale distributed systems. To meet these new challenges, today’s IT departments need a proven developer-led technology that’s free from licensing restrictions.

Ubuntu Server is that technology. That’s why it is now the platform of choice for Openstack clouds and the only commercially-supported Linux distribution to be increasing its share of the online infrastructure market. Even on Amazon Web Services, Ubuntu Server reigns supreme – thanks to its technological and commercial advantages over other platforms.

HP has been working with Canonical for several years now and in that time, it has grown to understand where we sit in the IT ecosystem. The resulting announcement of support for Ubuntu on Proliant (alongside other Linux platforms) is a signal to organisations of all kinds that the IT landscape is changing.

Read more
caribou

One year ago, I finished my last day after thirteen years with Digital Equipment Corp, which became Compaq, then HP.  After starting on Digital Unix/Tru64, I had evolved to a second-level support position in the Linux Global Competency Center.

In a few days, on the 18th, I will have completed my first full year as a Canonical employee. I think it is time to take a few minutes to look back at that year.

Coming from a RHEL/SLES environment with a bit of Debian, my main asset was the fact that I had been an Ubuntu user since 5.04, using it as my sole operating system on my corporate laptop. The first week in the new job was also a peculiar experience, as it brought me back to my native country and to Montréal, a city that I love and where I lived for three years.  So I was not totally lost in my new environment. I also had the chance to ramp up my knowledge of Ubuntu Server, which was an easy task.  What was more surprising, and became one of the most exciting parts of the new job, is to work in a completely dedicated open-source environment from day one.

Rapidly, I became aware that participating in the Ubuntu community was not only possible, but expected.  That if I were to find a bug, I needed to report it and, if possible, find ways to fix it.  In my previous job, I was looking for existing solutions, or bringing enough elements to my L3 counterparts that they would be able to request a fix from Red Hat or Novell.  Here, if I was able to identify the problem and suggest a solution, I was encouraged to propose it as the final fix.  I also rapidly found out that the developers were no longer remote engineers in some changelog file, but IRC nicks that I could chat with and eventually meet.

Then came OpenStack in the summer: a full week of work with colleagues aimed at getting to know the technology, trying to master concepts that were very vague back then, and making things work.  Getting the Swift object store up and running and trying to figure out how best it could be used.  Here I was asked to do one of the things I like best: learning by getting things to work. This led to a better understanding of what a cloud architecture is all about and really made me understand how useful and interesting a cloud infrastructure can be. Oh, and I did get to build my first OpenStack cloud.

This was another of this past year's great experiences: UDS-P. I had heard of UDS-O when I joined, but it was too early for me to attend.  But after six months around, it was time for UDS-P and, this time, I would be there.  Not only did I have time to meet a good chunk of the developers, but I also got a lot of work done.  Like helping Michael Terry fix a bug in Deja-Dup that would only appear on localized systems, getting advice on fixing kdump from the kernel team and some of the foundations engineers, and a whole lot more.

Then came back the normal work for our customers: fixing their issues, trying to help improve their support experience, and getting better at what we do. And also seeing some of my fixes make it into our upcoming distribution, and back into the existing ones.  This was a great thrill and an objective that I did not think would come by so fast.

Being part of the Ubuntu community has been a great addition to my career. This makes me want to do even more and get the best out of our collective efforts.

This was a great year. Sure hope that the next one will be even better.

Read more
caribou

Recently, I came to realize a major difference in how customer support is done on Ubuntu.

As you know, Canonical provides official customer support for Ubuntu, both on server and desktop. This is the work I do: provide customers with the best level of support on the Ubuntu distribution.  This is also what I was doing in my previous job, but for the Red Hat Enterprise Linux and SuSE Linux Enterprise Server distributions.

The major difference that I recently realized is that, unlike in my previous work with RHEL & SLES, the result of my work is now available to the whole Ubuntu community, not just to the customers that pay for our support.

Here is an example. Recently, one of our customers identified a bug in vm-builder in a very specific case.  The work that I did on this bug resulted in a patch that I submitted to the developers, who accepted its inclusion in the code. In my previous life, this fix would have been made available only to customers paying a subscription to the vendor, through their official update or service pack services.

With Ubuntu, through Launchpad and the regular community activity, this fix will become available to the whole community through the standard -updates channel of our public archives.

This is true for the vast majority of the fixes that are provided to our customers. As a matter of fact, the public archives are almost the only channel that we have to provide fixes to our customers, which makes them available to the whole Ubuntu community at the same time.  This is a different behavior, and something that makes me a bit prouder of the work I'm doing.

Read more
mandel

At the moment, we are working on providing proxy support for Ubuntu One. In order to test this correctly, I have been setting up a LAN in my office so that I can test as many scenarios as possible. One of those scenarios is the one in which the proxy authenticates against Active Directory.

Because I use bind9 on one of my boxes for DNS, I had to dig out how to configure it to work with AD. In order to do that, I did the following:

  1. Edited named.conf.local to add a subdomain for the AD machine:

    zone "ad.example.com" {
            type master;
            file "/etc/bind/db.ad.example.com";
            allow-update { 192.168.1.103; };
    };
    
  2. Configured the subzone to work with AD.

    ; BIND data file for local loopback interface
    ;
    $TTL    604800
    @       IN      SOA     ad.example.com. root.ad.example.com. (
                                  2         ; Serial
                             604800         ; Refresh
                              86400         ; Retry
                            2419200         ; Expire
                             604800 )       ; Negative Cache TTL
    ;
    @       IN      NS      ad.example.com.
    @       IN      A       127.0.0.1
    @       IN      AAAA    ::1
    ;
    ; AD horrible domains
    ;
    dc1.ad.example.com.    A       192.168.1.103
    _ldap._tcp.ad.example.com.     SRV     0 0 389  dc1.ad.example.com.
    _kerberos._tcp.ad.example.com.    SRV     0 0 88   dc1.ad.example.com.
    _ldap._tcp.dc._msdcs.ad.example.com.   SRV     0 0 389  dc1.ad.example.com.
    _kerberos._tcp.dc._msdcs.ad.example.com.    SRV     0 0 88   dc1.ad.example.com.
    gc._msdcs.ad.example.com.      SRV     0 0 3268 dc1.ad.example.com.
    

    Note: It is important to remember that the computer name of the server that has the AD role is dc1; if we use a different name, we have to change the configuration accordingly.

  3. Restart the bind9 service:

    sudo /etc/init.d/bind9 restart
    
  4. Install the AD server and specify that you DO NOT want to set that server as a DNS server too.
  5. Set the AD server to use your Ubuntu with your bind9 as the DNS server.
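
To verify that the SRV records are being served correctly, a quick test with dig from the Ubuntu box does the trick (a check I find handy; not part of the original steps):

$ dig +short SRV _ldap._tcp.ad.example.com @127.0.0.1
0 0 389 dc1.ad.example.com.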

There are lots of things missing if you wanted to use this as a setup for a corporate network, but it does the trick in my LAN, since I do not have AD replication or other fancy things. Maybe it is useful for your home network, who knows..

Read more
caribou

While testing Oneiric on a separate disk, I wanted to get some files off my laptop's hard drive, which hosts my normal Natty install.  Keeping with a previous setup, I had installed my laptop with a fully encrypted hard disk, using the alternate CD, so I needed a procedure to do this manually.

Previously, I had tested booting the Natty LiveCD and, to my enlightened surprise, the Live CD did see the encrypted HD and proceeded to ask for the passphrase in order to mount it.  But this time, I'm not running off the LiveCD, but from a complete install which is on a separate hard drive.  Since it took me a while to locate the proper procedure, I thought that I would help google a bit so it is not so deep in the pagerank for others next time.  But first, thanks to UbuntuGeek's article Rescue an encrypted LUKS LVM volume for providing the solution.

Since creating an encrypted home directory is easily achieved with standard installation methods, there are many references on how to do it for an encrypted private directory. Dustin Kirkland's blog is a very good source of information on those topics. But dealing with an encrypted partition requires a different approach. Here it is (at least for an encrypted partition created using the Ubuntu alternate DVD):

First of all, you need to make sure that the lvm2 and cryptsetup packages are installed. If not, go ahead and install them:

 # sudo aptitude install cryptsetup lvm2

Then verify that the dm-crypt module is loaded, and load it if it is not:

 # sudo modprobe dm-crypt
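
To check whether the module is already loaded (an optional verification):

 # lsmod | grep dm_crypt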

Once this is done, open the LUKS partition (using your own encrypted partition name):

 # sudo cryptsetup luksOpen /dev/sda3 crypt1

You will be asked to provide the passphrase that unlocks your encrypted partition here.

Once this is done, you must scan for the LVM volume groups:

 # sudo vgscan --mknodes
 # sudo vgchange -ay

This should give you the name of the volume group that is needed to mount the encrypted partition (which happens to be configured as an LVM volume). You can now proceed to mount your partition (replacing {volumegroup} with the name that you collected in the previous command):

# sudo mount /dev/{volumegroup}/root /mnt

Your encrypted data should now be available in the /mnt directory :-)
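
When you are done with the files, the same steps can be undone in reverse order (a sketch reusing the names from above):

 # sudo umount /mnt
 # sudo vgchange -an {volumegroup}
 # sudo cryptsetup luksClose crypt1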

Read more
Mark Baker

Last week saw the culmination of one of the UK’s most popular TV shows – Britain’s Got Talent. The way in which this show over five series has captured the attention of the British public is quite incredible, with the majority of popular media outlets dedicating significant space to the contestants, the judges, rumours about the format and speculation about who would win.

Such coverage and excitement means that Britain’s Got Talent drives audience and voter engagement to levels that politicians must dream about. Of course there are many ways that the show makes sure it gets our attention, not least of which is having hours of live coverage on prime time television, but, the talented team behind the show are also using many techniques to encourage deeper engagement for a modern audience.

Take for example the Buzz Off game. This is a game with which viewers can play along while watching the show, ‘buzzing’ the acts that they don’t like using a mobile or web-based application. The buzzes are stored, with a running total kept and shown per act on the website, so that the audience goes from being a passive viewer to an active participant in the show. The Buzz Off game is developed by Livetalkback for the Britain’s Got Talent team, and recently Malcolm Box, CTO of Livetalkback, explained to a group of London big data enthusiasts some of the challenges in building and designing an application that is required to scale to almost Facebook-like proportions for a short period of time. The full presentation is below, but for convenience some of the key points are:

  • The volume of traffic being handled by the Buzz application during a two hour live show is equivalent to 130 billion requests per month – excluding Google, this would put the application as approximately the 2nd largest website in the world behind Facebook.
  • To manage this scale, the application is based on Ubuntu Server, MySQL and Cassandra all hosted in the Amazon Public Cloud
  • The service uses hundreds of instances that must be brought online very quickly as additional capacity is required and then released as the load declines after the show.

Malcolm and the team at Livetalkback have done an incredible job to put this together in a short space of time and have it work reliably throughout this year’s programme. A cloud-based approach made perfect sense for an application with such specific scaling requirements, and it was vital that the application scaled not only technically but financially as well. This is where Ubuntu on Amazon really proved its worth – customers pay for the resources they use and there are no license fees or royalties to worry about when bringing up new instances. It is the type of efficient driving of engagement that once again Government departments must be in awe of.

Which brings us onto the Cabinet Office. The UK Government is looking for ways to provide cost-effective online systems that drive audience engagement. Recently there have been signs of progress through the Alpha.gov.uk project led by Martha Lane Fox. Alpha.gov.uk is a prototype site that demonstrates how digital services could be delivered more effectively and simply to users through the use of open, agile and cheaper digital technologies. It is only a prototype at the moment, but it is significant in that it has been quickly put together and delivers exactly what it is supposed to do in a cost-effective way. So how did they do it? Well, they decided on a similar architecture to Livetalkback – open source software based on Ubuntu Server in a public cloud. Full details of the technology used are at:

http://blog.alpha.gov.uk/colophon

British tax payers will take heart from the knowledge that someone in the Cabinet Office is looking at this and hopefully wondering why more Government services can’t be delivered like this. When it comes to engaging an audience and encouraging interaction in a cost-effective way, Britain’s Got Talent and the Cabinet Office now have more in common than you’d think.

Read more
Gerry Carr

One of the benefits of the direction that’s been taken with the next release of Ubuntu is that there is no longer a need for a separate netbook edition. The introduction of the new shell for Ubuntu means that we have a user interface that works equally well whatever the form factor of the PC. And the underlying technology works on a range of architectures including those common in netbook, notebooks, desktops or whatever you choose to run it on. Hence the need for a separate version for netbooks is removed.

To be clear, this is the opposite of us withdrawing from the netbook market. In fact looking at the download figures on ubuntu.com interest in netbooks is not only thriving but booming. It’s us recognising that the market has moved on and celebrating that separate images are no longer a requirement as the much anticipated convergence of devices moves closer.

A return to the Ubuntu name

Which actually got us thinking about our naming conventions in totality. ‘Ubuntu Desktop Edition’ arose in 2005 as a response to the launch of Ubuntu Server Edition and our desire to distinguish between the two. But desktops are no longer the pre-eminent client platform, and naming the ‘edition’ after any target technology is going to have us chasing the trend. Also, we were tying ourselves to some ungainly product titles – Ubuntu 10.04 LTS Server Edition, for instance. User feedback also told us that people thought the edition was not for them because they had a laptop and spent time looking for a ‘Laptop Edition’.

So we are going back to our roots. From 11.04 the core product that you run on your PC will be simply, Ubuntu. Therefore the next release will be Ubuntu 11.04 and you can run that, my friend, on anything you like from a netbook to a notebook to a desktop. Ubuntu Server will be maintained as a separate product of course and named simply, Ubuntu Server 11.04.

We think this will make things simpler. When we mean Ubuntu for notebooks we will say just that rather than the more confusing, ‘Ubuntu Desktop Edition for notebooks’. We are retaining the concept of ‘remixes’ for community projects and the naming convention therein. And we would love to hear what you think.

Read more