Canonical Voices

Posts tagged with 'servers'

Colin Ian King

Before I started benchmarking various popular file systems on Linux, it was recommended that I read "Systems Performance: Enterprise and the Cloud" by Brendan Gregg.

In today's server and cloud-based systems, multi-layered complexity can make it hard to pinpoint performance issues and bottlenecks. This book is packed full of useful analysis techniques covering tracing, kernel internals, tools and benchmarking.

Getting a well-balanced and tuned system means getting all the different components right, and the book has chapters covering CPU optimisation (cores, threading, caching and interconnects), memory optimisation (virtual memory, paging, swapping, allocators, buses), file system I/O, storage, networking (protocols, sockets, physical connections) and the typical issues facing cloud computing.

The book is full of very useful examples and practical instructions on how to drill down and discover performance issues in a system, and it also includes some real-world case studies.

It has helped me become even more focused on how to analyse performance issues, and on how to instrument a system deeply enough to understand where and why performance regressions occur.
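
To give a flavour of the kind of instrumentation the book encourages – sampling kernel counters and reasoning about the deltas – here is a trivial Python sketch (not from the book, and assuming a Linux /proc filesystem) that estimates overall CPU utilisation over a short interval:

#!/usr/bin/env python3
# Toy illustration only: estimate overall CPU utilisation by sampling the
# aggregate "cpu" line in /proc/stat twice and comparing the deltas.
import time

def read_cpu_times():
    # First line of /proc/stat looks like:
    #   cpu  user nice system idle iowait irq softirq steal ...
    with open("/proc/stat") as f:
        values = [int(v) for v in f.readline().split()[1:]]
    idle = values[3] + values[4]          # idle + iowait count as "not busy"
    return idle, sum(values)

def cpu_utilisation(interval=1.0):
    idle1, total1 = read_cpu_times()
    time.sleep(interval)
    idle2, total2 = read_cpu_times()
    busy = (total2 - total1) - (idle2 - idle1)
    return 100.0 * busy / (total2 - total1)

if __name__ == "__main__":
    print("CPU utilisation over 1s: %.1f%%" % cpu_utilisation())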

All in all, a most systematic and well-written book that I'd recommend to anyone running large, complex servers or cloud computing environments.

mark

As we move from “tens” to “hundreds” to “thousands” of nodes in a typical data centre we need new tools and practices. This hyperscale story – of hyper-dense racks with wimpy nodes – is the big shift in the physical world which matches the equally big shift to cloud computing in the virtualised world. Ubuntu’s popularity in the cloud comes in part from being leaner, faster, more agile. And MAAS – Metal as a Service – is bringing that agility back to the physical world for hyperscale deployments.

Servers used to aspire to being expensive. Powerful. Big. We gave them names like “Hercules” or “Atlas”. The bigger your business, or the bigger your data problem, the bigger the servers you bought. It was all about being beefy – with brands designed to impress, like POWER and Itanium.

Things are changing.

Today, server capacity can be bought as a commodity, based on the total cost of compute: the cost per teraflop, factoring in space, time, electricity. We can get more power by adding more nodes to our clusters, rather than buying beefier nodes. We can increase reliability by doubling up, so services keep running when individual nodes fail. Much as RAID changed the storage game, this scale-out philosophy, pioneered by Google, is changing the server landscape.

In this hyperscale era, each individual node is cheap, wimpy and, by historical standards for critical computing, unreliable. But together, they’re unstoppable. The horsepower now resides in the cluster, not the node. Likewise, the reliability of the infrastructure now depends on redundancy, rather than heroic performances from specific machines. There is, as they say, safety in numbers.

We don’t even give hyperscale nodes proper names any more – witness “node-0025904ce794”. Of course, you can still go big with the cluster name. I’m considering “Mark’s Magnificent Mountain of Metal” – significantly more impressive than “Mark’s Noisy Collection of Fans in the Garage”, which is what Claire will probably call it. And that’s not the knicker-throwing kind of fan, either.

The catch to this massive multiplication in node density, however, is in the cost of provisioning. Hyperscale won’t work economically if every server has to be provisioned, configured and managed as if it were a Hercules or an Atlas. To reap the benefits, we need leaner provisioning processes. We need deployment tools to match the scale of the new physical reality.

That’s where Metal as a Service (MAAS) comes in. MAAS makes it easy to set up the hardware on which to deploy any service that needs to scale up and down dynamically – a cloud being just one example. It lets you provision your servers dynamically, just like cloud instances – only in this case, they’re whole physical nodes. “Add another node to the Hadoop cluster, and make sure it has at least 16GB RAM” is as easy as asking for it.
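
The exact mechanics vary with the MAAS version and client you use, but the shape of the request is roughly as follows – a hypothetical Python sketch in which the endpoint URL, the "acquire" operation and the "mem" constraint are illustrative placeholders, and the OAuth signing a real call would need is omitted:

# Hypothetical sketch: ask a MAAS region controller for a node with at
# least 16GB of RAM.  The URL, operation name and constraint key below are
# placeholders, and real API calls would also need OAuth authentication.
import requests

MAAS_URL = "http://maas.example.com/MAAS/api/1.0/nodes/"

def acquire_node(min_ram_mb=16384):
    resp = requests.post(MAAS_URL, data={"op": "acquire", "mem": str(min_ram_mb)})
    resp.raise_for_status()
    node = resp.json()
    print("Acquired node:", node.get("system_id"))
    return node

if __name__ == "__main__":
    acquire_node()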

With a simple web interface, you can add, commission, update and recycle your servers at will. As your needs change, you can respond rapidly, by adding new nodes and dynamically re-deploying them between services. When the time comes, nodes can be retired for use outside the MAAS.

As we enter an era in which ATOM is as important in the data centre as XEON, an operating system like Ubuntu makes even more sense. Its freedom from licensing restrictions, together with the labour-saving power of tools like MAAS, makes it cost-effective, finally, to deploy and manage hundreds of nodes at a time.

Here’s another way to look at it: Ubuntu is bringing cloud semantics to the bare metal world. What a great foundation for your IaaS.

Barry Warsaw

Gentoo No More

Today I finally swapped my last Gentoo server for an Ubuntu 10.04 LTS server. Gentoo has served me well over these many years, but with my emerge updates growing to several pages (meaning, I was waaaay behind on updates with almost no hope of catching up), it was long past time to switch. I'd moved my internal server over to Ubuntu during the Karmic cycle, but that was a much easier switch. This one was tougher because I had several interdependent externally facing services: web, mail, sftp, and Mailman.

The real trick to making this go smoothly was to set up a virtual machine in which to install, configure and progressively deploy the new services. My primary desktop machine is a honkin' big i7-920 quad-core Dell with 12GB of RAM, so it's perfectly suited for running lots of VMs. In fact, I have several Ubuntu, Debian and even Windows VMs that I use during my normal development of Ubuntu and Python. However, once I had the new server ready to go, I wanted to be able to quickly swap it into the real hardware. So I purchased a 160GB IDE drive (since the h/w it was going into was too old to support SATA, but still perfectly good for a simple Linux server!) and a USB drive enclosure. I dropped the new disk into the enclosure, mounted it on the Ubuntu desktop and created a virtual machine using the USB drive as its virtio storage.
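
If you want to try something similar, the exact virtualisation front-end doesn't matter much; as a rough sketch of the idea – a VM whose disk is the raw USB-enclosed drive, presented to the guest as virtio storage – something along these lines with plain QEMU/KVM would do (the device path and ISO name are placeholders):

#!/usr/bin/env python3
# Rough sketch (not necessarily the exact setup used here): boot an installer
# VM whose disk is the raw USB-enclosed drive, exposed as virtio storage.
# The device path and ISO file name are placeholders.
import subprocess

USB_DISK = "/dev/sdb"                       # the drive in the USB enclosure
INSTALL_ISO = "ubuntu-10.04-server-amd64.iso"

subprocess.run([
    "qemu-system-x86_64",
    "-enable-kvm",                          # hardware virtualisation
    "-m", "2048",                           # 2GB of RAM for the guest
    "-drive", "file=%s,if=virtio,format=raw" % USB_DISK,
    "-cdrom", INSTALL_ISO,
    "-boot", "d",                           # boot the installer CD first
    "-net", "nic,model=virtio", "-net", "user",
], check=True)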

It was then a pretty simple matter of installing Ubuntu 10.04 on this USB drive-backed VM, giving the VM an IP address on my local network, and installing all the services I wanted. I could even register the VM with Landscape to easily keep it up to date as I took my sweet time doing the conversion. There were a few tricky things to keep in mind:

  • I use a port forwarding border router to forward packets from my static external IP address to the appropriate server on my internal network. As I prepared to move each service, I first shut the service off on the old server, twiddled the port forwarding to point to a bogus IP, then tested the new service internally before pointing the real port forward to the new service. This way, for example, I had reasonably good confidence that my SMTP server was configured properly before loosing the fire hose of the intarwebs on it.
  • I host several domains on my server, so of course my Apache uses NameVirtualHosts. The big downside here is that the physical IP address is used, so I had to edit all the configs over to the temporary IP address of the VM, then back again to the original IP of the server, once the switch was completed.
  • My old server used a fairly straightforward iptables configuration, but in Ubuntu, UFW seems to be the norm. Again, I use IP addresses in the configuration, so these had to be changed twice during the migration (see the sketch after this list for a small helper that takes some of the tedium out of the double edit).
  • /etc/hosts and /etc/hostname both had to be tweaked after the move, since while living in a VM the host was called something different than it would be in its final destination. Landscape also had to be reconfigured (see landscape-config(8)).
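
Since the Apache and UFW configurations both had to point at one address while staging in the VM and another after the swap, a small throwaway helper takes some of the tedium out of the double edit. A rough sketch (the file paths and addresses are placeholders, not my real ones):

#!/usr/bin/env python3
# Throwaway helper: rewrite one IP address to another across config files.
# The file paths and addresses are placeholders.
import fileinput
import sys

CONFIG_FILES = [
    "/etc/apache2/sites-available/example.com",
    "/etc/ufw/before.rules",
]

def swap_ip(old_ip, new_ip, files=CONFIG_FILES):
    """Rewrite each file in place (keeping a .bak copy), replacing old_ip."""
    for line in fileinput.input(files, inplace=True, backup=".bak"):
        sys.stdout.write(line.replace(old_ip, new_ip))

if __name__ == "__main__":
    # e.g. flip from the VM's temporary address to the server's real one
    swap_ip("192.168.1.50", "192.168.1.2")
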
You get the picture. All in all, it was tedious rather than difficult. One oddness was that when the machine was a Gentoo box, the ethernet port was eth0, but after the conversion to Ubuntu it became eth1 (presumably because udev's persistent-net rules had recorded the VM's virtio NIC as eth0, leaving the server's physical NIC to be named eth1). Once I figured that out, it was easy to fix networking. There were a few other little things like updates to my internal DNS, pointing my backup server to the new locations of the web and Mailman data on the new server (Ubuntu is more FHS-compliant than my ancient Gentoo layout), and I'm sure a few other little things I forgot to take notes on.

Still, the big lesson here was that by bouncing the services to a USB drive-backed VM, I was able to fairly easily drop the new disk into the old server for a quick and seamless migration to an entirely new operating system.
