Canonical Voices

Posts tagged with 'kvm'

Marcin Bednarz

MAAS - bare metal provisioning with a top-of-the-rack switch - remote location server set-up

Use bare metal provisioning with a top-of-the-rack switch

When deploying a small footprint environment such as an edge computing site, a 5G low-latency service, a site support cabinet or a baseband unit, it's critical to establish the optimal number of physical servers needed for the set-up. While several approaches exist, bare metal provisioning through KVM can often be the most reliable option. Here’s why.

For every physical server in such a constrained physical environment, there is an associated cost.

In the case of an edge deployment, this cost can be measured in (among other properties):

  • Capital and operational expenses
  • Power usage
  • Dissipated heat
  • The actual real estate it occupies

Ways to set up servers in remote locations

One approach would be to have a dedicated server shipped to every remote location to act as an infrastructure node. Typically this would require an additional node (or committed shared resources) which might not align with remote site footprint constraints.

Another option is stretching the provisioning and management network across the WAN and provisioning all the servers from a central location. This approach might, however, introduce unnecessary latency and delays in server provisioning. It also requires quite sophisticated network configuration to account for the security, reliability and scale of remote site deployments.

So what other options exist? What is the common infrastructure component always present in every remote location? The answer is quite straightforward – every single site needs basic network connectivity through a top-of-the-rack/site switch. It’s this critical component that enables servers to communicate with the rest of the network and host required functions such as application servers, VNFs, and container and virtualisation platforms.

How do I re-purpose nodes to provision different operating systems?

Modern switches can run Linux as their underlying operating system, enabling infrastructure operators to run applications directly on these top-of-the-rack devices either through KVM or snaps support.

A great example of a workload that can run on a top-of-the-rack switch is a bare metal provisioning solution such as MAAS. By deploying MAAS we can solve the system provisioning challenge without unnecessary complexity. Running a lightweight version of MAAS on a top-of-the-rack switch reduces friction in small footprint environments and provides an open, API-driven way to provision and repurpose nodes in every remote location. This not only enables fast and efficient server provisioning but also eliminates the drawbacks of the alternatives mentioned above.
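
As a rough illustration of that API-driven workflow, here is a minimal MAAS CLI session; the profile name, URL, key and system ID are placeholders, and the exact commands may vary between MAAS releases:

## Log in to the MAAS API (URL and key are placeholders)
$ maas login admin http://<maas-ip>:5240/MAAS/ <api-key>
## Allocate a free machine, then deploy an OS onto it
$ maas admin machines allocate
$ maas admin machine deploy <system-id> distro_series=bionic
## Release the machine later so it can be repurposed
$ maas admin machine release <system-id>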

Contact us to learn more

Read more
Dustin Kirkland

652 Linux containers running on a Laptop?  Are you kidding me???

A couple of weeks ago, at the OpenStack Summit in Vancouver, Canonical released the results of some scalability testing of Linux containers (LXC) managed by LXD.

Ryan Harper and James Page presented their results -- some 536 Linux containers on a very modest little Intel server (16GB of RAM), versus 37 KVM virtual machines.

Ryan has published the code he used for the benchmarking, and I've used it to reproduce the test on my dev laptop (Thinkpad x230, 16GB of RAM, Intel i7-3520M).

I managed to pack a whopping 652 Ubuntu 14.04 LTS (Trusty) containers on my Ubuntu 15.04 (Vivid) laptop!


The system load peaked at 1056 (!!!), but I was using merely 56% of 15.4GB of system memory.  Amazingly, my Unity desktop and Byobu command line were still perfectly responsive, as were the containers that I ssh'd into.  (Aside: makes me wonder if the Linux system load average is accounting for container processes correctly...)


Check out the process tree for a few hundred system containers here!

As for KVM, I managed to launch 31 virtual machines without KSM enabled, and 65 virtual machines with KSM enabled and working hard.  So that puts the density at somewhere between 10x and 21x as many containers as virtual machines on the same laptop.
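
For the curious, KSM is toggled through the standard sysfs interface rather than anything specific to these benchmark scripts:

## Enable KSM so the kernel merges identical guest memory pages
$ echo 1 | sudo tee /sys/kernel/mm/ksm/run
## See how many pages are currently being shared
$ cat /sys/kernel/mm/ksm/pages_sharing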

You can now repeat these tests, if you like.  Please share your results with #LXD on Google+ or Twitter!

I'd love to see someone try this in AWS, anywhere from an m3.small to an r3.8xlarge, and share your results ;-)

Density test instructions

## Install lxd
$ sudo add-apt-repository ppa:ubuntu-lxc/lxd-git-master
$ sudo apt-get update
$ sudo apt-get install -y lxd bzr
$ cd /tmp
## At this point, it's a good idea to logout/login or reboot
## for your new group permissions to get applied
## Grab the tests, disable the tools download
$ bzr branch lp:~raharper/+junk/density-check
$ cd density-check
$ mkdir lxd_tools
## Periodically squeeze your cache
$ sudo bash -x -c 'while true; do sleep 30; \
echo 3 | sudo tee /proc/sys/vm/drop_caches; \
free; done' &
## Run the LXD test
$ ./density-check-lxd --limit=mem:512m --load=idle release=trusty arch=amd64
## Run the KVM test
$ ./density-check-kvm --limit=mem:512m --load=idle release=trusty arch=amd64

As for the speed-of-launch test, I'll cover that in a follow-up post!

Can you contain your excitement?

Cheers!
Dustin

Read more
Colin Ian King

I've been using QEMU and KVM for quite a while now for general kernel testing, for example, sanity checking eCryptfs and Ceph.  It can be argued that the best kind of testing is performed on real hardware; however, there are times when it is much more convenient (and faster) to exercise kernel fixes on a virtual machine.

I used to use command line incantations to run QEMU and KVM, but recently I've moved over to using virt-manager because it is so much simpler to use and caters for most of my configuration needs.
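
To give a flavour of what virt-manager replaces, a hand-rolled QEMU/KVM invocation for a single guest looks something like the sketch below (the image name, memory size and CPU count are made up):

## Boot a guest with 2GB RAM, 2 vCPUs, user-mode networking and a VNC console
$ qemu-system-x86_64 -enable-kvm \
    -m 2048 -smp 2 \
    -drive file=natty-server.img,format=qcow2 \
    -netdev user,id=net0 -device virtio-net-pci,netdev=net0 \
    -vnc :1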

Virt-manager provides a very usable GUI and allows one to create, manage, clone and destroy virtual machine instances with ease.

virt-manager view of virtual machines
Each virtual machine can be easily reconfigured in terms of CPU configuration (number and type of CPUs), memory size, boot options, disk and CD-ROM selection, NIC selection, display server (VNC or Spice), sound device, serial port config, video hardware, and USB and IDE controller config.

One can add and remove additional hardware, such as serial ports, parallel ports, USB and PCI host devices, watchdog controllers and much more besides.

Configuring a virtual machine

So reconfiguring a test from a single-core CPU to multi-core is a simple case of shutting down the virtual machine, bumping up the number of CPUs and booting up again.
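
The same reconfiguration can be scripted with virsh if you prefer the command line; a minimal sketch, assuming a domain called testvm:

## Shut the guest down, raise the vCPU limit and count, then boot again
$ virsh shutdown testvm
$ virsh setvcpus testvm 4 --maximum --config
$ virsh setvcpus testvm 4 --config
$ virsh start testvm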

By default one can view the virtual machine's console via a VNC viewer in virt-manager and there is provision to scale the screen to the window size, set to full size or resize the virt-manager window to the screen size.  For ease of use, I generally just ssh into the virtual machines and ignore the console unless I can't get the kernel to boot.

virt-manager viewing a 64 bit Natty server (for eCryptfs testing)
Virt-manager is a great tool and well worth giving a spin. For more information on virt-manager visit virt-manager.org

Read more
Nick Barcet

Six months after starting a private beta for HPCloud, HP has announced this week that their cloud is ready to start scaling up to a public beta next month.  This is a major milestone for HPCloud which coincides with two major events: the release of OpenStack Essex last week and the upcoming release of Ubuntu Server 12.04 LTS at the end of this month.  These two components are the foundation that HP uses to build its public cloud offering, on which they bring their own set of enhancements.

HPCloud is built on top of Ubuntu Server and uses the built-in KVM hypervisor to power OpenStack compute nodes.  HP’s OpenStack deployment includes all core components of Essex, including the new central authentication service, Keystone, which provides unified login for all components of OpenStack.

We are proud that Ubuntu and our support services are at the heart of this public cloud deployment which is one more proof point that Ubuntu and OpenStack are ready for business.

Read more
mandel

This is here for me to remember the next time I need to do this task:

  1. Copy the default pool definition:

    virsh pool-dumpxml default > pool.xml
  2. Edit pool.xml, changing the following variables:

    <pool type='dir'>
      <name>{$name}</name>
      <uuid>{$id}</uuid>
      <capacity>43544694784</capacity>
      <allocation>30412328960</allocation>
      <available>13132365824</available>
      <source>
      </source>
      <target>
        <path>{$path}</path>
        <permissions>
          <mode>0700</mode>
          <owner>-1</owner>
          <group>-1</group>
        </permissions>
      </target>
    </pool>
  3. virsh pool-create pool.xml
  4. virsh pool-refresh {$name}

Doing the above you can add a new pool, for example one that is not on your SSD.
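
As an aside, virsh can also define a directory pool in one step; a minimal sketch using the same placeholders, with pool-define-as used so that, unlike the transient pool created above, the pool survives a libvirt restart:

    virsh pool-define-as {$name} dir --target {$path}
    virsh pool-build {$name}
    virsh pool-start {$name}
    virsh pool-autostart {$name}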

Read more
mandel

I use KVM daily for testing Ubuntu One on Windows. Recently I created a Vista VM with 20GB thinking that it was going to be big enough; turns out that after installation I had a single GB left (WTF!). Not wanting to have to go through the painful installation process again, I decided to find out how to resize a KVM disk image. Here are the steps if you have to do the same:

  1. Create a new image with the extra size

    sudo qemu-img create -f raw addon.raw 30G
  2. Add the new data; my old VM image is called carnage_old.img. Do remember the order is important, otherwise the image won’t boot.

    cat carnage_old.img addon.raw > carnage.img
  3. Create a new VM that uses the new image and resize the hard drive accordingly. For example, on Windows Vista I had to go to Management Tools and resize the C: partition to use the new 30GB.
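
As an alternative to the cat trick, newer versions of qemu-img can grow an image in place (you still resize the partition inside the guest afterwards, exactly as in step 3). A minimal sketch, assuming the same image name:

    sudo qemu-img resize carnage.img +30G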

I hope it helps!

Read more