Canonical Voices

David Planella

We’re thrilled to announce yet another significant milestone in the history of the Ubuntu project. After having recently unveiled the Ubuntu Touch Developer Preview, today we’re publishing the full source code and images for supported devices.

For developers and enthusiasts only

While a huge amount of Engineering and Design work has been put into ensuring that the foundations for our user experience vision are in place, we want to stress that the Ubuntu Touch Developer Preview is currently a work in progress. We are releasing the full code at this point to align with our philosophy of transparency and open source development.

We recommend installing the Touch Developer Preview only if you are a developer or enthusiast who wants to test or contribute to the platform. It is not intended to replace your production device, the tablet or handset you use every day.

Flash your device

All that said, let’s get on to how to install Touch Developer Preview from a public image on your device.

What to expect after flashing

Not all functionality from a production device is yet available on the Touch Preview. The functionality you can expect after installing the preview on your handset or tablet is as follows. For detailed information, check the release notes.

  • Shell and core applications
  • Connection to the GSM network (on Galaxy Nexus and Nexus 4)
  • Phone calls and SMS (on Galaxy Nexus and Nexus 4)
  • Networking via Wifi
  • Functional camera (front and back)
  • Device connectivity through the Android Developer Bridge tool (adb)

Supported devices

The images we are making available today support the following devices:

  • Galaxy Nexus
  • Nexus 4
  • Nexus 7
  • Nexus 10

I’m all set, show me how to flash!

You will find the detailed instructions to flash on the Ubuntu wiki.

Install the Touch Developer Preview >

Contributing and the road ahead

These are exciting times for Ubuntu. We’re building the technology of the future, this time aiming at a whole new level of massive adoption. The Touch Developer Preview is the first fully open source mobile OS that is also developed in the open. True to our principles, this milestone also enables our community of developers to contribute and be a key part of this exciting journey.

In terms of the next steps, today we’re making the preview images available for the Ubuntu 12.10 stable release. In the next few days we’re going to switch to Raring Ringtail, our development release, which is where development will happen on the road to our convergence story.

You’ll find the full details of how the infrastructure and the code are being published and used on the Ubuntu wiki.

Contribute to the Touch Developer Preview >

Presenting the Ubuntu SDK Alpha

But there’s more! To further celebrate the Touch Preview, we’re very proud to bring some exciting news that app developers will surely enjoy: the Ubuntu SDK Alpha release.

Development of the SDK keeps happening in the open and on a rolling-release basis, but coinciding with the Touch Developer Preview, we thought the latest release came with so much goodness that we decided to label it in celebration.

Feature highlight: remote app deployment

Perhaps the coolest feature since the SDK was first released: you can now deploy and execute the apps you create straight from the IDE.

Applications developed with Qt Creator can now be seamlessly and securely transferred to and executed on a device with a single two-finger shortcut. Remember it: Ctrl+F12.

In line with how easy and lightweight the process of creating a phone app is, a lot of work has gone into hiding all complexity from the developer while keeping it rock solid. Behind the scenes, SSH key pairing with the remote device happens on the fly.

Here’s the lowdown:

  1. Plug in your mobile device running Ubuntu on the USB port of your computer
  2. Make sure your device is also connected to a wireless network (SSH key pairing happens over the air)
  3. Start Qt Creator from the Dash, and select the new Devices tab
  4. Press the Enable button to activate Developer Mode
  5. Once the device is connected, you can develop your QML projects as usual (check out the new project wizard as well) and press Ctrl+F12 to install and execute your app on the remote device
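
If you want to sanity-check the USB connection before firing up Qt Creator, the adb tool mentioned in the feature list above can confirm the device is visible. A quick check from a terminal (assuming adb is installed on your desktop) looks like this:

adb devices
# The handset or tablet should be listed with a serial number and the state "device"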

Tooling updates

With Qt Creator at its heart, the set of tools app developers use every day to author their software has seen major improvements:

  • Qt Creator has been updated to the bleeding edge version: 2.7. We expect this version to continue maturing together with the platform and the SDK.
  • Ubuntu application templates and a wizard are now available to easily start creating apps that run on phones and tablets.
  • The visual user interface designer in Qt Creator now works with QtQuick 2, the framework upon which the Ubuntu SDK is based.

User Interface Toolkit updates

The UI Toolkit is the part of the SDK that provides the graphical components (such as buttons, text entries, and others) as building blocks that enable the basic user interaction with the underlying system. A new component, polishing and bug fixing have set the theme for this release:

Install the Ubuntu SDK Alpha

By now we’re pretty certain you’re looking forward to installing and putting all of that development goodness to the test.

That’s an easy one: if you haven’t yet, install the Ubuntu SDK.

If you already installed the SDK, just run Update Manager from the Dash and update the Ubuntu SDK package as prompted. Alternatively, if you prefer the command line, fire up a terminal and run ‘sudo apt-get update && sudo apt-get install ubuntu-sdk’.

And that’s pretty much it! Be sure to check out the release notes for any additional technical details too.

Let us know what you think

We’d be delighted to hear what you think and get your feedback on how you are using the SDK and ways in which it could be improved. So do get in touch with us or report a bug if you find any issues.

Time to start developing beautiful apps now!

Read more
David Planella

We’re thrilled to announce one of the most anticipated resources for Ubuntu app developers: the App Design Guides.

The App Design Guides site is the first installment of a live resource that will organically grow to provide guidance and enable app developers to build stunning, consistent and usable applications on a diversity of Ubuntu devices.

Together with the Ubuntu SDK preview, the App Design Guides complete yet another chapter in the Ubuntu app developer story. Developers now have the tools to create beautiful software, along with all the information related to UX, behaviour, patterns and visual design to ensure their apps provide a solid, clean and enjoyable user experience.

And consistent with the Ubuntu philosophy and our beliefs, all of these tools and guides are available to everyone as open source and for free.

Show me the Ubuntu App Design Guides! ›

Updating the core app designs for App Design Guides compliance

We have recently kicked off a community-driven process to design and implement a set of 12 core apps for Ubuntu running on phones. The first stage of the project consisted of asking community members to submit designs to be used as input and food for thought for the core app developers.

The response so far has been overwhelming: over 50 community designers signed up for this initiative, submitting nearly 90 mockups on the Ubuntu MyBalsamiq site we set up for this project.

Following the App Design Guides going live, now is a great opportunity to ensure those designs follow the guidelines for a consistent app experience on Ubuntu. Therefore, we’d like to ask everyone who submitted a design to review and update their submissions to make sure they are in line with the App Design Guides.

Reminder: if you want to collaborate in this design project, just drop an e-mail to David Planella <david(dot)planella(at)canonical(dot)com> and Michael Hall <michael(dot)hall(at)canonical(dot)com>.

Open design and collaboration

Continuing with the trend of open and collaborative design, we want to hear from you!

The Guides are a resource that will grow together with the needs of app developers, so we’ll greatly appreciate your feedback on the Ubuntu Phone mailing list (remember to prepend the subject with [Design]) and if you’ve got any questions about them, just ask on Ask Ubuntu.

Stay tuned for updates and for some visual designs for core apps from the Canonical Design team coming soon!

Read more
Stéphane Graber

NorthSec 2013

NorthSec logo

So, when I’m not busy working on Ubuntu, or on LXC, or on Edubuntu, or … I also spend some of my spare time preparing the upcoming NorthSec 2013 security contest which will be held from Friday the 5th of April to Sunday the 7th of April at ETS in downtown Montreal.

NorthSec can be seen as the successor of HackUS 2010 and HackUS 2011, both of which were held where I currently live, in Sherbrooke, QC. This year, we’re moving to Montreal, in the hope of attracting more people, especially from other Canadian provinces and from abroad.

I’m personally mostly involved with the internal infrastructure side of things, building the Ubuntu based infrastructure required to simulate the hundreds of servers and services used for the contest. All of that while making sure everything is rock solid and copes extremely well under pressure (considering what our contestants tend to throw at us).

I also usually get involved with some of the tracks, mostly the networking one, trying to think of really twisted setups ranging from taking over an active IPv6 network to hijacking IPs by messing with a badly configured BGP router (taken from past editions).

Outside of our twisted network challenges, we have quite a few more things to offer; here’s the current list of tracks for this year:

  • Trivias (they seem easy but people are known to have wasted hours on them)
  • Web (SQL injection, XSS anyone?)
  • Binaries (because we know you love those)
  • Networking (my track of choice)
  • Reverse Java

And if anyone manages to finish everything, don’t worry, we’ll come up with more.
As far as I know, we never had a single team get bored in the past two editions ;)

So if you’re interested in computer security, want to try to prove how good you are at finding security flaws and exploiting them or just want to see what that thing is all about, well you should consider a trip to Montreal in early April.
All the details you need are at: http://www.nsec.io/en

If you are a company interested in helping us with sponsorship, I hear that we’re always looking for more sponsors. So if that’s something you can help with, feel free to contact me directly at: stgraber at nsec dot io

Read more
Stéphane Graber

Anyone who has met me probably knows that I like to run everything in containers.

A couple of weeks ago, I was attending the Ubuntu Developer Summit in Copenhagen, DK where I demoed how to run OpenGL code from within an LXC container. At that same UDS, all attendees also received a beta key for Steam on Linux.

Yesterday I finally received said key by e-mail and I’ve been experimenting with Steam a bit. Now, my laptop is running the development version of Ubuntu 13.04 and only has 64bit binaries. Steam is 32bit-only and Valve recommends running it on Ubuntu 12.04 LTS.

So I just spent a couple of hours writing a tool called steam-lxc which uses LXC’s new python API and a bunch more python magic to generate an Ubuntu 12.04 LTS 32bit container, install everything that’s needed to run Steam, install Steam itself and configure a few tricks to get direct GPU access and access to pulseaudio for sound.

All in all, it only takes 3 minutes for the script to set up everything I need to run Steam and then start it.

Here’s a (pretty boring) screencast of the script in action:

This script has only been tested with Intel hardware on Ubuntu 13.04 64bit at this point, but the PPA contains builds for Ubuntu 12.04 and Ubuntu 12.10 too.

To get it on your machine just do:

  • sudo apt-add-repository ppa:ubuntu-lxc/stable
  • sudo apt-get update
  • sudo apt-get install steam-lxc
  • sudo mkdir -p /var/lib/lxc /var/cache/lxc

Then once that’s all installed, set it up with sudo steam-lxc create. This can take anywhere from 5 minutes to an hour depending on your internet connection.

And once the environment is all setup, you can start steam with sudo steam-lxc run.

The code can be found at: https://code.launchpad.net/~ubuntu-lxc/lxc/steam-lxc

You can leave your feedback as a comment here and if you want to improve the script, merge proposals are more than welcome.
I don’t have any hardware requiring proprietary drivers but I’d expect Steam to fail on such hardware as the drivers won’t get properly installed in the container. Adding code to deal with those is pretty easy and I’d love to get some patches for that!

Have fun!

Read more
Stéphane Graber

(tl;dr: Edubuntu 14.04 will include a new Edubuntu Server and Edubuntu tablet edition with a lot of cool new features, including a full-featured Active Directory compatible domain.)

Now that Edubuntu 12.10 is out the door and the Ubuntu Developer Summit in Copenhagen is just a week away, I thought it’d be an appropriate time to share our vision for Edubuntu 14.04.

This was so far only discussed in person with Jonathan Carter and a bit on IRC with other Edubuntu developers but I think it’s time to make our plans a bit more visible so we can get more feedback and hopefully get interested people together next week at UDS.

There are three big topics I’d like to talk about. Edubuntu desktop, Edubuntu server and Edubuntu tablet.

Edubuntu desktop

Edubuntu desktop is what we’ve been offering since the first Edubuntu release and what we’ll obviously continue to offer pretty much as it is today.
It’s not an area I plan on spending much time working on personally, but I expect Jonathan to drive most of the work around this.

Basically, what the Edubuntu desktop needs nowadays is a better application selection, better testing, better documentation, and making sure our application selection works on all our supported platforms and is properly translated.

We’ll also have to refocus some of our efforts and will likely drop some things like our KDE desktop package that hasn’t been updated in years and was essentially doubling our maintenance work which is why we stopped supporting it officially in 12.04.

There are a lot of cool new tools we’ve heard of recently and that really should be packaged and integrated in Edubuntu.

Edubuntu Server

Edubuntu Server will be a new addition to the Edubuntu project, expected to ship in its final form in 14.04 and will be supported for 5 years as part of the LTS.

This is the area I’ll be spending most of my Edubuntu time on as it’s going to be using a lot of technologies I’ve been involved with over the years to offer what I hope will be an amazing server experience.

Edubuntu Server will essentially let you manage a network of Edubuntu, Ubuntu or Windows clients by creating a full featured domain (using samba4).

From the same install DVD as Edubuntu Desktop, you’ll be able to simply choose to install a new Edubuntu Server and create a new domain, or if you already have an Edubuntu domain or even an Active Directory domain, you’ll be able to join an extra server to add extra scalability or high availability.

On top of that core domain feature, you’ll be able to add extra roles to your Edubuntu Server, the initial list is:

  • Web hosting platform – Will let you deploy new web services using JuJu so schools in your district or individual teachers can easily get their own website.
  • File server – A standard samba3 file server so all your domain members can easily store and retrieve files.
  • Backup server – Will automatically backup the important data from your servers and if you wish, from your clients too.
  • Schooltool – A school management web service, taking care of all the day to day school administration.

LTSP will also be part of that system as part of Edubuntu Terminal Server which will let you, still from our single install media, install as many new terminal servers as you want, automatically joining the domain, using the centralized authentication, file storage and backup capabilities of your Edubuntu Server.

As I mentioned, the Edubuntu DVD will let you install Edubuntu Desktop, Edubuntu Server and Edubuntu Terminal Server. You’ll simply be asked at installation time whether you want to join an Edubuntu Server or Active Directory domain or if you want your machine to be standalone.

Once installed, Edubuntu Server will be managed through a web interface driving LXC behind the scene to deploy new services, upgrade individual services or deploy new web services using JuJu.
Our goal is to have Edubuntu Server offer an appliance-like experience, never requiring any command line access to the system and easily supporting upgrades from one version to another.

For those wondering what the installation process will look like, I have some notes of the changes available at: http://paste.ubuntu.com/1289041/
I’m expecting to have the installer changes implemented by the time we start building our first 13.04 images.

The rest of Edubuntu Server will be progressively landing during the 13.04 cycle with an early version of the system being released with Edubuntu 13.04, possibly with only a limited selection of roles and without initial support for multiple servers and Active Directory integration.

While initially Edubuntu branded, our hope is that this work will be re-usable by Ubuntu and may one day find its way into Ubuntu Server.
Doing this as part of Edubuntu will give us more time and more flexibility to get it right, build a community around it and get user feedback before we try to get the rest of the world to use it too.

Edubuntu Tablet

During the Edubuntu 12.10 development cycle, the Edubuntu Council approved the sponsorship of 5 tablets by Revolution Linux which were distributed to some of our developers.

We’ve been doing daily armhf builds of Edubuntu, refined our package selections to properly work on ARM and spent countless hours fighting to get our tablet to boot (a ZaTab from ZaReason).
Even though it’s been quite a painful experience so far, we’re still planning on offering a supported armhf tablet image for 14.04, running something very close to our standard Edubuntu Desktop and also featuring integration with Edubuntu Server.

With all the recent news about Ubuntu on the Nexus 7, we’ll certainly be re-discussing what our main supported platform will be during next week’s UDS but we’re certainly planning on releasing 13.04 with experimental tablet support.

LTS vs non-LTS

For those who read our release announcement or visited our website lately, you certainly noticed the emphasis on using the LTS releases.
We really think that most Edubuntu users want something that’s stable, very well tested with regular updates and a long support time, so we’re now always recommending the use of the latest LTS release.

That doesn’t mean we’ll stop doing non-LTS releases like the Mythbuntu folks recently decided to do, far from it. What it means, however, is that we’ll experiment more freely in non-LTS releases so we can easily iterate through our ideas and make sure we release something well polished and rock solid for our LTS releases.

Conclusion

I’m really really looking forward to Edubuntu 14.04. I think the changes we’re planning will help our users a lot and will make it easier than ever to get school districts and individual schools to switch to Edubuntu for both their backend infrastructure with Edubuntu Server and their clients with Edubuntu Desktop and Edubuntu Tablet.

Now all we need is your ideas and if you have some, your time to make it all happen. We usually hang out in #edubuntu on freenode and can also be contacted on the edubuntu-devel mailing-list.

For those of you going to UDS, we’ll try to get an informational session on Edubuntu Server scheduled on top of our usual Edubuntu session. If you’re there and want to know more or want to help, please feel free to grab Jonathan or me in the hallway, at the bar or at one of the evening activities.

Read more
Stéphane Graber

One of our top goals for LXC upstream work during the Ubuntu 12.10 development cycle was reworking the LXC library, turning it from a private library mostly used by the other lxc-* commands into something that’s easy for developers to work with and accessible from other languages through bindings.

Although the current implementation isn’t complete enough to consider the API stable and some changes will still happen to it over the months to come, we have pushed the initial implementation to the LXC staging branch on github and put it into the lxc package of Ubuntu 12.10.

The initial version comes with a python3 binding packaged as python3-lxc; that’s what I’ll use now to give you an idea of what’s possible with the API. Note that as we don’t have full user namespaces support at the moment, any code using the LXC API needs to run as root.

First, let’s start with the basics, creating a container, starting it, getting its IP and stopping it:

#!/usr/bin/python3
import lxc
container = lxc.Container("my_container")
container.create("ubuntu", {"release": "precise", "architecture": "amd64"})
container.start()
print(container.get_ips(timeout=10))
container.shutdown(timeout=10)
container.destroy()

So, pretty simple.
It’s also possible to modify the container’s configuration using the .get_config_item(key) and .set_config_item(key, value) functions. For those keys supporting multiple values, a list will be returned and a list will be accepted as a value by .set_config_item.

Network configuration can be accessed through the .network property, which is essentially a list of all the network interfaces of the container; properties can be changed that way or through .set_config_item and saved to the config file with .save_config().
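
As a rough illustration of those calls (a sketch only: it assumes the “my_container” container from the example above already exists, and attribute names such as “link” may differ between LXC versions):

#!/usr/bin/python3
import lxc

container = lxc.Container("my_container")

# Read and change a single-value configuration key, then save it to disk
print(container.get_config_item("lxc.utsname"))
container.set_config_item("lxc.utsname", "my_container")
container.save_config()

# The .network property lists the container's interfaces; their
# properties can be changed in place and saved the same way
print(container.network[0].link)
container.network[0].link = "lxcbr0"
container.save_config()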

The API isn’t terribly well documented at this point; help messages are present for all functions but there’s no generated html help yet.

To get a better idea of the functions exported by the API, you may want to look at the API test script. This script uses all the functions and properties exported by the python module so it should be a reasonable reference.

Read more
Stéphane Graber

A few months ago, I received two test SIM cards for Orange Poland’s new IPv6 network.

The notable thing about this network is that it runs IPv6 in a fairly unusual configuration, and it was interesting to see how to get that working on Ubuntu.

This network uses two separate APNs, one for IPv4 (internet) and one for IPv6 (internetipv6).
Using two separate APNs is certainly easier on the carrier’s infrastructure side as they can get IPv6 online without actually changing anything on the IPv4 equipment; however, that means that any client wanting to use both protocols at once needs to use multiple PDP contexts.

I’m now going to detail how to manually configure ppp to connect to such a network:
/etc/ppp/peers/orange

noauth
connect "/usr/sbin/chat -e -f /etc/ppp/peers/orange-connect"
/dev/ttyACM0
460800
+ipv6

/etc/ppp/peers/orange-connect

TIMEOUT 5
ABORT BUSY
ABORT 'NO CARRIER'
ABORT VOICE
ABORT 'NO DIALTONE'
ABORT 'NO ANSWER'
ABORT DELAYED
ABORT ERROR
'' \nAT
TIMEOUT 12
OK ATH
OK ATE1
OK 'AT+CGDCONT=1,"IP","internet"'
OK 'AT+CGDCONT=2,"IPV6","internetipv6"'
OK ATD*99#
CONNECT ""

Then all that’s needed is a good old:

pon orange

And a few seconds later, I’m getting the following on ppp0:

ppp0      Link encap:Point-to-Point Protocol  
          inet addr:87.96.119.169  P-t-P:10.6.6.6  Mask:255.255.255.255
          inet6 addr: 2a00:f40:2100:ac9:8c1e:da60:93e2:c234/64 Scope:Global
          inet6 addr: 2a00:f40:2100:ac9::1/64 Scope:Global
          inet6 addr: fe80::1/10 Scope:Link
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:23 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:3 
          RX bytes:354 (354.0 B)  TX bytes:767 (767.0 B)

This config should work for any mobile network using a similar setup (likely to become more and more popular as the various RIRs are running out of IPv4).

Sadly ModemManager/NetworkManager don’t support multiple PDP contexts yet, though it’s being discussed upstream, so we can hope to see something land soon.

Apparently support for multiple PDP contexts is also dependent on hardware. In my case, I’ve been using an “old” Nokia E51 over USB as I didn’t have any luck getting that to work with an Android phone. My Nokia N900 also worked but required a custom kernel to be installed first to properly handle IPv6.

Read more
Stéphane Graber

With the DNS changes in Ubuntu 12.04, most development machines running with libvirt and lxc end up running quite a few DNS servers.

These DNS servers work fine when queried from a system on their network, but aren’t integrated with the main dnsmasq instance and so won’t let you resolve your VMs and containers from outside of their respective networks.

One way to solve that is to install yet another DNS resolver and use it to redirect between the various dnsmasq instances. That can quickly become tricky to setup and doesn’t integrate too well with resolvconf and NetworkManager.

Seeing a lot of people wondering how to solve that problem, I took a few minutes yesterday to come up with an ssh configuration that lets you access your containers and VMs by name.

The result is the following, to add to your ~/.ssh/config file:

Host *.lxc
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  ProxyCommand nc $(host $(echo %h | sed "s/\.lxc//g") 10.0.3.1 | tail -1 | awk '{print $NF}') %p

Host *.libvirt
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  ProxyCommand nc $(host $(echo %h | sed "s/\.libvirt//g") 192.168.122.1 | tail -1 | awk '{print $NF}') %p

After that, things like:

  • ssh user@myvm.libvirt
  • ssh ubuntu@mycontainer.lxc

Will just work.

For LXC, you may also want to add a “User ubuntu” line to that config as it’s the default user for LXC containers on Ubuntu.
If you configured your bridges with a non-default subnet, you’ll also need to update the IPs or add more sections to the config.
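
As an illustration (the address below is made up), if your LXC bridge were moved to 10.0.4.0/24, the same section would simply point at the new dnsmasq address:

Host *.lxc
  User ubuntu
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  ProxyCommand nc $(host $(echo %h | sed "s/\.lxc//g") 10.0.4.1 | tail -1 | awk '{print $NF}') %p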

These entries also turn off StrictHostKeyChecking and discard host keys (UserKnownHostsFile /dev/null) as my VMs and containers are local to my machine (reducing the risk of MITM attacks) and tend to exist only for a few hours, to then be replaced by a completely different one with a different SSH host key. Depending on your setup, you may want to remove these lines.

Read more
~apw

The Internet has been alive with doomsaying since the IPv4 global address pool was parcelled out.  Now I do not subscribe to the view that the Internet is going to end imminently, but I do feel that if the technical people out there do not start playing with IPv6 soon then what hope is there for the masses?

In the UK getting native IPv6 is not a trivial task; only one ISP I can find seems to offer it and of course it is not the one I am with.  So what options do I have?  Well there are a number of different IPv6-over-IPv4 tunnelling techniques such as 6to4, but these seem to require the ability to handle the transition on your NAT router, not an option here.  The other is a proper 6in4 tunnel to a tunnel broker, but this needs an end-point.

I have a local server that makes a sensible anchor for such a tunnel.  Talking it round with those in the know I settled on getting a tunnel from Hurricane Electric (HE), a company which gives out tunnels to individuals for free and seems to have a local presence for their tunnel hosts.  HE even supply you with tools to cope with your endpoint having a dynamic address, which is handy.  So with an HE tunnel configuration in hand I set about making my backup server into my IPv6 gateway.

First I had to ensure that protocol 41 (the tunnelling protocol) was being forwarded to the appropriate host.  This is a little tricky as this required me to talk to the configurator for my wireless router.  With that passed on to my server I was able to start configuring the tunnel.

Following the instructions on my HE tunnel broker page, a simple cut-n-paste into /etc/network/interfaces added the new tunnel network device; a quick ifup and my server started using IPv6.  Interestingly my apt-cacher-ng immediately switched the backhaul of its incoming IPv4 requests over to IPv6, no configuration needed.
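
For reference, such a 6in4 stanza in /etc/network/interfaces generally looks something like this (all addresses below are placeholders; use the client and server values from your tunnel details page):

auto he-ipv6
iface he-ipv6 inet6 v4tunnel
        address 2001:db8:1:2::2
        netmask 64
        endpoint 203.0.113.1
        local 192.0.2.10
        ttl 255
        gateway 2001:db8:1:2::1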

Enabling IPv6 for the rest of the network was surprisingly easy.  I had to install and configure radvd with my assigned prefix.  It also passed out information on the HE DNS servers, prioritising IPv6 in DNS lookup results.  No changes were required for any of the client systems; well, other than enabling firewalls.  Win.
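
A minimal radvd.conf sketch for this kind of setup (the interface name, prefix and DNS address below are placeholders; substitute your LAN interface, your routed /64 and the resolvers listed on your tunnel broker page) might be:

interface eth0
{
        AdvSendAdvert on;
        prefix 2001:db8:1:2::/64
        {
                AdvOnLink on;
                AdvAutonomous on;
        };
        RDNSS 2001:db8::53
        {
        };
};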

Overall IPv6 is still not simple, as native IPv6 support is hard to obtain, but if you can get it onto your network the client side is working very well indeed.

Read more
Stéphane Graber

Quite a few people have been asking for a status update of LXC in Ubuntu as of Ubuntu 12.04 LTS. This post is meant as an overview of the work we did over the past 6 months and pointers to more detailed blog posts for some of the new features.

What’s LXC?

LXC is a userspace tool controlling the kernel namespaces and cgroup features to create system or application containers.

To give you an idea:

  • Feels like somewhere between a chroot and a VM
  • Can run a full distro using the “host” kernel
  • Processes running in a container are visible from the outside
  • Doesn’t require any specific hardware, works on all supported architectures

A libvirt driver for LXC exists (libvirt-lxc), however it doesn’t use the “lxc” userspace tool even though it uses the same kernel features.

Making LXC easier

One of the main focuses for 12.04 LTS was to make LXC dead easy to use. To achieve this, we’ve been working on a few different fronts, fixing known bugs and improving LXC’s default configuration.

Creating a basic container and starting it on Ubuntu 12.04 LTS is now down to:

sudo apt-get install lxc
sudo lxc-create -t ubuntu -n my-container
sudo lxc-start -n my-container

This will default to using the same version and architecture as your machine; additional options are obviously available (--help will list them). Login/password are ubuntu/ubuntu.

Another thing we worked on to make LXC easier to work with is reducing the number of hacks required to turn a regular system into a container down to zero.
Starting with 12.04, we don’t make any modifications to a standard Ubuntu system to get it running in a container.
It’s now even possible to take a raw VM image and have it boot in a container!

The ubuntu-cloud template also lets you get one of our EC2/cloud images and have it start as a container instead of a cloud instance:

sudo apt-get install lxc cloud-utils
sudo lxc-create -t ubuntu-cloud -n my-cloud-container
sudo lxc-start -n my-cloud-container

And finally, if you want to test the new cool stuff, you can also use juju with LXC:

[ ! -f ~/.ssh/id_rsa.pub ] && ssh-keygen -t rsa
sudo apt-get install juju apt-cacher-ng zookeeper lxc libvirt-bin --no-install-recommends
sudo adduser $USER libvirtd
juju bootstrap
sed -i "s/ec2/local/" ~/.juju/environments.yaml
echo " data-dir: /tmp/juju" >> ~/.juju/environments.yaml
juju bootstrap
juju deploy mysql
juju deploy wordpress
juju add-relation wordpress mysql
juju expose wordpress

# To tail the logs
juju debug-log

# To get the IPs and status
juju status

Making LXC safer

Another main focus for LXC in Ubuntu 12.04 was to make it safe. John Johansen did an amazing job of extending apparmor to let us implement per-container apparmor profiles and prevent most known dangerous behaviours from happening in a container.

NOTE: Until we have user namespaces implemented in the kernel and used by LXC we will NOT say that LXC is root safe; however, the default apparmor profile as shipped in Ubuntu 12.04 LTS blocks any harmful action that we are aware of.

This mostly means that write access to /proc and /sys is heavily restricted, and mounting filesystems is also restricted, only allowing known-safe filesystems to be mounted by default. Capabilities are also restricted in the default LXC profile to prevent a container from loading kernel modules or controlling apparmor.
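
If you want to check that a running container is actually confined, a rough way to do it from the host (profile and container names may differ on your system) is:

# List the loaded apparmor profiles and look for the LXC ones
sudo aa-status | grep -i lxc
# Show which profile a given container is configured to use
grep aa_profile /var/lib/lxc/my-container/config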

More details on this are available here:

Other cool new stuff

Emulated architecture containers

It’s now possible to use qemu-user-static with LXC to run containers of non-native architectures, for example:

sudo apt-get install lxc qemu-user-static
sudo lxc-create -n my-armhf-container -t ubuntu -- -a armhf
sudo lxc-start -n my-armhf-container

Ephemeral containers

Quite a bit of work also went into lxc-start-ephemeral, the tool letting you start a copy of an existing container using an overlay filesystem, discarding any changes you make on shutdown:

sudo apt-get install lxc
sudo lxc-create -n my-container -t ubuntu
sudo lxc-start-ephemeral -o my-container

Container nesting

You can now start a container inside a container!
For that to work, you first need to create a new apparmor profile as the default one doesn’t allow this for security reasons.
I already did that for you, so the few commands below will download it and install it in /etc/apparmor.d/lxc/lxc-with-nesting. This profile (or something close to it) will ship in Ubuntu 12.10 as an example of an alternate apparmor profile for containers.

sudo apt-get install lxc
sudo lxc-create -t ubuntu -n my-host-container
sudo wget https://www.stgraber.org/download/lxc-with-nesting -O /etc/apparmor.d/lxc/lxc-with-nesting
sudo /etc/init.d/apparmor reload
sudo sed -i "s/#lxc.aa_profile = unconfined/lxc.aa_profile = lxc-container-with-nesting/" /var/lib/lxc/my-host-container/config
sudo lxc-start -n my-host-container
(in my-host-container) sudo apt-get install lxc
(in my-host-container) sudo stop lxc
(in my-host-container) sudo sed -i "s/10.0.3/10.0.4/g" /etc/default/lxc
(in my-host-container) sudo start lxc
(in my-host-container) sudo lxc-create -n my-sub-container -t ubuntu
(in my-host-container) sudo lxc-start -n my-sub-container

Documentation

Outside of the existing manpages and blog posts I mentioned throughout this post, Serge Hallyn did a very good job at creating a whole section dedicated to LXC in the Ubuntu Server Guide.
You can read it here: https://help.ubuntu.com/12.04/serverguide/lxc.html

Next steps

Next week we have the Ubuntu Developer Summit in Oakland, CA. There we’ll be working on the plans for LXC in Ubuntu 12.10. We currently have two sessions scheduled:

If you want to make sure the changes you want will be in Ubuntu 12.10, please make sure to join these two sessions. It’s possible to participate remotely in the Ubuntu Developer Summit, through IRC and audio streaming.

My personal hope for LXC in Ubuntu 12.10 is to have a clean liblxc library that can be used to create bindings and be used in languages like python. Working towards that goal should make it easier to do automated testing of LXC and cleanup our current tools.

I hope this post made you want to try LXC or for existing users, made you discover some of the new features that appeared in Ubuntu 12.04. We’re actively working on improving LXC both upstream and in Ubuntu, so do not hesitate to report bugs (preferably with “ubuntu-bug lxc”).

Read more
Chris Johnston

Over the past few months a lot of work has been done on the Summit Scheduler. This all culminated this past week when I sent in an RT to update the production instance of Summit. This included somewhere in the neighborhood of 8,000 lines of code changed between Summit and the two themes that are run.

There are two major changes to Summit from this past week. The first is that Summit has been rethemed to meet the new design guidelines for Ubuntu. This work was completed with major assistance from Alexander Fougner, Stephen Williams, and Brandon Holtsclaw, who each put in numerous hours to make this happen.

The other major change, which affects the way you will work with Summit, is that you now have the ability to propose meetings in Summit, removing the requirement to create a blueprint. Also, as long as you’re using the new propose meeting feature in Summit, you no longer have to give your meeting a special name; this is now done automatically. I also created a video which will walk you through the process of proposing a meeting in Summit.

As always, please file bugs if you find any issues!

Read more
Chris Johnston

The Summit Hackers have been hard at work again creating new features to make Summit easier to use. However, we need your help testing these features before the public release. This is where you, the user, come in! I have created a Test Summit on my server and a test sprint in Launchpad, and here is where I need your help with testing:

First, visit the test sprint in Launchpad, then mark yourself as attending. Please note, it could take up to 20 minutes before you are able to use Summit after marking yourself as attending. You can attend either physically or remotely, it doesn’t matter. While you are there, take the time to create a couple of test meetings. Please be sure when you create these test meetings to follow the naming guidelines below.

Hey Chris, what are the new features? I am so glad you asked! The new features are:

For Attendees:

Summit now supports the ability for attendees to propose meetings right inside of Summit! On certain pages in Summit, you will now see an “Actions” area. Depending on what part of the cycle we are in, you may see different links. During the sponsorship application process, you will see links to apply for sponsorship. If you have the ability to review sponsorship applications, this is where you will find that link during that time period. During the scheduling phase of the cycle, you will be able to “Propose a Meeting.” What this means is that you will create a meeting in Summit, which will then go into a holding pattern waiting for a Track Lead or Summit Organizer to approve or deny your meeting. You may be familiar with this behavior, as it is very similar to what you may have seen with Launchpad.

For Track Leads, Schedulers, and Organizers:

If you are listed as a Track Lead, Scheduler, or Organizer in Summit, you will have the ability to “Create a Meeting” and “Review Proposed Meetings.” When creating a meeting, you have the ability to mark a meeting as Private. If a meeting is marked as private, you will have to get it scheduled through a scheduler (Michelle and Marianna for UDS, Arwen for Linaro Connect); it will not be automatically scheduled. Non-private meetings that are created by a Track Lead, Scheduler, or Organizer will be automatically scheduled when slots are available.

The other option that this group has is reviewing proposed meetings. There are three different options for the status, Approved, Pending, and Declined, which you get to by clicking the current status. Pending and Declined meetings show up on this page, while Approved meetings are auto-scheduled and do not show up here.

Known Issues:
* All tracks are shown in the track choice list, not just the ones for the current summit
* Edit meeting links don’t work
* Issues with the form theme

Creating Meetings through Launchpad

The good news is that the old way still works! You will still be able to create meetings through Launchpad Blueprints just like before. There are a few things you will need to be aware of as you create meetings this way; these are as follows:

Tracks:

I have created a couple of tracks for this Test Summit. They are as follows:

Name        Slug
Community   community
Linaro      linaro
QA          qa
Ubuntu      ubuntu

Blueprint naming standards

When creating a blueprint in Launchpad, please follow the required naming standards below, where the track prefix is followed by a custom name that you create for your meeting:

For the Community track:

community-

For the Linaro track:

linaro-

For the QA track:

qa-

For the Ubuntu track:

ubuntu-

Managers, Organizers, Track Leads, oh my!

If you are a member of ~summit-users on Launchpad, please take the time to approve/deny blueprints in Launchpad. I would like to have a couple denied, just to make sure everything still works properly. The same thing goes for the new meeting review feature in Summit. Please do not approve all meetings! Some of them need to be declined to ensure that everything is working properly!

IT’S VERY IMPORTANT that people who are in the role of Track Lead, Scheduler, or Organizer for UDS and Linaro Connect test things and give their feedback. Please don’t wait until the week before your event to test this; be considerate to the volunteers who are working to make this better for everyone. If you feel that you should have this access but don’t, please contact me and I will get you set up.

If you run into any problems, please file a bug and talk to me on IRC!

I look forward to your feedback and to continuing to make Summit the best scheduler it can be, and doing so requires your help. Thanks in advance.

Read more
Stéphane Graber

One thing that we’ve been working on for LXC in 12.04 is getting rid of any remaining LXC-specific hacks in our templates. This means that you can now run a perfectly clean Ubuntu system in a container without any change.

To better illustrate that, here’s a guide on how to boot a standard Ubuntu VM in a container.

First, you’ll need an Ubuntu VM image in raw disk format. The next few steps also assume a default partitioning where the first primary partition is the root device. Make sure you have the lxc package installed and up to date and lxcbr0 enabled (the default with recent LXC).

Then run kpartx -a vm.img; this will create loop devices in /dev/mapper for your VM partitions. In the following configuration I’m assuming /dev/mapper/loop0p1 is the root partition.

Now write a new LXC configuration file (myvm.conf in my case) containing:

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.utsname = myvminlxc

lxc.tty = 4
lxc.pts = 1024
lxc.rootfs = /dev/mapper/loop0p1
lxc.arch = amd64
lxc.cap.drop = sys_module mac_admin

lxc.cgroup.devices.deny = a
# Allow any mknod (but not using the node)
lxc.cgroup.devices.allow = c *:* m
lxc.cgroup.devices.allow = b *:* m
# /dev/null and zero
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
# consoles
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
#lxc.cgroup.devices.allow = c 4:0 rwm
#lxc.cgroup.devices.allow = c 4:1 rwm
# /dev/{,u}random
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
# rtc
lxc.cgroup.devices.allow = c 254:0 rwm
#fuse
lxc.cgroup.devices.allow = c 10:229 rwm
#tun
lxc.cgroup.devices.allow = c 10:200 rwm
#full
lxc.cgroup.devices.allow = c 1:7 rwm
#hpet
lxc.cgroup.devices.allow = c 10:228 rwm
#kvm
lxc.cgroup.devices.allow = c 10:232 rwm

The network, rootfs and architecture entries above may need updating if you’re not using the same architecture, partition scheme or bridge as I am.

Then finally, run: lxc-start -n myvminlxc -f myvm.conf

And watch your VM boot in an LXC container.

I did this test with a desktop VM using network manager, so it didn’t mind LXC’s random MAC address; server VMs might get stuck for a minute at boot time because of that though.
In that case, either clean /etc/udev/rules.d/70-persistent-net.rules or set “lxc.network.hwaddr” to the same MAC address as your VM.
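
For the second option, the line to add to the container configuration would look something like this (the MAC address is a placeholder; use the one from your VM’s definition):

lxc.network.hwaddr = 00:16:3e:12:34:56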

Once done, run kpartx -d vm.img to remove the loop devices.

Read more
Stéphane Graber

DNS in Ubuntu 12.04

Anyone who’s been using 12.04 over the past month or so may have noticed some pretty significant changes in the way we do DNS resolving in Ubuntu.

This is the result of the implementation of: foundations-p-dns-resolving

Here is a description of the two big changes that happened:

Switch to resolvconf for /etc/resolv.conf management

resolvconf is a set of scripts and hooks managing DNS resolution. The most notable difference for the user is that any change made manually to /etc/resolv.conf will be lost, as it gets overwritten the next time something triggers resolvconf. Instead, resolvconf uses DHCP client hooks, a Network Manager plugin and /etc/network/interfaces to generate a list of nameservers and domains to put in /etc/resolv.conf.

For more details, I’d highly encourage you to read resolvconf’s manpage but here are a few answers to common questions:

  • I use static IP configuration, where should I put my DNS configuration?
    The DNS configuration for a static interface should go as “dns-nameservers”, “dns-search” and “dns-domain” entries added to the interface in /etc/network/interfaces (see the example after this list)
  • How can I override resolvconf’s configuration or append some entries to it?
    Resolvconf has a /etc/resolvconf/resolv.conf.d/ directory that can contain “base”, “head”, “original” and “tail” files. All in resolv.conf format.
    • base: Used when no other data can be found
    • head: Used for the header of resolv.conf, can be used to ensure a DNS server is always the first one in the list
    • original: Just a backup of your resolv.conf at the time of resolvconf installation
    • tail: Any entry in tail is appended at the end of the resulting resolv.conf. In some cases, upgrading from a previous Ubuntu release will make tail a symlink to original (when we think you manually modified resolv.conf in the past)
  • I really don’t want resolvconf, how can I disable it?
    I certainly wouldn’t recommend disabling resolvconf but you can do it by making /etc/resolv.conf a regular file instead of a symlink.
    Though please note that you may then get an inconsistent /etc/resolv.conf when multiple pieces of software fight to change it.
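
For example, a statically configured interface carrying its own DNS settings might look like this in /etc/network/interfaces (addresses and domain are placeholders):

auto eth0
iface eth0 inet static
        address 192.0.2.50
        netmask 255.255.255.0
        gateway 192.0.2.1
        dns-nameservers 192.0.2.1
        dns-search example.com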

This change affects all Ubuntu installs except for Ubuntu core.

Using dnsmasq as local resolver by default on desktop installations

That’s the second big change of this release. On a desktop install, your DNS server is going to be “127.0.0.1” which points to a NetworkManager-managed dnsmasq server.

This was done to better support split DNS for VPN users and to better handle DNS failures and fallbacks. For security reasons this dnsmasq server isn’t a caching server, to avoid risks related to local cache poisoning and users eavesdropping on each other’s DNS queries on a multi-user system.

The big advantage is that if you connect to a VPN, instead of having all your DNS traffic be routed through the VPN like in the past, you’ll instead only send DNS queries related to the subnet and domains announced by that VPN. This is especially interesting for high latency VPN links where everything would be slowed down in the past.

As for dealing with DNS failures, dnsmasq often sends the DNS queries to more than one DNS server (if you received multiple when establishing your connection) and will detect bogus/dead ones and simply ignore them until they start returning sensible information again. Compare this with libc’s way of doing DNS resolving, where the state of the DNS servers can’t be saved (as it’s just a library) and so every single application has to go through the same process: trying the first DNS server, waiting for it to time out, then using the next one.

Now for the most common questions:

  • How to know what DNS servers I’m using (since I can’t just “cat /etc/resolv.conf”)?
    “nm-tool” can be used to get information about your existing connections in Network Manager. It’s roughly the same data you’d get in the GUI “connection information”.
    Alternatively, you can also read dnsmasq’s configuration from /run/nm-dns-dnsmasq.conf
  • I really don’t want a local resolver, how can I turn it off?
    To turn off dnsmasq in Network Manager, you need to edit /etc/NetworkManager/NetworkManager.conf and comment out the “dns=dnsmasq” line (put a # in front of it), then do a “sudo restart network-manager” (a rough sketch of these steps follows this list).
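
As a rough sketch of those two steps from a terminal:

# Comment out the dns=dnsmasq line in Network Manager's configuration
sudo sed -i 's/^dns=dnsmasq/#dns=dnsmasq/' /etc/NetworkManager/NetworkManager.conf
# Restart Network Manager so the change takes effect
sudo restart network-manager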

Bugs and feedback

Although we made these changes more than a month ago and have been looking pretty closely at bug reports, there may be some issues we haven’t found yet.

Issues related to resolvconf should be reported with:
ubuntu-bug resolvconf

Issues related to the dnsmasq configuration should be reported with:
ubuntu-bug network-manager

And finally, actual dnsmasq bugs and crashes should be reported with:
ubuntu-bug dnsmasq

In all cases, please try to include the following information:

  • How was your system installed (desktop, alternate, netinstall, …)?
  • Whether it’s a clean install or an upgrade?
  • Tarball of /etc/resolvconf and /run/resolvconf
  • Content of /run/nm-dns-dnsmasq.conf
  • Your /var/log/syslog
  • Your /etc/network/interfaces
  • And obviously a detailed description of your problem

Read more
Chris Johnston

My buddy Dustin Kirkland pointed me to a neat little utility that he wrote with Scott Moser called ssh-import-id. Since he showed it to me a few months ago, I have used it many times and it has made my life quite a bit easier.

ssh-import-id fetches the defined user’s (or users’) public keys from Launchpad, validates them, and then adds them to the ~/.ssh/authorized_keys file. That’s it, but if you need to add multiple people, or don’t know which key they are going to want to use, this will save you time.

Dustin has tried to get it added to OpenSSH, but he hasn’t succeeded yet.

To use ssh-import-id, you first need to install it if it isn’t already:

sudo apt-get install ssh-import-id

Then, to run it:

ssh-import-id chrisjohnston

This would import my public keys. You are also able to import multiple users at the same time:

ssh-import-id chrisjohnston kirkland

If you are looking for the latest version of the code, it is available in a PPA:

ppa:ssh-import-id/ppa
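
Adding the PPA and installing from it follows the usual steps:

sudo apt-add-repository ppa:ssh-import-id/ppa
sudo apt-get update
sudo apt-get install ssh-import-id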

If you have problems or want to check out the code, check out the package on Launchpad.

Read more
Stéphane Graber

Every year, I try to set a few hours aside to work on one of my upstream projects, pastebinit.

This is one of those projects which mostly “just works”, with quite a lot of users and quite a few of them sending merge proposals and fixes in bug reports.

I’m planning on uploading pastebinit 1.3 right before Feature Freeze, either on Wednesday or early Thursday, delaying the release as much as possible to get a few last translations in.

If you speak any language other than English, please go to:
https://translations.launchpad.net/pastebinit

Any help getting this as well translated as possible would be appreciated; Ubuntu users will have to live with these translations for the next 5 years, so it’s kind of important :)

Now for the changes. They’re pretty minimal but will still make some people happy, I’m sure:

  • Finally merged pbget/pbput/pbputs from Dustin Kirkland, these 3 tools let you securely push and retrieve files using a pastebin. It’s using a mix of base64, tar and gpg as well as some wget and parsing to retrieve the data.
    These are nice scripts to use with pastebinit, though please don’t send huge files to the pastebins, they really aren’t meant for that ;)
  • Removed stikked.com from the supported pastebins as it’s apparently dead.
  • Now the new pastebins:
  • paste.debian.net should now work fine with the ‘-f’ (format) option, thanks for their work on making their form pastebinit-friendly.
  • pastebinit should now load pastebin definition files properly from multiple locations.
    Starting with /usr/share/pastebin.d, then going through /etc/pastebin.d, /usr/local/etc/pastebin.d, ~/.pastebin.d and finally <wherever pastebinit is>/.pastebin.d
  • A few other minor improvements and fixes merged by Rolf Leggewie over the last year or so, thanks again for taking care of these!

Testing of the current trunk before release would also be greatly appreciated; you can get the code with: bzr branch lp:pastebinit
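
A quick sketch of trying the trunk without installing it system-wide (assuming the script at the top of the branch is executable; -b selects which pastebin to use):

bzr branch lp:pastebinit
cd pastebinit
# Send a small file and print the resulting paste URL
./pastebinit -b paste.ubuntu.com /etc/lsb-release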

Bug reports are welcome at: https://launchpad.net/pastebinit/+filebug

Read more
Chris Johnston

Yesterday was day one of Linaro Connect Q1.12. This was a very productive day of meeting people that I will be working with on LAVA, as well as other people throughout Linaro. I attended a couple of sessions during the morning, but I didn’t have anything noteworthy come out of them at this point. During the afternoon I worked with the Validation team in their hack session and was able to get LAVA running locally on my laptop and import a bundle stream from Linaro’s LAVA setup.

Throughout the day I also did some work on Summit, pushing a couple of fixes and a new IRC link feature to Launchpad. While in the hacking session I met with Andy Doan and discussed the future of the Linaro Connect Android app, and we managed to get out a bug fix for it.

On day 2 I hope to do some more work with LAVA, hopefully figuring out how to play with the results that are displayed in the dashboard.

Read more
Chris Johnston

This week I will be attending Linaro Connect Q1.12 in Redwood City, California. In fact, I’m in an American Airlines plane at 34,000 feet heading there now. In-flight WiFi is awesome!

Over the past two months Michael Hall and I have been doing a large amount of work on the Summit Scheduler to get it ready for Connect this week, including modifying more than 2,400 lines of Summit code. You can find out more about that in my previous post.

I have a few things that I want to get out of Connect. The first is that I want to get feedback on the changes to Summit, as well as figure out what other things we may need to change. The second thing that I want to do is to learn more about the Beagleboard-xM that I have and how to use it for the many different things it can be used for. The third thing that I want to do is to learn about Linaro’s LAVA.

LAVA is an automated validation suite used to run all sorts of different tests on the products that Linaro produces. The things that I would like to get out of Connect in relation to LAVA are how to set up and run LAVA, how to set it up to run tests, and how to produce results and display those results the way that I want them.

If you are at Linaro Connect, and would be willing to talk with me about Summit and the way you use it and your thoughts on the changes, please contact me and we will set aside a time to meet.

Read more
Stéphane Graber

It took a while to get some apt resolver bugs fixed, a few packages marked for multi-arch and some changes in the Ubuntu LXC template, but since yesterday, you can now run (using up to date Precise):

  • sudo apt-get install lxc qemu-user-static
  • sudo lxc-create -n armhf01 -t ubuntu -- -a armhf -r precise
  • sudo lxc-start -n armhf01
  • Then log in with root as both login and password

And enjoy an armhf system running on your good old x86 machine.

Now, obviously it’s pretty far from what you’d get on real ARM hardware.
It’s using qemu’s user space CPU emulation (qemu-user-static), so won’t be particularly fast, will likely use a lot of CPU and may give results pretty different from what you’d expect on real hardware.

Also, because of limitations in qemu-user-static, a few packages from the “host” architecture are installed in the container. These are mostly anything that requires the use of ptrace (upstart) or the use of netlink (mountall, iproute and isc-dhcp-client).
This is the bare minimum I needed to install to get the rest of the container to work using armhf binaries. I obviously didn’t test everything and I’m sure quite a few other packages will fail in such an environment.

This feature should be used as an improvement on top of a regular armhf chroot using qemu-user-static and not as a replacement for actual ARM hardware (obviously), but it’s cool to have around and nice to show what LXC can do.

I confirmed it works for armhf and armel; powerpc should also work, though it failed to debootstrap when I tried it earlier today.

Enjoy!

Read more
Chris Johnston

During the months since UDS-P in Orlando, FL, the Summit Hackers have been hard at work. As of right now, we have modified more than 2400 lines of code and fixed 24 bugs. Most of those 2400 lines of code have gone into adding new features and finally getting around to adding a better looking schedule view when on your personal device. While we have made great strides towards getting Summit to be perfect, there is still more work that needs to be done. Our first big test for these new features will be the Linaro Connect Q1.12 event that starts next Monday, February 6. I have the pleasure of being able to go to this event, so I hope to obtain feedback while I am there on ways to further improve Summit for the next UDS. Please feel free to poke around with the changes and provide any feedback you may have!

Also, thanks to the work of Stuart Langridge, the Summit agenda view will soon be mobile friendly, making it easy to follow your schedule on your cell phone!

Release Notes:

  • Added user roles of Scheduler, Manager, and Track Lead
  • Added the ability for Admins, Schedulers, Managers, and Track Leads to create private meetings through the UI
    • The creator has the ability to add people who are required to attend
    • The creator and any schedulers have the ability to edit the private meeting, as well as edit the attendee list
  • Added a way to create a link to a private meeting that has a random hash string in the URL.
    • If you have been given this URL you can view the meeting details without being marked as attending, or having a Launchpad account
  • Added a new daily agenda view (http://summit.ubuntu.com/lcq1-12/2012-02-06/)
    • New display is much more personal computer friendly
    • Displays all information about a meeting, including time, location, description, attendees, and track
    • New displays will show private meetings that the logged in user is marked as attending on, including the information about the private meeting
    • Gray star denotes not attending, Gold star denotes attending.
  • Added “Track” page (http://summit.ubuntu.com/lcq1-12/tracks)
    • This page includes all tracks, track descriptions, and track leads.
  • Added the ability for the plenary room to double as a “Training room”
  • Removed ALL Linaro hacks
  • Added the ability for users to mark themselves as attending a meeting without having to subscribe to the blueprint

Bugs that were fixed:

version 1.0.0

version 1.0.1

version 1.0.2

version 1.0.3

Read more