Canonical Voices

beuno

After a few weeks of being coffee-deprived, I decided to disassemble my espresso machine and see if I could figure out why it leaked water while on, and didn't have enough pressure to produce drinkable coffee.
I live a bit on the edge of where other people do, so my water supply is from my own pump, 40 meters into the ground. It's as hard as water gets. That was my main suspicion. I read a bit about it on the interwebz and learned about descaling, which I'd never heard about. I tried some of the home-made potions but nothing seemed to work.
Long story short, I'm enjoying a perfect espresso as I write this.

I wanted to share a bit with the internet people about what was hard to solve and what I couldn't find any instructions on. All I really did was disassemble the whole thing completely, part by part, clean each piece, and put it back together, tightening everything that seemed to need pressure.
I don't have the time and energy to put together a step-by-step walk-through, so here are the 2 tips I can give you:

1) Remove ALL the screws. That'll get you 95% there. You'll need a Phillips head, a Torx head, a flat head and some small-ish pliers.
2) The knob that releases the steam looks unremovable and blocks you from getting the top lid off. It doesn't screw off, you just need to pull upwards with some strength and care. It comes off cleanly and will go back on easily. Here's a picture to prove it:

DeLongi eco310.r

Hope this helps somebody!

Read more
Daniel Holbach

In a recent conversation we thought it’d be a good idea to share tips and tricks, suggestions and ideas with users of Ubuntu devices. Because it’d help to have them available immediately on the phone, an app looked like a good idea.

I had a quick look at it and after some discussion with Rouven in my office space, it looked like hyde could fit the bill nicely. To edit the content, just write a bit of Markdown, generate the HTML (nice and readable templates – great!) and done.
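If you haven’t used hyde before, the editing loop is roughly the following (a sketch; the -s/-d paths are illustrative, not the project’s actual layout):

$ pip install hyde          # hyde is a Python static site generator
$ hyde gen -s . -d deploy   # regenerate the HTML from the Markdown sources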

Unfortunately I’m not a CSS or HTML wizard, so if you could help out making it more Ubuntu-y, that’d be great! Also: if you’re interested in adding content, that’d be great.

I pushed the code for it up on Launchpad, there are also the first bugs open already. Let’s make it look pretty and let’s share our knowledge with new Ubuntu devices users. :-)

Oh, and let’s see that we translate the content as well! :-)

Read more
UbuntuTouch

[Original] Preparing for Ubuntu phone development training

In this article, we describe the preparation students should do before the training. Setting up and installing your environment ahead of time is a very important step towards a good training session.


                 

1) Install your SDK


If you want to install the Ubuntu system on your own computer

Students can follow the article “Ubuntu SDK Installation” to install their own Ubuntu system and SDK, then use the article “Creating your first Ubuntu for phone application” to verify that the environment is set up correctly. This kind of installation usually requires a multi-boot setup on the computer, or a virtual machine (the emulator may not perform well inside a virtual machine).

If you want to make a Live USB stick dedicated to Ubuntu phone development

Please refer to the article “How to make an Ubuntu SDK Live USB stick” to create a bootable Live USB stick. The stick can be plugged directly into a USB port on the computer and boots into the Ubuntu system. The entire SDK needed for development is already installed on it, so you can develop without installing any extra software.

a) Enable hardware virtualization in the BIOS; this makes the emulator run faster.
b) In the BIOS, set the boot order so that the USB stick boots first, or press the F12 key during startup and choose to boot Ubuntu from the USB stick.

Once the Ubuntu system has booted, the Ubuntu SDK is already fully installed and developers can start developing right away. We recommend the article “Creating your first Ubuntu for phone application” to verify that the environment is correct.


During development, if you deploy to a phone and a password is required to unlock it, the password is “0000”.

2) Introduction to the Ubuntu phone


Developers who are not familiar with the Ubuntu phone can first watch the video “How to use the Ubuntu phone” to get to know it. If you want a deeper understanding of the Ubuntu SDK, please watch the video “How to use the Ubuntu SDK (video)”.

You can download the slides introducing the Ubuntu phone at “Introduction to the Ubuntu phone”, and watch the corresponding video at the linked address.


3) QML application development


Flickr app development

Read the article “Developing a Flickr app with the Ubuntu SDK tutorial” and watch the video “QML development of Ubuntu phone applications (video)”; the slides are at “Ubuntu application development”.

The tutorial source code is at: bzr branch lp:~liu-xiao-guo/debiantrial/flickr7
You can run the above command in a shell to download the source code.

The DeveloperNews RSS reader

First, read the articles “Creating an Ubuntu application from scratch -- a small RSS reader” and “How to use conditional layouts in Ubuntu”. The video is at “Developing Qt Quick QML applications on the Ubuntu platform (video)”.

The tutorial source code is at: bzr branch lp:~liu-xiao-guo/debiantrial/developernews4

You can run the above command in a shell to download the source code.


4) Scope development


You can first watch the video “Introduction to Ubuntu Scopes and their development workflow” to learn about the Scope development workflow on Ubuntu OS.


The tutorial source code is at: bzr branch lp:~liu-xiao-guo/debiantrial/dianpianclient8
You can run the above command in a shell to download the source code.

More example programs for Scope development can be found at the linked address.

5) More training materials


We also have more training materials in English, which developers can download at the linked address.


If you have any questions, please comment on this article. I will do my best to answer them.


Author: UbuntuTouch, published 2015-01-04 15:36:54 (original link)

Read more
UbuntuTouch

In this article, we look at how to tell whether a QML application has been pushed to the background or brought back to the foreground. As we know, the Ubuntu phone platform is a single-foreground-application operating system: once an application is pushed to the background, it is suspended and cannot run. We sometimes need a flag to know when our application is in the foreground and when it is in the background.


We create a simple QML application with the Ubuntu SDK:


import QtQuick 2.0
import Ubuntu.Components 1.1

/*!
    \brief MainView with a Label and Button elements.
*/

MainView {
    id: main
    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "com.ubuntu.developer.liu-xiao-guo.foregrounddetect"

    /*
     This property enables the application to change orientation
     when the device is rotated. The default is false.
    */
    //automaticOrientation: true

    // Removes the old toolbar and enables new features of the new header.
    useDeprecatedToolbar: false

    width: units.gu(100)
    height: units.gu(75)


    Page {
        title: i18n.tr("ForegroundDetect")

        Connections {
             target: Qt.application
             onActiveChanged: {
                 console.log("Qt.application.active: " + Qt.application.active);
             }
         }
    }
}

Here we use the “active” property of the Qt.application object to determine whether the application has been pushed to the background or foreground. Running the application gives the following result:




When we push the application to the background on the phone, it prints false; when we bring it back to the foreground, it prints true.

The full source code of the application is at: bzr branch lp:~liu-xiao-guo/debiantrial/foregrounddetect


Author: UbuntuTouch, published 2015-01-05 10:56:25 (original link)

Read more
UbuntuTouch

In this video, we develop a mini RSS reader starting from “0”. Through this exercise, developers can get a basic feel for QML programming and for some of the development workflows on the Ubuntu platform. Screenshots of the application are shown below:


  



Author: UbuntuTouch, published 2015-01-13 15:08:31 (original link)

Read more
UbuntuTouch

[Original] How to make an Ubuntu SDK Live USB stick

Developers who want to develop Ubuntu phone applications or Scopes, but don't want to buy a new computer to install the Ubuntu operating system on, or reinstall Ubuntu on their own hard drive, can consider making an Ubuntu Live USB stick. The USB stick contains the following:


  • The Ubuntu Kylin 14.10 operating system
  • The Ubuntu SDK (with the SDK, emulator and build environment already installed)

With this Live USB stick, developers don't need to install anything; just plug it into the computer's USB port. While the computer boots, select the prepared USB stick as the boot device (press the “F12” key during startup), and during boot select “Try Ubuntu Kylin without installing”.




Although this is an Ubuntu OS boot stick, it can persist the projects we create during development (stored in the Home directory) as well as some settings (such as the wifi password).


When choosing a USB stick, it is best to pick a USB 3.0 stick and plug it into a USB 3.0 port on the computer; USB 3.0 ports are usually marked in blue. We recommend a good-quality, fast stick, which makes the system boot and run more smoothly. We currently test with a SanDisk CZ80, with good results. The stick needs 16 GB of storage.


To make the emulator run smoothly and avoid a black screen, we need to enable hardware virtualization in the computer's BIOS: go into your BIOS settings and enable VT-x/AMD-V. Developers can refer to the article “Ubuntu SDK Installation” to check whether their computer supports virtualization.




Developers who want to install an Ubuntu system on their own computer and develop on it can follow the article “Ubuntu SDK Installation” to install the Ubuntu SDK step by step.



1) How to make the Live USB stick on an Ubuntu system


Boot the Ubuntu operating system, open a browser and download the latest image from the following address:


https://mega.co.nz/#F!S8QSRZyI!2HBWgXk4kmc_2bcCcpBR3Q


The download contains:

  • kylin-live-20150133.iso (md5sum 13cd61270bf98eb462dc0497df8eee33)
  • casper-rw-20150113.tar.bz2 (md5sum 8c69f94a03481275bf66aa883b69ae1b)
  • post-usb-creator-window.sh (needed when creating the stick on Windows)
  • README.md (a short explanatory file)

Save the downloaded files to a directory of your choice, for example a “usb” directory under your Home, and check that they are intact, as shown below.
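To confirm the downloads were not corrupted, compare their md5sums against the values listed above (a minimal check in a shell; adjust the file names to match what you downloaded):

$ cd ~/usb
$ md5sum kylin-live-20150133.iso casper-rw-20150113.tar.bz2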


Type “usb” into the Dash, and launch “Startup Disk Creator”.






We create our USB boot stick as follows.





When setting “Stored in reserved extra space”, the value should be non-zero. Once the USB stick has been created, you will see the following screen:







Remount the USB stick (the previous step unmounts it automatically), or click the USB device in Ubuntu's file browser, which remounts it:





Then run the bundled script as shown below, with the USB stick's mount path as the argument.


Unpack the downloaded casper-rw-2015xxxx.tar.bz2 file:
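The archive is a bzip2-compressed tarball, so something like the following works (using the file name from the download above):

$ cd ~/usb
$ tar xjf casper-rw-20150113.tar.bz2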


Once all the files have been unpacked, change into the directory containing them and run the following command in a shell:


liuxg@liuxg:~/usb$ ./post-usb-creator-linux.sh /media/liuxg/BD52-7153/


Here “/media/liuxg/BD52-7153/” is the path where the USB stick is mounted; replace it with your own USB stick's path.


2) How to make the boot stick on Windows


http://www.ubuntu.com/download/desktop/create-a-usb-stick-on-windows

Download the creation tool from the address above; it is similar to the tool on the Linux platform.




When selecting “Persistent file”, its size should be a non-zero value. For “Step 2”, do not paste a copied string into the input box, otherwise the input box in “Step 3” will be greyed out. Instead, click the “Browse” button and enter the image path as follows:




After that, copy the casper-rw file into the root directory of the USB stick.


Note: if you only want to use the English version of Ubuntu, the following steps are unnecessary. If you want Chinese support, also copy post-usb-creator-window.sh to the root directory of the USB stick. After booting Ubuntu from the USB stick, run the following commands in a shell:


$ cd /cdrom/

$ sudo ./post-usb-creator-window.sh


After rebooting again, you will enter the Chinese version of Ubuntu.


3) Testing the finished USB boot stick


Plug the Live USB stick into a computer; then the article “Creating your first Ubuntu for phone application” can be used to verify that we have a fully working Ubuntu SDK.


When starting the emulator, if a password is required, use the default password “0000”. Developers who want to change this password can do so in “System Settings” inside the Ubuntu SDK emulator.


For app developers, the hotkey combination “Ctrl + Space” has a special use in Qt Creator. However, on Ubuntu, “Ctrl + Space” is used to switch between Chinese and English input methods. We recommend the article “How to install the Sogou input method on Ubuntu OS with support for Qt Creator” to redefine the key combination.


Known issues

If you see garbled text like the following while using the stick (this happens only in rare cases), reboot your machine to correct the problem.




Boot failures on certain computers


We found that the stick fails to boot on the Lenovo E455. We currently suspect this is related to the AMD graphics driver, and the issue is still under investigation. If you run into this problem, please install the 14.04 LTS release on your system along with the corresponding ubuntu-sdk packages to learn Ubuntu phone development; the basic concepts are all the same.


Note: if you plan to work on Ubuntu phone development for a long time, we recommend installing an Ubuntu system on your computer, preferably utopic (14.10), rather than learning in the Live environment: first, to avoid losing data, and second, for a faster, smoother experience.



Author: UbuntuTouch, published 2015-01-22 15:35:55 (original link)

Read more
jdstrand

Most of this has been discussed on mailing lists, blog entries, etc, while developing Ubuntu Touch, but I wanted to write up something that ties together these conversations for Snappy. This will provide background for the conversations surrounding hardware access for snaps that will be happening soon on the snappy-devel mailing list.

Background

Ubuntu Touch has several goals that all apply to Snappy:

  • we want system-image upgrades
  • we want to replace the distro archive model with an app store model for Snappy systems
  • we want developers to be able to get their apps to users quickly
  • we want a dependable application lifecycle
  • we want the system to be easy to understand and to develop on
  • we want the system to be secure
  • we want an app trust model where users are in control and express that control in tasteful, easy to understand ways

Snappy adds a few things to the above (that pertain to this conversation):

  • we want the system to be bulletproof (transactional updates with rollbacks)
  • we want the system to be easy to use for system builders
  • we want the system to be easy to use and understand for admins

Let’s look at what all these mean more closely.

system-image upgrades

  • we want system-image upgrades
  • we want the system to be bulletproof (transactional updates with rollbacks)

We want system-image upgrades so updates are fast, reliable and so people (users, admins, snappy developers, system builders, etc) always know what they have and can depend on it being there. In addition, if an upgrade goes bad, we want a mechanism to be able to rollback the system to a known good state. In order to achieve this, apps need to work within the system and live in their own area and not modify the system in unpredictable ways. The Snappy FHS is designed for this and the security policy enforces that apps follow it. This protects us from malware, sure, but at least as importantly, it protects us from programming errors and well-intentioned clever people who might accidentally break the Snappy promise.

app store

  • we want to replace the distro archive model with an app store model
  • we want developers to be able to get their apps to users quickly

Ubuntu is a fantastic distribution with a wonderfully rich archive of software that is refreshed on a cadence. However, the traditional distro model has a number of drawbacks, and arguably the most important one is that software developers have an extremely high barrier to overcome to get their software into users’ hands on their own time-frame. The app store model greatly helps developers and users desiring new software because it gives developers the freedom and ability to get their software out there quickly and easily, which is why Ubuntu Touch is doing this now.

In order to enable developers in the Ubuntu app store, we’ve developed a system where a developer can upload software and have it available to users in seconds with no human review, intervention or snags. We also want users to be able to trust what’s in Ubuntu’s store, so we’ve created store policies that understand the Ubuntu snappy system such that apps do not require any manual review so long as the developer follows the rules. However, the Ubuntu Core system itself is completely flexible– people can install apps that are tightly confined, loosely confined, unconfined, whatever (more on this, below). In this manner, people can develop snaps for their own needs and distribute them however they want.

It is the Ubuntu store policy that dictates what is in the store. The existing store policy is in place to improve the situation; it is based on our experiences with the traditional distro model and on attempts to build app store-like experiences on top of it (eg, MyApps).

application lifecycle

  • dependable application lifecycle

This has not been discussed as much with Snappy for Ubuntu Core, but Touch needs to have a good application lifecycle model such that apps cannot run unconstrained and unpredictably in the background. In other words, we want to avoid problems with battery drain and slow systems on Touch. I think we’ve done a good job so far on Touch, and this story is continuing to evolve.

(I mention application lifecycle in this conversation for completeness and because application lifecycle and security work together via the app’s application id)

security

  • we want the system to be secure
  • we want an app trust model where users are in control and express that control in tasteful, easy to understand ways

Everyone wants a system that they trust and that is secure, and security is one of the core tenets of Snappy systems. For Ubuntu Touch, we’ve created a system that is secure, that is easy to use and understand by users, and that still honors relevant, meaningful Linux traditions. For Snappy, we’ll be adding several additional security features (eg, seccomp, controlled abstract socket communication, firewalling, etc).

Our security story and app store policies give us something that is between Apple and Google. We have a strong security story that has a number of similarities to Apple, but a lightweight store policy akin to Google Play. In addition to that, our trust model is that apps not needing manual review are untrusted by the OS and have limited access to the system. On Touch we use tasteful, contextual prompting so the user may trust the apps to do things beyond what the OS allows on its own (simple example, app needs access to location, user is prompted at the time of use if the app can access it, user answers and the decision is remembered next time).

Snappy for Ubuntu Core is different not only because the UI supports a CLI, but also because we’ve defined a Snappy for Ubuntu Core user that is able to run the ‘snappy’ command as someone who is an admin, a system builder, a developer and/or someone otherwise knowledgeable enough to make a more informed trust decision. (This will come up again later, below)

easy to use

  • we want the system to be easy to understand and to develop on
  • we want the system to be easy to use for system builders
  • we want the system to be easy to use and understand for admins

We want a system that is easy to use and understand. It is key that developers are able to develop on it, system builders able to get their work done and admins can install and use the apps from the store.

For Ubuntu Touch, we’ve made a system that is easy to understand and to develop on with a simple declarative permissions model. We’ll refine that for Snappy and make it easy to develop on too. Remember, the security policy is there not just so we can be ‘super secure’ but because it is what gives us the assurances needed for system upgrades, a safe app store and an altogether bulletproof system.

As mentioned, the system we have designed is super flexible. Specifically, the underlying system supports:

  1. apps working wholly within the security policy (aka, ‘common’ security policy groups and templates)
  2. apps declaring specific exceptions to the security policy
  3. apps declaring to use restricted security policy
  4. apps declaring to run (effectively) unconfined
  5. apps shipping hand-crafted policy (that can be strict or lenient)

(Keep in mind the Ubuntu App Store policy will auto-accept apps falling under ‘1’ and trigger manual review for the others)

The above all works today (though it isn’t always friendly– we’re working on that) and the developer is in control. As such, Snappy developers have a plethora of options and can create snaps with security policy for their needs. When the developer wants to ship the app and make it available to all Snappy users via the Ubuntu App Store, then the developer may choose to work within the system to have automated reviews or choose not to and manage the process via manual reviews/commercial relationship with Canonical.

Moving forward

The above works really well for Ubuntu Touch, but today there is too much friction with regard to hardware access. We will make this experience better without compromising on any of our goals. How do we put this all together, today, so people can get stuff done with snappy without sacrificing on our goals, making it harder on ourselves in the future or otherwise opening Pandora’s box? We don’t want to relax our security policy, because then we can’t make the bulletproof assurances we are striving for, and it would be hard to tighten the security later. We could add some temporary security policy that grants only certain accesses (eg, serial devices) but, while useful, this is too inflexible. We also don’t want apps to declare the accesses themselves and automatically gain the necessary security policy, because this (potentially) privileged access is then hidden from the Snappy for Ubuntu Core user.

The answer is simple when we remember that the Snappy for Ubuntu Core user (ie, the one who is able to run the snappy command) is knowledgeable enough to make the trust decision for giving an app access to hardware. In other words, let the admin/developer/system builder be in control.

immediate term

The first thing we are going to do is unblock people and adjust snappy to give the snappy core user the ability to add specific device access to snap-specific security policy. In essence you’ll install a snap, then run a command to give the snap access to a particular device, then you’re done. This simple feature will unblock developers and snappy users immediately while still supporting our trust-model and goals fully. Plus it will be worth implementing since we will likely always want to support this for maximum flexibility and portability (since people can use traditional Linux APIs).
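As a concrete sketch of that flow (the command name and arguments here are purely illustrative, since the actual interface is exactly what will be refined on the list):

$ sudo snappy install myapp
$ sudo snappy hw-assign myapp /dev/ttyUSB0   # grant this snap access to one serial device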

The user experience for this will be discussed and refined on the mailing list in the coming days.

short term

After that, we’ll build on this and explore ways to make the developer and user experience better through integration with the OEM part and ways of interacting with the underlying system so that the user doesn’t have to necessarily know the device name to add, but can instead be given smart choices (this can have tie-ins to the web interface for snappy too). We’ll want to be thinking about hotpluggable devices as well.

Since this all builds on the concept of the immediate term solution, it also supports our trust-model and goals fully and is relatively easy to implement.

future

Once we have the above in place, we should have a reasonable experience for snaps needing traditional device access. This will give us time to evaluate how people are accessing hardware and see if we can make things even better by using frameworks and/or a hardware abstraction layer. In this manner, snaps can program to an easy to use API and the system can mediate access to the underlying hardware via that API.


Filed under: canonical, security, ubuntu, ubuntu-server, uncategorized

Read more
Sergio Schvezov

Preliminary support for dtb override from OEM snaps

Today, the always-in-motion PPA ppa:snappy-dev/tools landed support for overriding the dtb provided by the platform’s device part with one provided by the OEM snap.

The package.yaml for the OEM snap has been extended a bit to support this; an example follows for extending the am335x-boneblack platform.


name: mydevice.sergiusens
vendor: sergiusens
icon: meta/icon.png
version: 1.0
type: oem

branding:
    name: My device
    subname: Sergiusens Inc.

store:
    oem-key: 123456

hardware:
    dtb: mydtb.dtb

The hardware/dtb key in the yaml holds the path to the dtb within the package, so in this case I put mydtb.dtb in the root of the snap.

After that it’s just a snappy build away:

snappy build .

In order to get this properly provisioned, we first need the latest ubuntu-device-flash from ppa:snappy-dev/tools, so let’s get it:

sudo add-apt-repository ppa:snappy-dev/tools 
sudo apt update
sudo apt install ubuntu-device-flash

And now we are ready to flash

sudo ubuntu-device-flash core \
    --platform am335x-boneblack \
    --size 4 \
    --install mydevice_sergiusens_1.0_all.snap \
    --output bbb_custom.img

If everything went well, the boot partition will hold your custom dtb instead of the default one; specifying --platform is required for this.

Please note that some of these things described here are subject to change.

Read more
Daniel Holbach

What do Kinshasa, Omsk, Paris, Mexico City, Eugene, Denver, Tempe, Catonsville, Fairfax, Dania Beach, San Francisco and various places on the internet have in common?

Right, they’re all participating in the Ubuntu Global Jam on the weekend of 6-8 February! See the full list of teams that are part of the event here. (Please add yours if you haven’t already.)

What’s great about the event is that there are just two basic aims:

  1. do something with Ubuntu
  2. get together and have fun!

What I also like a lot is that there’s always something new to do. Here are just 3 quick examples of that:

App Development Schools

We have put quite a bit of work into putting training materials together; now you can take them out to your team and start writing Ubuntu apps easily.

Snappy

As one tech news article said, “Robots embrace Ubuntu as it invades the internet of things”. Ubuntu’s newest foray, making it possible to bring a stable and secure OS to small devices where you can focus on apps and functionality, is attracting a number of folks on the mailing lists (snappy-devel, snappy-app-devel) and elsewhere. Check out the mailing lists and the snappy site to find out more and have a play with it.

Unity8 on Desktop

Convergence is happening, and what’s working great on the phone is making its way onto the desktop. You can help make this happen by installing and testing it. Your feedback will be much appreciated.


Read more
Ben Howard

One of the perennial problems in the Cloud is knowing what is the most current image and where to find it. Some Clouds provide a nice GUI console, an API, or some combination. But what has been missing is a "dashboard" showing Ubuntu across multiple Clouds.


https://cloud-images.ubuntu.com/locator

In that light, I am pleased to announce that we have a new beta Cloud Image Finder. This page shows where official Ubuntu images are available. As with all betas, we have some kinks to work out, like gathering up links for our Cloud Partners (so clicking an Image ID launches an image). I envision that in the future this locator page will be the default landing page for our Cloud Image page.



The need for this page became painfully apparent yesterday as I was working through the fallout of the Ghost vulnerability (aka CVE-2015-0235). The Cloud Image team had spent a good amount of time pushing our images to AWS, Azure, GCE and Joyent, and then notifying our partners like Brightbox, DreamCompute, CloudSigma and VMware of the new builds. I realized that we needed a single place where our users could just look and see where the builds are available. And so I hacked up the EC2 Locator page to display other clouds.

Please note: this new page only shows stable releases. We push a lot of images and did not want to confuse things by showing betas, alphas, dailies or development builds. Rather, this page will only show images that have been put through the complete QA process and are ready for production workloads.

This new locator page is backed by Simple Streams, our machine-formatted data service. Simple Streams provides a way of locating images in a uniform way across clouds. Essentially, our new Locator Page is just a viewer of the Simple Streams data.
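If you’d rather consume the data directly, the streams are plain JSON over HTTP. For example, something along these lines fetches the index the locator renders (the exact path is my assumption, based on the usual simplestreams layout):

$ curl -s https://cloud-images.ubuntu.com/releases/streams/v1/index.json | head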

Hopefully our users will find this new page useful. Feedback is always welcome. Please feel free to drop me a line (utlemming @ ubuntu dot com). 

Read more
Ben Howard

A few years ago, when our fine friends on the kernel team introduced the idea of the “hardware enablement” (HWE) kernel, those of us in the Cloud world looked at it as a curiosity. We thought that, by and large, the HWE kernel would not be needed or wanted for virtual Cloud instances.

And we were wrong.

So wrong, in fact, that the HWE kernel has found its way into the Vagrant Cloud Images, VMware’s vCHS, and Google Compute Engine as the default kernel for the Certified Images. The main reason for these requests is that virtual hardware moves at a fairly quick pace. Unlike traditional hardware, virtual hardware can be fixed and patched at the speed that software can be deployed.

The feedback regarding Azure has been the same: users and Microsoft have asked for the HWE kernel consistently. Microsoft has validated that the HWE kernel (3.16) running under Ubuntu 14.04 on Windows Azure passes their validation testing. In our own testing, we have confirmed that the 3.16 kernel works quite well in Azure.

For Azure users, the 3.16 HWE kernel brings SMB 2.1 copy-file support and updated LIS drivers.

Therefore, starting with the latest Windows Azure image [1], all the Ubuntu 14.04 images will track the latest hardware enablement kernel. That means that all the goodness in Ubuntu 14.10's kernel will be the default for 14.04 users launching our official images on Windows Azure.

If you want to install the HWE (lts-utopic) kernel on your existing instance(s), simply run:

  • sudo apt-get update
  • sudo apt-get install linux-image-virtual-lts-utopic linux-lts-utopic-cloud-tools-common walinuxagent
  • sudo reboot


[1] b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20150123-en-us-30GB

Read more
Dustin Kirkland

Gratuitous picture of my pets, the day after we rescued them
The PetName libraries (Shell, Python, Golang) can generate infinite combinations of human readable UUIDs


Some Background

In March 2014, when I first started looking after MAAS as a product manager, I raised a minor feature request in Bug #1287224, noting that the random, 5-character hostnames that MAAS generates are not ideal. You can't read them or pronounce them or remember them easily. I'm talking about hostnames like: sldna, xwknd, hwrdz or wkrpb. From that perspective, they're not very friendly. Certainly not very Ubuntu.

We're not alone, in that respect. Amazon generates forgettable instance names like i-15a4417c, along with most virtual machine and container systems.


Meanwhile, there is a reasonably well-known concept -- Zooko's Triangle -- which says that names should be:
  • Human-meaningful: The quality of meaningfulness and memorability to the users of the naming system. Domain names and nicknaming are naming systems that are highly memorable
  • Decentralized: The lack of a centralized authority for determining the meaning of a name. Instead, measures such as a Web of trust are used.
  • Secure: The quality that there is one, unique and specific entity to which the name maps. For instance, domain names are unique because there is just one party able to prove that they are the owner of each domain name.
And, of course we know what XKCD has to say on a somewhat similar matter :-)

So I proposed a few different ways of automatically generating those names, modeled mostly after Ubuntu's beloved own code naming scheme -- Adjective Animal. To get the number of combinations high enough to model any reasonable MAAS user, though, we used Adjective Noun instead of Adjective Animal.

I collected an Adjective list and a Noun list from a blog run by moms, in the interest of having a nice, soft, friendly, non-offensive source of words.

For the most part, the feature served its purpose. We now get memorable, pronounceable names. However, we get a few odd balls in there from time to time. Most are humorous. But some combinations would prove, in fact, to be inappropriate, or perhaps even offensive to some people.

Accepting that, I started thinking about other solutions.

In the meantime, I realized that Docker had recently launched something similar, their NamesGenerator, which pairs an Adjective with a Famous Scientist's Last Name (except they have explicitly blacklisted boring_wozniak, because "Steve Wozniak is not boring", of course!).


Similarly, Github itself now also "suggests" random repo names.



I liked one part of the Docker approach better -- the use of proper names, rather than random nouns.

On the other hand, their approach is hard-coded into the Docker Golang source itself, and not usable or portable elsewhere, easily.

Moreover, there are only a few dozen Adjectives (57) and Names (76), yielding only about 4K combinations (4,332) -- which is not nearly enough for MAAS's purposes, where we're shooting for 16M+ with minimal collisions (ie, covering a Class A network).

Introducing the PetName Libraries

I decided to scrap the Nouns list, and instead build a Names list. I started with Last Names (like Docker), but instead focused on First Names, and built a list of about 6,000 names from public census data.  I also built a new list of nearly 38,000 Adjectives.

The combination actually works pretty well! While smelly-Susan isn't particularly charming, it's certainly not an ad hominem attack targeted at any particular Susan! That 6,000 x 38,000 gives us well over 228 million unique combinations!

Moreover, I also thought about how I could actually make it infinitely extensible... The simple rules of English allow Adjectives to modify Nouns, while Adverbs can recursively modify other Adverbs or Adjectives.   How convenient!

So I built a word list of Adverbs (13,000) as well, and added support for specifying the "number" of words in a PetName.
  1. If you want 1, you get a random Name 
  2. If you want 2, you get a random Adjective followed by a Name 
  3. If you want 3 or more, you get N-2 Adverbs, an Adjective and a Name 
Oh, and the separator is now optional, and can be any character or string, with a default of a hyphen, "-".

In fact:
  • 2 words will generate over 221 million unique combinations, over 2^27 combinations
  • 3 words will generate over 2.8 trillion unique combinations, over 2^41 combinations (more than 32-bit space)
  • 4 words can generate over 2^55 combinations
  • 5 words can generate over 2^68 combinations (more than 64-bit space)
Interestingly, you need 10 words to cover 128-bit space!  So it's

unstoutly-clashingly-assentingly-overimpressibly-nonpermissibly-unfluently-chimerically-frolicly-irrational-wonda

versus

b9643037-4a79-412c-b7fc-80baa7233a31

Shell

So once the algorithm was spec'd out, I built and packaged a simple shell utility and text word lists, called petname, which are published at:
The packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:petname/ppa
$ sudo apt-get update

And:
$ sudo apt-get install petname
$ petname
itchy-Marvin
$ petname -w 3
listlessly-easygoing-Radia
$ petname -s ":" -w 5
onwardly:unflinchingly:debonairly:vibrant:Chandler

Python

That's only really useful from the command line, though. In MAAS, we'd want this in a native Python library. So it was really easy to create python-petname, source now published at:
The packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:python-petname/ppa
$ sudo apt-get update

And:
$ sudo apt-get install python-petname
$ python-petname
flaky-Megan
$ python-petname -w 4
mercifully-grimly-fruitful-Salma
$ python-petname -s "" -w 2
filthyLaurel

Using it in your own Python code looks as simple as this:

$ python
>>> import petname
>>> foo = petname.Generate(3, "_")
>>> print(foo)
boomingly_tangible_Mikayla

Golang


In the way that NamesGenerator is useful to Docker, I thought a Golang library might be useful for us in LXD (and perhaps even usable by Docker or others too), so I created:
Of course you can use "go get" to fetch the Golang package:

$ export GOPATH=$HOME/go
$ mkdir -p $GOPATH
$ export PATH=$PATH:$GOPATH/bin
$ go get github.com/dustinkirkland/golang-petname

And also, the packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:golang-petname/ppa
$ sudo apt-get update

And:
$ sudo apt-get install golang-petname
$ golang-petname
quarrelsome-Cullen
$ golang-petname -words=1
Vivian
$ golang-petname -separator="|" -words=10
snobbily|oracularly|contemptuously|discordantly|lachrymosely|afterwards|coquettishly|politely|elaborate|Samir

Using it in your own Golang code looks as simple as this:

package main

import (
    "flag"
    "fmt"
    "math/rand"
    "time"

    "github.com/dustinkirkland/golang-petname"
)

func main() {
    // parse any flags defined by the program or the petname package
    flag.Parse()
    // seed the RNG so each run produces a different name
    rand.Seed(time.Now().UnixNano())
    fmt.Println(petname.Generate(2, ""))
}
Gratuitous picture of my pets, 7 years later.
Cheers,
happily-hacking-Dustin

Read more
Robin Winslow

In the design team we keep some projects in Launchpad (as canonical-webmonkeys) and some projects in Github (as UbuntuDesign), meaning we work in both Bazaar and Git.

The need to synchronise Github to Launchpad

Some of our Github projects also need to be stored in Launchpad, as some of our systems only have access to Launchpad repositories.

Initially we were converting these projects manually at regular intervals, but this quickly became too cumbersome.

The Bazaar synchroniser

To manage this we created a simple web-service project to synchronise Git projects to Bazaar. This script basically automates the techniques described in our previous article to pull down the Github repository, convert it to Bazaar and push it up to Launchpad at a specified location.

It’s a simple Python WSGI app which can be run directly or through a server that understands WSGI like gunicorn.

Setting up the server

Here’s a guide to setting up our bzr-sync project on a server somewhere to sync Github to Launchpad.

System dependencies

Install necessary system dependencies:
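The service needs Bazaar, Git, the Bazaar fastimport plugin and pip for the Python dependencies; on Ubuntu, that’s roughly (the exact package set is my assumption):

$ sudo apt-get install git bzr bzr-fastimport python-pip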

User permissions

First off, you’ll have to make sure you set up a user on whichever server is to run this service which has read access to your Github projects and write access to your Launchpad projects:
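In practice that means an SSH key registered with both services, set up as the user that will run the sync (usernames here are illustrative):

$ ssh-keygen -t rsa                      # then upload ~/.ssh/id_rsa.pub to Github and Launchpad
$ bzr launchpad-login your-lp-username   # tell Bazaar who you are on Launchpad
$ bzr whoami "Sync Bot <sync-bot@example.com>"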

Cloning the project

Then you should clone the project and install dependencies. We placed it at /srv/bzr-sync but you can put it anywhere:
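For example (the repository URL and requirements file are illustrative):

$ sudo git clone https://github.com/ubuntudesign/bzr-sync.git /srv/bzr-sync
$ sudo pip install -r /srv/bzr-sync/requirements.txt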

Preparing gunicorn

We should serve this over HTTPS, so our auth_token will remain secret. This means you’ll need a SSL certificate keyfile and certfile. You should get one from a certificate authority, but for testing you could just generate a self-signed-certificate.

Put your certificate files somewhere accessible (like /srv/bzr-sync/certs/), and then test out running your server with gunicorn:
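Something along these lines should work, since gunicorn can terminate SSL itself (the app:application module:callable name is an assumption about the project layout):

$ cd /srv/bzr-sync
$ gunicorn --bind 0.0.0.0:443 \
      --certfile certs/bzr-sync.crt --keyfile certs/bzr-sync.key \
      app:application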

Try out the sync server

You should now be able to synchronise a Github repository with Launchpad by pointing your browser at:

https://{server-domain}/?token={secret-token}&git_url={url-of-github-repository}&bzr_url=lp:{launchpad-branch-location}

You should be able to see the progress of the conversion as command-line output from the above gunicorn command.

Add upstart job

Rather than running the server directly, we can setup an upstart job to manage running the process. This way the bzr-sync service will restart if the server restarts.

Here’s an example of an upstart job, which we placed at /etc/init/bzr-sync.conf:
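A sketch of such a job, mirroring the gunicorn invocation above (the module name and log path are assumptions):

description "bzr-sync: Github to Launchpad synchroniser"

start on runlevel [2345]
stop on runlevel [!2345]
respawn

chdir /srv/bzr-sync
exec gunicorn --bind 0.0.0.0:443 \
    --certfile certs/bzr-sync.crt --keyfile certs/bzr-sync.key \
    app:application >> /etc/upstart/bzr-sync.log 2>&1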

You can now start the bzr-sync server as a service:
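With the job file in place, upstart manages it like any other service:

$ sudo service bzr-sync start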

And output will be logged to /etc/upstart/bzr-sync.log.

Setting up Github projects

Now to use this sync server to automatically synchronise your Github projects to Launchpad, you simply need to add a post-commit webhook to ping a URL of the form:

https://{server-domain}/?token={secret-token}&git_url={url-of-github-repository}&bzr_url=lp:{launchpad-branch-location}

Creating a webhook

In your repository settings, select “Webhooks and Services”, then “Add webhook”, and enter the following information:

  • Payload URL: https://{server-domain}/?token={secret-token}&git_url={url-of-github-repository}&bzr_url=lp:{launchpad-branch-location}
  • Content type: “application/json”
  • Secret: -leave blank-
  • Select Just the push event
  • Tick Active
Saving a webhook

NB: Notice the Disable SSL verification button. By default, the hook will only work if your server has a valid certificate. If you are testing with a self-signed one, you’ll need to disable SSL verification.

Now whenever you commit to your Github repository, Github should ping the URL, and the server should synchronise your repository into Launchpad.

Read more
facundo

Vacation in the south


This summer we went back to Piedra del Águila, on vacation but also to visit my sister and brother-in-law, who have been living there for a few years now.

The trip is long, especially for the kids, but doing it in two legs (that is, over two days, sleeping at a hotel halfway) makes it bearable. It's still not something to do often, though, which is partly why three years had passed since our last visit.

On that occasion we pitched a tent on Diana and Gus's land and helped them start building the bedrooms. This time the bedrooms were completely finished and livable, plus the workshop where the print shop runs, plus the garage (which we used as our bedroom), plus plenty of comforts (like the clay oven!).

The first rooms of the house that Diana and Gus are building with their own sweat.

We did a lot of lazing around during the vacation... I, for example, took a nap every day (I normally don't), read a ton, we talked a lot, and we ate too much. We made good use of the clay oven: we cooked chicken and lamb, always with vegetables, which come out great in a clay oven, even the corn.

Malena playing with the hens (which thought she was going to feed them...).

The clay oven my sister built; we made lamb, chicken, bread and lots of vegetables in it.

A very atypical sky: a big storm that barely rained on us for a while (but hit hard ~100 km further north).

We also got out and about quite a bit. We did some short, nearby activities, like climbing up to the eagle that represents the town, spending an afternoon by the reservoir, and hiking the hill next to Diana and Gus's house; we spent an afternoon at a lovely little spot downstream of the Pichi Picún Leufú dam, and we even did a fairly tricky hike to reach a bay we had been told about, including a visit to the remains of an abandoned town.

Perhaps the most remarkable of all the activities we did in Piedra del Águila was climbing the vertical wall of a rock formation that crowns a hill on the outskirts of town.

We were guided and supervised by Esteban Martinez, who had already climbed up and hung the safety ropes. The walk to the top of the hill was not simple (nor was the way down, especially for me, since I carried Malena in my arms almost the whole time), but we reached a small, almost horizontal ledge beside the wall. There we climbed in turns, scaling the rock with the strength of our legs and arms, while someone below kept the safety ropes taut in case we fell (and we later rappelled down on those same ropes). It was honestly great, although the first time it is a bit terrifying to be several meters up, held to the rock only by your hands and feet...

Halfway through the hike up the hill (what we actually climbed is the little vertical wall at the very top).

Almost at the top, Gustavo and Diana watching Esteban prepare the gear.

Almost at the summit; it looks easier and less fun than it really is :).

Handling the belaying while Felipe climbed.

We didn't just stay at Diana and Gus's place or do nearby outings, though. On two occasions we took the whole day, leaving early and returning late, to explore somewhere farther away.

On one of those days we went to El Chocón, about 150 km north of Piedra del Águila. We went mainly for the Municipal Museum, which displays the Giganotosaurus carolinii (so far considered the largest carnivorous dinosaur of all time, bigger even than Tyrannosaurus rex), discovered in that very area by Rubén Carolini at the end of the last century.

All of El Chocón is decked out in the dinosaur theme, and rightly so (every town should make more of its tourist potential; there is always something to show). But that's not all the town has: there are beautiful views of Lake Ramos Mexía, and of course the dam.

The family at the feet of the friendly dinosaur that guards the entrance to El Chocón.

A reconstructed dinosaur, in the museum.

Three ferocious beasts.

Another day we went to Lake Huechulafquen. From Piedra del Águila we took RN237 south to the Collón Curá river, and from there went up RN234 and RN40 to Junín de los Andes. We had lunch there and continued on to the lake. This last stretch took us quite a while, not only because it is gravel, but because there are winding, sloping cliffside roads; nothing impossible, but not trivial to drive either, so it just takes longer than planned (also because we stopped for a nap halfway through :p).

Obviously, the scenery pays all of that back with interest.

The view from where we stopped to drink some mates.

Malena, making a mess of herself playing with the fine dust of the place.

Felipe

View of the Lanín volcano from the road.

We started the return trip a couple of hours before nightfall; I really wanted to cover the whole stretch to Junín de los Andes, and the road from there to the junction with RN237, before full dark, for safety and comfort.

The four of us next to the monument representing the town's name.

Between one thing and another the days slipped by and we had to head home. It had been a while since we'd taken more than a week of vacation, and we enjoyed it immensely, but it also makes you want to be back home. :)

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20150127 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Vivid Development Kernel

Our Vivid kernel has been rebased to the v3.18.3 upstream stable release and uploaded to the archive as 3.18.0-11.2. We’ll rebase to v3.18.4 shortly. We’ve also rebased our unstable branch to v3.19-rc6 and uploaded it to our ckt PPA.
Important upcoming dates:
Thurs Feb 5 – 14.04.2 Point Release (~1 week away)
Thurs Feb 26 – Beta 1 Freeze (~4 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Utopic/Trusty/Precise/Lucid

Status for the main kernels, as of today:

  • Lucid – Verification & Testing
  • Precise – Verification & Testing
  • Trusty – Verification & Testing
  • Utopic – Verification & Testing
    Current opened tracking bugs details:
  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:
    cycle: 09-Jan through 31-Jan
    ====================================================================
    09-Jan Last day for kernel commits for this cycle
    11-Jan – 17-Jan Kernel prep week.
    18-Jan – 31-Jan Bug verification; Regression testing; Release


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more
Robin Winslow

Here in the design team we use both Bazaar and Git to keep track of our projects’ history.

We quite often end up converting our projects from Bazaar to Git or vice-versa. Here are some tips on how to do that.

To convert revision history between Git and Bazaar, we will use their respective fast-import features.

Install bzr-fastimport

In either case, you need the fastimport plugin for Bazaar, which installs both bzr fast-import and bzr fast-export:
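On Ubuntu the plugin is packaged, so installing it is a one-liner:

$ sudo apt-get install bzr-fastimport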

Bazaar to Git

To convert a Bazaar branch to Git, open a Bazaar branch of your project and do the following:
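The conversion boils down to exporting the Bazaar history and piping it into Git, run from the branch root (the standard fast-export/fast-import pipeline):

$ git init
$ bzr fast-export --plain . | git fast-import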

Now you should have all the revision history for that Bazaar branch in Git:
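A quick check:

$ git log   # the full Bazaar history should now show up as Git commits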

(From Astrofloyd’s blog)

 

Git to Bazaar

Converting from Git to Bazaar is slightly different. Because Bazaar stores branches in sub-folders, while Git stores branches all in the same directory, when you convert a Git repository to Bazaar, it will create a directory tree for the branches:
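The pipeline is the mirror image of the one above; the final argument names the destination directory (bzr-repo here, matching the text below):

$ git fast-export --all | bzr fast-import - bzr-repo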

bzr-repo will now contain a folder for each branch that was in your Git repository. You’re probably most interested in trunk, which will be at bzr-repo/trunk, or perhaps bzr-repo/trunk.remote:
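To start working from the converted trunk, branch it out as usual (the target directory name is illustrative):

$ bzr branch bzr-repo/trunk my-project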

(From the Bazaar wiki)

 

Keeping a project in both Git and Bazaar

You may wish to keep a project in both Git and Bazaar.

 

Create ignore files for both systems

As your project may be used in either Git or Bazaar, you should create practically duplicate .gitignore and .bzrignore files, the only difference being that the .bzrignore should ignore the .git directory, and the .gitignore should ignore the .bzr directory. You should also make sure you ignore the bzr-repo directory – e.g.:
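For instance, the pair of ignore files could look like this (bzr-repo being the conversion directory from above):

# .gitignore
.bzr
bzr-repo

# .bzrignore
.git
bzr-repo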

And keep both ignore files in all versions of the project.

Only work in one repository

It is not practical to be doing your actual work in both systems, because converting from one to the other will overwrite any history in the destination repository. For this reason you need to choose to do all your work in either Git or Bazaar, and then regularly convert it to the other using the above conversion instructions.

Read more
Colin Ian King

Finding kernel bugs with cppcheck

For the past year I have been running the cppcheck static analyzer against the linux kernel sources to see if it can detect any bugs introduced by new commits. Most of the bugs being found are minor thinkos, null pointer de-referencing, uninitialized variables, memory leaks and mistakes in error handling paths.

A useful feature of cppcheck is the --force option, which checks the code against all the configurations in the source (and the kernel does have many!). This allows us to check code that may not be exercised much (because it is normally not built with most config options) or even to find dead code.

The downside of using the --force option is that each source file may need to be checked multiple times, once per configuration. For ~20,800 source files this can take a 24-processor server several hours to process. Errors and warnings are then compared to previous runs (a delta), making it relatively easy to spot new issues on each run.
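In practice, such a run looks something like this (the paths and the extra flags are illustrative; --force is the important part):

$ cppcheck --force --enable=warning linux/ 2> cppcheck-new.log
$ diff cppcheck-old.log cppcheck-new.log   # the delta shows newly introduced issues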

We also use the latest sources from the cppcheck git repository.  The upside of this is that new static analysis features are used early and this can result in finding existing bugs that previous versions of cppcheck missed.

A typical cppcheck run against the linux kernel source finds about 600 potential errors and 1700 warnings; however a lot of these are false positives.  These need to be individually eyeballed to sort the wheat from the chaff.

Finally, the data is passed through a gnuplot script to generate a trend graph, so I can see how errors (red) and warnings (green) are progressing over time:


Note that the large changes in the graph mostly correspond to features being enabled (or fixed) in cppcheck.

I have been running the same experiment with smatch too; however, I am finding that cppcheck seems to have better code coverage because of the --force option, and seems to produce fewer false positives. As it stands, the most productive time for finding issues is around the -rc1 and -rc2 merges (obviously when most of the major changes land in the kernel). The outcome of this work has been a bunch of small fixes landing in the kernel to address bugs that cppcheck has found.

Anyhow, cppcheck is an excellent open source static analyzer for C and C++ that I'd heartily recommend as it does seem to catch useful bugs.

Read more
Pat Gaughen

Liam Young wrote a blog post a few months ago about how to enable OpenStack guest console support and noted it was in the next charms. This feature landed in our stable charms in October. If you are wondering how it’s done, check out Liam’s blog post – http://blog.gnuoy.eu/2014/09/openstack-guest-console-access-with-juju.html

Read more
niemeyer

MongoDB 3.0 (previously known as 2.8) is right around the corner, and it’s time to release a few fixes and improvements on the mgo driver for Go to ensure it works fine with that new major server version. Compatibility is being preserved both with old applications and with old servers, so updating should be a smooth experience.

Release r2015.01.24 of mgo includes the following changes:


Support ReplicaSetName in DialInfo

DialInfo now offers a ReplicaSetName field that may contain the name of the MongoDB replica set being connected to. If set, the cluster synchronization routines will prevent communication with any server that does not report itself as part of that replica set.

Feature implemented by Wisdom Omuya.

MongoDB 3.0 support for collection and index listing

MongoDB 3.0 requires the use of commands for listing collections and indexes, and may report long results via cursors that must be iterated over. The CollectionNames and Indexes methods were adapted to support both the old and the new cases.

Introduced Collection.NewIter method

In the last few releases of MongoDB, a growing number of low-level database commands are returning results that include an initial set of documents and one or more cursor ids that should be iterated over for obtaining the remaining documents. Such results defeated one of the goals in mgo’s design: developers should be able to walk around the convenient pre-defined static interfaces when they must, so they don’t have to patch the driver when a feature is not yet covered by the convenience layer.

The introduced NewIter method solves that problem by enabling developers to create normal iterators by providing the initial batch of documents and optionally the cursor id for obtaining the remaining documents, if any.

Thanks to John Morales, Daniel Gottlieb, and Jeff Yemin, from MongoDB Inc, for their help polishing the feature.

Improved JSON unmarshaling of ObjectId

bson.ObjectId can now be unmarshaled correctly from an empty or null JSON string, when it is used as a field in a struct submitted for unmarshaling by the json package.

Improvement suggested by Jason Raede.

Remove GridFS chunks if file insertion fails

When writing a GridFS file, the chunks that hold the file content are written into the database before the document representing the file itself is inserted. This ensures the file is made visible to concurrent readers atomically, when it’s ready to be used by the application. If writing a chunk fails, the call to the file’s Close method will do a best effort to clean up previously written chunks. This logic was improved so that calling Close will also attempt to remove chunks if inserting the file document itself failed.

Improvement suggested by Ed Pelc.

Field weight support for text indexing

The new Index.Weights field allows providing a map of field name to field weight for fine tuning text index creation, as described in the MongoDB documentation.

Feature requested by Egon Elbre.

Fixed support for $** text index field name

Support for the special $** field name, which enables the indexing of all document fields, was fixed.

Problem reported by Egon Elbre.

Consider only exported fields on omitempty of structs

The implementation of bson’s omitempty feature was also considering the value of non-exported fields. This was fixed so that only exported fields are taken into account, which is both in line with the overall behavior of the package, and also prevents crashes in cases where the field value cannot be evaluated.

Fix potential deadlock on Iter.Close

It was possible for Iter.Close to deadlock when the associated server was concurrently detected unavailable.

Problem investigated and reported by John Morales.

Return ErrCursor on server cursor timeouts

Attempting to iterate over a cursor that has timed out at the server side will now return mgo.ErrCursor.

Feature implemented by Daniel Gottlieb.

Support for collection repairing

The new Collection.Repair method returns an iterator that goes over all recovered documents in the collection, in a best-effort manner. This is most useful when there are damaged data files. Multiple copies of the same document may be returned by the iterator.

Feature contributed by Mike O’Brien.

Read more
Nicholas Skaggs

It's time for a testing jam!

Ubuntu Global Jam, Vivid edition is a few short weeks away. It's time to make your event happen. I can help! Here's my officially unofficial guide to global jam success.

Steps:

  1. Get your jam pack! Get the request in right away so it gets to you on time. 
  2. Pick a cool location to jam
  3. Tell everyone! (be sure to mention free swag, who can resist!?)
But wait, what are you going to do while jamming? I've got that covered too! Hold a testing jam! All you need to know can be found on the ubuntu global jam wiki. The wiki even has more information for you as a jam host in case you have questions or just like details.

Ohh and just in case you don't like testing (seems crazy, I know), there are other jam ideas available to you. The important thing is you get together with other ubuntu aficionados and celebrate ubuntu! 

P.S. Don't forget to share pictures afterwards. No one will know you had the coolest jam in the world unless you tell them :-)

P.P.S. If I'm invited, bring cupcakes! Yum!

Read more