Canonical Voices

Dustin Kirkland



I hope you'll join me at Rackspace on Tuesday, August 19, 2014, at the Cloud Austin Meetup, at 6pm, where I'll use our spectacular Orange Box to deploy Hadoop, scale it up, run a terasort, destroy it, deploy OpenStack, launch instances, and destroy it too.  I'll talk about the hardware (the Orange Box, Intel NUCs, Managed VLAN switch), as well as the software (Ubuntu, OpenStack, MAAS, Juju, Hadoop) that makes all of this work in 30 minutes or less!

Be sure to RSVP, as space is limited.

http://www.meetup.com/CloudAustin/events/194009002/

Cheers,
Dustin

Read more
beuno

I'm a few days away from hitting 6 years at Canonical and I've ended up doing a lot more management than anything else in that time. Before that I did a solid 8 years at my own company, doing everything from development, project management, product management and engineering management, to sales and accounting.
This time of the year is performance review time at Canonical, so it's gotten me thinking a lot about my role and how my view on engineering management has evolved over the years.

A key insight I got from a former boss, Elliot Murphy, was viewing it as a support role for others to do their job, rather than a follow-the-leader approach. I had heard the phrase "As a manager, I work for you" a few times over the years, but it rarely seemed true and felt mostly like a nice concept to make people happy, not really applied in practice in any meaningful way.

Of all the approaches I've taken or seen, I believe the best one is a role where you're there to unblock developers more than anything else. And unless you're a bit power-hungry on some level, it's probably the most enjoyable way of being a manager.

It's not to be applied blindly, though; I think a few conditions have to be met:
1) The team has to be fairly experienced/senior/smart; if it isn't, I think this approach breaks down too often
2) You need to understand very clearly what needs doing and why, and you need to invest heavily and frequently in communicating it to the team, both the global context and how it applies to them individually
3) You need to build a relationship of trust with each person, and you need to trust them, because trust is always a two-way street
4) You need to be enough of an engineer to understand problems in depth when they are explained, to know when to defer to others' judgment (which should be the common case when the team is generally smart and experienced), and to be capable of tie-breaking in a technically savvy way
5) Anyone whose ego doesn't fit in a small, 100 ml container has to leave it at home

There are many more things to do, but I think if you don't have those five, everything else is hard to hold together. In general, if the team is smart and experienced, understands what needs doing and why, and likes the job, almost everything else self-organizes.
If it isn't self-organizing well enough, walk through those five points; one or several must be misaligned. More often than not, it's 2). Communication is hard, expensive and more of an art than a science. Most of the times things have seemed to stumble a bit, it's been a failure in how I understood what we should be doing as a team, or a failure in how I communicated it to everyone else as it evolved over time.
The second most frequent is 1), but that may vary more depending on your team, company and project.

Oh, and actually caring about people and about what you do helps a lot. But that helps a lot in life in general, so do it anyway regardless of your role :)

Read more
David Planella


We’re very excited to announce an agreement with Nokia HERE to provide A-GPS support on Ubuntu. The new platform service will enable developers to obtain accurate positioning data for their location-based apps in under two minutes, a significantly shorter Time To First Fix (TTFF) than the average for raw GPS technologies.

Faster positioning

While Ubuntu already features GPS-based location, it has always been a key requirement for the OS to provide application developers with rapid and efficient location positioning capabilities.

The new positioning service will be a hybrid solution integrating A-GPS and WiFi positioning, a powerful combination that helps obtain a very fast and accurate TTFF. The system is to be functional by the Release To Manufacturer (RTM) milestone, and available both on the regular Ubuntu builds and on retail phones shipping Ubuntu.

Privacy and security

With the user’s explicit consent, anonymous data related to signal strength of local WiFi signals and radio cells can be contributed to crowd-sourcing location services, with the purpose of improving the overall quality of the positioning service for all users.

In line with Ubuntu’s privacy policy, no personal data of any nature is to be collected or released. Users will also be able to opt out of this service if they do not wish their mobile handset to collect this type of data.

The positioning system will also be run under strict confinement, so that the service and its data cannot be accessed without the user explicitly granting access. With Ubuntu’s trust model, a confined application has to be granted trust by the user to gain access to security- or privacy-relevant system components.

Mapping capabilities

As the new service is to be focused on positioning, it will be decoupled from any mapping solution. Ubuntu Developers, as before, will have a choice of mapping services to use for their applications, including Nokia HERE, OpenStreetMap and others.

Header image based on “openstreetmap gps coverage” by Steven Kay, CC-BY-SA 2.0.

Read more
用户1016111425

Creating your first Ubuntu Touch application

If you have not yet set up your environment, please refer to the "Ubuntu SDK installation" chapter to install the SDK. The main purpose of this article is to verify that the environment we installed works correctly.

1) Create a simple QML application
  • Launch the Ubuntu SDK
  • Select the menu "File" ==> "New File or Project"
  • Select "App with Simple UI"


  • Click "Choose", then pick the name and path for the project you want to create, as follows:


  • Then accept the default settings to finish creating a simple QML application, as follows:


2) Run on the desktop

Now click the green triangle button at the bottom left of the IDE, or press Ctrl + R. By default, this runs the application on the desktop. If you see the following screen, the installation is working.



3) Run the application in the emulator

To run our application in the emulator, proceed as follows:

  • Launch the Ubuntu SDK
  • Select "Devices" on the left side of the IDE and pick the emulator we already created (I created myinstance earlier). Click the green button in that view to start the emulator.


  • Go back to the previous view, select the menu "Build" ==> "Ubuntu", then choose "Run Application on Device", or simply press Ctrl + F12. You will then see the following screen:


If you see this screen, the emulator environment is working. Next we can run the application on a phone.

4) Run on a phone

To run the application on a phone, first connect the phone to your development computer, then follow these steps:

  • Launch the Ubuntu SDK
  • Click "Devices" on the left side of the IDE, then click "Ubuntu Device" (this is a default name and can be changed). You will see the following view in the Qt Creator IDE:
  • Click the "AutoCreate" button to install the Device Kits. This may take some time, so please be patient
  • Keep "Ubuntu Device" as the currently selected device



  • Go back to the project page and run the application, or simply press Ctrl + F12


You can now watch the application run on the phone.



5) Create an "App with QML extension Library" application

Now let's create an application with a QML extension library and run it:




Accept the default settings until you reach the following screen:



Remember to select "Ubuntu Device (GCC armhf-ubuntu-sdk-14.10-utopic)" so that you can later run the application directly on the phone. If you did not select it at creation time, you can open "Projects" in the main view and select "Add Kit".



To run the application in the emulator:
  • Click "Devices", then click the emulator you created earlier (in my case, myinstance)
  • Click the green button in the emulator view to start it
  • If the "Device Kits" have not been added yet, click the "AutoCreate" button to install them. If the corresponding chroot is not installed, the system will prompt you to install it; in that case the installation may take quite a while, so please be patient



  • Once the "Device Kits" are installed, you will see the following screen:


  • Go back to the "Projects" view and click "Add Kit". Select the newly created "myinstance (GCC i386-ubuntu-sdk-14.10-utopic)" (this name may differ from the one you chose)
  • Click the desktop icon at the bottom left of the IDE and select the architecture to run against. For the emulator, choose "myinstance (GCC i386-ubuntu-sdk-14.10-utopic)". The application will then run in the emulator

To summarize, in this article we showed how to create a basic application and how to run it against the different kits. This exercise verifies that the installation environment is correct, and it also familiarizes us with the whole development workflow. In the next chapter, we will show how to build a click package and install it on a phone.


Read more
用户1016111425

Ubuntu SDK installation

In this article, you will learn how to install the Ubuntu SDK on your system and build a simple application to test whether your installation succeeded. Readers comfortable with English can also follow the English instructions on the Ubuntu website.


Choosing an operating system

Ubuntu Touch is based on Ubuntu 14.10 (Utopic). For Scope application development to compile successfully, the Ubuntu SDK should be installed on a Utopic Ubuntu OS. If your operating system is not this version, you can install a VM (such as VirtualBox or VMware) and install Ubuntu 14.10 inside it.

Adding the Phablet Tools PPA

The Phablet Tools PPA provides additional tools for installing Ubuntu onto devices. The tools support Ubuntu releases from 12.04 onwards.

On Ubuntu 14.04 Trusty you do not need to add the PPA, because the tools are already in the standard Ubuntu archive. Otherwise, you can add it as follows:

$ sudo add-apt-repository ppa:phablet-team/tools

Adding the Ubuntu SDK release PPA

Add the Ubuntu SDK release PPA (https://launchpad.net/~ubuntu-sdk-team/+archive/ppa) as follows, entering your Linux administrator password when prompted:

$ sudo add-apt-repository ppa:ubuntu-sdk-team/ppa

Installing the Ubuntu SDK

Install the SDK as follows, entering your Linux administrator password when needed:

$ sudo apt-get update && sudo apt-get install ubuntu-sdk

Tip: some developers, especially those running Ubuntu 14.10 (Utopic), must make sure that all installed packages are updated to their latest versions. This can be done with the following command:

$ sudo apt-get update && sudo apt-get dist-upgrade

Launching the Ubuntu SDK IDE

  • Search for "Ubuntu SDK" in the Ubuntu "Unity Dash Applications lens"
  • Click the "Ubuntu SDK" icon that is found

You can also launch the Ubuntu SDK from a shell:

$ ubuntu-sdk 

Tip: some developers may want the Ubuntu SDK IDE icon in the Unity launcher for convenient startup. To do this, launch the SDK, right-click its icon in the launcher on the left side of the Ubuntu desktop, and select "Lock to Launcher". The SDK will then stay pinned in the launcher.

When we launch the Ubuntu SDK for the first time, we see the following screen:



In the steps below we describe how to install the different targets and the emulator.

Installing the Ubuntu SDK armhf chroot

This step lets us cross-compile the applications we develop (to armhf) and deploy them to a phone. Install it as follows:

  • Launch the Ubuntu SDK
  • In the IDE menu, select "Tools", then "Options", then "Ubuntu". You will see the following screen
  • Click "Create Click Target" and the dialog shown in the figure appears. Select "armhf/Framework-14.10". The installation then starts; depending on your network, it may take a while, so please be patient.


In the figure above we can see the already installed "utopic ubuntu-sdk ... armhf" target. Here we can click "Update" to update the installed packages. There is also a "Maintain" button, which is used to maintain the chroot. For example, an application we develop may need a library that is not standard and therefore not installed; when we want to test, we can click this button and install or remove packages in the shell it opens. Remember to also install that library on the phone so that the compiled application can run.

When the installation has finished, we can see the following in a shell:

~$ schroot -l
chroot:click-ubuntu-sdk-14.10-armhf
chroot:trusty-amd64-armhf
chroot:trusty-armhf
chroot:utopic-amd64-armhf
source:click-ubuntu-sdk-14.10-armhf
source:trusty-amd64-armhf
source:trusty-armhf
source:utopic-amd64-armhf

"chroot:click-ubuntu-sdk-14.10-armhf" is the chroot installed in this step. With it, we can build the target installation files to deploy to the phone.
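
You can also enter this chroot directly from a shell for maintenance, equivalent to what the "Maintain" button does. A minimal sketch (libfoo-dev stands in for whatever extra library your application needs):

$ schroot -c click-ubuntu-sdk-14.10-armhf -u root
# apt-get install libfoo-dev    (run inside the chroot; libfoo-dev is a placeholder)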

Installing the Ubuntu SDK i386 chroot

This installation ensures that applications containing C++ code (such as C++ plugins) can be compiled and run in the emulator later on. We can install it now, or skip this step and come back to it when it is needed. This installation also takes a long time, so please be patient. The steps are almost the same as above; we just select the "i386" architecture instead.



After the installation, we can list the installed chroots in a shell with the following command:

~$ schroot -l
chroot:click-ubuntu-sdk-14.10-armhf
chroot:click-ubuntu-sdk-14.10-i386
chroot:trusty-amd64-armhf
chroot:trusty-armhf
chroot:utopic-amd64-armhf
source:click-ubuntu-sdk-14.10-armhf
source:click-ubuntu-sdk-14.10-i386
source:trusty-amd64-armhf
source:trusty-armhf
source:utopic-amd64-armhf

Installing the emulator

This step installs an emulator that simulates a phone, so that developers can develop and test on their computer. Once the application is debugged, it can be deployed to a real phone for further testing. The installation steps are as follows:

  • Launch the Ubuntu SDK
  • Select "Devices" on the left side of the IDE, then click the "+" in that view. You will see the following screen
  • In the dialog, enter a name for the emulator, select "i386", and click "Create". The whole process may take a long time, so please be patient. Although "armhf" can also be selected for emulation, "i386" is currently the recommended architecture.

With this emulator we can run the applications we develop. Select the emulator we just created (myinstance) and run it:



The actual run looks like this:


Developers can also consult https://wiki.ubuntu.com/Touch/Emulator to install their own emulator.

Summary

At this point, our development environment is basically ready. In the next chapter, we will create an application to check whether the environment was set up successfully; see "Creating your first Ubuntu Touch application".



Read more
facundo


July is coming to an end, so by now I should have almost the entire second half of the year roughly planned... otherwise things start to overlap and I have to cancel events, etc.

The second half of the year is usually the one most loaded with events, both Python-related and from other circles... and I had no better idea than to sign up for a course that takes up every Saturday in August and September: an introductory course on voice-over and vocal technique, at ETER.

The first clash is on August 9 itself: a sprint organized by Lipe (he wants to work on torrent-related things, but I think I'll go work on Encuentro or CDPedia). Obviously I'll go in the afternoon; I don't think that will be a problem.

The second clash (and the one that complicates my life) is the PyDay in Luján, on September 20. This one is trickier, because on top of getting out of the course at 13:00, I have to travel all the way to Luján. Still, I'm definitely going, since I'll most likely give a talk (I proposed two repeats and a new one, on debugging tips).

Then in November (so the course is no longer in the way) there's PyCon, in Rafaela, on Friday the 14th and Saturday the 15th, although I'll most likely travel the day before, to make the trip calmly, help them finish setting everything up, etc.

On Friday, September 26, in the evening, I have a course in La Plata, where I teach an introduction to Python in a postgraduate programme of informatics for scientists, as in most recent years.

The last days of September and the first days of October, even though nothing specific is scheduled, I'll try to keep free, because Felu's birthday is then. We'll probably do something less for adults and more for his kindergarten classmates and such, but we don't know exactly what yet, and it's always a ton of work :p

In short... a busy second half of the year. On top of that I'm quite busy in general, especially at work, because the Ubuntu phone hits the street in these coming months, and also because we're finally talking seriously about creating the Python Argentina civil association (we have a separate mailing list to discuss that), and although the lawyers are on recess right now, afterwards I'm sure I'll have meetings, paperwork to do, etc.

Read more
Robie Basak

Meeting Actions

None

U Development

The discussion about “U Development” started at 16:00.

  • Feature freeze is August 21. Note that Debian Import Freeze is coming up as well.
  • The mysql /var/lib/mysql discussion is proceeding, but it seems unlikely that this will happen by feature freeze now. Nevertheless, we expect to land 5.6 in main, in the same manner as 5.5 currently is, on schedule.
  • http://status.ubuntu.com/ubuntu-u/group/topic-u-server.html – please remember to keep your blueprints updated with work item progress and re-plan milestones if things slip.

Server & Cloud Bugs (caribou)

The discussion about “Server & Cloud Bugs (caribou)” started at 16:03.

  • No updates

Weekly Updates & Questions for the QA Team (psivaa)

The discussion about “Weekly Updates & Questions for the QA Team (psivaa)” started at 16:05.

  • No updates

Weekly Updates & Questions for the Kernel Team (smb, sforshee)

The discussion about “Weekly Updates & Questions for the Kernel Team (smb, sforshee)” started at 16:05.

  • James Page reports that iscsitarget 12.04 DKMS updates for HWE kernels are ready and uploaded to trusty-proposed, awaiting SRU team review (bug 1262712)
  • The KSM on NUMA + KVM bug (1346917) is making great progress, driven by Chris Arges. Brad Figg reports that an upload to trusty-proposed is imminent, and it should land on August 8th (the day after 12.04.5). 12.04.5 (for the HWE kernel) won’t include the update, but one will be available for it the next day.
  • For kernel SRU cadence updates, see

Ubuntu Server Team Events

The discussion about “Ubuntu Server Team Events” started at 16:17.

  • rbasak noted that the Canonical Server Team have been sprinting in #ubuntu-server on Fridays to complete merges, including mentoring and sponsoring, and that all are welcome to join them.

Open Discussion

The discussion about “Open Discussion” started at 16:18.

  • James Page reported that there are plans to SRU docker 1.0.x to 14.04 in bug 1338768. The proposed upload is in a PPA and awaiting review from the SRU team. Testers are encouraged to try it out.

Agree on next meeting date and time

Next meeting will be on Tuesday, August 5th at 16:00 UTC in #ubuntu-meeting. Note that this was stated incorrectly in the meeting itself. The chair will be Liam Young.

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140729 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel has been rebased to v3.16-rc7 and uploaded to the
archive, i.e. linux-3.16.0-6.11. Please test and let us know your
results. I also want to mention that 14.04.1 was released last Thursday,
July 24, and 12.04.5 is scheduled to release next Thursday, Aug 7.
—–
Important upcoming dates:
Thurs Aug 07 – 12.04.5 (~1 week away)
Thurs Aug 21 – Utopic Feature Freeze (~3 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Status for the main kernels, until today (Jul. 29):

  • Lucid – Released
  • Precise – Released
  • Saucy – Released
  • Trusty – Released

    Current opened tracking bugs details:

  • http://people.canonical.com/~kernel/reports/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://people.canonical.com/~kernel/reports/sru-report.html

    Schedule:

    14.04.1 cycle: 29-Jun through 07-Aug
    ====================================================================
    27-Jun Last day for kernel commits for this cycle
    29-Jun – 05-Jul Kernel prep week.
    06-Jul – 12-Jul Bug verification & Regression testing.
    13-Jul – 19-Jul Regression testing & Release to -updates.
    20-Jul – 24-Jul Release prep
    24-Jul 14.04.1 Release [1]
    07-Aug 12.04.5 Release [2]

    cycle: 08-Aug through 29-Aug
    ====================================================================
    08-Aug Last day for kernel commits for this cycle
    10-Aug – 16-Aug Kernel prep week.
    17-Aug – 23-Aug Bug verification & Regression testing.
    24-Aug – 29-Aug Regression testing & Release to -updates.

    [1] This will be the very last kernels for lts-backport-quantal, lts-backport-raring,
    and lts-backport-saucy.

    [2] This will be the lts-backport-trusty kernel as the default in the precise point
    release iso.


Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

Read more
Michael Hall

When you contribute something as a member of a community, who are you actually giving it to? The simple answer of course is “the community” or “the project”, but those aren’t very specific.  On the one hand you have a nebulous group of people, most of which you probably don’t even know about, and on the other you’ve got some cold, lifeless code repository or collection of web pages. When you contribute, who is that you really care about, who do you really want to see and use what you’ve made?

In my last post I talked about the importance of recognition, how it’s what contributors get in exchange for their contribution, and how human recognition is the kind that matters most. But which humans do our contributors want to be recognized by? Are you one of them and, if so, are you giving it effectively?

Owners

The owner of a project has a distinct privilege in a community, they are ultimately the source of all recognition in that community.  Early contributions made to a project get recognized directly by the founder. Later contributions may only get recognized by one of those first contributors, but the value of their recognition comes from the recognition they received as the first contributors.  As the project grows, more generations of contributors come in, with recognition coming from the previous generations, though the relative value of it diminishes as you get further from the owner.

Leaders

After the project owner, the next most important source of recognition is a project’s leaders. Leaders are people who gain authority and responsibility in a project; they can affect the direction of a project through decisions in addition to direct contributions. Many of those early contributors naturally become leaders in the project, but many will not, and many others who come later will rise to this position as well. In both cases, it’s their ability to affect the direction of a project that gives their recognition added value, not their distance from the owner. Before a community can grow beyond a very small size it must produce leaders, either through a formal or informal process, otherwise the availability of recognition will suffer.

Legends

Leadership isn’t for everybody, and many of the early contributors who don’t become leaders still remain with the project, and end up making very significant contributions to it and the community over time. Whenever you make contributions, and get recognition for them, you start to build up a reputation for yourself. The more and better contributions you make, the more your reputation grows. Some people have accumulated such a large reputation that even though they are not leaders, their recognition is still sought after more than most. Not all communities will have one of these contributors, and they are more likely in communities where heads-down work is valued more than very public work.

Mentors

When any of us gets started with a community for the first time, we usually end up finding one or two people who help us learn the ropes. These people help us find the resources we need, teach us what those resources don’t, and are instrumental in helping us make the leap from user to contributor. Very often these people aren’t the project owners or leaders. Very often they have very little reputation themselves in the overall project. But because they take the time to help the new contributor, and because theirs is very likely to be the first recognition that contributor receives, the recognition they give is disproportionately more valuable to that contributor than it otherwise would be.

Every member of a community can provide recognition, and every one should, but if you find yourself in one of the roles above it is even more important for you to be doing so. These roles are responsible both for setting the example and for keeping a proper flow of recognition in a community. And without that flow of recognition, you will find that your flow of contributions will also dry up.

Read more
Benjamin Keyser

Bringing Fluid Motion to Browsing

In the previous Blog Post, we looked at how we use the Recency principle to redesign the experience around bookmarks, tabs and history.
In this blog post, we look at how the new Ubuntu Browser makes the UI fade to the background in favour of the content. The design focuses on physical impulse familiarity – “muscle memory” – by marrying simple gestures to the two key browser tasks, making the experience feel as fluid and simple as flipping through a magazine.

 

Creating a new tab

For all new browsers, the approach to the URI top bar that enables searching as well as manual address entry has made the “new tab” function more central to the experience than ever. In addition, evidence suggests that opening a new tab is the third most frequently used action in a browser. To facilitate this, we made opening a new tab effortless and even (we think) a bit fun.
By pulling down anywhere on the current page, you activate a spring-loaded “new tab” feature that appears under the address bar of the page. Keep dragging far enough, and you’ll see a new blank page coming into view. If you release at this stage, a new tab will load, ready with the address bar and keyboard open, as well as an easy way to get to your bookmarks. But if you change your mind, just drag the current page back up or release early, and your current page comes back.

http://youtu.be/zaJkNRvZWgw

 

Get to your open tabs and recently visited sites

Pulling the current page downward can create a new blank tab; conversely, dragging the bottom edge upward shows your already open tabs, ordered by recency, echoing the right-edge “open apps” view.

If you keep on dragging upward without releasing, you can dig even further into the past with your most recently visited pages grouped by site in a “history” list. By grouping under the site domain name, it’s easier to find what you’re looking for without thumbing through hundreds of individual page URLs. However, if you want all the detail, tap an item in the list to see your complete history.

It’s not easy to improve upon such a well-worn application as the browser, it’s true. But we’re hopeful that by adding new fluidity to creating, opening and switching between tabs, our users will find this browsing experience simpler to use, especially with one hand, and more seamless and fluid than ever.

 

 

Read more
Jussi Pakkanen

A use case that pops up every now and then is to have a self-contained object that needs to be accessed from multiple threads. The problem appears when the object, as part of its usual operation, calls its own methods. This leads to tricky locking operations, a need for a recursive mutex, or something else that is non-optimal.

Another common approach is to use the pimpl idiom, which hides the contents of an object inside a hidden private object. There are ample details on the internet, but the basic setup of a pimpl’d class is the following. First of all we have the class header:

class Foo {
public:
    Foo();
    void func1();
    void func2();

private:
    class Private;
    std::unique_ptr<Private> p;
};

Then in the implementation file you have first the definition of the private class.

class Foo::Private {
public:
    Private();
    void func1() { ... };
    void func2() { ... };

private:
   void privateFunc() { ... };
   int x;
};

Followed by the definition of the main class.

Foo::Foo() : p(new Private) {
}

void Foo::func1() {
    p->func1();
}

void Foo::func2() {
    p->func2();
}

That is, Foo only calls the implementation bits in Foo::Private.

The main idea to realize is that Foo::Private can never call functions of Foo. Thus if we can isolate the locking bits inside Foo, the functionality inside Foo::Private becomes automatically thread safe. The way to accomplish this is simple. First you add a (public) std::mutex m to Foo::Private. Then you just change the functions of Foo to look like this:

void Foo::func1() {
    std::lock_guard<std::mutex> guard(p->m);
    p->func1();
}

void Foo::func2() {
    std::lock_guard<std::mutex> guard(p->m);
    p->func2();
}
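
For completeness, here is a sketch (mine, not from the original post) of Foo::Private after the change, with the added mutex member:

#include <mutex>

class Foo::Private {
public:
    Private();
    std::mutex m; // public so that Foo's forwarding functions can lock it

    void func1() { ... };
    void func2() { ... };

private:
   void privateFunc() { ... };
   int x;
};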

This accomplishes many things nicely:

  • Lock guards make locks impossible to leak, no matter what happens
  • Foo::Private can pretend that it is single-threaded which usually makes implementation a lot easier

The main drawback of this approach is that the locking is coarse, which may be a problem when squeezing out ultimate performance. But usually you don’t need that.
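
As a quick usage sketch (my example, not from the post): two threads can now call into the same object concurrently, and the lock guards in Foo serialize the access.

#include <thread>

int main() {
    Foo f;
    // Both threads hammer the same Foo; each call locks p->m on entry.
    std::thread t1([&f] { for (int i = 0; i < 10000; ++i) f.func1(); });
    std::thread t2([&f] { for (int i = 0; i < 10000; ++i) f.func2(); });
    t1.join();
    t2.join();
    return 0;
}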

Read more
pitti

I have used LaTeX and latex-beamer for pretty much my entire life of document and presentation production, i. e. since about my 9th school grade. I’ve always found the LaTeX syntax a bit clumsy, but with good enough editor shortcuts to insert e. g. \begin{itemize} \item...\end{itemize} with just two keystrokes, it has been good enough for me.

A few months ago a friend of mine pointed out pandoc to me, which is just simply awesome. It can convert between a million document formats, but most importantly take Markdown and spit out LaTeX, or directly PDF (through an intermediate step of building a LaTeX document and calling pdftex). It also has a template for beamer. Documents now look so much more readable and are easier to write! And you can always directly write LaTeX commands without any fuss, so that you can use Markdown for the structure/headings/enumerations/etc., and LaTeX for formulas, XY-pic and the other goodies. That’s how it always should have been! ☺
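
To illustrate (my own minimal sketch, not from the original post), a beamer talk source can now look like this: Markdown for the structure, raw LaTeX sprinkled in where needed, and one pandoc call to build the PDF, the same call the vim shortcut below uses:

% My talk title
% My name

# Introduction

## First slide

- Markdown bullets become beamer items
- Inline math just works: $e^{i\pi} + 1 = 0$

\begin{itemize}
\item Raw LaTeX environments keep working, too
\end{itemize}

$ pandoc -t beamer talk.md -o talk.pdf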

So last night I finally sat down and created a vim config for it:

"-- pandoc Markdown+LaTeX -------------------------------------------

function s:MDSettings()
    inoremap <buffer> <Leader>n \note[item]{}<Esc>i
    noremap <buffer> <Leader>b :! pandoc -t beamer % -o %<.pdf<CR><CR>
    noremap <buffer> <Leader>l :! pandoc -t latex % -o %<.pdf<CR>
    noremap <buffer> <Leader>v :! evince %<.pdf 2>&1 >/dev/null &<CR><CR>

    " adjust syntax highlighting for LaTeX parts
    "   inline formulas:
    syntax region Statement oneline matchgroup=Delimiter start="\$" end="\$"
    "   environments:
    syntax region Statement matchgroup=Delimiter start="\\begin{.*}" end="\\end{.*}" contains=Statement
    "   commands:
    syntax region Statement matchgroup=Delimiter start="{" end="}" contains=Statement
endfunction

autocmd BufRead,BufNewFile *.md setfiletype markdown
autocmd FileType markdown :call <SID>MDSettings()

That gives me “good enough” (with some quirks) highlighting without trying to interpret TeX stuff as Markdown, and shortcuts for calling pandoc and evince. Improvements appreciated!

Read more
Dustin Kirkland

Tomorrow, February 19, 2014, I will be giving a presentation to the Capital of Texas chapter of ISSA, which will be the first public presentation of a new security feature that has just landed in Ubuntu Trusty (14.04 LTS) in the last 2 weeks -- doing a better job of seeding the pseudo random number generator in Ubuntu cloud images.  You can view my slides here (PDF), or you can read on below.  Enjoy!


Q: Why should I care about randomness? 

A: Because entropy is important!

  • Choosing hard-to-guess random keys provides the basis for all operating system security and privacy
    • SSL keys
    • SSH keys
    • GPG keys
    • /etc/shadow salts
    • TCP sequence numbers
    • UUIDs
    • dm-crypt keys
    • eCryptfs keys
  • Entropy is how your computer creates hard-to-guess random keys, and that's essential to the security of all of the above

Q: Where does entropy come from?

A: Hardware, typically.

  • Keyboards
  • Mice
  • Interrupt requests
  • HDD seek timing
  • Network activity
  • Microphones
  • Web cams
  • Touch interfaces
  • WiFi/RF
  • TPM chips
  • RdRand
  • Entropy Keys
  • Pricey IBM crypto cards
  • Expensive RSA cards
  • USB lava lamps
  • Geiger Counters
  • Seismographs
  • Light/temperature sensors
  • And so on

Q: But what about virtual machines, in the cloud, where we have (almost) none of those things?

A: Pseudo random number generators are our only viable alternative.

  • In Linux, /dev/random and /dev/urandom are interfaces to the kernel’s entropy pool
    • Basically, endless streams of pseudo random bytes
  • Some utilities and most programming languages implement their own PRNGs
    • But they usually seed from /dev/random or /dev/urandom
  • Sometimes, virtio-rng is available, for hosts to feed guests entropy
    • But not always

Q: Are Linux PRNGs secure enough?

A: Yes, if they are properly seeded.

  • See random(4)
  • When a Linux system starts up without much operator interaction, the entropy pool may be in a fairly predictable state
  • This reduces the actual amount of noise in the entropy pool below the estimate
  • In order to counteract this effect, it helps to carry a random seed across shutdowns and boots
  • See /etc/init.d/urandom
...
dd if=/dev/urandom of=$SAVEDFILE bs=$POOLBYTES count=1 >/dev/null 2>&1

...

Q: And what exactly is a random seed?

A: Basically, it’s a small catalyst that primes the PRNG pump.

  • Let’s pretend the digits of Pi are our random number generator
  • The random seed would be a starting point, or “initialization vector”
  • e.g. Pick a number between 1 and 20
    • say, 18
  • Now start reading random numbers

  • Not bad...but if you always pick ‘18’...

XKCD on random numbers

RFC 1149.5 specifies 4 as the standard IEEE-vetted random number.

Q: So my OS generates an initial seed at first boot?

A: Yep, but computers are predictable, especially VMs.

  • Computers are inherently deterministic
    • And thus, bad at generating randomness
  • Real hardware can provide quality entropy
  • But virtual machines are basically clones of one another
    • ie, The Cloud
    • No keyboard or mouse
    • IRQ based hardware is emulated
    • Block devices are virtual and cached by hypervisor
    • RTC is shared
    • The initial random seed is sometimes part of the image, or otherwise chosen from a weak entropy pool

Dilbert on random numbers


http://j.mp/1dHAK4V


Q: Surely you're just being paranoid about this, right?

A: I’m afraid not...

Analysis of the LRNG (2006)

  • Little prior documentation on Linux’s random number generator
  • Random bits are a limited resource
  • Very little entropy in embedded environments
  • OpenWRT was the case study
  • OS start up consists of a sequence of routine, predictable processes
  • Very little demonstrable entropy shortly after boot
  • http://j.mp/McV2gT

Black Hat (2009)

  • iSec Partners designed a simple algorithm to attack cloud instance SSH keys
  • Picked up by Forbes
  • http://j.mp/1hcJMPu

Factorable.net (2012)

  • Minding Your P’s and Q’s: Detection of Widespread Weak Keys in Network Devices
  • Comprehensive, Internet wide scan of public SSH host keys and TLS certificates
  • Insecure or poorly seeded RNGs in widespread use
    • 5.57% of TLS hosts and 9.60% of SSH hosts share public keys in a vulnerable manner
    • They were able to remotely obtain the RSA private keys of 0.50% of TLS hosts and 0.03% of SSH hosts because their public keys shared nontrivial common factors due to poor randomness
    • They were able to remotely obtain the DSA private keys for 1.03% of SSH hosts due to repeated signature non-randomness
  • http://j.mp/1iPATZx

Dual_EC_DRBG Backdoor (2013)

  • Dual Elliptic Curve Deterministic Random Bit Generator
  • Ratified NIST, ANSI, and ISO standard
  • Possible backdoor discovered in 2007
  • Bruce Schneier noted that it was “rather obvious”
  • Documents leaked by Snowden and published in the New York Times in September 2013 confirm that the NSA deliberately subverted the standard
  • http://j.mp/1bJEjrB

Q: Ruh roh...so what can we do about it?

A: For starters, do a better job seeding our PRNGs.

  • Securely
  • With high quality, unpredictable data
  • More sources are better
  • As early as possible
  • And certainly before generating
  • SSH host keys
  • SSL certificates
  • Or any other critical system DNA
  • /etc/init.d/urandom “carries” a random seed across reboots, and ensures that the Linux PRNGs are seeded

Q: But how do we ensure that in cloud guests?

A: Run Ubuntu!


Sorry, shameless plug...

Q: And what is Ubuntu's solution?

A: Meet pollinate.

  • pollinate is a new security feature that seeds the PRNG.
  • Introduced in Ubuntu 14.04 LTS cloud images
  • Upstart job
  • It automatically seeds the Linux PRNG as early as possible, and before SSH keys are generated
  • It’s GPLv3 free software
  • Simple shell script wrapper around curl
  • Fetches random seeds
  • From 1 or more entropy servers in a pool
  • Writes them into /dev/urandom
  • https://launchpad.net/pollinate

Q: What about the back end?

A: Introducing pollen.

  • pollen is an entropy-as-a-service implementation
  • Works over HTTP and/or HTTPS
  • Supports a challenge/response mechanism
  • Provides 512 bit (64 byte) random seeds
  • It’s AGPL free software
  • Implemented in golang
  • Less than 50 lines of code
  • Fast, efficient, scalable
  • Returns the (optional) challenge sha512sum
  • And 64 bytes of entropy
  • https://launchpad.net/pollen

Q: Golang, did you say?  That sounds cool!

A: Indeed. Around 50 lines of code, cool!

pollen.go

Q: Is there a public entropy service available?

A: Hello, entropy.ubuntu.com.

  • Highly available pollen cluster
  • TLS/SSL encryption
  • Multiple physical servers
  • Behind a reverse proxy
  • Deployed and scaled with Juju
  • Multiple sources of hardware entropy
  • High network traffic is always stirring the pot
  • AGPL, so source code always available
  • Supported by Canonical
  • Ubuntu 14.04 LTS cloud instances run pollinate once, at first boot, before generating SSH keys

Q: But what if I don't necessarily trust Canonical?

A: Then use a different entropy service :-)

  • Deploy your own pollen
    • bzr branch lp:pollen
    • sudo apt-get install pollen
    • juju deploy pollen
  • Add your preferred server(s) to your $POOL
    • In /etc/default/pollinate
    • In your cloud-init user data
      • In progress
  • In fact, any URL works if you disable the challenge/response with pollinate -n|--no-challenge
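
For example, a sketch of both options together (entropy.example.com is a placeholder for your own pollen server):

# /etc/default/pollinate
POOL="https://entropy.example.com/ https://entropy.ubuntu.com/"

$ sudo pollinate --no-challenge    # challenge/response disabled, so any URL works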

Q: So does this increase the overall entropy on a system?

A: No, no, no, no, no!

  • pollinate seeds your PRNG, securely and properly and as early as possible
  • This improves the quality of all random numbers generated thereafter
  • pollen provides random seeds over HTTP and/or HTTPS connections
  • This information can be fed into your PRNG
  • The Linux kernel maintains a very conservative estimate of the number of bits of entropy available, in /proc/sys/kernel/random/entropy_avail
  • Note that neither pollen nor pollinate directly affect this quantity estimate!!!
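
You can read the kernel's estimate at any time; the value shown here is just an illustration and will vary:

$ cat /proc/sys/kernel/random/entropy_avail
3071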

Q: Why the challenge/response in the protocol?

A: Think of it like the Heisenberg Uncertainty Principle.

  • The pollinate challenge (via an HTTP POST submission) affects the pollen server’s PRNG state machine
  • pollinate can verify the response and ensure that the pollen server at least “did some work”
  • From the perspective of the pollen server administrator, all communications are “stirring the pot”
  • Numerous concurrent connections ensure a computationally complex and impossible-to-reproduce entropy state

Q: What if pollinate gets crappy or compromised or no random seeds?

A: Functionally, it’s no better or worse than it was without pollinate in the mix.

  • In fact, you can `dd if=/dev/zero of=/dev/random` if you like, without harming your entropy quality
    • All writes to the Linux PRNG are whitened with SHA1 and mixed into the entropy pool
    • Of course it doesn’t help, but it doesn’t hurt either
  • Your overall security is back to the same level it was when your cloud or virtual machine booted at an only slightly random initial state
  • Note the permissions on /dev/*random
    • crw-rw-rw- 1 root root 1, 8 Feb 10 15:50 /dev/random
    • crw-rw-rw- 1 root root 1, 9 Feb 10 15:50 /dev/urandom
  • It's a bummer of course, but there's no new compromise

Q: What about SSL compromises, or CA Man-in-the-Middle attacks?

A: We are mitigating that by bundling the public certificates in the client.


  • The pollinate package ships the public certificate of entropy.ubuntu.com
    • /etc/pollinate/entropy.ubuntu.com.pem
    • And curl uses this certificate exclusively by default
  • If this really is your concern (and perhaps it should be!)
    • Add more URLs to the $POOL variable in /etc/default/pollinate
    • Put one of those behind your firewall
    • You simply need to ensure that at least one of those is outside of the control of your attackers

Q: What information gets logged by the pollen server?

A: The usual web server debug info.

  • The current timestamp
  • The incoming client IP/port
    • At entropy.ubuntu.com, the client IP/port is actually filtered out by the load balancer
  • The browser user-agent string
  • Basically, the exact same information that Chrome/Firefox/Safari sends
  • You can override if you like in /etc/default/pollinate
  • The challenge/response, and the generated seed are never logged!
Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server received challenge from [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634146155]

Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server sent response to [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634191843]

Q: Have the code or design been audited?

A: Yes, but more feedback is welcome!

  • All of the source is available
  • Service design and hardware specs are available
  • The Ubuntu Security team has reviewed the design and implementation
  • All feedback has been incorporated
  • At least 3 different Linux security experts outside of Canonical have reviewed the design and/or implementation
    • All feedback has been incorporated

Q: Where can I find more information?

A: Read Up!


Stay safe out there!
:-Dustin

Read more
Michael Hall

It seems a fairly common, straightforward question. You’ve probably been asked it before. We all have reasons why we hack, why we code, why we write or draw. If you ask somebody this question, you’ll hear things like “scratching an itch” or “making something beautiful” or “learning something new”. These are all excellent reasons for creating or improving something. But contributing isn’t just about creating, it’s about giving that creation away. Usually giving it away for free, with no or very few strings attached. When I ask “Why do you contribute to open source”, I’m asking why you give it away.

This question is harder to answer, and the answers are often far more complex than the ones given for why people simply create something. What makes it worthwhile to spend your time, effort, and often money working on something, and then turn around and give it away? People often have different intentions or goals in mind when they contribute, from benevolent giving to a community they care about, to personal pride in knowing that something they did is being used in something important or by somebody important. But when you strip away the details of the situation, these all hinge on one thing: recognition.

If you read books or articles about community, one consistent theme you will find in almost all of them is the importance of recognizing the contributions that people make. In fact, if you look at a wide variety of successful communities, you will find that one common thing they all offer in exchange for contribution is recognition. It is the fuel that communities run on. It’s what connects the contributor to their goal, both selfish and selfless. In fact, with open source, the only way a contribution can actually be stolen is by not allowing that recognition to happen. Even the most permissive licenses require attribution: something that tells everybody who made it.

Now let’s flip that question around: why do people contribute to your project? If their contribution hinges on recognition, are you prepared to give it? I don’t mean your intent; I’ll assume that you want to recognize contributions. I mean: do you have the processes and people in place to give it?

We’ve gotten very good about building tools to make contribution easier, faster, and more efficient, often by removing the human bottlenecks from the process.  But human recognition is still what matters most.  Silently merging someone’s patch or branch, even if their name is in the commit log, isn’t the same as thanking them for it yourself or posting about their contribution on social media. Letting them know you appreciate their work is important, letting other people know you appreciate it is even more important.

If you are the owner or a leader in a project with a community, you need to be aware of how recognition is flowing out just as much as how contributions are flowing in. Too often communities are successful almost by accident, because the people in them are good at making sure contributions are recognized and that people know it, simply because that’s their nature. But it’s just as possible for communities to fail because the personalities involved didn’t have this natural tendency: not because of any lack of appreciation for the contributions, just a quirk of their personality. It doesn’t have to be this way; if we are aware of the importance of recognition in a community, we can be deliberate in our approaches to making sure it flows freely in exchange for contributions.

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140722 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel has been rebased to v3.16-rc6 and officially uploaded
to the archive. We (as in apw) have also completed a herculean config
review for Utopic and administered the appropriate changes. Please test
and let us know your results.
—–
Important upcoming dates:
Thurs Jul 24 – 14.04.1 (~2 days away)
Thurs Aug 07 – 12.04.5 (~2 weeks away)
Thurs Aug 21 – Utopic Feature Freeze (~4 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Status for the main kernels, until today (Jul. 22):

  • Lucid – Released
  • Precise – Released
  • Saucy – Released
  • Trusty – Released

    Current opened tracking bugs details:

  • http://people.canonical.com/~kernel/reports/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://people.canonical.com/~kernel/reports/sru-report.html

    Schedule:

    14.04.1 cycle: 29-Jun through 07-Aug
    ====================================================================
    27-Jun Last day for kernel commits for this cycle
    29-Jun – 05-Jul Kernel prep week.
    06-Jul – 12-Jul Bug verification & Regression testing.
    13-Jul – 19-Jul Regression testing & Release to -updates.
    20-Jul – 24-Jul Release prep
    24-Jul 14.04.1 Release [1]
    07-Aug 12.04.5 Release [2]

    cycle: 08-Aug through 29-Aug
    ====================================================================
    08-Aug Last day for kernel commits for this cycle
    10-Aug – 16-Aug Kernel prep week.
    17-Aug – 23-Aug Bug verification & Regression testing.
    24-Aug – 29-Aug Regression testing & Release to -updates.

    [1] This will be the very last kernels for lts-backport-quantal, lts-backport-raring,
    and lts-backport-saucy.

    [2] This will be the lts-backport-trusty kernel as the default in the precise point
    release iso.


Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

Read more
Jussi Pakkanen

There are usually two different ways of doing something. The first is the correct way. The second is the easy way.

As an example of this, let’s look at using the functionality of the C++ standard library. The correct way is to use the fully qualified name, such as std::vector or std::chrono::milliseconds. The easy way is to have using namespace std; and then just use the class names directly.

The first way is the “correct” one, as it prevents symbol clashes, among a bunch of other good reasons. The latter leads to all sorts of problems, and for this reason many style guides and the like prohibit its use.
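
A minimal sketch of the two styles side by side:

#include <vector>

using namespace std; // the easy way: every name in std leaks into scope

int main() {
    std::vector<int> a; // the correct way: fully qualified
    vector<int> b;      // compiles only because of the using directive above
    return 0;
}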

But there is a catch. Software is written by humans and humans have a peculiar tendency.

They will always do the easy thing.

There is no possible way for you to prevent them from doing that, apart from standing behind their back and watching every letter they type.

Systems that rely, in any way, on the fact that people will do the right thing rather than the easy thing are doomed to fail from the start. They. Will. Not. Work. And they can’t be made to work. Trying to force them to work leads only to massive shouting and bad blood.

What does this mean to you, the software developer?

It means that the only way your application/library/tool/whatever is going to succeed is if the correct thing to do is also the simplest thing to do. That is the only way to make people do the right thing consistently.

Read more
pitti

Yesterday’s autopkgtest 3.2 release brings several changes and improvements that developers should be aware of.

Cleanup of CLI options, and config files

Previous adt-run versions had rather complex, confusing, and rarely (if ever?) used options for filtering binaries and building sources without testing them. All of those (--instantiate, --sources-tests, --sources-no-tests, --built-binaries-filter, --binaries-forbuilds, and --binaries-fortests) now went away. Now there is only -B/--no-built-binaries left, which disables building/using binaries for the subsequent unbuilt tree or dsc arguments (by default they get built and their binaries used for tests), and I added its opposite --built-binaries for completeness (although you most probably never need this).

The --help output now is a lot easier to read, both due to above cleanup, and also because it now shows several paragraphs for each group of related options, and sorts them in descending importance. The manpage got updated accordingly.

Another new feature is that you can now put arbitrary parts of the command line into a file (thanks to porting to Python’s argparse), with one option/argument per line. So you could e. g. create config files for options and runners which you use often:

$ cat adt_sid
--output-dir=/tmp/out
-s
---
schroot
sid

$ adt-run libpng @adt_sid

Shell command tests

If your test only contains a shell command or two, or you want to re-use an existing upstream test executable and just need to wrap it with some command like dbus-launch or env, you can use the new Test-Command: field instead of Tests: to specify the shell command directly:

Test-Command: xvfb-run -a src/tests/run
Depends: @, xvfb, [...]

This avoids having to write lots of tiny wrappers in debian/tests/. This was already possible for click manifests, this release now also brings this for deb packages.
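
For comparison, a sketch of the old-style equivalent (the wrapper name run-wrapper is made up): debian/tests/control points at a script,

Tests: run-wrapper
Depends: @, xvfb, [...]

and debian/tests/run-wrapper contains little more than:

#!/bin/sh
set -e
exec xvfb-run -a src/tests/run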

Click improvements

It is now very easy to define an autopilot test with extra package dependencies or restrictions, without having to specify the full command, using the new autopilot_module test definition. See /usr/share/doc/autopkgtest/README.click-tests.html for details.

If your test fails and you just want to run your test with additional dependencies or changed restrictions, you can now avoid having to rebuild the .click by pointing --override-control (which previously only worked for deb packages) to the locally modified manifest. You can also (ab)use this to e. g. add the autopilot -v option to autopilot_module.

Unpacking of test dependencies was made more efficient by not downloading Python 2 module packages (which cannot be handled in “unpack into temp dir” mode anyway).

Finally, I made the adb setup script more robust and also faster.

As usual, every change in control formats, CLI etc. have been documented in the manpages and the various READMEs. Enjoy!

Read more
Rick Spencer

The Community Team



So, given Jono’s departure a few weeks back, I bet a lot of folks have been wondering about the Canonical Community Team. For a little background, the community team reports into the Ubuntu Engineering division of Canonical, which means that they all report to me. We have not been idle, and this post discusses a bit about the Community Team going forward.

What has Stayed the Same?

First, we have made some changes to the structure of the community team itself. However, one thing did not change. I kept the community team reporting directly into me, VP of Engineering, Ubuntu. I decided to do this so that there is a direct line to me for any community concerns that have been raised to anyone on the community team.

I had a call with the Community Council a couple of weeks ago to discuss the community team and get feedback about how it is functioning and how things could be improved going forward. I laid out the following for the team.

First, there were three key things that I think that I wanted the Community Team to continue to focus on:
  • Continue to create and run innovative programs to facilitate ever more community contributions and growing the community.
  • Continue to provide good advice to me and the rest of Canonical regarding how to be the best community members we can be, given our privileged positions of being paid to work within that community.
  • Continue to assist with outward communication from Canonical to the community regarding plans, project status, and changes to those.
The Community Council was very engaged in discussing how this all works and should work in the future, as well as other goals and responsibilities for the community team.

What Has Changed?

In setting up the team, I had some realizations. First, there was no longer just one “Community Manager”. When the project was young and Canonical was small, we had only one, and the team slowly grew. However, the team is now four people dedicated to just the Community Team, and there are others who spend almost all of their time working on Community Team projects.

Secondly, while individuals on the team had been hired to have specific roles in the community, every one of them had branched out to tackle new challenges as needed.

Thirdly, there is no longer just one “Community Spokesperson”. Everyone in Ubuntu Engineering can and should speak to/for Canonical and to/for the Ubuntu Community in the right contexts.
So, we made some small, but I think important changes to the Community Team.

First, we created the role of Community Team Manager. Notice the important inclusion of the word “Team”. This person’s job is not to “manage the community”, but rather to organize and lead the rest of the community team members. This includes things like project planning, HR responsibilities, strategic planning and everything else entailed in being a good line manager. After a rather competitive interview process, with some strong candidates, one person clearly rose to the top as the best candidate. So, I would like to formally introduce David Planella (lp, g+) as the Community Team Manager!

Second, I changed the other job titles from their rather specific titles to just “Community Manager”, in order to reflect the reality that everyone on the community team is responsible for the whole community. So that means Michael Hall (lp, g+), Daniel Holbach (lp, g+), and Nicholas Skaggs (lp, g+) are all now “Community Managers”.

What's Next?

This is a very strong team, and a really good group of people. I know them each personally, and have a lot of confidence in each of them personally. Combined as a team, they are amazing. I am excited to see what comes next.

In light of these changes, the most common question I get is, “Who do I talk to if I have a question or concern?” The answer to that is “anyone.” It’s understandable if you feel the most comfortable talking to someone on the community team, so please feel free to find David, Michael, Daniel, or Nicholas online and ask their question. There are, of course, other stalwarts like Alan Pope (lp, g+) and Oliver Grawert (lp, g+) who seem to be always online :) By which, I mean to say that while the Community Managers are here to serve the Ubuntu Community, I hope that anyone in Ubuntu Engineering considers their role in the Ubuntu Community to include working with anyone else in the Ubuntu Community :)

Want to talk directly to the community team today? Easy: join their Ubuntu on Air Q&A session at 15:00 UTC :)

Finally, please note that I love to be "interrupted" by questions from community members :) The best way to get in touch with me is on freenode, where I go by rickspencer3. Otherwise, I am also on g+, and of course there is this blog :)

Read more
niemeyer

This is a special release of mgo, the Go driver for MongoDB. Besides the several new features, this release marks the change of the Go package import path to gopkg.in, after years using the current one based on a static file that lives at labix.org. Note that the package API is still not changing in any backwards incompatible way, though, so it is safe to replace in-use import paths right away. Instead, the change is being done for a few other reasons:

  • gopkg.in is more reliable, offering a geographically distributed replicated deployment with automatic failover
  • gopkg.in has a nice landing page with pointers to source code, API documentation, versions available, etc.
  • gopkg.in is backed by git and github.com, which is one of the most requested changes over the years it has been hosted with Bazaar at Launchpad

So, from now on the import path to use when using and retrieving the package should be:

    gopkg.in/mgo.v2

Starting with this release, the source code is also maintained and updated only on GitHub at:

    https://github.com/go-mgo/mgo

The old repository and import path will remain up for the time being, to ensure existing applications continue to work, but it’s already holding out-of-date code.

In terms of changes, the r2014.07.21 release brings the following news:

Socket pool size limit may be changed

A new Session.SetPoolLimit was added for changing the number of sockets that may be in use for the desired server before the session will block waiting for an available one. The value defaults to 4096 if unset.

Note that the driver actually had the ability to limit concurrent sockets for a long time, and the hardcoded default was already 4096 sockets per server. The reason why this wasn’t exposed via the API is because people commonly delegate to the database driver the management of concurrency for the whole application, and this is a bad design. Instead, this limit must be set to cover any expected workload of the application, and application concurrency should be controlled “at the door”, by properly restricting used resources (goroutines, etc) before they are even created.
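
A minimal sketch of using it (the address and limit are placeholders):

package main

import (
    "log"

    "gopkg.in/mgo.v2"
)

func main() {
    session, err := mgo.Dial("localhost")
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()

    // Cap concurrent sockets per server before handing the session
    // to the application's workers; the default is 4096 if unset.
    session.SetPoolLimit(512)
}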

This feature was requested by a number of people, with the most notable threads being with Travis Reader, from IronIO, and the feature is finally being added after Tyler Bunnel, from Stretchr.com, asked for the ability to increase that limit. Thanks to both, and everybody else who chimed in as well.

TCP name resolution times out after 10 seconds

Previous releases had no driver-enforced timeout.

Reported by Cailin Nelson, MongoDB.

json.Number is marshalled as a number

The encoding/json package in the standard library allows numbers to be held as a string with the json.Number type so that code can take into account their visual representation for differentiating between integers and floats (a non-standard behavior). The mgo/bson package will now recognize those values properly, and marshal them as BSON int64s or float64s based on the number representation.
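
A small sketch of the new behavior (my example):

package main

import (
    "encoding/json"
    "fmt"

    "gopkg.in/mgo.v2/bson"
)

func main() {
    // "42" is marshalled as a BSON int64, "0.5" as a BSON float64.
    doc := bson.M{"count": json.Number("42"), "ratio": json.Number("0.5")}
    data, err := bson.Marshal(doc)
    if err != nil {
        panic(err)
    }
    fmt.Printf("marshalled %d BSON bytes\n", len(data))
}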

Patch by Min-Young Wu, Facebook.

New GridFile.Abort method for canceling uploads

When called, GridFile.Abort cancels an upload in progress so that closing the file being written to will drop all uploaded chunks rather than atomically making the file available.

Feature requested by Roger Peppe, Canonical.

GridFile.Close drops chunks on write errors

Previously, a write error would prevent the GridFile file from being created, but would leave already written chunks in the database. Now those chunks are dropped when the file is closed.

Support for PLAIN (LDAP) authentication

The driver can now authenticate against MongoDB servers set up using the PLAIN mechanism, which enables authentication against LDAP servers as documented.
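
A sketch of what that can look like with the driver's credential-based login (host, names, and the $external source are placeholders and assumptions on my part, not from the release notes):

cred := &mgo.Credential{
    Username:  "ldapuser",
    Password:  "secret",
    Source:    "$external", // assumption: PLAIN authenticates against an external (LDAP) source
    Mechanism: "PLAIN",
}
err := session.Login(cred)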

Feature requested by Cailin Nelson, MongoDB.

Preliminary support for Bulk API

The ability to execute certain operations in bulk mode is being added to the 2.6 release of the MongoDB server. This allows queuing up inserts, updates, and removals to be sent in batches rather than one by one.

This release includes an experimental API to support that, including compatibility with previous version of the MongoDB server for the added features. The functionality is exposed via the Collection.Bulk method, which returns a value with type *mgo.Bulk that contains the following methods at the moment:

  • Bulk.Insert – enqueues one or more insert operations
  • Bulk.Run – runs all operations in the queue
  • Bulk.Unordered – use unordered mode, so latter operations may proceed when prior ones failed

Besides preparing for the complete bulk API with more methods, this preliminary change adds support for the ContinueOnError wire protocol flag of the insert operation in a way that will remain compatible with the upcoming bulk operation commands, via the unordered mode.
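
A short sketch of the queue-then-run pattern (my example; I assume Run follows the usual result-plus-error convention):

bulk := collection.Bulk() // collection is an *mgo.Collection
bulk.Unordered()          // latter operations proceed even if earlier ones fail
bulk.Insert(bson.M{"n": 1}, bson.M{"n": 2}, bson.M{"n": 3})
if _, err := bulk.Run(); err != nil {
    log.Fatal(err)
}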

The latter feature was requested by Chandra Sekar, and justifies the early release of the API.

Various compatibility improvements in user handling for 2.6

The MongoDB 2.6 release includes a number of changes in user handling functionality. The existing methods have been internally changed to preserve compatibility to the extent possible. Certain features, such as the handling of the User.UserSource field, cannot be preserved and will cause an error to be reported when used against 2.6.

Wake up nonce waiters on socket death

Problem reported and diagnosed by John Morales, MongoDB.

Don’t burn CPU if no masters are found and FailFast is set

Problem also reported by John Morales, MongoDB.

Stop Iter.Next if failure happens at get-more issuing time

Problem reported by Daniel Gottlieb, MongoDB.

Various innocuous race detector reports fixed

Running the test suite with the race detector enabled would raise various issues due to global variable modifications that are only done and only accessible to the test suite itself. These were fixed and the race detector now runs cleanly over the test suite.

Thanks to everybody that contributed to this release.

Read more