Canonical Voices

Posts tagged with 'ubuntu'

Daniel Holbach

15.04 is out!

ubuntu.com

 

And another Ubuntu release went out the door. I can’t believe that it’s the 22nd Ubuntu release already.

There’s a lot to be excited about in 15.04. The first phone powered by Ubuntu went out to customers and new devices are in the pipeline. The underpinnings of the various variants of Ubuntu are slowly converging, new Ubuntu flavours saw the light of day (MATE and Desktop Next), new features landed, new apps were added, more automated tests were written, etc. The future of Ubuntu is looking very bright.

What’s Ubuntu Core?

One thing I’m super happy about is a brand-new addition: Ubuntu Core and snappy. What does it offer? It gives you a minimal Ubuntu system, automatic and bulletproof updates with rollback, an app store and very straightforward enablement and packaging practices.

It has been brilliant to watch the snappy-devel@ and snappy-app-devel@ mailing lists in recent weeks and see how many enthusiasts, hobbyists, hardware manufacturers, porters and others have become interested and got started. If you have a look at Dustin’s blog post, you get a good idea of what’s happening. It also features a video of Mark, who explains how Ubuntu has adapted to the demands of a changing IT world.

One fantastic example of how Ubuntu Snappy is already powering devices you had never thought of is the Erle-Copter. (If you can’t see the video, check out this link.)

It’s simply beautiful how product builders and hobbyists can now focus on what they’re interested in: building a tool, an appliance, a robot, something crazy, something people will love or something which might change a small part of the world somewhere. What Ubuntu takes out of the equation is having to maintain a Linux distro.

Maintaining a Linux distro

Whenever I got a new device in my home that I could SSH into, I was happy and proud. I always felt: “wow, they get it – they’re using open source software, they’re using Linux”. That feeling was replaced at some stage, when I realised how rarely my NAS or my router received system updates. When I checked the changelogs of the updates that did arrive, I found that only some of the important CVEs of the last year were mentioned; sometimes only “feature updates” were listed.

To me it’s clear that not all product builders or hardware companies collaborate with the NSA and create backdoors on purpose; it’s simply hard work to maintain a Linux stack and to do it responsibly.

That’s why I feel Ubuntu Core is an offering that “has legs” (as Mark Shuttleworth would say): as somebody who wants to focus on building a great product or solving a specific use case, you can do just that. You can ship your business logic in a snap on top of Ubuntu Core and be done with it. Brilliant!

What’s next?

Next week is the Ubuntu Online Summit (5-7 May). There we are going to discuss the plans for the next cycle, and that’s where you can get involved, ask questions, bring up your ideas and get to know the folks who are working on it now.

I’ll write a separate blog post in the coming days explaining what’s happening next week; until then, feel free to have a look at:

 

Read more
David Planella

Ubuntu Online Summit
The 15.04 release frenzy is over, but the next big event in the Ubuntu calendar is just around the corner. In about a week, from the 5th to the 7th of May, the next edition of the Ubuntu Online Summit is taking off: three days of sessions for developers, designers, advocates, users and all members of our diverse community.

Alongside the developer-oriented discussions you’ll find presentations, workshops, lightning talks and much more. It’s a great opportunity for existing and new members to get together and contribute to the talks, watch a workshop to learn something new, or ask questions of the many rockstars who make Ubuntu.

While the schedule is being finalized, here’s an overview (and preview) of the content that you should expect in each one of the tracks:

  • App & scope development: the SDK and developer platform roadmaps, phone core apps planning, developer workshops
  • Cloud: Ubuntu Core on clouds, Juju, Cloud DevOps discussions, charm tutorials, OpenStack
  • Community: governance discussions, community event planning, Q+As, how to get involved in Ubuntu
  • Convergence: the road to convergence, the Ubuntu desktop roadmap, requirements and use cases to bring the desktop and phone together
  • Core: snappy Ubuntu Core, snappy post-vivid plans, snappy demos and Q+As
  • Show & Tell: presentations, demos, lightning talks (read: things that break and explode) on a varied range of topics

Joining the summit is easy: you’ll just need to follow the instructions and register for free for the Ubuntu Online Summit.

UOS highlights: back to the desktop, snappy and the road to convergence

This is going to be perhaps one of the most important summits in recent times. After a successful launch of the phone, followed by the exciting announcement and delivery of snappy Ubuntu Core, Ubuntu is entering a new era. An era of lean, secure, minimal and modular systems that can run on the cloud, on Internet-enabled devices, on the desktop and virtually anywhere.

While the focus of development in the last few cycles has been on shaping up and implementing the phone, this doesn’t mean other key parts of the project have been left out. The phone has helped create the platform and tools that will ultimately bring all these projects together, into a converged code base and user experience. From desktop to phone, to the cloud, to things, and back to the desktop.

The Ubuntu 15.10 cycle begins, and so does this exciting new era. The Ubuntu Online Summit will be a unique opportunity to pave the road to convergence and discuss how the next generation of the Ubuntu desktop is built. So the desktop is back in the spotlight, and snappy will be taking the lead role in bringing Ubuntu for devices and the desktop together. Expect a week of interesting discussions and of thinking outside the box to get there!

Participating in the Ubuntu Online Summit

Does this whet your appetite? Come and join us at the Summit, learn more and contribute to shaping the future of Ubuntu! There are different ways of taking part in the online event via video hangouts:

  • Participate or watch sessions – everyone is welcome to join a discussion to provide input or offer contributions. If you prefer to take a back seat, that’s fine too. You can either subscribe to sessions, watch them in your browser or directly join a live hangout. Just remember to register first and learn how to join a session.
  • Propose a session – do you want to take a more active role in contributing to Ubuntu? Do you have a topic you’d like to discuss, or an idea you’d like to implement? Then you’ll probably want to propose a session to make it happen. Proposals are still being accepted for another week, so why don’t you go ahead and propose a session?

Looking forward to seeing you all at the Summit!

The post Announcing the next Ubuntu Online Summit appeared first on David Planella.

Read more
Ben Howard

I am pleased to announce the initial Snappy Ubuntu Core Vagrant images [1, 2]. These images are bit-for-bit the same as the KVM images, but have a cloud-init configuration that allows Snappy to work within the Vagrant workflow.

Vagrant enables a cross-platform developer experience on MacOS, Windows or Linux [3].

Note: due to the way that Snappy works, shared file systems within Vagrant are not possible at this time. We are working on getting shared file system support enabled, but it will take us a little while.

If you want to use Vagrant as packaged in the Ubuntu archives, run the following in a terminal:

  • sudo apt-get -y install vagrant
  • cd <WORKSPACE>
  • vagrant init http://goo.gl/DO7a9W 
  • vagrant up
  • vagrant ssh
If you use Vagrant from [4] (i.e. on Windows or Mac, or to install the latest Vagrant), then you can run:
  • vagrant init ubuntu/ubuntu-15.04-snappy-core-edge-amd64
  • vagrant up
  • vagrant ssh

These images are a work in progress. If you encounter any issues, please report them to snappy-devel@lists.ubuntu.com or ping me (utlemming) on Launchpad.net.

---

[1] http://cloud-images.ubuntu.com/snappy/15.04/core/edge/current/core-edge-amd64-vagrant.box
[2] https://atlas.hashicorp.com/ubuntu/boxes/ubuntu-15.04-snappy-core-edge-amd64
[3] https://docs.vagrantup.com/v2/why-vagrant/index.html
[4] https://www.vagrantup.com/downloads.html

Read more
Nicholas Skaggs

It's Show and Tell Time!

Err, show and tell?
Who remembers their first years of schooling? At least for me, growing up in the US, those first years involved an activity called 'Show and Tell'. We were instructed to bring something in from home and talk about it. This could be a picture or souvenir from a trip or unique life event, something we made, another person who does interesting things, or just something we found really interesting. It was a way for us to learn more about each other in the classroom, as well as share cool things with each other.


Online Summit
Ok, snapping you back to reality: it's nearing time for UOS 15.05. UOS is the Ubuntu Online Summit we hold each cycle to talk about what's happening in Ubuntu. UOS 15.05 will run from May 5th to May 7th.

So what does the childhood version of me reminiscing about show and tell have to do with UOS? Well, I'm glad you asked! There is a 'Show and Tell' track available to everyone as a platform for sharing interesting and unique things with the rest of the community. These sessions can be very short (5 or 10 minutes) and are a great way to share your work within Ubuntu.

With that in mind, it's a perfect opportunity for you to participate in 'Show and Tell' with the rest of the community. I encourage you to propose a session on the 'Show and Tell' track, which exists for demos, quick talks, and 'show and tell' type things. It's perfect for spending 5 or 10 minutes talking about something you made or work on, something you find interesting, or a little about the team you work with or a project you've done. For those of you who may have been part of the 'lightning talks' during the days of the physical UDS, anything that would have been considered a lightning talk is more than welcome in this track.

Cool, where do I sign up?
Proposing a session is simple to do, and there's even a webpage to help! If you really get stuck, feel free to contact me, Svetlana Belkin, Marco Ceppi, or Allan Lesage, your friendly track leads for this track. Once it's proposed, the session will be assigned a date and time. I or another track lead will follow up with you before UOS to ensure you are ready and that the date and time are suitable for you.

Is there another way to participate?
Yes! Remember to check out the show and tell sessions and participate by asking questions and enjoying the presentations. I guarantee you will learn something new. Maybe even something useful!


Thanks for helping make UOS a success. I'll see you there!


Read more
Shuduo

Date: 2015-04-14  Author: news desk  Source: IT.com.cn (IT World Network)

Recently, the “Ubuntu Developer Innovation Contest”, jointly launched by Canonical and China Mobile, has been in full swing at major universities, publicly soliciting outstanding Scopes, apps and other works built for the Ubuntu operating system. Ubuntu Kylin serves as the contest’s development platform, providing developers with an open and convenient desktop operating system that is easy to use, so that they can put more of their energy into developing innovative entries.


China Mobile & Ubuntu Developer Innovation Contest

The contest will introduce Chinese developers to the new generation of mobile experience that Ubuntu brings. It breaks with the apps-are-king model that has prevailed since the first iPhone, and with the visual monotony and usability limits of a screen composed of nothing but rows of app icons. Ubuntu brings content and services to the front, presenting them directly on the screen and effectively creating a rich, fast and unfragmented experience for users. Meanwhile, developers can create a user experience at the system level that matches that of an app, at a fraction of the cost of traditional app development and maintenance. These unique experiences are built with Ubuntu Scopes, a new UI toolkit unique to Ubuntu and available through the Ubuntu SDK. To learn more about Ubuntu Scopes and Ubuntu phones, visit the official Ubuntu website.

The contest is open to students, professional developers and the open source community, with six awards in total; prizes include 70,000 RMB in cash as well as phones and other goods, and winning entrants in the student category will be offered internships at Canonical. Registration closes on May 15, 2015, and the finals will be judged in June 2015. Developers can sign up on the official website of the “Make Your Dream Come True” million-youth entrepreneurship and employment program.

Reportedly, most of the developers taking part in the contest use Ubuntu Kylin as their operating system for Ubuntu phone development. As a localized Chinese edition of Ubuntu, Ubuntu Kylin makes it easier for Chinese developers to work with the Ubuntu phone SDK.

To help industry developers and students get to know the Ubuntu phone platform and its development technology faster and better, Canonical has held a series of hands-on training events at venues on and off campus across the country. Led by well-known experts and senior engineers, the sessions walk individual developers and teams through everything from a platform introduction to actual hands-on development, showing attendees on the spot how to develop a Scope. The team has also prepared a number of Ubuntu Kylin boot disks containing the Ubuntu SDK, together with a full set of training materials, giving developers comprehensive technical support.

Developing Ubuntu phone apps on the Ubuntu Kylin platform lets developers easily realize their innovative ideas and creative trends, gives young developers early access to an emerging mobile ecosystem, and opens up brand-new entrepreneurial opportunities while helping the TD industry flourish. Online and offline technical training for the Ubuntu Developer Innovation Contest has already taken place at universities including Beijing University of Posts and Telecommunications, the University of Science and Technology of China, Central South University in Changsha, and Sun Yat-sen University. The schedule of upcoming training sessions is available by following the Ubuntu WeChat account (UbuntuByCanonical).

Read more
Shuduo

2015-04-14 16:48  Author: www.guigu.org  Source: guigu.org (Silicon Valley Network)  Editor: Shu Minghan

Recently, this reporter learned that the home-grown operating system Ubuntu Kylin has officially released its 15.04 Beta 2 test build, upgrading its flagship software to further improve the user experience. Suited to personal use as well as office work in large enterprises and government agencies, the system’s carefully refined details have raised its quality and service capability, won recognition from users, and set a good example for the development of Linux operating systems in China.

Meeting the habits of Chinese users

Ubuntu Kylin was designed from the outset to be an operating system with distinctly Chinese characteristics. Following a design philosophy that combines an internationalized platform with localized applications, it delivers a polished experience for Chinese users through a customized, localized desktop environment and applications developed to meet their specific needs.

According to the project, this test release brings many refinements over the 14.10 stable release: it fixes more than 60 bugs in total; updates the system theme; and upgrades the kernel to 3.19, adding support for Intel’s next-generation Braswell chips. Desktop improvements include enabling “locally integrated menus” and the launcher’s “click to minimize” by default, which makes the Unity interface easier to learn for experienced Windows users.

The new release also upgrades several signature applications, including the Software Center, Youker Assistant and the Youker Chinese Calendar. Youker Assistant 2.0.1 introduces an entirely new user interface and interaction model, while Software Center 1.3.0 adds features such as user-contributed translation improvements and personal application management. In addition, the development team has worked with Sogou on version 1.2 of the Sogou input method, which fixes several important known bugs and adds support for cell dictionaries; installed users will be updated automatically when a new release is available.

Ubuntu Kylin wins user recognition

Most open source software goes through several test releases before the stable release and is then continuously refined through its technical community; this is the normal open source development process. This build should be the last test release before the 15.04 stable version. Next, the Ubuntu Kylin development team will carry out more detailed testing and bug fixing of key system components, integrated applications and Chinese localization, laying the groundwork for bringing Ubuntu Kylin to market.

Last year, Ubuntu Kylin made the list of approved suppliers of personal operating systems for procurement by central state agencies. Being shortlisted for government procurement means the operating system has passed the most rigorous review and approval in the country, which is a great development opportunity for Ubuntu Kylin. The central government procurement center is reportedly already installing and trialing Ubuntu Kylin, and numerous ministries and state agencies are evaluating and trialing it as well.

This reporter has also learned from friends in the industry that many Android phone engineers in China, at companies such as Xiaomi and Shanda, actually do their Android development on Ubuntu Kylin. In the spirit of “never the best, only better”, Ubuntu Kylin’s latest test release aims to make the system ever more convenient and pleasant to use, and judging by download numbers, Ubuntu Kylin is steadily winning recognition from the market and from consumers.

Read more
David Planella

Nearly two years ago, the Ubuntu Community Donations Program was created as an extension to the donations page on ubuntu.com/download, where those individuals who download Ubuntu for free can choose to support the project financially with a voluntary contribution. In doing so, they can use a set of sliders to determine which parts of the project the amount they donate goes to (Ubuntu Desktop, Ubuntu for phone, Ubuntu for tablet, Ubuntu on public clouds, Cloud tools, Ubuntu Server with OpenStack, Community projects, Tip to Canonical).

While donations imply trust from donors that Canonical will act as a steward in managing their contributions, the feedback from the community back then was that the Community slider required a deeper level of attention in terms of management and transparency. With community being such an integral part of Ubuntu, and with the new opportunity to financially support community projects, events or travel, it was only logical to ensure that the funds allocated to them were managed fairly and transparently, with public reporting every six months and a way for Ubuntu members to request funding.

Although the regular reports already provide a clear picture of where the money donated for community projects is spent, today I’d like to give an update on the bigger picture of the Community Donations Program and answer some questions community members have raised.

A successful two years

In a nutshell, we’re proud to say that the program continues to achieve the goals it was set up for. Since its inception, it has funded around 70,000 USD worth of community initiatives, conferences, travel and more. The money has always been allocated upon individual request, and the vast majority of requests were accepted. Very few were declined, and when they were, we always strove to provide good reasoning for the decision.

This process has given us the opportunity to support a diverse set of teams and projects across the wider Ubuntu family, including flavours, and to sponsor open source projects and conferences that have collaborated with Ubuntu over the years.

Program review and feedback

About two years into the Program, we felt a more thorough review was due: to assess how it has been working, to evaluate the community feedback and to decide whether any adjustments are required. Working with the Community Council on the review, we’ve also tried to address some questions from Ubuntu members that came in recently. Here is a summary of this review.

The feedback in general has been overwhelmingly positive. The Community Donations Program is seen as an initiative that hugely benefits the Ubuntu project, and the figures and allocations in the reports are a testament to this fact.

Criticism is also important to take on board, and when it has come, we’ve addressed it individually and updated the public policy or FAQ accordingly. Recently, it has arrived in two areas: the uncertainty in cases where the exact cost is not known in advance (e.g. travel costs fluctuating between the date of the request and approval and booking), and the delay in actioning some of the requests. In the first case, we’ve updated the FAQ to reflect that there is some flexibility in the process to work with a reasonable estimate. In the second, we’ve tried to explain that while some requests are easy to approve and are actioned in a matter of days (we review them all once a week), others take longer due to several factors: back-and-forth communication to clarify aspects of the request, the number of pending requests, and in some cases the complexity of arranging the logistics. In general, we feel it’s not unreasonable to send a request at least a month ahead of whatever is being planned with the funds. We’re also making it clear that requests should be filed in advance rather than retroactively, so that community members do not end up in a difficult position should a request not be granted.

One of the questions that came in was regarding the flavour and upstream donation sliders. Originally, there were three community-related sliders on ubuntu.com/download: 1) Community participation in Ubuntu development, 2) Better coordination with Debian and upstreams, 3) Better support for flavours like Kubuntu, Xubuntu, Lubuntu. At some point during the 14.04 release, sliders 2) and 3) were removed, leaving 1) as Community projects. Overall, this didn’t change the outcome of community allocations: since its beginning, the Community Donations Program amounts have only come from the first slider, which is what the Canonical Community team manages. From there, money is always allocated fairly upon request, without distinction, benefiting Ubuntu, its flavours and upstreams equally.

All that said, the lack of communication regarding the removal of the sliders was not intended, and the change should have been communicated to the Community Team and the Community Council. It was a mistake for which we need to apologize. For any future changes to sliders that affect the community, we will make sure that the Community Council is included in communications as an important stakeholder in the process.

Questions were also raised about the reporting on community donations during the months in 2012/2013 between the donations page going live and the announcement of the Community Donations Program. As mentioned before, the Program was born out of the desire to provide a higher level of transparency for the funds assigned to community projects. Up until then (and in the same way as they do today for the rest of the donation sliders), donors were trusting Canonical to manage the allocations fairly. Public reports were made retroactively only where it made sense (i.e. to align with fiscal quarters), but not going all the way back to the time before the start of the Program.

All in all, with these small adjustments, we’re proud to say we’ll continue to support community projects with donations in the same way we’ve been doing for the last two years.

And most especially, we’d like to say a big ‘thank you’ to everyone who has kindly donated and to everyone who has used the funds to help shape the future of Ubuntu. You rock!

The post The Ubuntu Community Donations Program in review appeared first on David Planella.

Read more
bmichaelsen

Das ist alles nur geklaut und gestohlen,
nur gezogen und geraubt.
Entschuldigung, das hab ich mir erlaubt.
— die Prinzen, Alles nur geklaut

So, you might have noticed that there was no April Fools post from me this year, unlike in previous years. One idea I had was giving LibreOffice vi key bindings — except that apparently already exists: vibreoffice. So I went looking for something else and found odpdown by Thorsten, who just started to work on LibreOffice full-time again. Reading about it, I kept thinking that it would be great to be able to run this right from your favourite editor: Vim.

And indeed: That is not hard to do. Here is a very raw video showing how to run presentations right out of vim:

Now, this is a quick hack: Linux only, and it requires you to have the Python 3 UNO bindings installed, etc. If you want to play with it, clone the repo from GitHub and get started. Kudos go out to Thorsten for the original odpdown on which this is piggybacking (“das ist alles nur geklaut”). So: have fun with this — I will have to install vibreoffice now.


Read more
Barry Warsaw

Background

Snappy Ubuntu Core is a new edition of the Ubuntu you know and love, with some interesting new features, including atomic, transactional updates, and a much more lightweight application deployment story than traditional Debian/Ubuntu packaging.  Much of this work grew out of our development of a mobile/touch based version of Ubuntu for phones and tablets, but now Ubuntu Core is available for clouds and devices.

I find the transactional nature of upgrades to be very interesting.  While you still get a perfectly normal Ubuntu system, your root file system is read-only, so traditional apt-get based upgrades don't work.  Instead, your system version is image based; today you are running image 231 and tomorrow a new image is released to get you to 232.  When you upgrade to the new image, you get all the system changes.  We support both full and delta upgrades (the latter of which reduces bandwidth), and even phased updates so that we can roll out new upgrades and quickly pull them from the server side if we notice a problem.  Snappy devices even support rolling back upgrades on a single device, by using a dual-partition root file system.  Phones generally don't support this due to lack of available space on the device.

Of course, the other really interesting thing about Snappy is the lightweight, flexible approach to deploying applications.  I still remember my early days learning how to package software for Debian and Ubuntu, and now that I'm both an Ubuntu Core Developer and a Debian Developer, I understand pretty well how to properly package things.  There's still plenty of black art involved, even for relatively easy upstream packages such as the distutils/setuptools-based Python packages available on the Cheeseshop (er, PyPI).  The Snappy approach on Ubuntu Core is much more lightweight and easy, and it doesn't require the magical approval of the archive elves, or the vagaries of PPAs, to make your applications quickly available to all your users.  There's even a robust online store for publishing your apps.

There's lots more about Snappy apps and Ubuntu Core that I won't cover here, so I encourage you to follow the links for more information.  You might also want to stop now and take the tour of Ubuntu Core (hey, I'm a poet and I didn't even realize it).

In this post, I want to talk about building and deploying snappy Python applications.  Python itself is not an officially supported development framework, but we have a secret weapon.  The system image client upgrader -- i.e. the component on the devices that checks for, verifies, downloads, and applies atomic updates -- is written in Python 3.  So the core system provides us with a full-featured Python 3 environment we can utilize.

The question that came to mind is this: given a command-line application available on PyPI, how easy is it to turn into a snap and install it on an Ubuntu Core system?  With some caveats I'll explore later, it's actually pretty easy!

Basic approach

The basic idea is this: let's take a package on PyPI, which may have additional dependencies also on PyPI, download them locally, and build them into a snap that we can install on an Ubuntu Core system.

The first question is, how do we build a local version of a fully-contained Python application?  My initial thought was to build a virtual environment using virtualenv or pyvenv, and then somehow turn that virtual environment into a snap.  This turns out to be difficult in practice because virtual environments aren't really designed for this.  They have issues with being relocated, for example, and they can contain a lot of extraneous stuff that's great for development (a virtual environment's actual purpose) but unnecessary baggage for our use case.

My second thought involved turning a Python application into a single file executable, and from there it would be fairly easy to snappify.  Python has a long tradition of such tools, many with varying degrees of cross platform portability and standalone-ishness.  After looking again at some oldies but goodies (e.g. cx_freeze) and some new offerings, I decided to start with pex.

pex is a nice tool developed by Brian Wickman and the Twitter folks which they use to deploy Python applications to their production environment.  pex takes advantage of modern Python's support for zip imports, and a clever trick of zip files.

Python supports direct imports (of pure Python modules) from zip files, and the python executable's -m option works even when the module is inside a zip file.  Further, the presence of a __main__.py file within a package can be used as shorthand for executing the package, e.g. python -m myapp will run myapp/__main__.py if it exists.
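
To illustrate both behaviours, here is a minimal, self-contained sketch; the bundle name and the myapp package are made up for the example:

import os
import subprocess
import sys
import zipfile

# Build a tiny zip containing a package with a __main__.py.
with zipfile.ZipFile("bundle.zip", "w") as zf:
    zf.writestr("myapp/__init__.py", "")
    zf.writestr("myapp/__main__.py", "print('hello from inside the zip')")

# Pure Python modules import directly from the zip via zipimport...
sys.path.insert(0, "bundle.zip")
import myapp

# ...and -m finds the package too, when the zip is on PYTHONPATH.
subprocess.check_call([sys.executable, "-m", "myapp"],
                      env=dict(os.environ, PYTHONPATH="bundle.zip"))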

Zip files are interesting because their index is at the end of the file.  This allows you to put whatever you want at the front of the file and it will still be considered a zip file.  pex exploits this by putting a shebang in the first line of the file, e.g. #!/usr/bin/python3 and thus the entire zip file becomes a single file executable of Python code.
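
That trick is easy to reproduce by hand.  Here's a small sketch (the file names are invented) that builds a toy zip application, prepends a shebang, and shows that the result is still a readable zip:

import os
import zipfile

# A one-module zip application.
with zipfile.ZipFile("app.zip", "w") as zf:
    zf.writestr("__main__.py", "print('running from a single file')")

with open("app.zip", "rb") as f:
    payload = f.read()

# Prepend the shebang; the index at the end keeps the zip valid.
with open("app", "wb") as f:
    f.write(b"#!/usr/bin/python3\n" + payload)
os.chmod("app", 0o755)

print(zipfile.is_zipfile("app"))  # True
# ./app now runs __main__.py: the kernel hands the file to python3,
# which executes it as a zip application.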

There are, of course, plenty of caveats.  Probably the main one is that Python cannot import extension modules directly from the zip, because the dlopen() function call only takes a file system path.  pex handles this by marking the resulting file as not zip safe, so the zip is written out to a temporary directory first.

The other issue, of course, is that the zip file must contain all the dependencies not present in the base Python.  pex is actually fairly smart here, in that it will chase dependencies, much like pip, and it will include those dependencies in the zip file.  You can also specify any missed dependencies explicitly on the pex command line.

Once we have the pex file, we need to add the required snappy metadata and configuration files, and run the snappy command to generate the .snap file, which can then be installed into Ubuntu Core.  Since we can extract almost all of the minimal required snappy metadata from the Python package metadata, we only need just a little input from the user, and the rest of work can be automated.

We're also going to avail ourselves of a convenient cheat.  Because Python 3 and its standard library are already part of Ubuntu Core on a snappy device, we don't need to worry about any of those dependencies.  We're only going to support Python 3, so we get its full stdlib for free.  If we needed access to Python 2, or any external libraries or add-ons that can't be made part of the zip file, we would need to create a snappy framework for that, and then utilize that framework for our snappy app.  That's outside the scope of this article though.

Requirements

To build Python snaps, you'll need to have a few things installed.  If you're using Ubuntu 15.04, just apt-get install the appropriate packages.  Otherwise, you can get any additional Python requirements by building a virtual environment and installing tools like pex and wheel into it, then invoking pex from that virtual environment.  But let's assume you have the Vivid Vervet (Ubuntu 15.04); here are the packages you need:
  •  python3
  •  python-pex-cli
  •  python3-wheel
  •  snappy-tools
  •  git
You'll also want a local git clone of https://gitlab.com/warsaw/pysnap.git which provides a convenient script called snap.py for automating the building of Python snaps.  We'll refer to this script extensively in the discussion below.

For extra credit, you might want to get a copy of Python 3.5 (unreleased as of this writing).  I'll show you how to do some interesting debugging with Python 3.5 later on.

From PyPI to snap in one easy step

Let's start with a simple example: world is a very simple script that can provide forward and reverse mappings of ISO 3166 two letter country codes (at least as of before ISO once again paywalled the database).  So if you get an email from guido@example.py you can find out where the BDFL has his secret lair:

$ world py
py originates from PARAGUAY

world is a pure-Python package with both a library and a command line interface. To get started with the snap.py script mentioned above, you need to create a minimal .ini file, such as:

[project]
name: world

[pex]
verbose: true

Let's call this file world.ini.  (In fact, you'll find this very file under the examples directory in the snap git repository.)  What do the various sections and variables control?
  •  name is the name of the project on PyPI.  It's used to look up metadata about the project on PyPI via PyPI's JSON API (see the sketch right after this list).
  •  The verbose variable just defines whether to pass -v to the underlying pex command.
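
As an aside, the JSON API lookup that the name variable drives can be sketched in a few lines; exactly which fields snap.py consumes is an assumption on my part:

import json
from urllib.request import urlopen

def pypi_metadata(name):
    # PyPI's JSON API endpoint for a project's metadata.
    url = "https://pypi.python.org/pypi/{}/json".format(name)
    with urlopen(url) as f:
        info = json.loads(f.read().decode("utf-8"))["info"]
    return info["name"], info["version"], info["summary"]

print(pypi_metadata("world"))   # e.g. ('world', '3.1.1', '...')
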
Now, to create the snap, just run:

$ ./snap.py examples/world.ini

You'll see a few progress messages and a warning which you can ignore.  Then out spits a file called world_3.1.1_all.snap.  Because this is pure Python, it's architecture independent.  That's a good thing because the snap will run on any device, such as a local amd64 kvm instance, or an ARM-based Ubuntu Core-compatible Lava Lamp.

Armed with this new snap, we can just install it on our device (in this case, a local kvm instance) and then run it:

$ snappy-remote --url=ssh://localhost:8022 install world_3.1.1_all.snap
$ ssh -p 8022 ubuntu@localhost
ubuntu@localhost:~$ world.world py
py originates from PARAGUAY

From git repository to snap in one easy step

Let's look at another example, this time using a stupid project that contains an extension module. This aptly named package just prints a yes for every -y argument, and no for every -n argument.

The difference here is that stupid isn't on PyPI; it's only available via git.  The snap.py helper is smart enough to know how to build snaps from git repositories.  Here's what the stupid.ini file looks like:

[project]
name: stupid
origin: git https://gitlab.com/warsaw/stupid.git

[pex]
verbose: yes

Notice that there's a [project]origin variable.  This just says that the origin of the package isn't PyPI, but instead a git repository, and then the public repo url is given.  The first word is just an arbitrary protocol tag; we could eventually extend this to handle other version control systems or origin types.  For now, only git is supported.

To build this snap:

$ ./snap.py examples/stupid.ini

This clones the repository into a temporary directory, builds the Python package into a wheel, and stores that wheel in a local directory.  pex has the ability to build its pex file from local wheels without hitting PyPI, which we use here.  Out spits a file called stupid_1.1a1_all.snap, which we can install in the kvm instance using the snappy-remote command as above, and then run it after ssh'ing in:

ubuntu@localhost:~$ stupid.stupid -ynnyn
yes
no
no
yes
no

Watch out though, because this snap is really not architecture-independent. It contains an extension module which is compiled on the host platform, so it is not portable to different architectures.  It works on my local kvm instance, but sadly not on my Lava Lamp.

Entry points

pex currently requires you to explicitly name the entry point of your Python application.  This is the function which serves as your main and it's what runs by default when the pex zip file is executed.

Usually, a Python package will define its entry point in its setup.py file, like so:

setup(
    ...
    entry_points={
        'console_scripts': ['stupid = stupid.__main__:main'],
        },
    ...
    )

And if you have a copy of the package, you can run a command to generate the various package metadata files:

$ python3 setup.py egg_info

If you look in the resulting stupid.egg_info/entry_points.txt file, you see the entry point clearly defined there.  Ideally, either pex or snap.py would just figure this out automatically.  As it turns out, there's already a feature request open on pex for this, but in the meantime, how can we auto-detect the entry point?

For the stupid example, it's pretty easy.  Once we've cloned its git repository, we just run the egg_info command and read the entry_points.txt file.  Later, we can build the project's binary wheel from the same git clone.
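
Since entry_points.txt is ini-style, pulling the entry point out of it takes only a few lines.  This is just a sketch of the idea, not necessarily how snap.py implements it:

import configparser

cp = configparser.ConfigParser()
cp.read("stupid.egg_info/entry_points.txt")
# console_scripts maps command names to 'package.module:callable'.
scripts = dict(cp["console_scripts"])
print(scripts)   # {'stupid': 'stupid.__main__:main'}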

It's a bit more problematic with world though because the package isn't downloaded from PyPI until pex runs, but the pex command line requires that you specify the entry point before the download occurs.

We can handle this by supporting an entry_point variable in the snap's .ini file.  For example, here's the world.ini file with an explicit entry point setting:

[project]
name: world
entry_point: worldlib.__main__:main

[pex]
verbose: true

What if we still wanted to auto-detect the entry point?  We could, of course, download the world package in snap.py and run the egg_info command over that.  But pex also wants to download world, and we don't want to have to download it twice.  Maybe we could download it in snap.py and then build a local wheel file for pex to consume.

As it turns out there's an easier way.

Unfortunately, package egg-info metadata is not available on PyPI, although arguably it should be.  Fortunately, Vinay Sajip runs an external service that does make the metadata available, such as the metadata for world.

snap.py makes the entry_point variable optional, and if it's missing, it will grab the package metadata from a link like that given above.  An error will be thrown if the file can't be found, in which case, for now, you'd just add the [project]entry_point variable to the .ini file.

A little more snap.py detail

The snap.py script is more or less a pure convenience wrapper around several independent tools.  pex of course for creating the single executable zip file, but also the snappy command for building the .snap file.  It also utilizes python3 setup.py egg_info where possible to extract metadata and construct the snappy facade needed for the snappy build command.  Less typing for you!  In the case of a snap built from a git repository, it also performs the git cloning, and the python3 setup.py bdist_wheel command to create the wheel file that pex will consume.

There's one other important thing snap.py does: it fixes the resulting pex file's shebang line.  Because we're running these snaps on an Ubuntu Core system, we know that Python 3 will be available in /usr/bin/python3.  We want the pex file's shebang line to be exactly this.  While pex supports a --python option to specify the interpreter, it doesn't take the value literally.  Instead, it takes the last path component and passes it to /usr/bin/env so you end up with a shebang line like:

#!/usr/bin/env python3

That might work, but we don't want the pex file to be subject to the uncertainties of the $PATH environment variable.

One of the things that snap.py does is repack the pex file.  Remember, it's just a zip file with some magic at the top (that magic is the shebang), so we just read the file that pex spits out, and rewrite it with the shebang we want.  Eventually, pex itself will handle this and we won't need to do that anymore.
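
A hedged sketch of that repacking step follows; the helper name is invented, and snap.py's actual code may differ:

import os

def fix_shebang(pex_path, out_path, shebang=b"#!/usr/bin/python3\n"):
    with open(pex_path, "rb") as f:
        f.readline()           # skip the '#!/usr/bin/env python3' line
        payload = f.read()     # the rest is the zip, index at the end
    with open(out_path, "wb") as f:
        f.write(shebang + payload)
    os.chmod(out_path, 0o755)

fix_shebang("world.pex", "world_fixed.pex")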

Debugging

While I was working out the code and techniques for this blog post, I ran into an interesting problem.  The world script would crash with some odd tracebacks.  I don't have the details anymore and they'd be superfluous, but suffice to say that the tracebacks really didn't help in figuring out the problem.  It would work in a local virtual environment build of world using either the (pip installed) PyPI package or run from the upstream git repository, but once the snap was installed in my kvm instance, it would traceback.  I didn't know if this was a bug in world, in the snap I built, or in the Ubuntu Core environment.  How could I figure that out?

Of course, the go-to tool for debugging any Python problem is pdb.  I'll just assume you already know this.  If not, stop everything and go learn how to use the debugger.

Okay, but how was I going to get a pdb breakpoint into my snap?  This is where Python 3.5 comes in!

PEP 441, which has already been accepted and implemented in what will be Python 3.5, aims to improve support for zip applications.  Apropos of this blog post, the new zipapp module can be used to zip up a directory into a single executable file, with an argument to specify the shebang line, and a few other options.  It's related to what pex does, but without all the PyPI interactions and dependency chasing.  Here's how we can use it to debug a pex file.

Let's ignore snappy for the moment and just create a pex of the world application:

$ pex -r world -o world.pex -e worldlib.__main__:main

Now let's say we want to set a pdb breakpoint in the main() function so that we can debug the program, even when it's a single executable file.  We start by unzipping the pex:

$ mkdir world
$ cd world
$ unzip ../world.pex

If you poke around, you'll notice a __main__.py file in the current directory.  This is pex's own main entry point.  There are also two hidden directories, .bootstrap and .deps.  The former is more pex scaffolding, but inside the latter you'll see the unpacked wheel directories for world and its single dependency.

Drilling down a little farther, you'll see that inside the world wheel is the full source code for world itself.  Set a break point by visiting .deps/world-3.1.1-py2.py3-none-any.whl/worldlib/__main__.py in your editor.  Find the main() function and put this right after the def line:

import pdb; pdb.set_trace()

Save your changes and exit your editor.

At this point, you'll want to have Python 3.5 installed or available.  Let's assume that by the time you read this, Python 3.5 has been released and is the default Python 3 on your system.  If not, you can always download a pre-release of the source code, or just build Python 3.5 from its Mercurial repository.  I'll wait while you do this...

...and we're back!  Okay, now armed with Python 3.5, and still inside the world subdirectory you created above, just do this:

$ python3.5 -m zipapp . -p /usr/bin/python3 -o ../world.dbg
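
If you'd rather do this from Python code, the zipapp API from PEP 441 offers an equivalent of that command:

import zipapp

# Same as: python3.5 -m zipapp . -p /usr/bin/python3 -o ../world.dbg
zipapp.create_archive(".", target="../world.dbg",
                      interpreter="/usr/bin/python3")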

Now, before you can run ../world.dbg and watch the break point do its thing, you need to delete pex's own local cache, otherwise pex will execute the world dependency out of its cache, which won't have the break point set. This is a wart that might be worth reporting and fixing in pex itself.  For now:

$ rm -rf ~/.pex
$ ../world.dbg

And now you should be dropped into pdb almost immediately.

If you wanted to build this debugging pex into a snap, just use the snappy build command directly.  You'll need to add the minimal metadata yourself (since currently snap.py doesn't preserve it).  See the Snappy developer documentation for more details.

Summary and Caveats


There's a lot of interesting technology here; pex for building single file executables of Python applications, and Snappy Ubuntu Core for atomic, transactional system updates and lightweight application deployment to the cloud and things.  These allow you to get started doing some basic deployments of Python applications.  No doubt there are lots of loose ends to clean up, and caveats to be aware of.  Here are some known ones:

  • All of the above only works with Python 3.  I think that's a feature, but you might disagree. ;)   This works on Ubuntu Core for free because Python 3 is an essential piece of the base image.  Working out how to deploy Python 2 as a Snappy framework would be an interesting exercise.
  • When we build a snap from a git repository for an application that isn't on PyPI, I don't currently have a way to also grab some dependencies from PyPI.  The stupid example shown here doesn't have any additional dependencies so it wasn't a problem.  Fixing this should be a fairly simple matter of engineering on the snap.py wrapper (pull requests welcome!).
  • We don't really have a great story for cross-compilation of extension modules. Solving this is probably a fairly complex initiative involving the distros, setuptools and other packaging tools, and upstream Python.  For now, your best bet might be to actually build the snap on the actual target hardware.
  • Importing extension modules requires a file system cache because of limitations in the dlopen() API.  There have been rumors of extensions to glibc which would provide a dlopen()-from-memory type of API which could solve this, or upstream Python's zip support may want to grow native support for caching.
Even with these caveats, it's pretty easy to turn a Python application into a Snappy Ubuntu Core application, publish it to the world, and profit!  So what are you waiting for?  Snap to it!

Read more
Daniel Holbach

What does being an Ubuntu member mean to you? Why did you do it back then?

I became an Ubuntu member about 10 years ago. It was part of the process of becoming a member of the MOTU team: before you could apply for upload rights, you had to be an Ubuntu member.

That wasn’t all of it though. For me it wasn’t about the @ubuntu.com mail address or “fulfilling the requirements for upload rights”. As I had been helping out and contributing for months already, I felt part of the tribe, and luckily many encouraged me to take the next step and apply for membership. I had grown to like the people I worked with and had learned a lot from them. It was a bit daunting, but being recognised for my contributions was a great experience. Afterwards I did my fair share of encouraging others to apply as well. :-)

Which brings me to the two calls to action I wanted to get out there.

1) Encourage members of your team who haven’t applied for Ubuntu membership!

There are so many people doing fantastic work on AskUbuntu, the Forums, in flavour teams, the Docs team, the QA world and all over the place when it comes to phones, desktops, IoT bits, servers, the cloud and more. Many, many of them should really be Ubuntu members, but they haven’t heard of it, don’t know how to apply, or are concerned they haven’t “done enough”.

If you have people like that in a project you are working on, please do encourage them. In an open source project we should aim to do a good job of recognising the great work of others.

2) Join the Ubuntu Membership Boards!

If you are an Ubuntu member, seriously consider joining the Ubuntu Membership Boards. The call for nominations is still open and it’s a great thing to be involved with.

When I joined the Community Council, the CC was still in charge of approving Ubuntu members, and I enjoyed the meetings (even if they were quite looooooooooooooooooooong) when we got to talk to many contributors from all parts of the globe and all parts of the Ubuntu landscape. Welcoming many of them to the Ubuntu members team was just beautiful.

Nominate yourself and be quick about it! :-)

Read more
Nicholas Skaggs


Whoosh, Spring is in the air and Winter is over (at least for us Northern Hemisphere folks). With that, it's time to polish the final beta image for vivid.

How can I help? 
To help test, visit the iso tracker milestone page for final beta.  The goal is to verify the images in preparation for the release. Find those bugs! The information at the top of the page will help you if you need help reporting a bug or understanding how to test. 

Isotracker? 
There's a first time for everything! Check out the handy links at the top of the isotracker page detailing how to perform an image test, as well as a little about how the qatracker itself works. If you still aren't sure or you get stuck, feel free to contact the QA community or myself for help.

What if I'm late?
The testing runs through this Thursday, March 26th, when the images for final beta will be released. If you miss the deadline we still love getting results! Test against the daily image milestone instead.

Thanks and happy testing everyone!

Read more
Michael Hall

Way back at the dawn of the open source era, Richard Stallman wrote the Four Freedoms which defined what it meant for software to be free. These are:

  • Freedom 0: The freedom to run the program for any purpose.
  • Freedom 1: The freedom to study how the program works, and change it to make it do what you wish.
  • Freedom 2: The freedom to redistribute copies so you can help your neighbor.
  • Freedom 3: The freedom to improve the program, and release your improvements (and modified versions in general) to the public, so that the whole community benefits.

For nearly three decades now they have been the foundation for our movement, the motivation for many of us, and the guiding principle for the decisions we make about what software to use.

But outside of our little corner of humanity, these freedoms are not seen as particularly important. In fact, the vast majority of people are not only happy to use software that violates them, but will often prefer to do so. I don’t even feel the need to provide supporting evidence for this claim, as I’m sure all of you have been on one side or the other of a losing argument about why using open source software is important.

The problem, it seems, is that people who don’t plan on exercising any of these freedoms, whether from lack of interest or lack of ability, don’t place the same value on them as those of us who do. That’s why software developers are more likely than non-developers to prefer open source: they might actually use those freedoms at some point.

But the people who don’t see a personal value in free software are missing a larger, more important freedom. One implied by the first four, though not specifically stated. A fifth freedom if you will, which I define as:

  • Freedom 4: The freedom to have the program improved by a person or persons of your choosing, and make that improvement available back to you and to the public.

Because even though the vast majority of proprietary software users will never be interested in studying or changing the source of the software they use, they will likely all, at some point in time, ask someone else to fix it. Who among us hasn’t had a friend or relative ask us to fix their Windows computer? And the true answer is that, without having the four freedoms (and the implied fifth), only Microsoft can truly “fix” their OS; the rest of us can only try to undo the damage that’s been done.

So the next time you’re trying to convince someone of the importance of free and open software, and they chime in with the fact that they don’t want to change it, try pointing out that by using proprietary code they’re limiting their options for getting it fixed when it inevitably breaks.

Read more
bmichaelsen

When logic and proportion have fallen sloppy dead
And the white knight is talking backwards
And the red queen’s off with her head
Remember what the dormouse said
Feed your head, feed your head

— Jefferson Airplane, White Rabbit

So, this was intended as a quick and smooth addendum to the “50 ways to fill your vector” post, bringing callgrind into the game and assuring everyone that its instruction counts are a good proxy for the walltime performance of your code. This started out mostly as expected, when measuring the instruction counts in two scenarios:

implementation/cflags   -O2 not inlined   -O3 inlined
A1                      2610061438        2510061428
A2                      2610000025        2510000015
A3                      2610000025        2510000015
B1                      3150000009        2440000009
B2                      3150000009        2440000009
B3                      3150000009        2440000009
C1                      3150000009        2440000009
C3                      3300000009        2440000009

The good news here is that this mostly faithfully reproduces some general observations on the timings from the last post on this topic, although the differences are more pronounced in callgrind than in reality:

  • The A implementations are faster than the B and C implementations on -O2 without inlining
  • The A implementations are slower (by a smaller amount) than the B and C implementations on -O3 with inlining

The last post also suggested the expectation that all implementations could — and with a good compiler: should — have the same code and the same speed when everything is inlined. Apart from the A implementations still differing from the B and C ones, callgrind’s instruction counts suggest this to actually be the case. Letting gcc compile to assembler and comparing the output, one finds:

  • Inline A1-3 compile to the same output on -Os, -O2, -O3 each. There is no difference between -O2 and -O3 for these.
  • Inline B1-3 compile to the same output on -Os, -O2, -O3 each, but they differ between optimization levels.
  • Inline C3 output differs from the others and between optimization levels.
  • Without inlinable constructors, the picture is the same, except that A3 and B3 now differ slightly from their kin as expected.

So indeed most of the implementations generate the same assembler code. However, this is quite a bit at odds with the significant differences in performance measured in the last post, e.g. B1/B2/B3 on -O2 created widely different walltimes. So it was time to test the assumption that running one implementation for a minute produces a reasonably statistically stable result, by doing ten 1-minute runs for each implementation and seeing what the standard deviation is. The following is found for walltimes (no inline constructors):

implementation/cflags   -Os      -O2      -O3      -O3 -march=
A1                      80.6 s   78.9 s   78.9 s   79.0 s
A2                      78.7 s   78.1 s   78.0 s   79.2 s
A3                      80.7 s   78.9 s   78.9 s   78.9 s
B1                      84.8 s   80.8 s   78.0 s   78.0 s
B2                      84.8 s   86.0 s   78.0 s   78.1 s
B3                      84.8 s   82.3 s   79.7 s   79.7 s
C1                      84.4 s   85.4 s   78.0 s   78.0 s
C3                      86.6 s   85.7 s   78.0 s   78.9 s
(no inline measurements)

And with inlining:

implementation/cflags   -Os      -O2      -O3      -O3 -march=
A1                      76.4 s   74.5 s   74.7 s   73.8 s
A2                      75.4 s   73.7 s   73.8 s   74.5 s
A3                      76.3 s   74.6 s   75.5 s   73.7 s
B1                      80.6 s   77.1 s   72.7 s   73.7 s
B2                      81.4 s   78.9 s   72.0 s   72.0 s
B3                      80.6 s   78.9 s   72.8 s   73.7 s
C1                      81.4 s   78.9 s   72.0 s   72.0 s
C3                      79.7 s   80.5 s   72.9 s   77.8 s
(inline measurements)

The standard deviation for all the above values is less than 0.2 seconds. That is … interesting: for example, on -O2 without inlining, B1 and B2 generate the same assembler output, but execute with a very significant difference on hardware (5.2 s difference, or more than 25 standard deviations). So how have logic and proportion fallen sloppy dead here? If the same code is executed — admittedly from two different locations in the binary — how can that create such a significant difference in walltime performance, while not being visible at all in callgrind? A wild guess, which I have not confirmed yet, is cache locality: when not inlining constructors, those might be in the CPU cache for one copy of the code in the binary, but not for the other. And by the way, it might also hint at the reason the -march= flag (which creates bigger code) seems so ineffective. And it might explain why performance is rather consistent when using inline constructors. If so, the impact of this is certainly interesting. It also suggests that allowing inlining of hotspots, as recently done with the low-level sw::Ring class, produces much more performance improvement on real hardware than the meager results measured with callgrind. And it reinforces the warning made in that post about not falling into the trap of mistaking the map for the territory: callgrind is not a “map in the scale of a mile to the mile”.
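
For the curious, aggregating such repeated runs into mean and standard deviation is only a few lines with Python’s statistics module; the timings below are made-up placeholders, not the measured data:

import statistics

def summarize(seconds):
    # Mean and sample standard deviation over repeated runs.
    return statistics.mean(seconds), statistics.stdev(seconds)

runs = [80.8, 80.9, 80.7, 80.8, 80.6, 80.9, 80.8, 80.7, 81.0, 80.8]
mean, dev = summarize(runs)
print("mean {:.1f} s, stdev {:.2f} s".format(mean, dev))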

Addendum: As said in the previous post, I am still interested in such measurements on other hardware or compilers. All measurements above were done with gcc 4.8.3 on an Intel i5-4200U@1.6GHz.


Read more
Victor Palau

I recently blogged about making a scope in 5 minutes using YouTube. I have also seen a fair amount of new scopes being created using RSS. By far, my favourite way to use scopecreator is Twitter.

If you want to check out a few examples, I have previously published twitter-based scopes like breaking news, La Liga and a few others. Today, I give you Formula One:

(screenshots of the Formula One scope)

The interesting thing about twitter is that many brands post minute-by-minute updates, which makes them a really good source for scopes.

To create a Formula One scope, I started by going to twitter and creating a list under my scope account (you can use your personal account). The list contains several relevant “official” Formula One accounts. Using Twitter, I can then update the sources by adding and removing accounts from the list without the user needing to download an update for the scope.

Again, it took me about 5 minutes to get a working version of the scope. Here is what I needed to do:

  • First, I followed Chris’ instructions to install the scope creator tool.
  • Once I had it set up on my laptop, I ran:
    scopecreator create twitter vtuson f1
    cd f1
  • Next, I configured the scope. The configuration is done in a json file called manifest.json. This file describes the content of what you will publish later to the store. You need to care about: “title”, “description”, “version” and “maintainer”. The rest are values populated by the tool:
    scopecreator edit config
    {
    "description": "Formula One scope",
    "framework": "ubuntu-sdk-14.10",
    "architecture": "armhf",
    "hooks": {
    "f1": {
    "scope": "f1",
    "apparmor": "scope-security.json"
    }
    },
    "icon": "icon",
    "maintainer": "Your Name <yourname@packagedomain>",
    "name": "f1.vtuson",
    "title": "Formula One",
    "version": "0.2"
    }
  • The following step was to set up the branding. Easy! Branding is defined in an .ini file. “DisplayName” will be the name listed in the “manage” window once installed, and will also be the title of your scope if you don’t use a “PageHeader.Logo”. The [Appearance] section describes the colours and logos to use when branding a scope:
    scopecreator edit branding
    [ScopeConfig]
    ScopeRunner=./f1.vtuson_f1 --runtime %R --scope %S
    DisplayName=Formula One
    Description=This is an Ubuntu search plugin that enables information from Yelp $
    Author=Canonical Ltd.
    Art=
    Icon=images/icon.png
    SearchHint=Search
    [Appearance]
    PageHeader.Background=color:///#D51318
    PageHeader.ForegroundColor=#FFFFFF
    PreviewButtonColor=#D51318
  • The final part is to define the departments (drop-down menu) for the scope. This is also a json file, and it is unique to the twitter scope template. You can either use “list” or “account” (or both) as departments. The id is the twitter handle for the list or the account. For lists, you will need to specify in the configuration section which account holds the list. As I defined a single entry, the Formula One scope will have no drop-down menu.
    scopecreator edit channels
    {
    "departments": [
    {
    "name":"Formula One",
    "type":"list",
    "id":"f1"
    }
    ],
    "configuration": {
    "list-account":"canonical_scope",
    "openontwitter":"See in Twitter",
    "openlink":"Open",
    "retweet":"Retweet",
    "favorite": "Favourite",
    "reply":"Reply"
    }
    }

After this, the only thing left to do is to replace the placeholder icon with a relevant logo:
~/f1/f1/images/logo.png
And build, check and publish the scope:
scopecreator build

This last command generates the click file that you need to upload to the store. If you have a device (for example a Nexus 4 or an emulator), it can also install it so you can test it. If you run into any issues getting the scope to run, you might want to check your json files on http://jsonlint.com/. It is a great web tool that will help you make sure your json doc is shipshape!
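
If you prefer an offline check, Python’s json module does the same job; this little validator is my own suggestion, not part of the scopecreator tool:

import json
import sys

# Usage: python3 checkjson.py manifest.json
with open(sys.argv[1]) as f:
    try:
        json.load(f)
        print("valid json")
    except ValueError as e:
        print("invalid json:", e)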

It is super simple to create a scope for a twitter list! So what are you going to create next?


Read more
Daniel Holbach

In the past weeks Nick, David, a few others and I have been working on an app and a website which easily collect information that will give users of an Ubuntu device a head start: all our collective experience and knowledge, easily added and translated.

We achieved quite a bit. We’re now very close to getting a first version of it online (both as an app in the store and as a website). We can quite reliably integrate translations and add new content.

We still have a few TODO items and it would be great if you could help out, whether by writing a bit of documentation, translating content, fixing some HTML/CSS bits or helping out with testability. Any help at all will be appreciated.

Tasks:

  • Add content. Just check out our branch and propose a merge (see the sketch after this list). Read the HACKING doc beforehand.
  • Translate. The content is likely going to change a bit in the next few days still, but every edit or translation will be appreciated.
  • Hack! We have a number of things we still want to improve. Read the HACKING doc beforehand. Here’s a list of things:
    • Styling/theming/navigation:
      • Bug 1416385: Fix bullet points in the phone theme
      • Bug 1428671: Remove traces of developer.ubuntu.com
      • Bug 1428669: Clean up required CSS/JS
      • Bug 1425025: Automatically load translated pages according to the user language.
    • Testing
    • and there’s more.
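
If you want to dive straight in, the contribution flow is roughly the following. A sketch using bzr, where the branch is the lp:ubuntu-devices-help one mentioned further down this page and the push target is a placeholder for your own Launchpad id:

$ bzr branch lp:ubuntu-devices-help
$ cd ubuntu-devices-help
$ less HACKING
$ # ...add or edit content...
$ bzr commit -m "Add a tip about edge swipes"
$ bzr push lp:~your-launchpad-id/ubuntu-devices-help/my-fix

Then propose the merge from your branch’s page on Launchpad.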

Ping me, balloons or dpm on IRC if you want to get involved. We look forward to working with you and we’ll post more updates soon.

Read more
bmichaelsen

Around the world, Around the world
— Daft Punk, Around the world

So, you still heard that unfounded myth that it is hard to get involved with and to start contributing to LibreOffice? Still? Even though there are our Easy Hacks and the LibreOffice developers are a friendly bunch that will help you get started on mailing lists and on IRC? If those alone do not convince you, it might be because it is admittedly much easier to get started if you meet people face to face, like at one of our upcoming Events! Especially our Hackfests are a good way to get started. The next one will be at the University de Las Palmas de Gran Canaria, where we had been guests last year already. We presented some introduction talks to the students of the university and then went on to hack on LibreOffice, from fixing bugs to implementing new stuff. Here is how that looked last year:

LibreOffice Hackfest Gran Canaria 2014

One thing we learned from previous Hackfests was that it is great if newcomers have a way to start working on code right away. While it is rather easy to do that, as the 5 minute video on our wiki shows, building might still take some time on some notebooks. So what if you spontaneously show up at the event without a pre-built LibreOffice? Well, for that we now have, thanks to Christian Lohmaier of the Document Foundation staff, remote virtual machines prepared for Hackfests that allow you to get started right away with everything prepared, on rather beefy hardware even.

If you are a student at ULPGC or live in Las Palmas or on the Canary Islands, we invite you to join us and learn how to get started. For students, this is also a very good opportunity to get involved and prepare for a Google Summer of Code on LibreOffice. Furthermore, if you are even a casual contributor to LibreOffice code already and want to help share and deepen knowledge on how to work on LibreOffice code, you should get in contact with the Document Foundation: while the event is already very soon now, there still might be travel reimbursement available. You will find all the details on the wiki page for the Hackfest in Las Palmas de Gran Canaria 2015.

LibreOffice Evening Hacking in Las Palmas 2014

On the other hand, if two weeks is too short notice for you, but the rest of this sounds really tempting, there is already the next Hackfest planned, which will take place in Cambridge in the United Kingdom in May. We will be there with a Hackfest for the first time and invite you to join us from anywhere in Europe, whether you are a LibreOffice code contributor already or are interested in learning how to become one. Again, there is a wiki page with the details on the LibreOffice Hackfest in Cambridge 2015, and travel reimbursements are available. Contact us!

How I imagine Cambridge in May (Photo by Andrew Dunn, CC-BY-SA 2.0, via Wikimedia)

Read more
Michael Hall

A couple of weeks ago I had the opportunity to attend the thirteenth Southern California Linux Expo, more commonly known as SCaLE 13x. It was my first time back in five years, since I attended 9x, and my first time as a speaker. I had a blast at SCaLE, and a wonderful time with UbuCon. If you couldn’t make it this year, it should definitely be on your list of shows to attend in 2016.

UbuCon

Thanks to the efforts of Richard Gaskin, we had a room all day Friday to hold an UbuCon. For those of you who haven’t attended an UbuCon before, it’s basically a series of presentations by members of the Ubuntu community on how to use Ubuntu, contribute to it, or become involved in the community around it. SCaLE was one of the pioneering host conferences for these, and this year they provided a double-sized room for us to use, which we still filled to capacity.

I was given the chance to give not one but two talks during UbuCon, one on community and one on the Ubuntu phone. We also had presentations from my former manager and good friend Jono Bacon, current coworkers Jorge Castro and Marco Ceppi, and inspirational community members Philip Ballew and Richard Gaskin.

I’d like to thank Richard for putting this all together, and for taking such good care of those of us speaking (he made sure we always had mints and water). UbuCon was a huge success because of the amount of time and work he put into it. Thanks also to Canonical for providing us, on rather short notice, a box full of Ubuntu t-shirts to give away. And of course thanks to the SCaLE staff and organizers for providing us the room and all of the A/V equipment in it to use.

The room was recorded all day, so each of these sessions can be watched now on YouTube. My own talks are at 4:00:00 and 5:00:00.

Ubuntu Booth

In addition to UbuCon, we also had an Ubuntu booth in the SCaLE expo hall, which was registered and operated by members of the Ubuntu California LoCo team. These guys were amazing: they ran the booth all day over all three days, managed the whole setup and tear-down, and did an excellent job talking to everybody who came by, explaining everything from Ubuntu’s cloud offerings to desktops, and even showing off Ubuntu phones.

Our booth wouldn’t have happened without the efforts of Luis Caballero, Matt Mootz, Jose Antonio Rey, Nathan Haines, Ian Santopietro, George Mulak, and Daniel Gimpelevich, so thank you all so much! We also had great support from Carl Richell at System76, who let us borrow three of their incredible laptops running Ubuntu to show off our desktop, and from Canonical, which loaned us two Nexus 4 phones running Ubuntu as well as one of the Orange Box cloud demonstration boxes. Michael Newsham from TierraTek sent us a fanless PC and NAS, which we used to display a constantly-repeating video (from Canonical’s marketing team) showing the Ubuntu phone’s Scopes on a television monitor provided to us by Eäär Oden at Video Resources. Oh, and of course thanks to Stuart Langridge, who gave up his personal, first-edition Bq Ubuntu phone for the entire weekend so we could show it off at the booth.

Like Ubuntu itself, this booth was not the product of just one organization’s work, but the combination of efforts and resources from many different, but connected, individuals and groups. We are what we are, because of who we all are. So thank you all for being a part of making this booth amazing.

Read more
bmichaelsen

“The problem is all inside your head” she said to me
“The answer is easy if you take it logically”
— Paul Simon, 50 ways to leave your lover

So recently I tweaked around with these newfangled C++11 initializer lists and created an EasyHack to use them to initialize property sequences in a readable way. This caused a short exchange on the LibreOffice mailing list, which I assume had its part in motivating Stephan’s interesting post “On filling a vector”. For all the points being made (also in the quick follow-up on IRC), I wondered how much the theoretical “can use a move constructor” discussion etc. really means once the C++ is translated to e.g. GENERIC, then GIMPLE, then amd64 assembler, then to the internal RISC instructions of the CPU, with multiple levels of caching in addition.
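
To make concrete what kind of readability is at stake, here is a generic before/after sketch in plain C++11 (std::pair stands in for the actual property type; this is not the EasyHack code itself):

#include <string>
#include <utility>
#include <vector>

using Property = std::pair<std::string, int>;

// before: build the sequence up imperatively, one push_back at a time
std::vector<Property> makePropsOld() {
    std::vector<Property> props;
    props.push_back(Property("Width", 100));
    props.push_back(Property("Height", 50));
    return props;
}

// after: a C++11 initializer list states the same thing declaratively
std::vector<Property> makePropsNew() {
    return {{"Width", 100}, {"Height", 50}};
}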

So I quickly wrote the following (thanks so much for C++11 having the nice std::chrono now).

data.hxx:

#include <vector>
struct Data {
    Data();
    Data(int a);
    int m_a;
};
void DoSomething(std::vector<Data>&);

data.cxx:

#include "data.hxx"
// noop in different compilation unit to prevent optimizing out what we want to measure
void DoSomething(std::vector<Data>&) {}
Data::Data() : m_a(4711) {}
Data::Data(int a) : m_a(a+4711) {}

main.cxx:

#include "data.hxx"
#include <iostream>
#include <vector>
#include <chrono>
#include <functional>

void A1(long count) {
    while(--count) {
        std::vector<Data> vec { Data(), Data(), Data() };
        DoSomething(vec);
    }
}

void A2(long count) {
    while(--count) {
        std::vector<Data> vec { {}, {}, {} };
        DoSomething(vec);
    }
}

void A3(long count) {
    while(--count) {
        std::vector<Data> vec { 0, 0, 0 };
        DoSomething(vec);
    }
}

void B1(long count) {
    while(--count) {
        std::vector<Data> vec;
        vec.reserve(3);
        vec.push_back(Data());
        vec.push_back(Data());
        vec.push_back(Data());
        DoSomething(vec);
    }
}

void B2(long count) {
    while(--count) {
        std::vector<Data> vec;
        vec.reserve(3);
        vec.push_back({});
        vec.push_back({});
        vec.push_back({});
        DoSomething(vec);
    }
}

void B3(long count) {
    while(--count) {
        std::vector<Data> vec;
        vec.reserve(3);
        vec.push_back(0);
        vec.push_back(0);
        vec.push_back(0);
        DoSomething(vec);
    }
}

void C1(long count) {
    while(--count) {
        std::vector<Data> vec;
        vec.reserve(3);
        vec.emplace_back(Data());
        vec.emplace_back(Data());
        vec.emplace_back(Data());
        DoSomething(vec);
    }
}

void C3(long count) {
    while(--count) {
        std::vector<Data> vec;
        vec.reserve(3);
        vec.emplace_back(0);
        vec.emplace_back(0);
        vec.emplace_back(0);
        DoSomething(vec);
    }
}

double benchmark(const char* name, std::function<void (long)> testfunc, const long count) {
    const auto start = std::chrono::system_clock::now();
    testfunc(count);
    const auto end = std::chrono::system_clock::now();
    const std::chrono::duration<double> delta = end-start;
    std::cout << count << " " << name << " iterations took " << delta.count() << " seconds." << std::endl;
    return delta.count();
}

int main(int, char**) {
    long count = 10000000;
    while(benchmark("A1", &A1, count) < 60l)
        count <<= 1;
    std::cout << "Going with " << count << " iterations." << std::endl;
    benchmark("A1", &A1, count);
    benchmark("A2", &A2, count);
    benchmark("A3", &A3, count);
    benchmark("B1", &B1, count);
    benchmark("B2", &B2, count);
    benchmark("B3", &B3, count);
    benchmark("C1", &C1, count);
    benchmark("C3", &C3, count);
    return 0;
}

Makefile:

CFLAGS?=-O2
main: main.o data.o
    g++ -o $@ $^

%.o: %.cxx data.hxx
    g++ $(CFLAGS) -std=c++11 -o $@ -c $<
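
Building and running the individual scenarios is then just a matter of overriding CFLAGS on the make command line (the flag sets below are exactly the ones measured; the Makefile has no clean target, hence the rm):

rm -f main *.o; make CFLAGS=-Os && ./main
rm -f main *.o; make CFLAGS=-O2 && ./main
rm -f main *.o; make CFLAGS=-O3 && ./main
rm -f main *.o; make "CFLAGS=-O3 -march=core-avx2" && ./main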

Note the object here is small and trivial to copy, as one would expect from objects passed around as values (expensive-to-copy objects can mostly be passed around with a std::shared_ptr instead). So what did this measure? Here are the results:

Time for 1280000000 iterations on an Intel i5-4200U@1.6GHz (-march=core-avx2) compiled with gcc 4.8.3 without inline constructors:

implementation / CFLAGS    -Os        -O2        -O3        -O3 -march=…
A1                          89.1 s     79.0 s     78.9 s     78.9 s
A2                          89.1 s     78.1 s     78.0 s     80.5 s
A3                          90.0 s     78.9 s     78.8 s     79.3 s
B1                         103.6 s     97.8 s     79.0 s     78.0 s
B2                          99.4 s     95.6 s     78.5 s     78.0 s
B3                         107.4 s     90.9 s     79.7 s     79.9 s
C1                          99.4 s     94.4 s     78.0 s     77.9 s
C3                          98.9 s    100.7 s     78.1 s     81.7 s

creating a three element vector without inlined constructors
And, for comparison, the following are the results if one allows the constructors to be inlined.
Time for 1280000000 iterations on an Intel i5-4200U@1.6GHz (-march=core-avx2) compiled with gcc 4.8.3 with inline constructors:

implementation / CFLAGS    -Os        -O2        -O3        -O3 -march=…
A1                          85.6 s     74.7 s     74.6 s     74.6 s
A2                          85.3 s     74.6 s     73.7 s     74.5 s
A3                          91.6 s     73.8 s     74.4 s     74.5 s
B1                          93.4 s     90.2 s     72.8 s     72.0 s
B2                          93.7 s     88.3 s     72.0 s     73.7 s
B3                          97.6 s     88.3 s     72.8 s     72.0 s
C1                          93.4 s     88.3 s     72.0 s     73.7 s
C3                          96.2 s     88.3 s     71.9 s     73.7 s

creating a three element vector with inlined constructors
Some observations on these measurements:

  • -march=... is at best neutral: the measured times do not change much in general, it only slightly improves performance in five out of 16 cases, and the two cases with the most significant change in performance (over 3%) actually hurt performance. So for the rest of this post, -march=... will be ignored. Sorry, gentooers. ;)
  • There is no silver bullet with regard to the different implementations: A1, A2 and A3 are the faster implementations when not inlining constructors and using -Os or -O2 (the quickest A* is ~10% faster than the quickest B*/C*). However, when inlining constructors and using -O3, the same implementations are the slowest (by 2.4%).
  • Most common release builds are still done with -O2 these days. For those, using initializer lists (A1/A2/A3) seems to have a significant edge over the alternatives, whether constructors are inlined or not. This is in contrast to the conclusions drawn from “constructor counting”, which assumed these to be slow because of the additional calls needed.
  • The quickest implementation in each build scenario, and anything within 1.5% of it, deserves highlighting: A1 and A2 share the title here by being in that group five times each.
  • With constructors inlined, everything in the loop except DoSomething() could be inlined. It seems to me that the compiler could (at least in theory) figure out that it is asked for the same thing in all cases: namely, reserve space for three ints on the heap, fill them each with 4711 and make the std::vector<Data> data structure on the stack reflect that, then hand that to the DoSomething() function it knows nothing about. If the compiler figured that out, all implementations would take the same time. This happens neither on -O2 (where times differ by ~18% from quickest to slowest) nor on -O3 (~3.6%).
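
One way to check what the compiler actually makes of these loops, rather than theorizing, is to look at the generated assembly (standard gcc flags, nothing project-specific):

# emit assembler instead of an object file, then compare the two
g++ -O2 -std=c++11 -S -o main-O2.s main.cxx
g++ -O3 -std=c++11 -S -o main-O3.s main.cxx
diff main-O2.s main-O3.s | less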

One common mantra in application development is “trust the compiler to optimize”. The above observations show a few cracks in the foundations of that, especially if you take into account that this is all on the same version of the same compiler running on the same platform and hardware with the same STL implementation. For huge objects with expensive constructors, the constructor counting approach might still be valid; then again, those are rarely statically initialized as a bigger bunch into a vector. For the more common scenario of smaller objects with cheap constructors, my tentative conclusion so far would be to go with A1/A2/A3: not so much because they are quickest in the most common build scenarios on my platform, but rather because their readability is a value of its own while the performance picture is muddy at best.

And hey, if you want to run the tests above on other platforms or compilers, I would be interested in results!

Note: I did these runs for each scenario only once, thus no standard deviation is given. In general they seemed to be rather stable, but these being wallclock measurements, one or the other might be an outlier. Caveat emptor.
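
If you want to gauge the noise yourself before trusting a single number, a handful of repeated runs is enough to eyeball the spread:

for i in 1 2 3 4 5; do ./main; done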


Read more
Daniel Holbach

I already blogged a bit about the help app I have been working on lately. I wanted to go into a bit more detail now that we have reached a new milestone.

What’s the idea behind it?

In a conversation in the Community team we noticed that there’s a lot of knowledge we gathered in the course of having used Ubuntu on a phone for a long time and that it might make sense to share tips and tricks, FAQ, suggestions and lots more with new device users in a simple way.

The idea was to share things like “here’s how to use edge swipes to do X” (maybe an animated GIF?) and “if you want to do Y, install the Z app from the store” in an organised and clever fashion. Obviously we would want this to be easily editable (Markdown), have easy translations (Launchpad), work well on the phone (Ubuntu HTML5 UI toolkit) and work well on the web (Ubuntu Design Web guidelines) too.

What’s the state of things now?

There’s not much content yet and it doesn’t look perfect, but we have all the infrastructure set up. You can now start contributing! :-)

screenshot of web edition / screenshot of phone app edition


What’s still left to be done?

  • We need HTML/CSS gurus who can help beautifying the themes.
  • We need people to share their tips and tricks and favourite bits of their Ubuntu devices experience.
  • We need hackers who can help in a few places.
  • We need translators.

What do you need to do? Translations can be done easily in Launchpad. For everything else:

$ bzr branch lp:ubuntu-devices-help
$ cd ubuntu-devices-help
$ less HACKING

We’ve come a long way in the last week, and with the ease of Markdown text and easy Launchpad translations we should quickly be in a state where we can offer this in the Ubuntu software store and publish the content on the web as well.

If you want to write some content, translate, beautify or fix a few bugs, your help is going to be appreciated. Just ping myself, Nick Skaggs or David Planella on #ubuntu-app-devel.

Read more
Ben Howard

Back when we announced that the Ubuntu 14.04 LTS Cloud Images on Azure were using the Hardware Enablement Kernel (HWE), the immediate feedback was "what about 12.04?"


Well, the next Ubuntu 12.04 Cloud Images on Microsoft Azure will start using the HWE kernel. We have been working with Microsoft to validate the use of the 3.13 kernel on 12.04 and are pleased with the results and the stability. We spent a lot of time thinking about and testing this change, and in consultation with the Ubuntu Kernel, Foundations and Cloud Image teams, feel this change will give the best experience on Microsoft Azure.

By default, the HWE kernel is used on official images for Ubuntu 12.04 on VMware Air, Google Compute Engine, and now Microsoft Azure. 

Any 12.04 image published to Azure with a serial later than 20140225 will default to the new HWE kernel.

Users who want to upgrade their existing instance can simply run:
    sudo apt-get update
    sudo apt-get install linux-image-hwe-generic linux-cloud-tools-generic-lts-trusty
    sudo reboot
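
Once the instance is back up, it is easy to confirm the HWE kernel is the one running; 3.13 is the series mentioned above:

    uname -r
    # expect something like 3.13.0-<nn>-generic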

Read more