Canonical Voices

Andrea Bernabei

QtDay is the only Italian event dedicated to Qt. It is held yearly by Develer and brings together companies whose products are developed using Qt, as well as Qt developers and customers who want to keep up with the latest developments and solutions in the Qt world. This year the conference was held in Florence, where I was lucky enough to attend and present.

I had previously attended the 2011, 2012 and 2014 QtDay events whilst I was studying Computer Science at the University of Pisa. This year it was different because Develer invited me to give a talk about Ubuntu and Qt. The funny thing was that I was already planning on sending my presentation to the Call for Proposals anyway! So I was already prepared.

What I do at Ubuntu

My role in Canonical is UX Engineer, basically a developer acting as the bridge between designers and engineers. It is a pretty cool job, and I’m very lucky to be part of such an energetic team.

Over the last year there was a strong push in both the Design and Engineering teams working on Ubuntu Touch to finalize and deliver the convergent experience. This was a great opportunity to spread the word about how to develop convergent apps for the new Ubuntu platform, and get developers interested in where we are and where we are heading.

My talk – “Standing on the shoulders of giants: developing Ubuntu convergent apps with Qt”

When I first thought about giving the presentation, I decided it would only be about the current state of the UI components provided by Ubuntu SDK, with a strong focus on their “convergent” features, and how to use them to realize your convergent apps. However, as time went by I realized it would have been more interesting for developers to also get some context about the platform itself, and how to best integrate their apps with the platform.

By the time QtDay arrived, the presentation had almost doubled in size! You can find it here.

A slideshow or an app? How about both!

This is a detail the geeks in the audience might be interested in…I thought it would be neat to talk about the development of Qt/QML apps and use the same framework to implement the presentation as well!

That’s right, the presentation is actually a QML application that uses (a modified version of) the QML presentation system available as a Qt Labs addon. Having the power of Qt underneath your presentation means you can do pretty much anything: you’re no longer tied to the boundaries set by the “standard” presentation systems (such as Beamer, LibreOffice Impress, Microsoft PowerPoint, etc.)!

In my case, I exploited that to implement a live-coding view as a pull-down layer that you can open on demand using keyboard shortcuts. Once the livecoding view is open, you can write code (or use the code automatically provided when you’re at one of the special “Livecoding!” slides) in the text editor on the left side and see the result on the right side in real time, without leaving or interrupting your presentation. You can also increase or decrease the font size, and a sparkling particle effect follows the cursor to make it easier for the audience to follow your changes to the text. That’s only one of the things you can do once you replace your “standard” presentation with a full-featured application. If you’re a software developer, I highly recommend giving it a try for your next presentation!

The source code and the PDF version of the presentation are available here, and my fork of the QML presentation system is available here.

And here’s a screenshot of the livecoding view in action (sparkling particle effect included) :)


The morning

The conference was held in the middle of Florence at the Hotel Londra Conference Centre. It was quite a nice location I have to say! Very easy to reach as it is very close to the main railway station, Santa Maria Novella.

My talk was in the first time slot after the main keynote, which was good because, you know, that meant I could relax and enjoy the rest of the day!


I started by giving an overview of the current state of Ubuntu and the fact that it’s doing great in the Cloud field. Ubuntu can now scale to run on IoT devices as well as phones, tablets, notebooks, servers and Clouds.

I then presented the concept of convergence and how the UI components provided by the Ubuntu SDK can be best utilised to create great convergent apps, including some livecoding. Livecoding is fun because it gives a pragmatic idea of how to go from theory to practice, and also keeps the attendees awake, because they know things can go wrong at any moment (demo effect) and they enjoy that, for some reason :)

After the UI components section, I went on to talk about platform integration topics such as app lifecycle management, the app isolation features, and the integration with the Content Hub, which is the secure way to share data between applications.

I then briefly talked about internationalization and how to publish your application on the Ubuntu Store (it’s very easy!)

For this occasion, I brought with me a BQ M10 tablet, which is the convergent Ubuntu tablet that we released just a few months ago! I connected it to a Bluetooth mouse and keyboard, and set it up on a table for people to try. Lots of people played with it. After the talk, it was exciting to see the audience’s interest in the whole convergence story.

The other talks during the morning were very interesting as well, I particularly enjoyed Marco Piccolino’s “A design system and pattern libraries can significantly improve the quality of your UI” (Find the slides here).

And then it came to lunchtime…

Food…Italian food

The food was great and, coming from the UK, I enjoyed it even more. Big kudos to Develer (the company behind the event) for finding such a good catering company!

Here’s a pic of the goodies available during coffee breaks. Mmmm…


Afternoon talks

The afternoon talks were as interesting as the morning ones. Marco Arena, from the Italian C++ Community, gave a talk about QCustomPlot, which is a library to draw graphs and plots using Qt (slides here).

If you’re interested in Virtual Reality, as well as BCI (Brain Computer Interface) and machine learning, make sure you check out the slides of Sebastiano Galazzo’s talk (once they’re available, at that page). His project involves manipulating what the user sees in a Google Cardboard by reading his/her brain waves to interpret emotions. Pretty neat.

Stefano Cordibella’s presentation was about his experience optimizing the startup time of an application running on embedded hardware (using Qt4). They exploited the power of the QtQuick Loader component and other QML tricks to decrease the loading time of the application.
Check his slides out if you’re interested in QML optimization. I’m sure you’ll find them useful.

The final talk I attended was more of a roundtable about how to contribute to the development of Qt itself, led by Giuseppe D’Angelo, who has the role of “Approver” in the Qt Project Open Governance Model.

As a result of attending that roundtable, not only did I start contributing to Qt (see the changes I contributed here), but I also improved the Qt Wiki Contribution Guidelines so that it will be easier for other people to start contributing. The power of open source and open governance! :)

The closing talk also included a raffle, where a couple of developers won an embedded devboard sponsored by Atmel. I’ve been quite lucky with Qt-related raffles in the past, but this wasn’t one of those days, oh well :)


Closing remarks

What a great day it was. I want to thank Develer for organizing the conference, and the guys from the Community team (Alan Pope, David Planella, Daniel Holbach) and Zsombor Egri from the SDK team at Canonical for providing feedback and ideas for the presentation.

It was also great to see so many people interested in the convergence story and in the M10 tablet. The technology has great potential and it’s our job to make the best of it :)

See you all at the next QtDay!

Note: the pictures are courtesy of Develer’s QtDay Facebook page.

Read more
liam zheng

After a long development process, we are pleased to announce that the next version of the Ubuntu SDK IDE enters its beta testing phase as of today. The new version ships with a completely new builder and runtime backend, finally eliminating the biggest problems the SDK IDE has today.

The next iteration of the Ubuntu SDK IDE

In short: here comes LXD

After a long development process, we are pleased to announce that the next version of the Ubuntu SDK IDE enters its beta testing phase as of today. The new version ships with a completely new builder and runtime backend, finally eliminating the biggest problems the SDK IDE has today.

There have already been rumours that new LXD-based builders would replace the schroot-based ones. The rumours are true. After a period of internal testing of the proof-of-concept version by a few trusted testers, we think the time has come to show the new IDE to a wider audience.

Before jumping straight to the new packages, let's first review some of the reasons why we had to move away from the schroot-based builders:

The biggest issue is without doubt the creation of new chroots right after installing the SDK. Bootstrapping a full Ubuntu root file system from the live archives is very slow and error prone. Whenever there is a packaging problem in the archives or the Overlay PPA, new build targets cannot be created, which essentially leaves the SDK unusable until the packaging issue is fixed. LXD has already solved this problem: new containers are downloaded as ready-to-use compressed images, downloading is much faster, and the resulting container is guaranteed to work because we test it before releasing it, unlike the continuously changing Overlay PPA. Once an image has been downloaded it is cached, and spinning up a new container from the cache takes only a few seconds!

The second issue worth highlighting is our requirement to run applications locally on the desktop while still supporting every Ubuntu release that is currently officially supported. That means dealing with a range of different Qt and UITK versions. We once tried to solve this by providing a separate Qt+UITK package, but it turned out that too many packages would have had to be hacked and rebuilt, so this approach was not feasible. And it is not only a build-time problem, but a runtime one as well. So how can apps run on the desktop, using the newest and hottest components, while preserving LTS compatibility?

The answer is actually very simple: use containers as runtime targets and display the UI on the host's X server.

There were a few other issues as well, such as overall slowness, leaking mount points (anyone who has ever ended up with hundreds of mounts because of schroot knows what I mean) and problems with ecryptfs.

Enough about the past; let's talk about the future and see what has changed in the new version. Before we start, note that we have dropped support for the default desktop kit. Building and running apps on the host is no longer supported by default. The SDK IDE will not create desktop run configurations other than the ones automatically created by the qmake and cmake plugins. It is of course still possible to build and run apps on the host, but the run configuration has to be created manually. Instead, from now on we need to create a container matching the host architecture in which the application is executed. This means that almost no additional packages are required as dependencies on the host system.

The IDE will no longer use any of the existing schroot-based builders. The click chroots will remain on the host but will be decoupled from the Ubuntu SDK IDE.

Getting started

It is simple: all we need to do is add the SDK release and Tools Development PPA for the Ubuntu SDK tools:

sudo add-apt-repository ppa:ubuntu-sdk-team/tools-development

sudo apt update && sudo apt install ubuntu-sdk-ide

Once that is done, the IDE is fully usable. It will discover containers the same way it used to discover click chroots. From the developer's point of view, the experience will not change much. Keep in mind that we are still in beta testing, so there may well be bugs, either in the container images or in the IDE itself. Please report them to us directly on IRC or via the mailing list, or better still via the official ubuntu-sdk-ide project on Launchpad: https://bugs.launchpad.net/ubuntu-sdk-ide

Known issues and troubleshooting

The lxd group membership

Normally the LXD installation process takes care of configuring the necessary group membership. If it does not, we have to make sure the current user belongs to the lxd group. Issue the following command:

sudo usermod -a -G lxd `whoami`

After that, log in again so the login session picks up the new group.

Resetting QtCreator settings

Sometimes the settings of QtCreator (the Qt application behind the Ubuntu SDK IDE) get corrupted when switching back and forth between different versions. If you see broken or ghost kits, devices that look misconfigured, or anything else unusual, pressing the reset button on QtCreator may help. Note that this is a rather drastic fix. It can be done with a single command:

$ rm ~/.config/QtProject/qtcreator ~/.config/QtProject/QtC*

Cleaning up old click chroots

As mentioned earlier, the old schroots are detached from the SDK IDE but remain on the file system. The click chroots can be cleaned up with the following commands:

$ sudo click chroot -a armhf -f ubuntu-sdk-15.04 destroy

$ sudo click chroot -a i386 -f ubuntu-sdk-15.04 destroy

These two commands free up about 1.4GB of disk space. The click chroots live under /var/lib/schroot/chroots. It is a good idea to check that this folder is empty and that nothing is mounted there:

$ mount|grep schroot

NVIDIA video driver

On hosts using the NVIDIA graphics driver, apps cannot be deployed locally from the LXD container. If the host has dual graphics processors, one workaround is to use the other GPU.

Check whether the system has an alternative graphics card:

$ sudo lshw -class display

If the list shows entries other than NVIDIA, activate the other video card. The prime-select tool is simple and easy to use:

$ sudo prime-select intel

Note that this tool may not be installed on your system, and it does not work together with bumblebee. If the host has bumblebee installed and the prime-select tool is missing:

$ sudo apt-get remove bumblebee

$ sudo apt-get install nvidia-prime

If the host has no graphics card other than the NVIDIA one, the Nouveau driver can be tried and may work. Either way, this is a known and very serious issue that we are currently working on.

Starting the new IDE

First, back up some settings in the unlikely event that we need to roll back to the current IDE:

$ tar zcvf ~/Qtproject.tar.gz ~/.config/QtProject

Then find the Ubuntu SDK IDE in the Dash and start it.

The first thing the Ubuntu SDK IDE does is check whether the environment is set up correctly. Unless you are an LXC/LXD power user, it is safe to choose "Yes" in this dialog.

If the Ubuntu SDK is started for the first time, a welcome wizard opens to help you set up kits and devices.

From this point on, the best advice is to read every page of the wizard and follow its instructions. The whole process is fairly straightforward.
On the next page the wizard will help you create kits.

Press the "Create new Kit" button to see the target creation dialog.

At this step you can choose between 3 types of targets:

  • Build to run on the desktop: filters for all images compatible with the desktop
  • Build to run on device or emulator: filters for all images that can be used on devices
  • Show all available images: shows all available images

Let's select "Show all available images" to get an overview of all existing images.

Next, choose the preferred target architecture. Ubuntu phones and tablets are armhf, and the host PC is either i386 or amd64. So to create click packages for the phone you need an armhf target, and to test the application on the desktop you need a native amd64 or i386 target.

We can use the default naming for the kits.

Creating an LXD container requires system administrator rights, so at this point we need to authenticate ourselves.

Once the correct password has been entered, the LXD image download starts.

The download takes a while, depending on your network bandwidth; each image is roughly 400MB. While the wizard downloads and configures the LXD image, there is just enough time to read a short blog post about what kits actually are: Everything You Always Wanted to Know About Kits But Were Afraid to Ask. It is no exaggeration to say that the best use of this time is to read that post and understand what development kits are.

Once the container has been created, a simple dialog pops up showing some basic details.

The next page of the wizard helps you set up target devices. In our case we already had a BQ (krillin) phone and an emulator from the rc-proposed channel.

Even if no phone, tablet or emulator device is available, it is perfectly safe to finish the wizard.
At this stage the IDE will automatically discover the LXD container and offer to update it.

This is not a mandatory step, and it is completely fine to cancel that dialog.

Once the wizard is finished, the IDE opens.

 

 

Read more
Daniel Holbach

We are in the second week of the Snappy Playpen and it’s simply beautiful to see how new folks are coming in and collaborate on getting snaps done, improve existing ones, answer questions and work together. The team spirit is strong and we’re all learning loads.

Keep up the good work everyone!

Read more

niemeyer

Over the last several months there has been noticeable and growing pain associated with the evolving integration tests around snapd, and given the project goal of being a cross-distribution platform, we are very keen on solving this problem appropriately so that stability is guaranteed everywhere.

With that mindset a more focused effort was made over the last few weeks to produce a tool that can get the project out of those problems, and onto a runway of more pleasant stability. Despite the short amount of time, I’m very happy about the Spread project which resulted from this effort.

Spread is not Jenkins or Travis, and is not a language or library either. Spread is a tool that will very conveniently ship your code to one or more systems, in parallel, and then offer the right set of options so you can run whatever you need to run to make sure the logic is working, and drive it all from the local system. That implies you can run Spread inside Travis, Jenkins, or your terminal, in a similar way to how your unit tests work.

Here is a short list of interesting facts about Spread:

  • Full-system tests with on-demand machine allocation.
  • Multi-backend with Linode and LXD (for local runs) out of the box for now.
  • Multi-language since it can run arbitrary remote code.
  • Agent-less and driven via embedded ssh (kudos to Go team).
  • Convenient harness with project+backend+suite+test prepare and restore scripts.
  • Variants feature for test duplication without copy & paste.
  • Great debugging support – add -debug and stop with a shell inside every failure.
  • Reuse of servers – server allocation is fast, but not allocating is faster.
  • Reasonable test outputs with the shell’s +x mode on failures.
  • … and so forth.

This is all well documented, so I’ll just provide one example here to offer a real taste of what the system feels like.

This is spread.yaml, put in the project root to define the basics:

project: spread

backends:
    lxd:
        systems:
            - ubuntu-16.04
            - ubuntu-14.04

path: /home/test

prepare: |
    echo Entering project...
restore: |
    echo Leaving project...

suites:
    tests/: 
        summary: Integration tests
        prepare: |
            echo Entering suite...
        restore: |
            echo Leaving suite...

The suite name is also the path under which the tests are found.

Then, this is tests/hello/task.yaml:

summary: Greet the world
prepare: |
    echo "Entering task..."
restore: |
    echo "Leaving task..."
environment:
    FOO/a: one
    FOO/b: two
execute: |
    echo "Hello world!"
    [ $FOO = one ] || exit 1

The outcome should be almost obvious (intended feature :-). The one curious detail here is the FOO/a and FOO/b environment variables. This is how to introduce variants, which means this one test will in fact become two: first with FOO=one, and then with FOO=two. Now consider that such environment variables can be defined at any level – project, backend, suite, and task – and imagine how easy it is to test small variations without any copy & paste. After cascading takes place (project→backend→suite→task) all environment variables using a given variant key will be present at once on the same execution.

Now let’s try to run this configuration, including the -debug flag so we get a shell on the failures. Note how with a single test we get four different jobs, two variants over two systems, with the variant b failing as instructed:

$ spread -debug

2016/06/11 19:09:27 Allocating lxd:ubuntu-14.04...
2016/06/11 19:09:27 Allocating lxd:ubuntu-16.04...
2016/06/11 19:09:41 Waiting for LXD container to have an address...
2016/06/11 19:09:43 Waiting for LXD container to have an address...
2016/06/11 19:09:44 Allocated lxd:ubuntu-14.04.
2016/06/11 19:09:44 Connecting to lxd:ubuntu-14.04...
2016/06/11 19:09:48 Allocated lxd:ubuntu-16.04.
2016/06/11 19:09:48 Connecting to lxd:ubuntu-16.04...
2016/06/11 19:09:52 Connected to lxd:ubuntu-14.04.
2016/06/11 19:09:52 Sending project data to lxd:ubuntu-14.04...
2016/06/11 19:09:53 Connected to lxd:ubuntu-16.04.
2016/06/11 19:09:53 Sending project data to lxd:ubuntu-16.04...

2016/06/11 19:09:54 Error executing lxd:ubuntu-14.04:tests/hello:b :
-----
+ echo Hello world!
Hello world!
+ [ two = one ]
+ exit 1
-----

2016/06/11 19:09:54 Starting shell to debug...

lxd:ubuntu-14.04 ~/tests/hello# echo $FOO
two
lxd:ubuntu-14.04 ~/tests/hello# cat /etc/os-release | grep ^PRETTY
PRETTY_NAME="Ubuntu 14.04.4 LTS"
lxd:ubuntu-14.04 ~/tests/hello# exit
exit

2016/06/11 19:09:55 Error executing lxd:ubuntu-16.04:tests/hello:b :
-----
+ echo Hello world!
Hello world!
+ [ two = one ]
+ exit 1
-----

2016/06/11 19:09:55 Starting shell to debug...

lxd:ubuntu-16.04 ~/tests/hello# echo $FOO
two
lxd:ubuntu-16.04 ~/tests/hello# cat /etc/os-release | grep ^PRETTY
PRETTY_NAME="Ubuntu 16.04 LTS"
lxd:ubuntu-16.04 ~/tests/hello# exit
exit


2016/06/11 19:10:33 Discarding lxd:ubuntu-14.04 (spread-129)...
2016/06/11 19:11:04 Discarding lxd:ubuntu-16.04 (spread-130)...
2016/06/11 19:11:05 Successful tasks
2016/06/11 19:11:05 Aborted tasks: 0
2016/06/11 19:11:05 Failed tasks: 2
    - lxd:ubuntu-14.04:tests/hello:b
    - lxd:ubuntu-16.04:tests/hello:b
error: unsuccessful run

This demonstrates many of the stated goals (parallelism, clarity, convenience, debugging, …) while running on a local system. Running on a remote system is just as easy by using an appropriate backend. The snapd project on GitHub, for example, is hooked up on Travis to run Spread and then ship its tests over to Linode. Here is a real run output with the initial tests being ported, and a basic smoke test.

If you like what you see, by all means please go ahead and make good use of it.

We’re all for more stability and sanity everywhere.

@gniemeyer

Read more
bmichaelsen

Take your time, hurry up
The choice is yours, don’t be late
Take a rest as a friend
— Nirvana, Come As You Are

I have just updated the LibreOffice snap package. The size of the package available for download created some confusion. As LibreOffice 5.2 is still in beta, I built and packed it with full debug symbols to allow analysis of possible problems. Comparing this to the size of e.g. the default install from Ubuntu *.deb packages is misleading:

  • The Ubuntu default install misses LibreOffice Base and Java unless you explicitly install them
  • The Ubuntu default install misses debug symbols unless you install the package libreoffice-dbg too

As many people are just curious about running LibreOffice 5.2 without wanting to debug it right now, I replaced the snap package. The download and install instructions are still the same as noted here — but it is now 287MB instead of 1015MB (and it still contains Base, but no debug symbols).

The package file including full debug symbols — in case you are interested in that — has been renamed to libreoffice-debug.

(Note that if you downloaded the file while I moved files around, you might need to redo your download.)


Read more
David Callé

Yesterday, the snapcore team released a new version of snapd for Ubuntu 16.04. Snapd is the system service that enables developers and users to interact with snaps.

Features in 2.0.8

  • snap try. This command mounts any folder containing an unpackaged snap as an editable installed snap, making testing and iterating on snaps much faster. For example, if you are using snapcraft, you can run snap try prime/ in your working dir to mount prime/ as an installed snap and edit it while the snap is mounted (see the short sketch after this list).
  • Use os-release instead of lsb-release for cross-distro use
  • Add support for an environment map inside snap.yaml, although the matching snapcraft syntax has not landed yet.
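
To make that workflow concrete, here is a minimal sketch of the try-and-iterate loop; the project directory and name are hypothetical, and it assumes a snapcraft project built on the same machine running snapd 2.0.8:

cd ~/src/my-snap        # hypothetical snapcraft project containing snapcraft.yaml
snapcraft prime         # build the parts and populate the prime/ directory
sudo snap try prime/    # mount prime/ as an installed, editable snap
snap list               # the snap now shows up as installed in "try" mode
# edit files under prime/ and re-run the app; changes are picked up without reinstalling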

Interfaces

New interfaces have been added with this release, giving you more control over the way your snaps interact and exchange data with the underlying OS (gsettings, pulseaudio, etc.). Their names are self-explanatory, but for more details you can have a look at the implementation. Note that some of these interfaces are “reserved” and will trigger a manual review in the store. Here is a summary of all changes:

  • Changes in the ‘unity7’ interface:
    • add DBUSMenu, Freedesktop and KDE notifications
    • allow AppMenu and launcher API
    • add fcitx and mozc input methods
    • add com.canonical.UrlLauncher.XdgOpen
  • Introducing the following interfaces:
    • ‘network-manager’: allows operating as the NetworkManager service
    • ‘cups-control’: allows access to the cups control socket
    • ‘location-control’ and ‘location-observe’: allow operating as the location service
    • ‘pulseaudio’: allows access to audio (/etc/pulse and related paths)
    • ‘gsettings’: allows access to global gsettings of the user's session
  • Autoconnect the ‘home’ interface
  • ‘firewall-control’ can access the xtables lock file
  • Add socketcall() to the ‘network’ and ‘network-bind’ interfaces
  • Allow using sysctl and scmp_sys_resolver for parsing kernel logs
  • Allow access to new ibus abstract socket path
  • Documentation updates

Command line

  • Implement `snap refresh --list` and `snap refresh` to view and manually apply available updates (a short usage sketch follows this list)
  • Have 'snap list' display a help message when no snaps are installed
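
Put together, a typical update check from the command line looks like this (only the commands mentioned above):

snap list             # see what is installed and at which revision
snap refresh --list   # show the updates available for installed snaps
snap refresh          # apply all pending updates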

The full changelog for this release is here. Note that the previous snapd update in 16.04  was 2.0.5, so this changelog extends from 2.0.6 to 2.0.8.

What’s next?

Here are some highlights from the list of features and fixes lined up for the next snapd release in 16.04:

  • Add new `snap run` command with hook support
  • Create SNAP_USER_DATA and common dirs in `snap run`
  • Have the installation of the core snap request a restart (on classic)
  • Install snaps in devmode on distributions without complete apparmor and seccomp support
  • Interfaces: miscellaneous policy updates for chromium, x86, opengl, etc.
  • Enable full confinement on Elementary 0.4 (Loki)

Read more
Prakash

OnePlus has always provided cutting-edge phones at less than half the price of the leading brands. Today they launched the OnePlus 3 with some great specifications. The good thing is, you don’t need an invite anymore.

Here are some of the great specifications:

  • Snapdragon 820 Quad Core processor
  • 6GB RAM (more than my PC :))
  • Dual Nano Sims
  • Fast charging up to 60% within 30 minutes
  • 16MP camera with Optical image stabilisation
  • Android 6.0.1
  • All metallic body
  • Fingerprint scanner
  • And many more

Buy the OnePlus 3 on Amazon for Rs. 27,999

 


Read more
James Mulholland

We’re very happy to announce the return of the Ubuntu App Design Clinics.

The first session is planned for 4.00PM BST on Friday the 17th of June, with subsequent sessions occurring at 4.00PM BST on Thursdays.

We’ll be on camera talking to Dan Wood regarding his work on the OwnCloud App. Feel free to drop in and join the discussion via IRC:
http://ubuntuonair.com/


For more information regarding Dan Wood and his work, be sure to stop by his Google Plus. You can also stop by Dan’s Owncloud Telegram Group anytime and talk about the application with its creator.

We look forward to seeing you there!

Read more
bmichaelsen

What’s been happening in your world?
What have you been up to?
— Arctic Monkeys, Snap out of it

So — here is what I have been up to:

LibreOffice 5.2.0 beta2 installed as a snap on Ubuntu 16.04

The upcoming LibreOffice 5.2 packaged as a nice new snap package. This:

  • is pretty much a vanilla build of LibreOffice 5.2 beta2, using snapcraft, which is making packaging quite easy
  • contains all the applications: Writer, Calc, Impress, Draw, Math, Base
  • installs easily on the released current LTS version of Ubuntu: 16.04
  • allows you to test and play with the upcoming LibreOffice version to your heart’s delight without having to switch to a development version of Ubuntu

So — how can you “test and play with the upcoming LibreOffice version to your heart’s delight” with this on Ubuntu 16.04? Like this:

wget http://people.canonical.com/~bjoern/snappy/libreoffice_5.2.0.0.beta2_amd64.snap{,.sha512sum}
sha512sum -c libreoffice_5.2.0.0.beta2_amd64.snap.sha512sum && sudo snap install --devmode libreoffice_5.2.0.0.beta2_amd64.snap
/snap/bin/libreoffice

and there you have a version of LibreOffice 5.2 running — for example, you can prepare yourself for the upcoming LibreOffice Bug Hunting Session. And it’s even quite easy to remove again:

sudo snap remove libreoffice

This is one of the things that snap packages will make a lot easier: upgrading or downgrading versions of an application, having multiple versions installed in parallel, and much more. Watch out, as there is more exciting news about this coming up!

Update: As this has been asked a few times: Yes, snap packages are available on Ubuntu. No, snap packages are not only available on Ubuntu. This text has more details.

Update 2: The original download included debug symbols and thus was quite big. The download now has 287MB. This post has all the details.


Read more
Benjamin Zeller

or: here comes LXD

The next iteration of the Ubuntu SDK IDE

or: here comes LXD

After a long development process we are pleased to announce that the next version of the Ubuntu SDK IDE will go into its beta testing phase as of today, and it comes packed with a completely new builder and runtime backend to finally get rid of the biggest issues the SDK IDE has today.

Some people already heard rumours about new LXD based builders that should replace the schroot based ones. Well, the rumours are true and after some time of internal testing of our proof of concept version with just a few trusted testers we think it is time to show the new IDE to a bigger audience.

Now, before jumping right on the new packages let’s revisit some of the reasons why we had to move away from the schroot based builders:

The biggest issue is for sure the creation of new chroots right after installing the SDK. Bootstrapping a full Ubuntu root file system from live archives is very slow and error prone. Whenever there is a packaging issue in the archives or Overlay PPA, it is not possible to create new build targets, which basically makes the SDK unusable until the packaging issue is fixed. LXD has already solved that problem: new containers are downloaded as compressed and ready-to-go image files, downloading is much faster, and the resulting container will work for sure since it was tested by us before releasing it, as opposed to the continuously changing Overlay PPA. Once an image has been downloaded it is cached, and spinning up a new container from the cache is a matter of seconds!

The second issue I want to highlight is our requirement to execute the applications locally on the desktop, while still supporting all Ubuntu versions that are currently officially supported. This means we had to deal with a list of different Qt and UITK versions. We tried to solve that problem by providing a separate Qt+UITK package, but it turned out we’d have to hack and rebuild so many packages to make that work that it was just not feasible. And this is not only a build-time problem, but also a runtime problem. So how should we continue to make it possible to run apps on the desktop, using the hottest and newest components, while maintaining LTS compatibility?

The answer was actually very simple: Use the containers as runtime targets and show the UI on the host’s X server.

There were a few more issues, like overall slowness and leaking mount points (everyone who ever had hundreds of mounts because of schroot, knows what I’m talking about), issues with ecryptfs and more.

Now, enough with the past, let’s look into the future and what has changed. It is good to know before starting that we have dropped support for the default Desktop Kit. Building and running on the host is not supported by default anymore. The SDK IDE will not create desktop run configurations other than those automatically created by the qmake and cmake plugins. It is of course still possible to build and run on the host, but the run configuration needs to be created manually. Instead, from now on it is required to create a container that matches the host architecture, in which the application is executed. This means that on the host system almost no additional packages are required as dependencies.

All existing schroot based builders will not be used by the IDE anymore. The click chroots will remain on the host but will be decoupled from the Ubuntu SDK IDE.

Get started

It’s simple, all that needs to be done is to add the SDK release and the Tools Development PPA for the Ubuntu SDK tools:

sudo add-apt-repository ppa:ubuntu-sdk-team/tools-development

sudo apt update && sudo apt install ubuntu-sdk-ide

 

And we are done, the IDE is now fully usable. It will discover the containers just as it used to do with the click chroots. In most respects, the developer experience will not change much. Please keep in mind we are still beta testing, so there will most likely be some bugs, either with the container images or with the IDE itself. Please report them to us either directly on IRC or via the mailing list, or even better on the official ubuntu-sdk-ide project in Launchpad: https://bugs.launchpad.net/ubuntu-sdk-ide

Known issues and troubleshooting

The lxd group membership

Normally the LXD install process takes care of configuring the necessary group membership. But if it does not, we have to make sure the current user is part of the lxd group. Issue this command:

sudo usermod -a -G lxd `whoami`

After that please relogin to make the new group known to the login session.

Reset QtCreator settings

Sometimes the settings of QtCreator (the Qt application behind the Ubuntu SDK IDE) break when switching back and forth between different versions. When you see broken or ghost kits, possibly misconfigured devices, or in general anything that looks weird, pushing the reset button on QtCreator may help. Note that it is a rather radical fix. It can be easily done with a single command:

$ rm ~/.config/QtProject/qtcreator ~/.config/QtProject/QtC*

Clean up old click chroots

As mentioned before the old schroots are detached from the SDK IDE, but they remain on the file system. With the following commands it is possible to clean up the click chroots:

$ sudo click chroot -a armhf -f ubuntu-sdk-15.04 destroy

$ sudo click chroot -a i386 -f ubuntu-sdk-15.04 destroy

These commands will free about 1.4GB of disk space. The click chroots live under /var/lib/schroot/chroots/. It is a good idea to check that the folder is empty and that nothing is mounted there:

$ mount|grep schroot

NVIDIA video driver

Deploying apps locally from the LXD container is not possible on hosts using the NVIDIA graphics driver. If the host has dual graphics processors, one workaround is to use the other one.

Check if you have an alternative graphics card:

$ sudo lshw -class display

If that list shows entries other than the NVIDIA one, activate the other video card. The prime-select tool is simple and easy to use:

$ sudo prime-select intel

Note that this tool might not be installed on your system and it does not work together with bumblebee. In case the host has bumblebee installed and the prime-select tool is missing:

$ sudo apt-get remove bumblebee

$ sudo apt-get install nvidia-prime

If the host has no other video card than the NVIDIA one, it is possible to try the Nouveau driver, which might work. Anyhow, this is a known and very severe issue that we are working on.

Let’s start the new IDE

But first, back up some settings for the very unlikely case that we want to move back to the present IDE:

$ tar zcvf ~/Qtproject.tar.gz ~/.config/QtProject

Then find the Ubuntu SDK IDE in the Dash and start it.

The first thing the Ubuntu SDK IDE will do is check whether the environment is properly set up. Unless you are an LXC/LXD power user, it is safe to choose 'Yes' in this dialog.

If the Ubuntu SDK is started for the first time, it will open a welcome wizard to help with setting up kits and devices.

The best advice after this point is to read each page of the wizard and follow the instructions. It is a fairly easy process.
On the next page the wizard will offer to help you create kits.

Push the "Create new Kit" button and read the target creation dialog.

At this step we can choose between 3 types of targets:

  • "Build to run on the desktop", will filter for all images compatible with the desktop
  • "Build to run on device or emulator", will filter for all images that can be used for devices
  • "Show all available images", will show all available images

Let's select "Show all available images" to get an overview of all existing images.

Next we choose the preferred target architecture. The Ubuntu phones and tablets are armhf and the host PC is either i386 or amd64. So for creating click packages for the phone we will need an armhf target, and for testing the application on the desktop we will need a native amd64 or i386 target.

We can use the default naming for the kits.

Creating an LXD container requires system administrator rights, so at this point we need to authenticate ourselves.

Once we have entered the right password, the download of the LXD image will start.

It will take some time, depending on our network bandwidth. Each image is about 400MB. While the wizard downloads and configures the LXD image we have just enough time to read a quick blog post about what the Kits actually are: Everything You Always Wanted to Know About Kits But Were Afraid to Ask. It is not an exaggeration to say that the best way to invest the time is to read that blog post and understand what the development kits are.

Once the container creation is done, a simple dialog will show us some basic details.

The next page of the wizard will help to set up target devices. In our case we already had a bq (krillin) phone and an emulator from the rc-proposed channel.


But even if there is no phone, tablet or emulator device available it is safe to finish the wizard.
At this stage the IDE will automatically discover the LXD container and offer us to update it.

It is not a mandatory step and perfectly safe to cancel that dialog.

After finishing the wizard the IDE will open up

 

 

Read more
David Callé

A new version of Snapcraft, the tool to create snaps to distribute your software, was recently released: Snapcraft 2.10 is packed with new features and improvements, including:

  • The ‘snapcraft init’ command now produces a template to bootstrap developers to create their snaps and uses ‘devmode’ as the default confinement mode (see the short sketch after this list)
  • Added support for zip files, which can now be used as a source to be snapped for most Snapcraft plugins.
  • Renamed the ‘strip’ step to ‘prime’. Use of ‘strip’, the former snap lifecycle step, will print deprecation warnings
  • Initial backend support to work on the parts ecosystem
  • Migration to macaroons for authentication. The decentralized, cloud-aware authentication system will enable the addition of more features to talk to the Ubuntu Store APIs and a better developer experience. After this change, developers will need to do a one-off relogin to do uploads
  • A new ‘assumes’ field, which will be used by snapd to assert certain features are supported by the system for a particular snap to work properly
  • General polish around command output and error messages
  • Improvements to the Go and nodejs plugins
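
As a quick illustration of the bootstrap flow from the first bullet, here is a minimal sketch; the project name is hypothetical and it assumes Snapcraft 2.10 is installed:

mkdir my-snap && cd my-snap
snapcraft init            # writes a template snapcraft.yaml, using devmode confinement
$EDITOR snapcraft.yaml    # fill in name, version, summary and parts
snapcraft                 # run through the lifecycle and produce the .snap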

Check out the full details on all bug fixes and new features in Snapcraft 2.10.

Install Snapcraft

On Ubuntu

Simply open up a terminal with Ctrl+Alt+t and run these commands to install Snapcraft from the Ubuntu archives on Ubuntu 16.04 LTS

sudo apt update
sudo apt install snapcraft

On other platforms

Get the Snapcraft source code ›

Craft your snaps!

There is a thriving community of developers who can give you a hand getting started or unblocking you when creating your snap. You can participate and get help in multiple ways:

Last but not least the Snapcraft team would like to thank all the contributions from our community, keep them coming!

What’s next?

Next release, 2.11, will include improved documentation and getting started utilities. Subsequent releases will focus on the parts ecosystem, plugins, pull sources, and better integration with the Ubuntu Store for registration, uploads and snap releases.

Read more
Daniel Holbach

In Snappy Playpen we want to bring people together who want to create snaps, document best practices, learn from each other and have fun.

In our first Snappy Playpen event last Tuesday we simply wanted to bring people together, invite them to get to know the team, get started together and see how things go. It went great, check out the report!

Next week, on Tuesday, 14th June, we want to meet up and snap software together again. Obviously you can join #snappy on Freenode or the playpen gitter channel (or contribute to Snappy Playpen) at any time, but on Tuesday we want to get everyone together and make another push to get good stuff landed together.

This time we want to especially extend the invitation to all upstreams who are interested in getting their software snapped. If you are interested and need help, join us and we will figure out things together. If you still need to be convinced, here are a few reasons why this might make sense for your project:

  • Just run snapcraft upload to upload a snap to the store (see the sketch after this list). (Maybe even hook it up with your CI?)
  • No lengthy review process. Publication within seconds.
  • Use different channels (stable, beta, edge) to ship different versions of your software to different audiences.
  • Build instructions in snapcraft.yaml are very simple, all nice and declarative.
  • Millions of Ubuntu 16.04 users can easily install your software through the software center.
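
For upstreams wondering what that first bullet looks like in practice, here is a rough, hedged sketch; the snap file name is hypothetical and the exact store commands may differ between snapcraft releases:

snapcraft                                # build the snap, e.g. myapp_0.1_amd64.snap
snapcraft login                          # one-off authentication against the store
snapcraft upload myapp_0.1_amd64.snap    # publish; no lengthy review, live within seconds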

We would also like to invite all Ubuntu flavours to participate. If you want to play around with snaps, we will help you get started.

  • WHAT: Snappy playpen sprint
  • WHEN: Tuesday, 14th June 2016 all day
  • WHERE: Join us on gitter or IRC

Read more
pitti

Historically, the “adt-run” command line has allowed multiple tests; as a consequence, arguments like --binary or --override-control were position dependent, which confused users a lot (#795274, #785068, #795274, LP #1453509). On the other hand I don’t know anyone or any CI system which actually makes use of the “multiple tests on a single command line” feature.

The command line also was a bit confusing in other ways, like the explicit --built-tree vs. --unbuilt-tree and the magic / vs. // suffixes, or option vs. positional arguments to specify tests.

The other long-standing confusion is the pervasive “adt” acronym, which is still from the very early times when “autopkgtest” was called “autodebtest” (this was changed one month after autodebtest’s inception, in 2006!).

Thus in some recent night/weekend hack sessions I’ve worked on a new command line interface and consistent naming. This is now available in autopkgtest 4.0 in Debian unstable and Ubuntu Yakkety. You can download and use the deb package on Debian jessie and Ubuntu ≥ 14.04 LTS as well. (I will provide official backports after the first bug fix release after this got some field testing.)

New “autopkgtest” command

The adt-run program is now superseded by autopkgtest:

  • It accepts only exactly one tested source package, and gives a proper error if none or more than one (often unintended) is given. Binaries to be tested, --override-control, etc. can now be specified in any order, making the arguments position independent. So you now can do things like:
    autopkgtest *.dsc *.deb [...]

    Before, *.deb only applied to the following test.

  • The explicit --source, --click-source etc. options are gone, the type of tested source/binary packages, including built vs. unbuilt tree, is detected automatically. Tests are now only specified with positional arguments, without the need (or possibility) to explicitly specify their type. The one exception is --installed-click com.example.myapp as possible names are the same as for apt source package names.
    # Old:
    adt-run --unbuilt-tree pkgs/foo-2 [...]
    # or equivalently:
    adt-run pkgs/foo-2// [...]
    
    # New:
    autopkgtest pkgs/foo-2
    # Old:
    adt-run --git-source http://example.com/foo.git [...]
    # New:
    autopkgtest http://example.com/foo.git [...]
    
  • The virtualization server is now separated with a double instead of a triple dash, as the former is standard Unix syntax.
  • It defaults to the current directory if that is a Debian source package. This makes the command line particularly simple for the common case of wanting to run tests in the package you are just changing:
    autopkgtest -- schroot sid

    Assuming the current directory is an unbuilt Debian package, this will build the package, and run the tests in ./debian/tests against the built binaries.

  • The virtualization server must be specified with its “short” name only, e. g. “ssh” instead of “adt-virt-ssh”. They also don’t get installed into $PATH any more, as it’s hardly useful to call them directly.

README.running-tests got updated to the new CLI, as usual you can also read the HTML online.

The old adt-run CLI is still available with unchanged behaviour, so it is safe to upgrade existing CI systems to that version.

Image build tools

All adt-build* tools got renamed to autopkgtest-build*, and got changed to build images prefixed with “autopkgtest” instead of “adt”. For example, adt-build-lxc ubuntu xenial now produces an autopkgtest-xenial container instead of adt-xenial.

In order to not break existing CI systems, the new autopkgtest package contains symlinks to the old adt-build* commands, and when being called through them, also produce images with the old “adt-” prefix.
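
Combining the two notes above, the rename looks like this in practice:

autopkgtest-build-lxc ubuntu xenial    # new name: creates an "autopkgtest-xenial" container
adt-build-lxc ubuntu xenial            # old name still works (via symlink) and keeps the "adt-xenial" prefix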

Environment variables in tests

Finally there is a set of environment variables that are exported by autopkgtest for using in tests and image customization tools, which now got renamed from ADT_* to AUTOPKGTEST_*:

  • AUTOPKGTEST_APT_PROXY
  • AUTOPKGTEST_ARTIFACTS
  • AUTOPKGTEST_AUTOPILOT_MODULE
  • AUTOPKGTEST_NORMAL_USER
  • AUTOPKGTEST_REBOOT_MARK
  • AUTOPKGTEST_TMP

As these are being used in existing tests and tools, autopkgtest also exports/checks those under their old ADT_* name. So tests can be converted gradually over time (this might take several years).

Feedback

As usual, if you find a bug or have a suggestion how to improve the CLI, please file a bug in Debian or in Launchpad. The new CLI is recent enough that we still have some liberty to change it.

Happy testing!

Read more
David Callé

We announced the Snappy Playpen a few days ago and yesterday was the Kickoff event where we basically invited everyone who was interested, brought in a lot of snapd and snapcraft experts and started snapping software together.

It was simply beautiful to see the level of excitement, the collaboration, how people got to know each other and how much stuff got done. Big hugs to everyone involved - great work!

People

Along with the usual #snappy IRC channel on Freenode, we used gitter.im as an experiment and it worked out well. We had at least 40 people participating there (many more on IRC and the mailing list), 850 messages in gitter alone and even after 24 hours we're still working our way through some software to go into the Playpen repository.

Without further ado, here's what already landed in the Snappy Playpen since yesterday:

Landed in the playpen:

Another beautiful thing which landed is Vincent Jobard's French video tutorial about Snapcraft just in time to celebrate the kickoff.

We have many great things which are still work in progress:

Not targeting the Snappy Playpen, but still nice snaps we worked on together as a team:

We also used this time to improve our crowdsourced docs on AskUbuntu:

The Snapcraft mailing-list has been buzzing with questions, answers and discussions:

And of course, kudos to the experts who managed to be very active and helpful, while preparing new releases of snapd and snapcraft.

Until the next Playpen event, which will be more focused on a specific software/framework/technology, we encourage you to have a look at all the snaps and snapcraft recipes available in the repo. Git clone it, cd into a project and run snapcraft to see how all the pieces are coming together to create a snap.

If you are the upstream of one of the above apps, help yourself with these branches and get in touch with us on IRC (freenode/#snappy), Gitter or on the mailing-list so we can provide support if needed.

Read more
Stéphane Graber

This is the tenth blog post in this series about LXD 2.0.

LXD logo

Introduction

Juju is Canonical’s service modeling and deployment tool. It supports a very wide range of cloud providers to make it easy for you to deploy any service you want on any cloud you want.

On top of that, Juju 2.0 also includes support for LXD, both for local deployments, ideal for development and as a way to co-locate services on a cloud instance or physical machine.

This post will focus on the local use case, going through the experience of an LXD user without any pre-existing Juju experience.

 

Requirements

This post assumes that you already have LXD 2.0 installed and configured (see previous posts) and that you’re running it on Ubuntu 16.04 LTS.

Setting up Juju

The first thing to do is to install Juju 2.0. On Ubuntu 16.04, it’s as simple as:

stgraber@dakara:~$ sudo apt install juju
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following additional packages will be installed:
 juju-2.0
Suggested packages:
 juju-core
The following NEW packages will be installed:
 juju juju-2.0
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 39.7 MB of archives.
After this operation, 269 MB of additional disk space will be used.
Do you want to continue? [Y/n] 
Get:1 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 juju-2.0 amd64 2.0~beta7-0ubuntu1.16.04.1 [39.6 MB]
Get:2 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 juju all 2.0~beta7-0ubuntu1.16.04.1 [9,556 B]
Fetched 39.7 MB in 0s (53.4 MB/s)
Selecting previously unselected package juju-2.0.
(Reading database ... 255132 files and directories currently installed.)
Preparing to unpack .../juju-2.0_2.0~beta7-0ubuntu1.16.04.1_amd64.deb ...
Unpacking juju-2.0 (2.0~beta7-0ubuntu1.16.04.1) ...
Selecting previously unselected package juju.
Preparing to unpack .../juju_2.0~beta7-0ubuntu1.16.04.1_all.deb ...
Unpacking juju (2.0~beta7-0ubuntu1.16.04.1) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up juju-2.0 (2.0~beta7-0ubuntu1.16.04.1) ...
Setting up juju (2.0~beta7-0ubuntu1.16.04.1) ...

Once that’s done, we can bootstrap a new “controller” using LXD. This means that Juju will not modify anything on your host, it will instead install its management service inside a LXD container.

Here, we’ll be creating a controller called “test” with:

stgraber@dakara:~$ juju bootstrap test localhost
Creating Juju controller "local.test" on localhost/localhost
Bootstrapping model "admin"
Starting new instance for initial controller
Launching instance
 - juju-745d1be3-e93d-41a2-80d4-fbe8714230dd-machine-0
Installing Juju agent on bootstrap instance
Preparing for Juju GUI 2.1.2 release installation
Waiting for address
Attempting to connect to 10.178.150.72:22
Logging to /var/log/cloud-init-output.log on remote host
Running apt-get update
Running apt-get upgrade
Installing package: curl
Installing package: cpu-checker
Installing package: bridge-utils
Installing package: cloud-utils
Installing package: cloud-image-utils
Installing package: tmux
Fetching tools: curl -sSfw 'tools from %{url_effective} downloaded: HTTP %{http_code}; time %{time_total}s; size %{size_download} bytes; speed %{speed_download} bytes/s ' --retry 10 -o $bin/tools.tar.gz <[https://streams.canonical.com/juju/tools/agent/2.0-beta7/juju-2.0-beta7-xenial-amd64.tgz]>
Bootstrapping Juju machine agent
Starting Juju machine agent (jujud-machine-0)
Bootstrap agent installed
Waiting for API to become available: upgrade in progress (upgrade in progress)
Waiting for API to become available: upgrade in progress (upgrade in progress)
Waiting for API to become available: upgrade in progress (upgrade in progress)
Bootstrap complete, local.test now available.

This should take about a minute, at which point you’ll see a new LXD container running:

stgraber@dakara:~$ lxc list juju-
+-----------------------------------------------------+---------+----------------------+------+------------+-----------+
|                         NAME                        |  STATE  |          IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+-----------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-745d1be3-e93d-41a2-80d4-fbe8714230dd-machine-0 | RUNNING | 10.178.150.72 (eth0) |      | PERSISTENT | 0         |
+-----------------------------------------------------+---------+----------------------+------+------------+-----------+

On the Juju side of things, you can confirm that it’s responding and that nothing is running yet:

stgraber@dakara:~$ juju status
[Services] 
NAME STATUS EXPOSED CHARM 

[Units] 
ID WORKLOAD-STATUS JUJU-STATUS VERSION MACHINE PORTS PUBLIC-ADDRESS MESSAGE 

[Machines] 
ID STATE DNS INS-ID SERIES AZ

You can also access the Juju GUI in your web browser with:

stgraber@dakara:~$ juju gui
Opening the Juju GUI in your browser.
If it does not open, open this URL:
https://10.178.150.72:17070/gui/97fa390d-96ad-44df-8b59-e15fdcfc636b/

Juju web UI

Though I prefer the command line so that’s what I’ll be using next.

Deploying a minecraft server

So let’s start with something very trivial: just deploy a service that uses a single Juju unit in a single container.

stgraber@dakara:~$ juju deploy cs:trusty/minecraft
Added charm "cs:trusty/minecraft-3" to the model.
Deploying charm "cs:trusty/minecraft-3" with the charm series "trusty".

This should return pretty much immediately. It however doesn’t mean the service is already up and running. Instead you’ll want to look at “juju status”:

stgraber@dakara:~$ juju status
[Services] 
NAME STATUS EXPOSED CHARM 
minecraft maintenance false cs:trusty/minecraft-3 

[Units] 
ID WORKLOAD-STATUS JUJU-STATUS VERSION MACHINE PORTS PUBLIC-ADDRESS MESSAGE 
minecraft/1 maintenance executing 2.0-beta7 1 10.178.150.74 (install) Installing java 

[Machines] 
ID STATE DNS INS-ID SERIES AZ 
1 started 10.178.150.74 juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-1 trusty 

Here we can see it’s currently busy installing java in the LXD container it just created.

stgraber@dakara:~$ lxc list juju-
+-----------------------------------------------------+---------+----------------------+------+------------+-----------+
|                         NAME                        |  STATE  |          IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+-----------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-745d1be3-e93d-41a2-80d4-fbe8714230dd-machine-0 | RUNNING | 10.178.150.72 (eth0) |      | PERSISTENT | 0         |
+-----------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-1 | RUNNING | 10.178.150.74 (eth0) |      | PERSISTENT | 0         |
+-----------------------------------------------------+---------+----------------------+------+------------+-----------+

After a little while, the service will be done deploying as can be seen here:

stgraber@dakara:~$ juju status
[Services] 
NAME STATUS EXPOSED CHARM 
minecraft active false cs:trusty/minecraft-3 

[Units] 
ID WORKLOAD-STATUS JUJU-STATUS VERSION MACHINE PORTS PUBLIC-ADDRESS MESSAGE 
minecraft/1 active idle 2.0-beta7 1 25565/tcp 10.178.150.74 Ready 

[Machines] 
ID STATE DNS INS-ID SERIES AZ 
1 started 10.178.150.74 juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-1 trusty

At which point you can fire up your minecraft client, point it at 10.178.150.74 on port 25565 and play with your all new minecraft server!

When you want to get rid of it, just run:

stgraber@dakara:~$ juju destroy-service minecraft

Wait a few seconds and everything will be gone.

Deploying a more complex web application

Juju’s main focus is on modeling complex services and deploying them in a scalable way.

To better show that, let’s deploy a Juju “bundle”. This bundle is a basic web service, made of a website, an API endpoint, a database, a static web server and a reverse proxy.

So that’s going to expand to 4 inter-connected LXD containers.

stgraber@dakara:~$ juju deploy cs:~charmers/bundle/web-infrastructure-in-a-box
added charm cs:~hp-discover/trusty/node-app-1
service api deployed (charm cs:~hp-discover/trusty/node-app-1 with the series "trusty" defined by the bundle)
annotations set for service api
added charm cs:trusty/mongodb-3
service mongodb deployed (charm cs:trusty/mongodb-3 with the series "trusty" defined by the bundle)
annotations set for service mongodb
added charm cs:~hp-discover/trusty/nginx-4
service nginx deployed (charm cs:~hp-discover/trusty/nginx-4 with the series "trusty" defined by the bundle)
annotations set for service nginx
added charm cs:~hp-discover/trusty/nginx-proxy-3
service nginx-proxy deployed (charm cs:~hp-discover/trusty/nginx-proxy-3 with the series "trusty" defined by the bundle)
annotations set for service nginx-proxy
added charm cs:~hp-discover/trusty/website-3
service website deployed (charm cs:~hp-discover/trusty/website-3 with the series "trusty" defined by the bundle)
annotations set for service website
related mongodb:database and api:mongodb
related website:nginx-engine and nginx:web-engine
related api:website and nginx-proxy:website
related nginx-proxy:website and website:website
added api/0 unit to new machine
added mongodb/0 unit to new machine
added nginx/0 unit to new machine
added nginx-proxy/0 unit to new machine
deployment of bundle "cs:~charmers/bundle/web-infrastructure-in-a-box-10" completed

A few seconds later, you’ll see all the LXD containers running:

stgraber@dakara:~$ lxc list juju-
+-----------------------------------------------------+---------+-----------------------+------+------------+-----------+
|                         NAME                        |  STATE  |           IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+-----------------------------------------------------+---------+-----------------------+------+------------+-----------+
| juju-745d1be3-e93d-41a2-80d4-fbe8714230dd-machine-0 | RUNNING | 10.178.150.72 (eth0)  |      | PERSISTENT | 0         |
+-----------------------------------------------------+---------+-----------------------+------+------------+-----------+
| juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-2 | RUNNING | 10.178.150.98 (eth0)  |      | PERSISTENT | 0         |
+-----------------------------------------------------+---------+-----------------------+------+------------+-----------+
| juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-3 | RUNNING | 10.178.150.29 (eth0)  |      | PERSISTENT | 0         |
+-----------------------------------------------------+---------+-----------------------+------+------------+-----------+
| juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-4 | RUNNING | 10.178.150.202 (eth0) |      | PERSISTENT | 0         |
+-----------------------------------------------------+---------+-----------------------+------+------------+-----------+
| juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-5 | RUNNING | 10.178.150.214 (eth0) |      | PERSISTENT | 0         |
+-----------------------------------------------------+---------+-----------------------+------+------------+-----------+

After a couple of minutes, all the services should be deployed and running:

stgraber@dakara:~$ juju status
[Services]
NAME         STATUS   EXPOSED  CHARM
api          unknown  false    cs:~hp-discover/trusty/node-app-1
mongodb      unknown  false    cs:trusty/mongodb-3
nginx        unknown  false    cs:~hp-discover/trusty/nginx-4
nginx-proxy  unknown  false    cs:~hp-discover/trusty/nginx-proxy-3
website               false    cs:~hp-discover/trusty/website-3

[Relations]
SERVICE1     SERVICE2     RELATION      TYPE
api          mongodb      database      regular
api          nginx-proxy  website       regular
mongodb      mongodb      replica-set   peer
nginx        website      nginx-engine  subordinate
nginx-proxy  website      website       regular

[Units]
ID             WORKLOAD-STATUS  JUJU-STATUS  VERSION    MACHINE  PORTS                                    PUBLIC-ADDRESS  MESSAGE
api/0          unknown          idle         2.0-beta7  2        8000/tcp                                 10.178.150.98
mongodb/0      unknown          idle         2.0-beta7  3        27017/tcp,27019/tcp,27021/tcp,28017/tcp  10.178.150.29
nginx-proxy/0  unknown          idle         2.0-beta7  5        80/tcp                                   10.178.150.214
nginx/0        unknown          idle         2.0-beta7  4                                                 10.178.150.202
  website/0    unknown          idle         2.0-beta7                                                    10.178.150.202

[Machines]
ID  STATE    DNS             INS-ID                                               SERIES  AZ
2   started  10.178.150.98   juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-2  trusty
3   started  10.178.150.29   juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-3  trusty
4   started  10.178.150.202  juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-4  trusty
5   started  10.178.150.214  juju-97fa390d-96ad-44df-8b59-e15fdcfc636b-machine-5  trusty

At that point, you can reach the reverse proxy on port 80 at http://10.178.150.214 and you'll get the Juju Academy web service.

Juju Academy web service
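If you'd rather check from a terminal than a browser, a plain HTTP request against the proxy's address should confirm that it is serving (same address as above; response not shown here):

stgraber@dakara:~$ curl -I http://10.178.150.214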

Cleaning everything up

If you want to get rid of all the containers Juju created and don’t mind having to bootstrap again next time, the easiest way to destroy everything is with:

stgraber@dakara:~$ juju destroy-controller test --destroy-all-models
WARNING! This command will destroy the "local.test" controller.
This includes all machines, services, data and other resources.

Continue [y/N]? y
Destroying controller
Waiting for hosted model resources to be reclaimed
Waiting on 1 model, 4 machines, 5 services
Waiting on 1 model, 4 machines, 5 services
Waiting on 1 model, 4 machines, 5 services
Waiting on 1 model, 4 machines, 5 services
Waiting on 1 model, 4 machines, 5 services
Waiting on 1 model, 4 machines, 5 services
Waiting on 1 model, 4 machines
Waiting on 1 model, 4 machines
Waiting on 1 model, 4 machines
Waiting on 1 model, 4 machines
Waiting on 1 model, 4 machines
Waiting on 1 model, 4 machines
Waiting on 1 model, 2 machines
Waiting on 1 model
Waiting on 1 model
All hosted models reclaimed, cleaning up controller machines

And we can confirm that it’s all gone:

stgraber@dakara:~$ lxc list juju-
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

Conclusion

Juju 2.0’s built-in LXD support makes for a very clean way to test a whole variety of services.

There are quite a few pre-made “bundles” for you to deploy in the Juju charm store and even more “charms” that you can use to piece together the architecture you want.
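As a rough sketch of what piecing an architecture together yourself looks like (wordpress and mysql are simply well-known charms picked for illustration, not part of the bundle used above), deploying and relating individual charms only takes a few commands:

stgraber@dakara:~$ juju deploy wordpress
stgraber@dakara:~$ juju deploy mysql
stgraber@dakara:~$ juju add-relation wordpress mysql
stgraber@dakara:~$ juju expose wordpress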

Juju with LXD is the perfect solution for easily developing anything from a small web service to a large scale-out infrastructure, all on your own machine, without making a mess of your system!

Extra information

The Juju website can be found at: http://www.ubuntu.com/cloud/juju
The Juju charm store is available at: https://jujucharms.com

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Read more
facundo

University and finals


The other day I was chatting with a friend about something related to university, about a course I took there, and I couldn't remember when I had sat the final exam for it. At the time I didn't give it much thought; I figured, "when I get home I'll check my university record book".

Of course, I never checked, because the record book is hidden away in a hard-to-reach drawer, the typical place where a more or less organized person keeps old diplomas, certificates, assorted important papers and so on (people who aren't reasonably organized usually have no damn idea where these things are).

First page of the record book

So it occurred to me that, university having been such an important stage of my life, I could keep the information about when I sat each exam much closer at hand.

I dug out the record book, copied over all the courses (with the date of the final exam and the grade), and now I keep all of that here. A pity I don't have the names of the professors (I remember some, but most of them I don't...).

Read more
Daniel Holbach

Announcing the Snappy Playpen

With snaps and the store, it finally became easy again to publish software in Ubuntu. Snappy Playpen is a project in which we want to collaboratively snap software, learn from each other and document best practices.

Snappy Playpen is on GitHub and it's where we want to work together on snapping new software. It will provide excellent examples to new users of snapcraft, let us document best practices and learn from each other, and act as an incubator for new snaps to be added to the store.

Snappy Playpen won't be a collection of production-ready snaps; we are treating it a bit like a combination of a research project and documentation.

If you are curious, just check out our main GitHub page and read the docs there. It's easy and we're quite accessible. Find us on Gitter, IRC or the mailing list to find out how to get involved.

You can get started at any time and contribute whatever you feel makes sense, but we want to host themed "sprint" weeks as well. If you have suggestions (e.g. an IoT-related week, a KDE-related week, a server-app week, etc.), let us know. For those weeks we will make sure we have experts around to help us figure things out together.

Next week will be our first Snappy Playpen sprint and it will be a "free for all" week. This will help us figure out the details and learn exactly what you all want to do.

On Tuesday, 7th June 2016, we will make a big push and make sure our snapd and snapcraft engineers are there to answer questions and help figure out solutions together. Mark the day in your calendar and check out our docs to find out how to get started.

  • WHAT: Snappy playpen sprint
  • WHEN: Tuesday, 7th June 2016 all day
  • WHERE: Join us on gitter or IRC

Read more
liam zheng

Ubuntu phones receive their eleventh major update today: OTA-11. The highlight of this update is Wifi Display (wireless screen casting). With Wifi Display, users can enjoy the huge convenience of wirelessly casting the screen combined with convergence. Simply connect an Ubuntu phone to a monitor or TV (Miracast support required) and the desktop Ubuntu interface becomes available, turning a single mobile device into a full-size, multi-window desktop environment. For now this feature is only available on the Meizu PRO 5 Ubuntu Edition, with support for other Ubuntu phone models to follow.

 

Wifi Display: click to watch

Wifi Display uses the Meizu PRO 5 Ubuntu Edition's built-in p2p (peer-to-peer) connection to start the Ubuntu desktop mode. If the phone is connected to a monitor or TV directly over Wifi, the phone acts as a touchpad; if a Bluetooth mouse and keyboard are connected, you get a traditional desktop experience. Importantly, the phone's SMS and call features can be shown on the external display.

 

Scopes now support automatic landscape mode

Before OTA-11, Scopes in the Unity 8 Dash could only be displayed in portrait orientation. After the OTA-11 update, Scopes can also be displayed in landscape, one more option for users who prefer it. In addition, the home Scopes (Today, Nearby) finish refreshing before the screen is unlocked, so the latest information is ready as soon as you unlock.

 

New Traditional Chinese input method

Another new feature in Ubuntu phone OTA-11 is support for a Traditional Chinese input method: the Zhuyin (Bopomofo) keyboard layout. It can be enabled under Settings → Language & Text → Keyboard layouts by selecting the Zhuyin input method.

 

DGU lets Scopes and apps automatically adapt to multiple displays

Convergence is the killer feature of Ubuntu phones and has already spared many users who carry a backpack around some unnecessary load. For developers, the Unity 8 user interface now supports DGU (dynamic grid units), making apps and Scopes simpler to develop: write them once and they automatically adapt to multiple displays.

 

Tablet: M10 performance improvements

OTA-11 is the first major update for the BQ M10 Ubuntu Edition, improving handling, graphics processing and overall performance.

 

Summary of other improvements:

  • Improved location services, with more accurate positioning;

  • Network Manager updated to version 1.2, for safer browsing;

  • Applications support multi-window display (M10 convergence);

  • Updated UITK scrollbar design; page headers now support subtitles;

  • VPN supports username and password authentication;

  • Browser app improvements: support for Google Hangouts and a redesigned permission prompt dialog;

  • More responsive Bluetooth mouse in convergence mode;

  • Bug fixes covering: language pack translations, performance issues and custom notification sounds.

Read more
UbuntuTouch

Readers who are interested can read the article "On Screen Keyboard tricks" on our global developer site; it covers a lot of material. In today's article, we'll go straight to a live example to illustrate the issue.


Main.qml


import QtQuick 2.4
import Ubuntu.Components 1.3

/*!
    \brief MainView with a Label and Button elements.
*/

MainView {
    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "osk.liu-xiao-guo"

    width: units.gu(60)
    height: units.gu(85)

    // anchorToKeyboard: true
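    // (anchorToKeyboard, when set to true, makes the MainView resize its
    //  content so that it is not covered by the on-screen keyboard; it is
    //  deliberately left commented out here, as explained below.)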

    Page {
        title: i18n.tr("onscreenkeyboard")
        anchors.fill: parent

        Flickable {
            id: sampleFlickable

            clip: true
            contentHeight: mainColumn.height + units.gu(5)
            anchors {
                top: parent.top
                left: parent.left
                right: parent.right
                bottom: createButton.top
                bottomMargin: units.gu(2)
            }

            Column {
                id: mainColumn

                spacing: units.gu(2)

                anchors {
                    top: parent.top
                    left: parent.left
                    right: parent.right
                    margins: units.gu(2)
                }

                TextField {
                    id: username
                    width: parent.width
                    placeholderText: "username"
                }

                TextField {
                    id: password
                    width: parent.width
                    placeholderText: "password"
                }

                TextField {
                    id: email
                    width: parent.width
                    placeholderText: "email"
                }
            }
        }

        Button {
            id: createButton
            text: "Create Account"
            anchors {
                horizontalCenter: parent.horizontalCenter
                bottom: parent.bottom
                margins: units.gu(2)
            }
            onClicked: {
                console.log("it is clicked")
            }
        }
    }
}

In the example above, we have deliberately commented out the following line:

    anchorToKeyboard: true

With it commented out, we can run the app:


As the screenshot above shows, when we tap any of the input fields at the top, the "Create Account" button at the bottom is covered by the area where the keyboard appears. So how can we keep our button visible? The answer is to turn on the following switch in our MainView:

    anchorToKeyboard: true

Then run the app again:



As we can see above, when we tap the email field to type, the "Create Account" button automatically moves above the keyboard. This way, once we have finished typing, we can press the button to create an account straight away, instead of first dismissing the keyboard and then submitting.

The complete source code for this project is at: https://github.com/liu-xiao-guo/osk

Let's use another example to demonstrate how the OSK behaves:

Main.qml

import QtQuick 2.4
import Ubuntu.Components 1.3

MainView {
    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "osk1.liu-xiao-guo"

    width: units.gu(100)
    height: units.gu(75)

  //  anchorToKeyboard: true
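  //  (Same switch as in the previous example: first left commented out so
  //   the text field ends up hidden behind the keyboard, then turned on.)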

    Page {
        header: PageHeader {
            id: pageHeader
            title: i18n.tr("osk1")
            StyleHints {
                foregroundColor: UbuntuColors.orange
                backgroundColor: UbuntuColors.porcelain
                dividerColor: UbuntuColors.slate
            }
        }

        TextField {
            anchors.bottom: parent.bottom
            anchors.horizontalCenter: parent.horizontalCenter
            anchors.bottomMargin: units.gu(2)
            placeholderText: "please input something"
        }
    }
}

In the example above, when we comment out:

    anchorToKeyboard: true

and then run the app and tap the input field at the bottom to type:

  

we can see that, while we are typing, the input field is covered by the keyboard and we can no longer see it. But when we set the following property to true:

  anchorToKeyboard: true

and run the app again:

  



As we can see above, while typing we can now see the input field and enter the content we need.

The complete source code for this project is at: https://github.com/liu-xiao-guo/osk1



Author: UbuntuTouch, posted on 2016/4/18 13:54:51

Read more