Canonical Voices

Nicholas Skaggs

Testing Vivid Vervet final images

Ubuntu 15.04, otherwise known as the Vivid Vervet, is nearing release. We are now in the final week before the release on April 23rd. That means it's time to test some images!

Everyone can help!
For the final images, I'd like to extend the call for testing beyond those brave souls willing to run alpha and beta software. I encourage everyone to make a backup (as always!) and upgrade to or install Vivid, then report your results on the tracker. Positive results are extremely helpful for this milestone, so please report those too. As a bonus, you can enjoy Vivid a few days before the rest of the world (there's no need to re-install the final image) and avoid the upgrade rush after release.


How can I help?
To help test, visit the ISO tracker milestone page for the final milestone. The goal is to verify the images in preparation for the release. The information at the top of the page will guide you through reporting a bug and explain how to test.

Isotracker?
There's a first time for everything! Check out the handy links at the top of the ISO tracker page detailing how to perform an image test, as well as a little about how the qatracker itself works. If you still aren't sure or get stuck, feel free to contact the QA community or me for help.

Thanks and happy testing everyone!

Read more
Prakash

We’re setting up a new production web server for our own site and, as it’s a chance to start fresh, the thought of course turned to “what’s the best web server for our site?” After looking around at various benchmarks and reviews of the more common web servers, I found that none of the benchmarks seemed to have been run in the last few years, or focused on thousands of connections with static content. This wasn’t the scenario I wanted to see data on.

So, I set about running a few benchmarks on what I considered to be the top 3 Linux-based web servers for a moderately busy site. This is why I’ve labelled the article “Part 1”: I want to cover a variety of scenarios across a few follow-up articles. For this test we’ll be using WordPress, but I’ll be testing other platforms in the follow-up articles as well.

Read More: http://www.texnologist.net/sec/right_sidebar_two.html

Read more
Shuduo

Date: 2015-04-14 Source: IT.com.cn (IT World Network)

Recently, the "Ubuntu Developer Innovation Contest", jointly launched by Canonical and China Mobile, has been in full swing at major universities, publicly soliciting outstanding Scopes, applications and other works adapted for the Ubuntu operating system. Ubuntu Kylin, the development platform for the contest, provides developers with an open and convenient desktop operating system that is easy to use, so developers can put more of their energy into building innovative entries.


China Mobile & Ubuntu Developer Innovation Contest

The contest will expose domestic developers to the new generation of mobile experience that Ubuntu brings. It aims to break the "apps are king" pattern established since the first iPhone, and to move past the single visual model of a screen filled with rows of app icons and the usability limits that come with it. Ubuntu puts content and services up front, presenting them directly on the screen, effectively creating a rich, fast and unfragmented experience for users. And at a cost that is small compared with traditional app development and maintenance, developers can create the same kind of user experience at the system level. These unique experiences are built with Ubuntu Scopes, a new UI toolkit unique to Ubuntu, available through the Ubuntu SDK. To learn more about Ubuntu Scopes and Ubuntu phones, visit the Ubuntu website.

The contest is open to students, professional developers and the open-source community. It offers six awards, with prizes including 70,000 RMB in cash plus phones and other goods, and Canonical internship opportunities for winners in the student category. Registration closes on May 15th, 2015, and the finals will be judged in June 2015. Developers can sign up on the official website of the "Make Your Dream Come True" million-youth entrepreneurship and employment program.

Reportedly, most developers taking part in the contest use Ubuntu Kylin as their operating system for Ubuntu phone development. As a localized Chinese flavour of Ubuntu, Ubuntu Kylin makes it easier for domestic developers to work with the Ubuntu phone SDK.

To help industry developers and students across the country get to know the Ubuntu phone platform and its development technology faster and better, Canonical has held many hands-on training sessions at venues on and off campus. Led by well-known experts and senior engineers, the sessions walk individual developers and teams from a platform introduction all the way to actual hands-on development, teaching attendees on the spot how to develop a Scope. The team has also prepared a number of Ubuntu Kylin boot drives containing the Ubuntu SDK, along with a full set of training materials, giving developers comprehensive technical support.

By developing Ubuntu phone applications on the Ubuntu Kylin platform, developers can easily realize their innovative ideas, giving young developers early access to an emerging mobile ecosystem and fresh entrepreneurial opportunities while helping the TD industry flourish. Online and offline training events for the Ubuntu Developer Innovation Contest have already taken place at universities including Beijing University of Posts and Telecommunications, the University of Science and Technology of China, Central South University in Changsha, and Sun Yat-sen University. The schedule of upcoming sessions is available via the Ubuntu WeChat account (UbuntuByCanonical).

Read more
Shuduo

2015-04-14 16:48 Source: guigu.org (Silicon Valley Network) Editor: Shu Minghan

Recently, the domestic operating system Ubuntu Kylin officially released its 15.04 Beta 2, upgrading its flagship featured applications to improve the user experience. Suited to individuals as well as large enterprises and government offices, this operating system's detailed, innovative improvements have raised its quality and service capability, won user recognition, and set a good example for the development of domestic Linux operating systems.

Meeting the habits of Chinese users

Ubuntu Kylin was designed from the start to be an operating system with Chinese characteristics, combining an international platform with localized applications. It delivers a refined Chinese user experience through a customized, localized desktop environment and applications built for the specific needs of Chinese users.

According to the project, this test release shows more innovation in its details than the 14.10 stable release: it fixes more than 60 bugs in total; updates the system theme; upgrades the kernel to 3.19, adding support for Intel's next-generation Braswell chips; and improves the desktop environment by enabling "locally integrated menus" and the launcher's "click to minimize" by default, which makes it easier for experienced Windows users to learn the Unity interface.

The new version also upgrades several featured applications, including the Software Center, Youker Assistant and the Youker lunar calendar. Youker Assistant 2.0.1 brings a completely new user interface and interaction model, while Software Center 1.3.0 adds an "improve translations" feature and personal application management. In addition, the development team worked with Sogou on version 1.2 of the Sogou input method, which fixes several important known bugs and adds support for cell dictionaries; installed users receive new releases automatically.

Ubuntu Kylin wins user recognition

Most open-source software goes through several test releases before the final version and is then continuously refined through its technical community; this is the normal open-source development process. This should be the last test release before the 15.04 final. Next, the Ubuntu Kylin team will carry out more detailed testing and bug fixing on key system components, integrated applications and localization, laying the groundwork for bringing Ubuntu Kylin to market.

Last year, Ubuntu Kylin was added to the list of approved personal operating system suppliers for central government procurement. Making that list means the operating system passed the most comprehensive review at the highest national level, a great development opportunity for Ubuntu Kylin. Reportedly, the central government procurement center is already trialling Ubuntu Kylin, and many ministries and national institutions are evaluating it as well.

The reporter also learned from friends that many Android phone development engineers in China actually use Ubuntu Kylin for Android development, for example at Xiaomi and Shanda. "There is no best, only better": Ubuntu Kylin releases new test versions to serve users better, and judging by download numbers it is gradually winning recognition from the market and consumers.

Read more
Zoltán Balogh

14.04 - 1.0 release

The 1.0 release of the UITK was built mostly for demonstrative purposes, but it works well to a certain extent; it is the LTS release after all. It is available from the Trusty archive (0.1.46+14.04.20140408.1-0ubuntu1) and from the SDK PPA (0.1.46+14.10.20140520-0ubuntu1~0trusty2). The “demonstrative purpose” in this context is a pretty serious thing: this release was the ultimate proof of concept that Qt (5.2 at the time) and QML technology, together with our design and components, provides a framework for a charmingly beautiful and killer fast user interface. Obviously there is no commercial touch device with this UITK release, but it is good enough for making a simple desktop application with the UX of a mobile app. If your desktop PC is running Ubuntu 14.04 LTS and you have installed the Ubuntu SDK, then the IDE is using this release of the UITK.

The available components and features are documented online at https://developer.ubuntu.com/api/qml/sdk-14.04/Ubuntu.Components/ or offline under the file:///usr/share/ubuntu-ui-toolkit/doc/html local directory if the ubuntu-ui-toolkit-doc package is installed.


14.10 - 1.1 release

This was the base for the first real Ubuntu phone. Most mission-critical components and toolkit features were shipped with this edition. Highlights of the goodies in the Utopic edition of the UITK (version 1.1.1279+14.10.20141007-0ubuntu1):

  • Settings API

  • Ubuntu.Web

  • ComboButton

  • Header replaces bottom toolbar

  • PullToRefresh

  • Ubuntu.DownloadManager

  • Ubuntu.Connectivity

The focus of the UITK development was to complete the component set and achieve superb performance. It is important to note that these days this exact version can be found only on a very few community-ported Ubuntu Touch devices, and even those early adaptations should be updated to 15.04. The most common place to meet this edition of the UITK is the 14.10 Ubuntu desktop. This UITK can indeed be used to build pretty nice-looking desktop applications. The Ubuntu-specific UI extensions of the QtCreator IDE are built on our very own UITK. So the UITK has been ported and available for desktop app development, with some limitations, since 14.04.


14.09  - the RTM release

The development of the RTM (Ready To Market) branch of the UITK focused on bugfixes and final polishing of the components. Dozens of functional, visual and performance-related issues were tackled and closed in this release.

A few of the relevant changes in the RTM branch:

  • Internationalization related improvements

  • Polishing the haptics feedback of components

  • Fixes in the ActivityIndicator

  • UX improvements of the TextField/TextArea

  • Dialog component improvements

This extended 1.1 release of the UITK is what ships on the bq Aquaris E4.5 devices. This is pretty serious stuff: providing the very building blocks of the user experience is a big responsibility. During the development of this release, one of the most significant changes happened behind the scenes: the release process of the UITK was renewed, and we enforced very strict rules for accepting any changes.

To make sure that the continuous development of the UITK introduces no functional problems and causes no regressions, we not only run about 400 autopilot test cases against the UITK, but an automatic test script also validates all core and system apps with each release candidate. That means running thousands of automated functional tests before each release.


15.04 - 1.2 release

After the 14.09 (aka RTM) release was deemed good and the bq devices started to leave the factory lines, UITK development began to focus on two major areas. First of all, we brought all the fixes and improvements that had landed on the RTM branch back to the development trunk, merging the whole RTM branch into the main line. The second area was to open the 1.2 queue of the toolkit and release the new features:

  • ListItem

  • New UbuntuShape rendering properties

  • New Header

Releasing the 1.2 UITK completes the first big iteration of the toolkit's development. In the last three cycles the Ubuntu application framework went through three minor Qt upgrades (5.2, 5.3, 5.4) and continuously adapted to the evolving design and platform.


15.10 - 1.3 release

In the upcoming cycle the focus is on convergence. We have shipped a super cool UI toolkit for the touch environment; now it is time to make it an equally complete and fast toolkit for other form factors and for devices with other capabilities. The emphasis here is on capability, not only form factor or device mode. The next release (1.3) of the UITK will adapt to the host environment according to its capabilities, such as input methods, screen size and others.

The highlights of the upcoming features:

  • Resolution independence

  • Improve visual rendering (pixel perfectness at any device ratio)

  • Improve performance (CPU and GPU wise)

  • Convergence

    • Tooltips

    • Key navigation - Tab/Shift+Tab

    • Date and Time Pickers

    • Menus

      • Application and

      • context menus

  • Support Sub-theming

  • Support of ListItem expansion

  • Text input magnification on selection

  • Simplified Popovers

  • Text input context menu

  • Deprecate Dialer (Ubuntu.Components.Pickers)

  • Deprecate PopupBase (Ubuntu.Components.Popups)

  • Focused component highlight

  • Support for OSK to keep the focus component above the key rectangle

  • Integrate scope toolkit from Unity with the UI Toolkit

The 1.3 version of the UITK will be the first with the promise that application developers can create both fully functional desktop and phone applications. In practice it means that the runtime UITK will be the same as in the build environment.


16.04 - 2.0 release

Looking forward to our next LTS release, our ambition is to polish all the features together and tune the UI Toolkit for the next major release. This edition of the toolkit will serve app developers for a long time. The 2.0 will be the “mission completed”. We expect a few features to move from our original 15.10 plans to 16.04:

  • Clean up deprecated components

  • Rename ThemeSettings to Theme

  • Toolbars for convergence

  • Modal Dialogs

  • Device mode (aka capability) detection

  • Complete scopes support

  • Backend for Alarm services

  • Separate service components from UI components

Read more
Prakash

Benchmarks show Linux beats OSX on a MacMini:

All of the benchmarks under both OS X and Linux were facilitated using the open-source Phoronix Test Suite benchmarking software. All of the hardware was the same throughout testing: the reported differences on the automated table above just come down to differences in what the OS reports, such as the difference between the CPU base frequency and turbo frequency, etc. On the following pages are the initial results with more interesting data points to come shortly.

Read More: http://www.phoronix.com/scan.php?page=article&item=osx-fedora21-vivid&num=1

Read more
David Planella

Nearly two years ago, the Ubuntu Community Donations Program was created as an extension to the donations page on ubuntu.com/download, where those individuals who download Ubuntu for free can choose to support the project financially with a voluntary contribution. In doing so, they can use a set of sliders to determine which parts of the project the amount they donate goes to (Ubuntu Desktop, Ubuntu for phone, Ubuntu for tablet, Ubuntu on public clouds, Cloud tools, Ubuntu Server with OpenStack, Community projects, Tip to Canonical).

While donations imply the trust from donors that Canonical is acting as a steward to manage their contributions, the feedback from the community back then was that the Community slider required a deeper level of attention in terms of management and transparency. With community being such an integral part of Ubuntu, and with the new opportunity to financially support new community projects, events or travel, it was just logical to ensure that the funds allocated to them were managed fairly and transparently, with public reporting every six months and a way for Ubuntu members to request funding.

Although the regular reports already provide a clear picture where the money donated for community projects is spent on, today I’d like to give an update on the bigger picture of the Community Donations Program and answer some questions community members have raised.

A successful two years

In a nutshell, we’re proud to say that the program continues to achieve the goals it set out for. Since its inception, it has funded around 70,000 USD worth of community initiatives, conferences, travel and more. The money has always been allocated upon individual requests, the vast majority of which were accepted. Very few were declined, and when they were, we always strove to provide good reasoning for the decision.

This process has given us the opportunity to support a diverse set of teams and projects across the wider Ubuntu family, including flavours, and to sponsor open source projects and conferences that have collaborated with Ubuntu over the years.

Program review and feedback

About two years into the Program, we felt a more thorough review was due: to assess how it has been working, to evaluate the community feedback and to decide if there are any adjustments required. Working with the Community Council on the review, we’ve also tried to address some questions from Ubuntu members that came in recently. Here is a summary of this review.

The feedback in general has been overwhelmingly positive. The Community Donations Program is not only seen as an initiative that hugely benefits the Ubuntu project; the figures and allocations in the reports are a testament to this fact.

Criticism is also important to take on board, and when it has come, we’ve addressed it individually and updated the public policy or FAQ accordingly. Recently, it has arrived in two areas: uncertainty in cases where the exact cost is not known in advance (e.g. travel costs fluctuating between the date of the request and approval and booking) and delays in actioning some of the requests. In the first case, we’ve updated the FAQ to reflect that the process allows some flexibility to work with a reasonable estimate. In the second, we’ve tried to explain that while some requests are easy to approve and are actioned in a matter of days (we review them all once a week), others take longer due to several factors: back-and-forth communication to clarify aspects of the request, the number of pending requests, and in some cases the complexity of arranging the logistics. In general, we feel it’s not unreasonable to expect a request to be sent at least a month ahead of whatever is being planned with the funds. We’re also making it clear that requests should be filed in advance rather than retroactively, so that community members do not end up in a difficult position should a request not be granted.

One of the questions that came in was about the flavour and upstream donation sliders. Originally, there were three community-related sliders on ubuntu.com/download: 1) Community participation in Ubuntu development, 2) Better coordination with Debian and upstreams, 3) Better support for flavours like Kubuntu, Xubuntu, Lubuntu. At some point during the 14.04 release, sliders 2) and 3) were removed, leaving 1) as Community projects. Overall, this didn’t change the outcome of community allocations: since its beginning, the Community Donations Programme amounts have only come from the first slider, which is what the Canonical Community team manages. From there, money is always allocated upon request fairly, without distinction, benefiting Ubuntu, its flavours and upstreams equally.

All that said, the lack of communication regarding the removal of the sliders was not intended; the change should have been communicated to the Community Team and the Community Council. It was a mistake, and we apologize for it. For any future changes to sliders that affect the community, we will make sure the Community Council is included in communications as an important stakeholder in the process.

Questions were also raised about the reporting on community donations during the months in 2012/2013 between the donations page going live and the announcement of the Community Donations Program. As mentioned before, the Program was born out of the desire to provide a higher level of transparency for the funds assigned to community projects. Up until then (and in the same way as they do today for the rest of the donation sliders), donors were trusting Canonical to manage the allocations fairly. Public reports were made retroactively only where it made sense (i.e. to align with fiscal quarters), but not going back all the way to the time before the start of the Program.

All in all, with these small adjustments, we’re proud to say we’ll continue to support community projects with donations in the same way we have these last two years.

And most especially, we’d like to say a big ‘thank you’ to everyone who has kindly donated and to everyone who has used the funds to help shaping the future of Ubuntu. You rock!

The post The Ubuntu Community Donations Program in review appeared first on David Planella.

Read more
Nick Moffitt

Hello world!

Welcome to Canonical Voices. This is your first post. Edit or delete it, then start blogging!

Read more
Benjamin Zeller

Inner workings of the SDK

From time to time app developers ask how to manually build click packages from their QMake or CMake projects. To understand the answer, it helps a lot to know how the SDK does things internally and which tools it uses.

First we have to know about the click command. It is one of the most important tools we are about to use, because it provides ways to:

  • create a build environment
  • maintain the build environment
  • execute commands in the build environment
  • build click packages
  • review click packages
  • query click packages

Issuing click --help will show a complete list of options. The click command is used not only on development machines but also on the device images, as it is also responsible for installing/removing click packages and for providing information about the frameworks a device has to offer.

Assuming that the project source already exists, probably created from an SDK template, and is ready to be packed up in ~/myproject, creating a click package requires the following steps:

  1. Create a build target for the device that should be targeted
    click chroot -a armhf -f ubuntu-sdk-15.04 create
  2. Run qmake/cmake on the project to create the Makefiles
    mkdir ~/myproject-build
    cd ~/myproject-build
    click chroot -a armhf -f ubuntu-sdk-15.04 run cmake ../myproject #for cmake
    click chroot -a armhf -f ubuntu-sdk-15.04 run qt5-qmake-arm-linux-gnueabihf ../myproject #for qmake
  3. Run make to compile the project and run custom build steps
    click chroot -a armhf -f ubuntu-sdk-15.04 run make
  4. Run make install to collect all required files in a deploy directory
    rm -rf /tmp/deploy-myproject #make sure the deploy dir is clean
    click chroot -a armhf -f ubuntu-sdk-15.04 run make DESTDIR=/tmp/deploy-myproject install #for cmake
    click chroot -a armhf -f ubuntu-sdk-15.04 run make INSTALL_ROOT=/tmp/deploy-myproject install #for qmake
  5. Run click build on the deploy directory
    click build /tmp/deploy-myproject
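
For reference, the five steps can be strung together into one script. This is only a sketch: it assumes a CMake project in ~/myproject and the armhf / ubuntu-sdk-15.04 chroot from step 1 (qmake variants are noted in the comments), and the run() helper here only prints each command as a dry run.

```shell
#!/bin/sh
# Dry-run sketch of the manual click build steps above.
# run() only prints each command; replace the echo with "$@" to execute
# for real (the armhf ubuntu-sdk-15.04 chroot must already exist).
run() { echo "+ $*"; }

PROJECT="$HOME/myproject"
BUILD_DIR="$HOME/myproject-build"
DEPLOY=/tmp/deploy-myproject
CHROOT="click chroot -a armhf -f ubuntu-sdk-15.04 run"

run mkdir -p "$BUILD_DIR"                   # out-of-source build directory
run cd "$BUILD_DIR"
run $CHROOT cmake "$PROJECT"                # qmake: qt5-qmake-arm-linux-gnueabihf "$PROJECT"
run $CHROOT make                            # compile inside the chroot
run rm -rf "$DEPLOY"                        # make sure the deploy dir is clean
run $CHROOT make DESTDIR="$DEPLOY" install  # qmake: INSTALL_ROOT="$DEPLOY"
run click build "$DEPLOY"                   # runs outside the chroot
```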

We will look into each step at a greater detail and explain the tools behind it starting with:

Creating a build chroot and what exactly is that:

When building applications for a different architecture than the development machine, for example an x86 host vs. an armhf device, cross-build toolchains are required. However, toolchains are not easy to maintain, and it takes a good deal of effort to make them work correctly. So our decision was to use "build chroots" to ease the maintenance of those toolchains. A build chroot is in fact nothing else than the normal Ubuntu you are using on your host machine. It is probably a different version, but it still comes from the archive. That means we can make sure the toolchains, libraries and tools used to build click packages are well tested and receive the same updates as the ones on the device images.

To create a build chroot the following command is used:

click chroot -a armhf -f ubuntu-sdk-15.04 create

Grab a coffee while this is running; it will take quite some time. Once the chroot has been created, it can be kept up to date with:

click chroot -a armhf -f ubuntu-sdk-15.04 upgrade

But how exactly does this work? A chroot environment is a complete Ubuntu root filesystem placed inside a directory. The "chroot" command makes it possible to treat exactly this directory as the "root directory" for a login shell. Commands running inside that environment cannot access the outer filesystem and do not know they are actually inside a virtualized Ubuntu installation. That makes sure your host filesystem cannot be tainted by anything done inside the chroot.

To make things a bit easier, the /home and /tmp directories are mounted into the chroot. That means those paths are the same inside and outside the chroot, so there is no need to copy files around. But it also means projects can only live under /home by default. It is possible to change that, but that's beyond the scope of this blog post (hint: check /etc/schroot/default/fstab).
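
For the curious, those bind mounts come from schroot's fstab; on a default install it contains lines roughly like the following (an illustrative excerpt, not verbatim). Appending a similar line makes another host directory visible inside the chroot.

```
# /etc/schroot/default/fstab (excerpt)
# <file system>  <mount point>  <type>  <options>  <dump>  <pass>
/home            /home          none    rw,bind    0       0
/tmp             /tmp           none    rw,bind    0       0
```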

Run qmake/cmake on the project to create the Makefiles

In order to compile the project, CMake or QMake needs to create a Makefile from the project description files. The SDK IDE always uses a build directory to keep the source tree clean. That is the recommended way of building projects.

Now that we have a chroot created, we need a way to actually execute commands inside the virtual environment. It is possible to log into the chroot or to run single commands. The click chroots have two different modes: production mode and maintenance mode.

Everything that is changed on the chroot filesystem in production mode is reverted when the active session is closed, to make sure the chroot is always clean. The maintenance mode can be used to install build dependencies, but it's the user's job to make sure those dependencies are available on the phone images as well. The rule of thumb is: if something is not installed in the chroot by default, it is probably not officially supported and might go away at any time.


click chroot -a armhf -f ubuntu-sdk-15.04 run  #production
click chroot -a armhf -f ubuntu-sdk-15.04 maint #maintenance

Running one of these commands without specifying a command to execute inside the chroot will open a login shell inside the chroot environment. If multiple successive commands are to be executed, a login shell is faster, because the chroot is mounted/unmounted every time a session is opened/closed.

For QMake projects the IDE usually takes care of selecting the correct QMake binary; in manual mode, however, the user has to call qt5-qmake-arm-linux-gnueabihf in armhf chroots instead of the plain qmake command. The reason is that qmake needs to be compiled specially for cross-build targets, so the "normal" qmake cannot be used.

Run make to compile the project and run custom build steps

This step does not need much explanation: it triggers the actual build of the project and, of course, again needs to be executed inside the chroot.

Run make install to collect all required files in a deploy directory

Now that the project build was successful, step 4 collects all the files required for the click package and installs them into a deploy directory. When building with the IDE, the directory is located in the current build dir and is named ".ubuntu-sdk-deploy".

It is a good place to check if all files were put into the right place or check if the manifest file is correct.
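
As a quick sanity check before packaging, a small script can verify that the deploy directory at least contains a manifest with the expected fields. This is a sketch: the field list (name, version, framework, architecture) is an assumption based on what SDK templates typically generate, and the sample manifest below is purely illustrative.

```shell
#!/bin/sh
# Sanity-check a click deploy directory before running "click build".
# The required-field list is an assumption based on typical SDK templates.
check_deploy() {
    manifest="$1/manifest.json"
    [ -f "$manifest" ] || { echo "missing $manifest"; return 1; }
    for key in name version framework architecture; do
        grep -q "\"$key\"" "$manifest" || { echo "manifest lacks \"$key\""; return 1; }
    done
    echo "manifest looks sane"
}

# Illustrative example: a minimal manifest in a throwaway deploy dir.
demo=$(mktemp -d)
cat > "$demo/manifest.json" <<'EOF'
{
  "name": "com.example.myproject",
  "version": "0.1",
  "framework": "ubuntu-sdk-15.04",
  "architecture": "armhf"
}
EOF
check_deploy "$demo"    # prints: manifest looks sane
```

In real use, point check_deploy at /tmp/deploy-myproject instead of the demo directory.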

In order for that step to work correctly, all files that should end up in the click package need to be covered by the INSTALL targets. The app templates in the SDK give a good hint on how this is done.

The deploy directory now contains the directory structure of the final click package.

Run click build on the deploy directory

The last step is to build the actual click package. This command needs to be executed outside the chroots, simply because the click command is not installed inside them by default. All files inside /tmp/deploy-myproject will be put into the click package, and a click review is executed. The review tells you whether the click package is valid and can be uploaded to the store.


If all went well, the newly created click package should show up in the directory where click was executed. It can now be uploaded to the store or installed on a device.

Read more
facundo


A few years ago I started finding that I would read (or be given) recommendations for places to eat, have a drink, play pool, etc., and later, out on the street, I wouldn't remember them and would end up going somewhere at random.

So it occurred to me to start recording the places on a map. The solution I used was based on Google Maps: in the web interface I created a layer of my own, onto which I started loading all those places. Then, on the phone, I would open Google Maps, tell it to show that layer, and there was the map with lots of little stars (one for each place I had loaded), so I could see what was nearby, what was on my way, etc.

Over time, it started to get complicated.

At one point, Google decided that the phone version of Maps would no longer show "custom layers" (that is, the layers you created yourself). In other words, I could no longer see my data! I worked around this by installing an old version of Google Maps on the phone (which is not easy to do, but it worked). Later, Google started making layers harder to use in the web version too. And a few months ago it stopped serving that information altogether, so even though my phone had a version that requested those layers from the server, the server no longer answered.

This photo is old, but I love it

In parallel, for a good couple of years I have wanted to move away from Google services wherever possible, and so in recent months I started using OpenStreetMap ("OSM") maps, on the recommendation of Nico, Humitos and Marcos Dione. Since the middle of last year I have also had them on the phone, first through the great OsmAnd application, and for the last couple of weeks through MAPS.ME (which is quite a bit faster at displaying the data, and I think is better at deciding where to show street names, which matters).

The big advantage of OsmAnd and MAPS.ME is that they use OpenStreetMap maps (which are better in quality than Google Maps', and are also open and collaborative), and that they work offline. That is, you download the maps you care about (for example, Argentina's) while you have a good internet connection, and from then on the map lives on your phone, so you don't need internet on the street to consult it.

But although I was happy with the generic "maps" part of the solution, I was still missing the "note down my places" part. Until Humitos recommended umap, where you can create layers of places on top of OpenStreetMap maps (there are plenty of sites that take OSM data and build services on top of it; examples he gave me: his own "points of interest" site, another with photos of cities, and one where people record fruit trees).

So on that site I created my map of places for going out (I didn't rebuild it from scratch; I imported what I had previously exported from Google Maps). To get the data onto my phone, I exported a KML file, emailed it to myself, and told the phone to open it with MAPS.ME.

And that's it :)

Read more
facundo

There are days at work...


There are days at work when you have a job to do: you plan it, you meet with people, you decide what will be done, you split it all into three or four parts, you do each one (with tests and everything), all good, your work gets reviewed, it lands in trunk, it goes to production, all very nice, you look at the metrics, they go up and down just as they should, and you are happy.

Then there are other days at work when you start looking at something and say "this can't be right"; you start tracking down why that number is there and realize the logs have a problem; so you want to cross-check against the metrics, and realize the metrics are missing data; you decide to cross-reference with another figure and realize the files holding it haven't been synchronized yet, so you have to ask for them and it takes three or four hours to get them; and when you can finally cross-reference, you realize you should have been recording yet another number, but not all is lost because you can derive it indirectly; you write a script to parse a quintillion records, it gives a more or less coherent result, but you still have to work out how the problem can even be happening; you look at the code and realize that function is called from seven places, of which you only remembered three, and for two of those seven you have no data on how they are called...

Everything is broken

In the end, most of the time it all finishes with a happy ending, but you really do spend one, two or three days collectively scratching your head with your workmates until the riddle is solved.

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20150407 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Vivid Development Kernel

Our Vivid kernel remains based on the upstream v3.19.3 stable kernel.
Vivid kernel freeze is this Thurs Apr 9. We are still chasing down some
recent regressions, but we intend to prepare and upload our proposed
final kernel for Vivid no later than tomorrow Wed Apr 8. If you have
any patches which need to land for 15.04's release, please let us know
and get them submitted to the list now.
—–
Important upcoming dates:
Thurs Apr 09 – Kernel Freeze (~2 days away)
Thurs Apr 23 – 15.04 Release (~2 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Utopic/Trusty/Precise/Lucid

Status for the main kernels, until today:

  • Lucid – None (no update)
  • Precise – Testing & Verification
  • Trusty – Testing & Verification
  • Utopic – Testing & Verification

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    Current cycle: 20-Mar through 11-Apr
    ====================================================================
    20-Mar Last day for kernel commits for this cycle
    22-Mar – 28-Mar Kernel prep week.
    29-Mar – 11-Apr Bug verification; Regression testing; Release

    NOTE: Lucid goes EOL on April 30.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more
Corey Bryant

== Agenda ==

* Review ACTION points from previous meeting
* V Development
* https://wiki.ubuntu.com/VividVervet/ReleaseSchedule
* http://reqorts.qa.ubuntu.com/reports/rls-mgr/rls-v-tracking-bug-tasks.html#ubuntu-server
* Server & Cloud Bugs (caribou)
* Weekly Updates & Questions for the QA Team (matsubara)
* Weekly Updates & Questions for the Kernel Team (smb, sforshee, arges)
* Ubuntu Server Team Events
* Open Discussion
* Announce next meeting date, time and chair
* ACTION: meeting chair (of this meeting, not the next one) to carry out post-meeting procedure (minutes, etc) documented at https://wiki.ubuntu.com/ServerTeam/KnowledgeBase

== Minutes ==

==== Weekly Updates & Questions for the QA Team (matsubara) ====

* matsubara reports that some of the server smoke tests are still failing. He’ll investigate and report bugs as necessary.
* ”LINK:”: http://d-jenkins.ubuntu-ci:8080/view/Vivid/view/Smoke%20Testing/

==== Weekly Updates & Questions for the Kernel Team (smb, sforshee, arges) ====

* smb reports that two bcache bugs need feedback: bug [[http://launchpad.net/bugs/1425288|1425288]] and bug [[http://launchpad.net/bugs/1425128|1425128]]. coreycb left the action in place for James Page to have a look at those.

* smb reports that the UE is sprinting next week, so there is a high chance he might miss next week's IRC meeting (and probably the same for arges and sforshee).

==== Meeting Actions ====

* James to give feedback on bugs 1425288 and 1425128

==== Agree on next meeting date and time ====

Next meeting will be on Tuesday, Apr 14th at 16:00 UTC in #ubuntu-meeting.

Read more
bmichaelsen

It's all just nicked and stolen,
just lifted and robbed.
My apologies, I took that liberty.
— die Prinzen, Alles nur geklaut ("It's all just stolen")

So, you might have noticed that there was no April Fools post from me this year, unlike previous years. One idea I had was giving LibreOffice vi-key bindings — except that apparently already exists: vibreoffice. So I went looking for something else and found odpdown by Thorsten, who just started to work on LibreOffice fulltime again, and reading about it I always had the thought that it would be great to be able to run this right from your favourite editor: Vim.

And indeed: That is not hard to do. Here is a very raw video showing how to run presentations right out of vim:

Now, this is a quick hack, Linux only, requires you to have Python3 UNO-bindings installed etc. If you want to play with it: clone the repo from github and get started. Kudos go out to Thorsten for the original odpdown on which this is piggybacking (“das ist alles nur geklaut”). So: Have fun with this — I will have to install vibreoffice now.


Read more
Barry Warsaw

Background

Snappy Ubuntu Core is a new edition of the Ubuntu you know and love, with some interesting new features, including atomic, transactional updates, and a much more lightweight application deployment story than traditional Debian/Ubuntu packaging.  Much of this work grew out of our development of a mobile/touch based version of Ubuntu for phones and tablets, but now Ubuntu Core is available for clouds and devices.

I find the transactional nature of upgrades to be very interesting.  While you still get a perfectly normal Ubuntu system, your root file system is read-only, so traditional apt-get based upgrades don't work.  Instead, your system version is image based; today you are running image 231 and tomorrow a new image is released to get you to 232.  When you upgrade to the new image, you get all the system changes.  We support both full and delta upgrades (the latter reducing bandwidth), and even phased updates so that we can roll out new upgrades and quickly pull them from the server side if we notice a problem.  Snappy devices even support rolling back upgrades on a single device, by using a dual-partition root file system.  Phones generally don't support this due to lack of available space on the device.

Of course, the other really interesting thing about Snappy is the lightweight, flexible approach to deploying applications.  I still remember my early days learning how to package software for Debian and Ubuntu, and now that I'm both an Ubuntu Core Developer and Debian Developer, I understand pretty well how to properly package things.  There's still plenty of black art involved, even for relatively easy upstream packages such as distutils/setuptools-based Python packages available on the Cheeseshop (er, PyPI).  The Snappy approach on Ubuntu Core is much more lightweight and easy, and it doesn't require the magical approval of the archive elves, or the vagaries of PPAs, to make your applications quickly available to all your users.  There's even a robust online store for publishing your apps.

There's lots more about Snappy apps and Ubuntu Core that I won't cover here, so I encourage you to follow the links for more information.  You might also want to stop now and take the tour of Ubuntu Core (hey, I'm a poet and I didn't even realize it).

In this post, I want to talk about building and deploying snappy Python applications.  Python itself is not an officially supported development framework, but we have a secret weapon.  The system image client upgrader -- i.e. the component on the devices that checks for, verifies, downloads, and applies atomic updates -- is written in Python 3.  So the core system provides us with a full-featured Python 3 environment we can utilize.

The question that came to mind is this: given a command-line application available on PyPI, how easy is it to turn into a snap and install it on an Ubuntu Core system?  With some caveats I'll explore later, it's actually pretty easy!

Basic approach

The basic idea is this: let's take a package on PyPI, which may have additional dependencies also on PyPI, download them locally, and build them into a snap that we can install on an Ubuntu Core system.

The first question is, how do we build a local version of a fully-contained Python application?  My initial thought was to build a virtual environment using virtualenv or pyvenv, and then somehow turn that virtual environment into a snap.  This turns out to be difficult in practice because virtual environments aren't really designed for this.  They have issues with being relocated for example, and they can contain a lot of extraneous stuff that's great for development (a virtual environment's actual purpose) but unnecessary baggage for our use case.

My second thought involved turning a Python application into a single file executable, and from there it would be fairly easy to snappify.  Python has a long tradition of such tools, many with varying degrees of cross platform portability and standalone-ishness.  After looking again at some oldies but goodies (e.g. cx_freeze) and some new offerings, I decided to start with pex.

pex is a nice tool developed by Brian Wickman and the Twitter folks which they use to deploy Python applications to their production environment.  pex takes advantage of modern Python's support for zip imports, and a clever trick of zip files.

Python supports direct imports (of pure Python modules) from zip files, and the python executable's -m option works even when the module is inside a zip file.  Further, the presence of a __main__.py file within a package can be used as shorthand for executing the package, e.g. python -m myapp will run myapp/__main__.py if it exists.

Zip files are interesting because their index is at the end of the file.  This allows you to put whatever you want at the front of the file and it will still be considered a zip file.  pex exploits this by putting a shebang in the first line of the file, e.g. #!/usr/bin/python3 and thus the entire zip file becomes a single file executable of Python code.
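The shebang-plus-zip trick is easy to reproduce with nothing but the standard library. The sketch below (file names are made up for illustration) builds a tiny single-file executable the same way pex does, by prepending a shebang to an otherwise ordinary zip:

```python
import os
import stat
import subprocess
import sys
import tempfile
import zipfile

# Build a tiny "application": a zip archive whose __main__.py is the entry point.
workdir = tempfile.mkdtemp()
app = os.path.join(workdir, "hello.pyz")
with zipfile.ZipFile(app, "w") as zf:
    zf.writestr("__main__.py", "print('hello from inside the zip')\n")

# Prepend a shebang line.  The zip index lives at the *end* of the file,
# so the result is still a valid zip archive -- and now also a script.
with open(app, "rb") as f:
    payload = f.read()
with open(app, "wb") as f:
    f.write(b"#!" + sys.executable.encode() + b"\n" + payload)
os.chmod(app, os.stat(app).st_mode | stat.S_IXUSR)

# The kernel hands the file to the interpreter named in the shebang,
# and Python executes the embedded __main__.py via its zip import support.
result = subprocess.run([app], capture_output=True, text=True)
print(result.stdout.strip())
```

Running this prints the greeting from inside the zip, which is essentially what happens every time a pex file executes.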

There are of course, plenty of caveats.  Probably the main one is that Python cannot import extension modules directly from the zip, because the dlopen() function call only takes a file system path.  pex handles this by marking the resulting file as not zip safe, so the zip is written out to a temporary directory first.

The other issue of course, is that the zip file must contain all the dependencies not present in the base Python.  pex is actually fairly smart here, in that it will chase dependencies, much like pip and it will include those dependencies in the zip file.  You can also specify any missed dependencies explicitly on the pex command line.

Once we have the pex file, we need to add the required snappy metadata and configuration files, and run the snappy command to generate the .snap file, which can then be installed into Ubuntu Core.  Since we can extract almost all of the minimal required snappy metadata from the Python package metadata, we need just a little input from the user, and the rest of the work can be automated.

We're also going to avail ourselves of a convenient cheat.  Because Python 3 and its standard library are already part of Ubuntu Core on a snappy device, we don't need to worry about any of those dependencies.  We're only going to support Python 3, so we get its full stdlib for free.  If we needed access to Python 2, or any external libraries or add-ons that can't be made part of the zip file, we would need to create a snappy framework for that, and then utilize that framework for our snappy app.  That's outside the scope of this article though.

Requirements

To build Python snaps, you'll need to have a few things installed.  If you're using Ubuntu 15.04, just apt-get install the appropriate packages.  Otherwise, you can get any additional Python requirements by building a virtual environment and installing tools like pex and wheel into it, then invoking pex from that virtual environment.  But let's assume you have the Vivid Vervet (Ubuntu 15.04); here are the packages you need:
  •  python3
  •  python-pex-cli
  •  python3-wheel
  •  snappy-tools
  •  git
You'll also want a local git clone of https://gitlab.com/warsaw/pysnap.git which provides a convenient script called snap.py for automating the building of Python snaps.  We'll refer to this script extensively in the discussion below.

For extra credit, you might want to get a copy of Python 3.5 (unreleased as of this writing).  I'll show you how to do some interesting debugging with Python 3.5 later on.

From PyPI to snap in one easy step

Let's start with a simple example: world is a very simple script that can provide forward and reverse mappings of ISO 3166 two letter country codes (at least as of before ISO once again paywalled the database).  So if you get an email from guido@example.py you can find out where the BDFL has his secret lair:

$ world py
py originates from PARAGUAY

world is a pure-Python package with both a library and a command line interface. To get started with the snap.py script mentioned above, you need to create a minimal .ini file, such as:

[project]
name: world

[pex]
verbose: true

Let's call this file world.ini.  (In fact, you'll find this very file under the examples directory in the snap git repository.)  What do the various sections and variables control?
  •  name is the name of the project on PyPI.  It's used to look up metadata about the project on PyPI via PyPI's JSON API.
  •  verbose just controls whether to pass -v to the underlying pex command.
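For reference, the `name: value` syntax above is standard configparser material (colons are an accepted delimiter). A minimal sketch of reading those two settings — hypothetical code, not necessarily what snap.py does internally:

```python
import configparser

# The world.ini contents from above, inlined to keep the demo self-contained.
ini_text = """\
[project]
name: world

[pex]
verbose: true
"""

config = configparser.ConfigParser()
config.read_string(ini_text)

name = config.get("project", "name")           # project to look up on PyPI
verbose = config.getboolean("pex", "verbose")  # whether to pass -v to pex

print(name, verbose)
```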
Now, to create the snap, just run:

$ ./snap.py examples/world.ini

You'll see a few progress messages and a warning which you can ignore.  Then out spits a file called world_3.1.1_all.snap.  Because this is pure Python, it's architecture independent.  That's a good thing because the snap will run on any device, such as a local amd64 kvm instance, or an ARM-based Ubuntu Core-compatible Lava Lamp.

Armed with this new snap, we can just install it on our device (in this case, a local kvm instance) and then run it:

$ snappy-remote --url=ssh://localhost:8022 install world_3.1.1_all.snap
$ ssh -p 8022 ubuntu@localhost
ubuntu@localhost:~$ world.world py
py originates from PARAGUAY

From git repository to snap in one easy step

Let's look at another example, this time using a stupid project that contains an extension module. This aptly named package just prints a yes for every -y argument, and no for every -n argument.

The difference here is that stupid isn't on PyPI; it's only available via git.  The snap.py helper is smart enough to know how to build snaps from git repositories.  Here's what the stupid.ini file looks like:

[project]
name: stupid
origin: git https://gitlab.com/warsaw/stupid.git

[pex]
verbose: yes

Notice that there's a [project]origin variable.  This just says that the origin of the package isn't PyPI, but instead a git repository, and then the public repo url is given.  The first word is just an arbitrary protocol tag; we could eventually extend this to handle other version control systems or origin types.  For now, only git is supported.

To build this snap:

$ ./snap.py examples/stupid.ini

This clones the repository into a temporary directory, builds the Python package into a wheel, and stores that wheel in a local directory.  pex has the ability to build its pex file from local wheels without hitting PyPI, which we use here.  Out spits a file called stupid_1.1a1_all.snap, which we can install in the kvm instance using the snappy-remote command as above, and then run it after ssh'ing in:

ubuntu@localhost:~$ stupid.stupid -ynnyn
yes
no
no
yes
no

Watch out though, because this snap is really not architecture-independent. It contains an extension module which is compiled on the host platform, so it is not portable to different architectures.  It works on my local kvm instance, but sadly not on my Lava Lamp.

Entry points

pex currently requires you to explicitly name the entry point of your Python application.  This is the function which serves as your main and it's what runs by default when the pex zip file is executed.

Usually, a Python package will define its entry point in its setup.py file, like so:

setup(
    ...
    entry_points={
        'console_scripts': ['stupid = stupid.__main__:main'],
        },
    ...
    )

And if you have a copy of the package, you can run a command to generate the various package metadata files:

$ python3 setup.py egg_info

If you look in the resulting stupid.egg_info/entry_points.txt file, you see the entry point clearly defined there.  Ideally, either pex or snap.py would just figure this out automatically.  As it turns out, there's already a feature request open on pex for this, but in the meantime, how can we auto-detect the entry point?

For the stupid example, it's pretty easy.  Once we've cloned its git repository, we just run the egg_info command and read the entry_points.txt file.  Later, we can build the project's binary wheel from the same git clone.
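Since entry_points.txt itself uses INI syntax, reading it is a few lines of configparser. A sketch, with the file contents inlined rather than read from an egg-info directory:

```python
import configparser

# What stupid.egg_info/entry_points.txt looks like after `setup.py egg_info`.
entry_points_txt = """\
[console_scripts]
stupid = stupid.__main__:main
"""

config = configparser.ConfigParser()
config.read_string(entry_points_txt)

# pex's entry-point option wants the "package.module:function" form,
# i.e. the right-hand side of the console_scripts assignment.
script_name, entry_point = next(iter(config["console_scripts"].items()))
print(script_name, entry_point)
```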

It's a bit more problematic with world though because the package isn't downloaded from PyPI until pex runs, but the pex command line requires that you specify the entry point before the download occurs.

We can handle this by supporting an entry_point variable in the snap's .ini file.  For example, here's the world.ini file with an explicit entry point setting:

[project]
name: world
entry_point: worldlib.__main__:main

[pex]
verbose: true

What if we still wanted to auto-detect the entry point?  We could of course, download the world package in snap.py and run the egg-info command over that.  But pex also wants to download world and we don't want to have to download it twice.  Maybe we could download it in snap.py and then build a local wheel file for pex to consume.

As it turns out there's an easier way.

Unfortunately, package egg-info metadata is not available on PyPI, although arguably it should be.  Fortunately, Vinay Sajip runs an external service that does make the metadata available, such as the metadata for world.

snap.py makes the entry_point variable optional, and if it's missing, it will grab the package metadata from a link like that given above.  An error will be thrown if the file can't be found, in which case, for now, you'd just add the [project]entry_point variable to the .ini file.

A little more snap.py detail

The snap.py script is more or less a pure convenience wrapper around several independent tools.  pex of course for creating the single executable zip file, but also the snappy command for building the .snap file.  It also utilizes python3 setup.py egg_info where possible to extract metadata and construct the snappy facade needed for the snappy build command.  Less typing for you!  In the case of a snap built from a git repository, it also performs the git cloning, and the python3 setup.py bdist_wheel command to create the wheel file that pex will consume.

There's one other important thing snap.py does: it fixes the resulting pex file's shebang line.  Because we're running these snaps on an Ubuntu Core system, we know that Python 3 will be available in /usr/bin/python3.  We want the pex file's shebang line to be exactly this.  While pex supports a --python option to specify the interpreter, it doesn't take the value literally.  Instead, it takes the last path component and passes it to /usr/bin/env so you end up with a shebang line like:

#!/usr/bin/env python3

That might work, but we don't want the pex file to be subject to the uncertainties of the $PATH environment variable.

One of the things that snap.py does is repack the pex file.  Remember, it's just a zip file with some magic at the top (that magic is the shebang), so we just read the file that pex spits out, and rewrite it with the shebang we want.  Eventually, pex itself will handle this and we won't need to do that anymore.
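The repacking step is simpler than it sounds, because only the prefix changes. Here is a sketch of such a shebang rewrite (fix_shebang is a name I made up for illustration, not necessarily snap.py's actual code):

```python
import os
import stat
import tempfile

def fix_shebang(path, interpreter="/usr/bin/python3"):
    """Replace the leading shebang of a pex-style zip executable in place.

    The zip index is at the end of the file, so swapping the first line
    leaves the archive itself intact.
    """
    with open(path, "rb") as f:
        data = f.read()
    if data.startswith(b"#!"):
        data = data.split(b"\n", 1)[1]  # drop the old shebang line
    with open(path, "wb") as f:
        f.write(b"#!" + interpreter.encode() + b"\n" + data)
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)

# Demo on a throwaway file standing in for a real pex.
demo = tempfile.NamedTemporaryFile(delete=False, suffix=".pex")
demo.write(b"#!/usr/bin/env python3\nPK...zip bytes...")
demo.close()
fix_shebang(demo.name)
with open(demo.name, "rb") as f:
    fixed = f.read()
print(fixed)
```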

Debugging

While I was working out the code and techniques for this blog post, I ran into an interesting problem.  The world script would crash with some odd tracebacks.  I don't have the details anymore and they'd be superfluous, but suffice to say that the tracebacks really didn't help in figuring out the problem.  It would work in a local virtual environment build of world using either the (pip installed) PyPI package or run from the upstream git repository, but once the snap was installed in my kvm instance, it would traceback.  I didn't know if this was a bug in world, in the snap I built, or in the Ubuntu Core environment.  How could I figure that out?

Of course, the go to tool for debugging any Python problem is pdb.  I'll just assume you already know this.  If not, stop everything and go learn how to use the debugger.

Okay, but how was I going to get a pdb breakpoint into my snap?  This is where Python 3.5 comes in!

PEP 441, which has already been accepted and implemented in what will be Python 3.5, aims to improve support for zip applications.  Apropos this blog post, the new zipapp module can be used to zip up a directory into single executable file, with an argument to specify the shebang line, and a few other options.  It's related to what pex does, but without all the PyPI interactions and dependency chasing.  Here's how we can use it to debug a pex file.

Let's ignore snappy for the moment and just create a pex of the world application:

$ pex -r world -o world.pex -e worldlib.__main__:main

Now let's say we want to set a pdb breakpoint in the main() function so that we can debug the program, even when it's a single executable file.  We start by unzipping the pex:

$ mkdir world
$ cd world
$ unzip ../world.pex

If you poke around, you'll notice a __main__.py file in the current directory.  This is pex's own main entry point.  There are also two hidden directories, .bootstrap and .deps.  The former is more pex scaffolding, but inside the latter you'll see the unpacked wheel directories for world and its single dependency.

Drilling down a little farther, you'll see that inside the world wheel is the full source code for world itself.  Set a break point by visiting .deps/world-3.1.1-py2.py3-none-any.whl/worldlib/__main__.py in your editor.  Find the main() function and put this right after the def line:

import pdb; pdb.set_trace()

Save your changes and exit your editor.

At this point, you'll want to have Python 3.5 installed or available.  Let's assume that by the time you read this, Python 3.5 has been released and is the default Python 3 on your system.  If not, you can always download a pre-release of the source code, or just build Python 3.5 from its Mercurial repository.  I'll wait while you do this...

...and we're back!  Okay, now armed with Python 3.5, and still inside the world subdirectory you created above, just do this:

$ python3.5 -m zipapp . -p /usr/bin/python3 -o ../world.dbg

Now, before you can run ../world.dbg and watch the break point do its thing, you need to delete pex's own local cache, otherwise pex will execute the world dependency out of its cache, which won't have the break point set. This is a wart that might be worth reporting and fixing in pex itself.  For now:

$ rm -rf ~/.pex
$ ../world.dbg

And now you should be dropped into pdb almost immediately.

If you wanted to build this debugging pex into a snap, just use the snappy build command directly.  You'll need to add the minimal metadata yourself (since currently snap.py doesn't preserve it).  See the Snappy developer documentation for more details.

Summary and Caveats


There's a lot of interesting technology here; pex for building single file executables of Python applications, and Snappy Ubuntu Core for atomic, transactional system updates and lightweight application deployment to the cloud and things.  These allow you to get started doing some basic deployments of Python applications.  No doubt there are lots of loose ends to clean up, and caveats to be aware of.  Here are some known ones:

  • All of the above only works with Python 3.  I think that's a feature, but you might disagree. ;)   This works on Ubuntu Core for free because Python 3 is an essential piece of the base image.  Working out how to deploy Python 2 as a Snappy framework would be an interesting exercise.
  • When we build a snap from a git repository for an application that isn't on PyPI, I don't currently have a way to also grab some dependencies from PyPI.  The stupid example shown here doesn't have any additional dependencies so it wasn't a problem.  Fixing this should be a fairly simple matter of engineering on the snap.py wrapper (pull requests welcome!)
  • We don't really have a great story for cross-compilation of extension modules. Solving this is probably a fairly complex initiative involving the distros, setuptools and other packaging tools, and upstream Python.  For now, your best bet might be to actually build the snap on the actual target hardware.
  • Importing extension modules requires a file system cache because of limitations in the dlopen() API.  There have been rumors of extensions to glibc which would provide a dlopen()-from-memory type of API which could solve this, or upstream Python's zip support may want to grow native support for caching.
Even with these caveats, it's pretty easy to turn a Python application into a Snappy Ubuntu Core application, publish it to the world, and profit!  So what are you waiting for?  Snap to it!

Read more
rvr

Inspired by Crea un voltímetro tan solo con tu placa Arduino y un par de cables (How to create a voltmeter with just an Arduino board and a couple of wires), I've created the following tutorial to show how easy and fun it is to program with Visualino. Here you'll learn how to measure the voltage of a battery with Arduino. A multimeter is often used for that, but multimeters aren't smart. Arduinos are!

Arduino boards have two sets of pins. The digital pins are the ones we usually use to blink an LED. Digital means they have only two states: ON and OFF. But we also have the analog pins, which can measure a voltage and convert it to a number that can be read. To build the circuit, these components are needed:

  • Arduino Uno or Nano (or any other one).
  • Two 1K Ohm resistors.
  • Some wires.
  • A battery: in my case, 9V.

Next, place the components like this:

[Image: 20150401divisor — circuit diagram]
Now we are ready to program the Arduino board using Visualino. You can see how to do that in the following video. First, we use the blocks to create the program. The program does the following:

  1. It reads a number from the analog pin #0 and stores it in the "read" variable.
  2. It re-scales that number from 0-1023 to 0-900 and stores the result in "voltage". My battery is 9V. If your battery has a maximum of 3 volts, use the appropriate maximum value (e.g. 300).
  3. The measured voltage is printed.
  4. After half a second, the measurement is repeated.
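The re-scaling in step 2 is just a linear map, like Arduino's map() function. Sketched in Python with the 9V values from this example:

```python
def rescale(reading, in_max=1023, out_max=900):
    """Map a 10-bit ADC reading (0..in_max) to 0..out_max.

    With out_max=900 a full-scale reading prints as 900, i.e. 9.00 V
    expressed in hundredths of a volt.  Integer math mirrors Arduino's
    map(value, 0, in_max, 0, out_max).
    """
    return reading * out_max // in_max

print(rescale(0), rescale(512), rescale(1023))
```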

After the program has been created, we build it and transfer it to the Arduino board.

As we can see in the video, Visualino has a Serial monitor where we can watch the voltage readings. That was easy, huh? :) But to make things even easier, Visualino can convert those numbers in the Serial monitor into a neat real-time chart!

Screenshot from 2015-04-01 19:09:20

 

And that's it! Stay tuned as other awesome features will be coming soon to Visualino. Enjoy!

Read more
Diogo Matsubara

== Agenda ==

* Review ACTION points from previous meeting
* V Development
* https://wiki.ubuntu.com/VividVervet/ReleaseSchedule
* http://reqorts.qa.ubuntu.com/reports/rls-mgr/rls-v-tracking-bug-tasks.html#ubuntu-server
* Server & Cloud Bugs (caribou)
* Weekly Updates & Questions for the QA Team (matsubara)
* Weekly Updates & Questions for the Kernel Team (smb, sforshee, arges)
* Ubuntu Server Team Events
* Open Discussion
* Announce next meeting date, time and chair
* ACTION: meeting chair (of this meeting, not the next one) to carry out post-meeting procedure (minutes, etc) documented at https://wiki.ubuntu.com/ServerTeam/KnowledgeBase

== Minutes ==

==== Weekly Updates & Questions for the QA Team (matsubara) ====

* matsubara reports that some of the server smoke tests are still failing. He’ll investigate and report bugs as necessary.
* ”LINK:”: http://d-jenkins.ubuntu-ci:8080/view/Vivid/view/Smoke%20Testing/

==== Weekly Updates & Questions for the Kernel Team (smb, sforshee, arges) ====

* smb reports that two bcache bugs need feedback: bug [[http://launchpad.net/bugs/1425288|1425288]] and bug [[http://launchpad.net/bugs/1425128|1425128]]. matsubara assigned an action to James Page to have a look at those.

==== Meeting Actions ====

* James to give feedback on bugs 1425288 and 1425128

==== Agree on next meeting date and time ====

Next meeting will be on Tuesday, Apr 7th at 16:00 UTC in #ubuntu-meeting.

Read more
rvr

To celebrate the 10th anniversary of Arduino, and the Arduino Day, today I am proud to present Visualino. What is it? It's a visual programming environment for Arduino, a project that I began last year and have been actively developing over the last months, with the help of my friends at Arduino Gran Canaria.

Arduino is a microcontroller board that lets you connect sensors and other electronic components. It has a companion program called the Arduino IDE, which makes it really easy to program the microcontroller. The language is based on C/C++, but the functions are quite easy to learn. This ease of use is part of the revolution. Making LEDs blink and moving robots with Arduino is easy and fun. But it can be easier! Kids and adults who don't know programming often struggle with the strictness of C/C++ coding: commas and brackets must be correctly placed, or the program won't run. How to make it even more intuitive? Visual programming to the rescue!

Scratch is a popular visual programming environment for kids, developed at MIT. Instead of keyboards and code, kids use the mouse and blocks to build programs like a puzzle. There is also an extension called Scratch for Arduino that allows controlling the board from Scratch. However, the program runs inside Scratch, so the Arduino board must always remain connected to the PC.

So, what does Visualino do? It's a Scratch-like program: it lets you create programs for Arduino like a puzzle. But it programs the Arduino board directly, so the PC connection is no longer needed at runtime. It also generates the code in real time, so the user knows what's happening. The environment is very similar to the Arduino IDE, with the same main options: Verify, Build, Save, Load and Monitor. Visualino can be seen at work in this screencast:

Visualino is based on Google Blockly and bq's bitbloqs. It is open source, multiplatform and multilanguage. It just requires Arduino 1.6, which is the actual engine used to program Arduino boards. You can download the beta version right now for Ubuntu, Mac and Windows. The code is available at github.com/vrruiz/visualino. Right now it works out of the box. It still needs some documentation, and translations to Catalan, Italian and Portuguese will be welcome.

  • Screenshot from 2015-03-25 15:27:30
  • Screenshot from 2015-03-25 15:28:04

Visualino was presented this week to a group of educators at an Arduino Workshop, and next month, we'll have a three-hour session to teach how to use it. So I hope it will be used soon at schools here at home.

So, go download it and use it. Feedback is welcome. And stay tuned, as there are some niceties coming very soon :)

Read more
UbuntuTouch

[Original] Ubuntu OS convergence (video in English)

In this video we can see how the Ubuntu system keeps evolving. In the future, phones, tablets, TVs and desktops will run a single operating system, and the Ubuntu phone OS is laying the groundwork for that.


http://v.youku.com/v_show/id_XOTA5NDA0OTUy.html

Author: UbuntuTouch, published 2015/3/11 13:26:59. Original link

Read more
UbuntuTouch

[Original] Preparing for Ubuntu phone development training

In this article we describe how students should prepare before the training. Setting up and installing your environment ahead of time is an essential step for a successful training session; otherwise we will waste a lot of valuable classroom time!


1) Install your SDK


If you want to install Ubuntu on your own computer

Students can follow the article "Installing the Ubuntu SDK" to set up their Ubuntu system and SDK, then use "Creating your first Ubuntu for phone app" to verify that the environment works. This usually means installing multiple systems on one computer, or using a virtual machine (the emulator may not perform well in a VM, and currently cannot start properly inside one).

If you want a Live USB made specifically for Ubuntu phone development

See the article "How to make an Ubuntu SDK Live USB drive" to create a bootable Live USB. Plug it into a USB port on your computer and boot into Ubuntu. The drive already has the complete development SDK installed; no additional software is needed.

a) Enable hardware virtualization in the BIOS, which speeds up the emulator
b) Set the boot order in the BIOS so the USB drive boots first, or press F12 at startup and choose to boot Ubuntu from USB

After booting into Ubuntu, the Ubuntu SDK is fully installed and you can start developing right away. We recommend "Creating your first Ubuntu for phone app" to verify that the environment works.


During development, if installing to a phone requires a password to unlock it, the password is "0000".

2) Introduction to the Ubuntu phone


Developers unfamiliar with the Ubuntu phone can first watch the video "How to use the Ubuntu phone". For a deeper look at the Ubuntu SDK, watch "How to use the Ubuntu SDK (video)". The official Ubuntu phone promotional videos are also worth watching.

You can download the Ubuntu phone introduction slides at the "Ubuntu phone introduction" link, and watch the corresponding video at the linked address.


3) QML app development


Flickr app development

Read the article "Developing a Flickr app with the Ubuntu SDK" and watch the video "QML development of Ubuntu phone apps (video)". Slides: "Ubuntu app development".

The tutorial source code is at: bzr branch lp:~liu-xiao-guo/debiantrial/flickr7
Run the command above in a shell to download the source.

DeveloperNews RSS reader

First read the articles "Creating an Ubuntu app from scratch: a small RSS reader" and "How to use conditional layouts in Ubuntu". The video is "Developing Qt Quick QML apps on the Ubuntu platform (video)".

The tutorial source code is at: bzr branch lp:~liu-xiao-guo/debiantrial/developernews4

Run the command above in a shell to download the source.


The website also has many more tutorials!

4) Scope development


Start with the video "An introduction to Ubuntu Scopes and their development workflow" to learn how Scope development works on Ubuntu OS.

Read the article "Creating a dianping Scope on Ubuntu OS (Qt JSON)" and watch the video "How to develop a Scope on Ubuntu OS (video)". Another video for the tutorial can be watched at the linked address.

Slides: "Scope development". A recording of the talk can be watched at the linked address.

The tutorial source code is at: bzr branch lp:~liu-xiao-guo/debiantrial/dianpianclient8
Run the command above in a shell to download the source.

More Scope example code can be found at the link.


5) HTML5 development


See the article "Creating an HTML5 app on the Ubuntu phone platform" to learn how to develop HTML5 apps on Ubuntu. Download the source with:

git clone https://gitcafe.com/ubuntu/html-rssreader6.git

HTML5 development slides: "HTML5 development on Ubuntu", plus the slides video.

You can also use the online Webapp generator to produce a click package for your favourite web page. See the tutorial "How to use the online Webapp generator to build a package".

More examples:
  • Baidu Translate: bzr branch lp:~liu-xiao-guo/debiantrial/baidutranslator
  • Dictionary: bzr branch lp:~liu-xiao-guo/debiantrial/meanings

6) More training materials


We also have more training materials in English; developers can download them at the link.


If you have any questions, please comment on the article in question and I will do my best to answer. You can also discuss in the dedicated Ubuntu phone forum.



If you need network access during class, use the following access point and password:

CM: Huawei-E5375-E16E  Password: ji69ea97

The phone unlock password is: 0000


Author: UbuntuTouch, published 2015/1/4 15:36:54. Original link

Read more