Canonical Voices

UbuntuTouch

QQuickImageProvider gives us a way to serve QPixmap-based and multi-threaded Image requests in QML. The requested file can even live on the network. Its advantages are:

  • Loading a QPixmap or QImage into an Image element instead of an actual image file
  • Loading images asynchronously in a separate thread
An image can then be accessed in the following way:

Column {
    Image { source: "image://colors/yellow" }
    Image { source: "image://colors/red" }
}

Obviously, yellow and red here are not file names; what they resolve to depends on the concrete implementation of requestImage in the QQuickImageProvider.
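
The "colors" provider above is not part of the example project below, but as a minimal sketch (assuming the usual Qt behaviour of returning a solid-coloured pixmap for ids such as "yellow" and "red"), it might be implemented roughly like this:

#include <QQuickImageProvider>
#include <QPixmap>
#include <QColor>

// Illustrative provider backing "image://colors/yellow" and "image://colors/red".
class ColorImageProvider : public QQuickImageProvider
{
public:
    ColorImageProvider()
        : QQuickImageProvider(QQuickImageProvider::Pixmap)
    {
    }

    QPixmap requestPixmap(const QString &id, QSize *size, const QSize &requestedSize)
    {
        // Fall back to a default size when the QML side does not request one.
        int width = requestedSize.width() > 0 ? requestedSize.width() : 100;
        int height = requestedSize.height() > 0 ? requestedSize.height() : 50;
        if (size)
            *size = QSize(width, height);

        QPixmap pixmap(width, height);
        pixmap.fill(QColor(id));   // "yellow", "red", ... are parsed as colour names
        return pixmap;
    }
};

Such a provider would be registered with the engine much like the "myprovider" registration shown later in main.cpp, e.g. engine->addImageProvider("colors", new ColorImageProvider);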

Below we walk through a concrete example of using QQuickImageProvider to fetch an image we need from the network.


myimageprovider.h


#ifndef MYIMAGEPROVIDER_H
#define MYIMAGEPROVIDER_H

#include <QQuickImageProvider>
class QNetworkAccessManager;

class MyImageProvider : public QQuickImageProvider
{
public:
    MyImageProvider(ImageType type, Flags flags = 0);
    ~MyImageProvider();
    QImage requestImage(const QString & id, QSize * size, const QSize & requestedSize);

protected:
    QNetworkAccessManager *manager;
};

#endif // MYIMAGEPROVIDER_H


myimageprovider.cpp


#include "myimageprovider.h"

#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QEventLoop>
#include <QUrl>
#include <QDebug>

MyImageProvider::MyImageProvider(ImageType type, Flags flags) :
    QQuickImageProvider(type,flags)
{
    manager = new QNetworkAccessManager;
}

MyImageProvider::~MyImageProvider()
{
    delete manager;
}

QImage MyImageProvider::requestImage(const QString &id, QSize *size, const QSize &requestedSize)
{
    qDebug() << "id: " << id;
    qDebug() << "requestedSize: " << requestedSize.width() << requestedSize.height();
    QUrl url("http://lorempixel.com/" + id);
    QNetworkReply* reply = manager->get(QNetworkRequest(url));
    QEventLoop eventLoop;
    QObject::connect(reply, SIGNAL(finished()), &eventLoop, SLOT(quit()));
    eventLoop.exec();
    if (reply->error() != QNetworkReply::NoError)
        return QImage();
    QImage image = QImage::fromData(reply->readAll());
    size->setWidth(image.width());
    size->setHeight(image.height());
    return image;
}

Above we fetch the file from the network address “http://lorempixel.com/” and convert it into a QImage. QQuickImageProvider requires us to reimplement the request method matching our image type (requestImage in this case); its API is as follows:

QQuickImageProvider(ImageType type, Flags flags = 0)
virtual ~QQuickImageProvider()
Flags flags() const
ImageType imageType() const
virtual QImage requestImage(const QString &id, QSize *size, const QSize &requestedSize)
virtual QPixmap requestPixmap(const QString &id, QSize *size, const QSize &requestedSize)
virtual QQuickTextureFactory *requestTexture(const QString &id, QSize *size, const QSize &requestedSize)

We can then refer to such an image in QML in the following way:

 Image { source: "image://myprovider/500/500/" }

Clearly, the source we see is not an actual file, and it begins with “image://”.

In our main.cpp we do the following:

#include "myimageprovider.h"

#include <QGuiApplication>
#include <QQuickView>
#include <QQmlEngine>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    QQuickView view;
    QQmlEngine *engine = view.engine();
    MyImageProvider *imageProvider = new MyImageProvider(QQmlImageProviderBase::Image);
    engine->addImageProvider("myprovider", imageProvider );
    view.setSource(QUrl(QStringLiteral("qrc:///Main.qml")));
    view.setResizeMode(QQuickView::SizeRootObjectToView);
    view.show();
    return app.exec();
}

Note that “myprovider” here matches the provider name used in the Image source above.

Our main.qml file is as follows:

main.qml


import QtQuick 2.0
import Ubuntu.Components 1.1

/*!
    \brief MainView with a Label and Button elements.
*/

MainView {
    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "imageprovider.liu-xiao-guo"

    /*
     This property enables the application to change orientation
     when the device is rotated. The default is false.
    */
    //automaticOrientation: true

    // Removes the old toolbar and enables new features of the new header.
    useDeprecatedToolbar: false

    width: units.gu(60)
    height: units.gu(85)

    Page {
        title: i18n.tr("imageprovider")

        Image {
            id: img
            anchors.centerIn: parent
            source: "image://myprovider/500/500/"
            anchors.fill: parent
            onStatusChanged: {
                if(status == Image.Ready)
                    indicator.running = false;
            }

            ActivityIndicator {
                id: indicator
                anchors.centerIn: parent
                running: false
            }

            MouseArea {
                anchors.fill: parent
                onClicked: {
                    indicator.running = true;
                    img.source = "image://myprovider/500/500/?seed=" + Math.random()
                }
            }
        }
    }
}


When we tap the image, it automatically fetches a new random image from the site and displays it:

  

The full source code of the project is available via: git clone https://gitcafe.com/ubuntu/imageprovider.git

Posted by UbuntuTouch on 2015/7/29 13:22:17

Read more
facundo

Piedra libre


Let's play hide-and-seek!


If I remember correctly, I took this photo in the woods behind a hotel in Brussels.

Read more
Nicholas Skaggs

I wanted to share a unique opportunity to get involved with Ubuntu and testing. Last cycle, as part of a datacenter shuffle, the automated installer testing that was occurring for Ubuntu flavors stopped running. The images were being tested automatically via a series of autopilot tests, written originally by the community (Thanks Dan et al.!). These tests are vital in helping reduce the burden of manual testing required for images by running through the base manual test cases for each image automatically each day.

When it was noticed the tests didn't run this cycle, wxl from Lubuntu accordingly filed an RT to discover what happened. Unfortunately, it seems the CI team within Canonical can no longer run these tests. The good news however is that we as a community can run them ourselves instead.

To start exploring the idea of self-hosting and running the tests, I initially asked Daniel Chapman to take a look. Given the impending landing of dekko in the default ubuntu image, Daniel certainly has his hands full. As such Daniel Kessel has offered to help out and begun some initial investigations into the tests and server needs. A big thanks to Daniel and Daniel!

But they need your help! The autopilot tests for ubiquity have a few bugs that need solving. A server and a Jenkins instance need to be set up, installed, and maintained. Finally, we need to think about reporting these results to places like the isotracker. For more information, you can read more about how to run the tests locally to give you a better idea of how they work.

The needed skillsets are diverse. Are you interested in helping make flavors better? Do you have some technical skills in writing tests, the web, Python, or running a Jenkins server? Or perhaps you are willing to learn? If so, please get in touch!


Read more
Colin Watson

Users of some email clients, particularly Gmail, have long had a problem filtering mail from Launchpad effectively.  We put lots of useful information into our message headers so that heavy users of Launchpad can automatically filter email into different folders.  Unfortunately, Gmail and some other clients do not support filtering mail on arbitrary headers, only on message bodies and on certain pre-defined headers such as Subject.  Figuring out what to do about this has been tricky.  Space in the Subject line is at a premium – many clients will only show a certain number of characters at the start, and so inserting filtering tags at the start would crowd out other useful information, so we don’t want to do that; and in general we want to avoid burdening one group of users with workarounds for the benefit of another group because that doesn’t scale very well, so we had to approach this with some care.

As of our most recent code update, you’ll find a new setting on your “Change your personal details” page:

Screenshot of email configuration options

If you check “Include filtering information in email footers”, Launchpad will duplicate some information from message headers into the signature part (below the dash-dash-space line) of message bodies: any “X-Launchpad-Something: value” header will turn into a “Launchpad-Something: value” line in the footer.  Since it’s below the signature marker, it should be relatively unobtrusive, but is still searchable.  You can search or filter for these in Gmail by putting the key/value pair in double quotes, like this:

Screenshot of Gmail filter dialog

At the moment this only works for emails related to Bazaar branches, Git repositories, merge proposals, and build failures.  We intend to extend this to a few other categories soon, particularly bug mail and package upload notifications.  If you particularly need this feature to work for some other category of email sent by Launchpad, please file a bug to let us know.

Read more
Prakash

With 10 firms, India claims the second-highest number of companies on the Forbes Asia Fabulous 50 list for the fifth year in a row, a list again dominated by China with 25 companies.

The Fab 50’s brightest star over the decade, India’s HDFC Bank, did not debut until 2006. However, it has now made the list nine times, more than any other company, noted the US business magazine.

Read More: http://www.firstpost.com/business/hdfc-lenovo-ten-indian-firms-among-forbes-asia-fabulous-50-2358866.html

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20150728 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kt-meeting.txt


Status: CVE’s

The current CVE status can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kernel-cves.html


Status: Stable, Security, and Bugfix Kernel Updates – Precise/Trusty/Utopic/Vivid

Status for the main kernels, until today:

  • Precise – Kernel Prep
  • Trusty – Kernel Prep
  • lts-Utopic – Kernel Prep
  • Vivid – Kernel Prep
    Current opened tracking bugs details:
  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html
    For SRUs, SRU report is a good source of information:
  • http://kernel.ubuntu.com/sru/sru-report.html
    Schedule:

    cycle: 26-Jul through 15-Aug
    ====================================================================
    24-Jul Last day for kernel commits for this cycle
    26-Jul – 01-Aug Kernel prep week.
    02-Aug – 08-Aug Bug verification & Regression testing.
    09-Aug – 15-Aug Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more
Prakash

When a subordinate of President Kalam at DRDO couldn’t take his children to an exhibition due to work pressure, Kalam surprised his subordinate and took the children instead!

During a significant project at DRDO, the work pressure was high. A scientist approached his boss – Dr. Kalam – and asked to leave early that day, since he had promised to take his children to an exhibition. Kalam generously granted the permission, and the scientist got back to work. In doing so, he lost track of time and forgot to leave early. He reached home feeling guilty and looked for his kids, but could only find his wife. He asked about the kids, and to his surprise she told him: “Your manager was here around 5:15 and he took the kids to the exhibition!”

Apparently, Dr. Kalam had been observing the scientist and noticed that he might never realise it was time to go home. Feeling for the kids, he decided to take them himself. If that’s not sweet, what is?

Read More: http://www.youthconnect.in/2014/11/13/12-rare-stories-about-dr-apj-abdul-kalam-will-make-your-day-today/

Read more
Prakash

India was the sole emerging market bright-spot in IBM’s second-quarter earnings, as the other BRIC countries weighed down the technology giant’s results.

Read more at: http://economictimes.indiatimes.com/articleshow/48170664.cms

Read more
Richard McCartney

Converting old guidelines to vanilla

How the previous guidelines worked

Guidelines is essentially a framework built by the Canonical web design team. The whole framework has an array of tools to make it easy to create Ubuntu-themed sites. The guidelines were a collaboration between developers and designers and followed a consistent look, which meant in-house teams and community websites could have a consistent brand feel.

It worked in one way: a large framework of modules, helpers and components which built the Ubuntu style for all our sites. The structure of this required a lot of overrides and workarounds for different projects and added to the bloated nature the guidelines had taken on. Canonical and Cloud sites required a large set of overrides to imprint their own visual requirements, which created a lot of duplication and overhead for each site.

There was no build system, nor a way to update to the latest version other than using the hosted pre-compiled guidelines or pulling from our Bazaar repository. Not having any form of build step meant relying on a local Sass compiler or setting up a watcher for each project. We also had no viable way to check for linting errors or enforce a concrete coding standard.

The framework itself was a CSS framework ported into Sass, not utilising placeholders or mixins correctly and carrying a bloated number of variables. Changing one colour, for example, or the size of an element wasn’t as easy as passing a mixin with set values or changing one variable.

Unlike Vanilla as we have built it today, where all preprocessor styles are created via mixins, responsive changes were made in a large media query at the end of each document, and this again was repeated for our Canonical and Cloud styles too.

Removing Ubuntu and Canonical from theme

Our first task in building Vanilla was cataloguing all elements which were ‘Ubuntu’-centric: anything which had a unique class, colour or style. Once these were identified, the team systematically took each section of the guidelines, removed the classes or variables, and created new versions. Once this stage was complete, the team was able to look at refactoring and updating the code.

Clean-up and making it generic

When starting this project we decided to update how we write any new module / element. Linting was a big factor, and with a build system like gulp we finally had the ability to adhere to a coding standard. This meant a lot of modules / elements had to be rewritten and improved upon: trimming down the Sass nesting, applying new techniques such as flexbox, and cleaning up duplicated styles.

But the main goal was to make it generic, extendable and easy. Not the simplest of tasks: this meant removing any custom modules or specific styles / classes, but also building the framework so it can change via a variable update or a value change within a mixin. We wanted a Vanilla theme to inherit another developer’s style and have that cascade throughout the whole framework with ease. Setting the brand colour, for example, affects the whole framework and changes multiple modules / elements. But you are not restricted, which was a bottleneck we had with the old guidelines.

Using Sass mixins

Mixins are a powerful part of Sass which we weren’t utilising. In the guidelines they were only used to create preprocessor polyfills, which was annoying; gulp now removes that need. In Vanilla we used mixins to modularise the entire framework, giving flexibility over which parts of the framework a project requires.

The ability to easily turn a section of Vanilla on or off felt very powerful, but also necessary: we wanted a developer to choose what was needed for their project. This was the opposite of the guidelines, where you would receive the entire framework. In Vanilla, each of our elements or modules is also encapsulated within a mixin, and some take values which affect them. For example, the buttons mixin:

@mixin vf-button($button-color, $button-bg, $border-color) {
  @extend %button-pattern;
  color: $button-color;
  background: $button-bg;
    
  @if $border-color != null {
    border: 1px solid $border-color;
  }
    
  &:hover {
    background: darken($button-bg, 6.2%);
      
    @if $button-bg == $transparent {
      text-decoration: underline;
    }
  }
}



The above code shows how this mixin isn’t attached to fixed styles or colours. When building a new Vanilla theme, a few variable changes will style any button to the project’s requirements. This is something we have replicated throughout the project, and it creates a far more modular framework.
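
As an illustration (the class name and colour value below are made up, and we assume the Vanilla buttons partial providing vf-button and %button-pattern has already been imported), a theme can stamp out its own button simply by passing its colours to the mixin:

// Hypothetical theme button built on top of the vf-button mixin shown above.
$my-brand-colour: #990000;

.button--primary {
  // White text on the brand colour, with no border.
  @include vf-button(#fff, $my-brand-colour, null);
}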

Creating new themes

As I mentioned earlier, a few changes can set up a whole new theme in Vanilla, using it as a base and then adding or extending new styles. Changing the branding or a font family just requires overriding the default value, e.g. $brand-colour: $orange !default; is set in the global variables document. Amending this in another document and setting it to $brand-colour: #990000; will change any element affected by the brand colour, thus creating the beginning of a new theme.
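
Put concretely (the file name here is hypothetical), that override just needs to be loaded before Vanilla's global variables so the !default assignment is skipped:

// _theme-variables.scss (hypothetical), imported before Vanilla's globals.
$brand-colour: #990000;

// Vanilla's own "$brand-colour: $orange !default;" now leaves this value
// untouched, so every module driven by the brand colour picks it up.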

We can also do this per module mixin: including the module in a new class or element and then extending or adding to it. This means themes are not constrained to just using what is there, which gives more freedom. This method is particularly useful for the web team, as we build themes for Ubuntu, Canonical and Cloud products.

An example of a live theme we have created is the Ubuntu Vanilla theme. This is an extension of the Vanilla framework and is set up to override the required variables to give it the Ubuntu brand. Diving into theme.scss, it shows all the elements used from Vanilla but also Ubuntu-specific modules. These are used exclusively for the Ubuntu brand but are structured in the same manner as the Vanilla framework. This reduces the complexity of maintaining these themes, and developers can easily pick up what has been built or use it as a reference for building their own theme versions.

Read more
Daniel Holbach

Announcing UbuContest 2015

Have you read the news already? Canonical, the Ubucon Germany 2015 team, and the UbuContest 2015 team are happy to announce the first UbuContest! Contestants from all over the world have until September 18, 2015 to build and publish their apps and scopes using the Ubuntu SDK and Ubuntu platform. The competition has already started, so register your competition entry today! You don’t have to create a new project; submit what you have and improve it over the next two months.

But we know it's not all about shiny new apps and scopes! A great platform also needs content, great design, testing, documentation, bug management, developer support, interesting blog posts, technology demonstrations and all of the other incredible things our community does every day. So we give you, our community members, the opportunity to nominate other community members for prizes!

We are proud to present five dedicated categories:

  1. Best Team Entry: A team of up to three developers may register up to two apps/scopes they are developing. The jury will assign points in categories including "Creativity", "Functionality", "Design", "Technical Level" and "Convergence". The top three entries with the most points win.

  2. Best Individual Entry: A lone developer may register up to two apps/scopes he or she is developing. The rest of the rules are identical to the "Best Team Entry" category.

  3. Outstanding Technical Contribution: Members of the general public may nominate candidates who, in their opinion, have done something "exceptional" on a technical level. The nominated candidate with the most jury votes wins.

  4. Outstanding Non-Technical Contribution: Members of the general public may nominate candidates who, in their opinion, have done something exceptional, but non-technical, to bring the Ubuntu platform forward. So, for example, you can nominate a friend who has reported and commented on all those phone-related bugs on Launchpad. Or nominate a member of your local community who did translations for Core Apps. Or nominate someone who has contributed documentation, written awesome blog articles, etc. The nominated candidate with the most jury votes wins.

  5. Convergence Hero: The "Best Team Entry" or "Best Individual Entry" contribution with the highest number of "Convergence" points wins. The winner in this category will probably surprise us in ways we have yet to imagine.

Our community judging panel members Laura Cowen, Carla Sella, Simos Xenitellis, Sujeevan Vijayakumaran and Michael Zanetti will select the winners in each category. Successful winners will be awarded items from a huge pile of prizes, including travel subsidies for the first-placed winners to attend Ubucon Germany 2015 in Berlin, four Ubuntu Phones sponsored by bq and Meizu, t-shirts, and bundles of items from the official Ubuntu Shop.

We wish all the contestants good luck!

Go to ubucontest.eu or ubucon.de/2015/contest for more information, including how to register and nominate folks. You can also follow us on Twitter @ubucontest, or contact us via e-mail at contest@ubucon.de.

 

Read more
Kick In

 

  • smoser to check with Odd_Bloke on status of high priority bug 1461242
  • Think about numad integration
  • Next meeting will be on Jul 29th 16:00:00 UTC in #ubuntu-meeting

Full agenda and log

 

Read more
April Wang

Phone OS July Update

Here is the list of updates to the Ubuntu phone OS in July:

General improvements

  • Icon updates, including app and notification icons
  • Shell rotation
  • Added more keyboards for smaller European languages, including Romanian, Scottish Gaelic, Greek, Norwegian, Ukrainian, Slovak and Icelandic

Scopes

  • The default aggregation scopes (News, Photos and Today) now support keyword tags

Store updates

  • Refunds (the Ubuntu Store now allows users to cancel an order within 15 minutes of purchasing an app)
  • Added editing of app reviews

Browser updates

  • Added bookmark folders
  • Keyboard shortcuts

Other

  • Improved the call-forwarding user interface (in System Settings > Phone)
  • Added WPA Enterprise support to System Settings and networking
  • On the Meizu MX4, the notification LED now flashes when there are new notifications
  • Added editing of contact details from the dialer and messaging apps
  • Support for group messaging in the dialer and messaging apps
  • Added GPS location tagging to the camera
  • The SDK now lets apps keep the display on (for example, game developers can prevent the screen from timing out and going dark)
  • Plus fixes for more than 50 small bugs

Read more
facundo

Cena Gurmé


On Saturday night I hosted the "Cena Gurmé" at home, a dinner more elaborate than what I normally make when I have people over, for just a few guests (basically because the biggest table in the house isn't all that big).

This is the invitation I sent to the few who had the chance to come:

Invitation to the Cena Gurmé

The idea of this post is not just to say that everything went well, that we had a great time, that we ended up with very full bellies, and to thank Moni for all her help, but also to share the recipes for each dish.

Here we go.


Three-seas bruschettas

As you can probably guess, these are bruschettas. Three of them. And in each case the main ingredient comes from the sea :p

First of all, get a nice loaf from the bakery that can be cut at a steep diagonal to give long slices. Then put the slices of bread in the oven until toasted, drizzling them with olive oil.

For the first bruschettas, make a spread of cream cheese, lemon juice and grated lemon zest (remember to use only the yellow part; the white part is very bitter). The spread should taste lemony, and the zest itself shouldn't be noticeable (otherwise it would be too much! even without seeing it you can feel it in your mouth).

Then spread the bruschettas generously with this mixture, and arrange a good portion of raw pink salmon on top. Finish with a little lemon juice.

Salmon bruschettas on lemony cream

The sauce for the second bruschettas requires a homemade provençal (garlic and parsley) mix prepared ahead of time, at least three or four days in advance so that it is flavourful. Mix it with mayonnaise and cream cheese.

Just before serving, sauté peeled prawns in olive oil until they start to brown.

Assembly is simple: a good helping of the provençal cream, two or three prawns (depending on their size and the toast), and finish with a few drops of lemon juice and a little freshly chopped parsley.

Sautéed prawn bruschettas on provençal cream

The third batch turns the flavour up another notch, and it is even served hot. The fish in this case are fresh sardines (not canned!); I found some Portuguese ones that come frozen, at the Mercado Central... I don't know where else to get them; my second option would be "the supermarket fishmonger" in the barrio chino in Buenos Aires (if you know the barrio chino, you know what I'm talking about ;). I got them whole, so I cleaned them (head, tail, guts), deboned them, and ended up with two fillets per sardine.

To go on the bread, sauté finely chopped onion, garlic and tomato. Sauté the little sardine fillets too, so they are cooked and warm, and assemble the bruschetta. Finish by drizzling with olive oil and a little fresh parsley.

Fresh Portuguese sardine bruschettas on a bed of sautéed onion, tomato and garlic


Trapped bondiola

This is the main course, and it takes some time to put together. It has two variants: the tastier one, and the one for those who don't like sweet-and-savoury food :)

A few hours ahead (half a day, a day), cut the bondiola (pork collar) into bite-sized cubes and put them in a container or a bag to marinate with stout, garlic and parsley.

Pre-boil potato (non-sweet-and-savoury version) and/or sweet potato (tasty version), but don't let it get too soft. Add a little salt, but not much. Also sauté onion and spring onion, and season with salt and pepper (again, go easy on the salt!).

A while before eating, once the marinating is done, sauté the cubes of bondiola, setting them aside as they brown. Deglaze the pan with soy sauce (which is salty, hence the warnings above about going easy on the salt).

In small ovenproof dishes, lay a base of pieces of the potato (boring version) or the sweet potato, adding prunes (fun version). Then add the pieces of meat and the onion, plus what you deglazed from the pan. If it looks "dry", don't worry.

Make a pizza or bread dough, and at the end of kneading mix in rosemary. Use this dough to cover the little dishes so they are well sealed, with a little left over at the edge. I said not to worry if it looked dry, because very little will evaporate, and the meat still has juices to release.

Put the dishes in the oven and leave them for a while, some 15 or 20 minutes. A little after putting them in, before the dough lid finishes cooking, brush it with olive oil.

Take them out and serve hot. Careful, it's burning hot!

Marinated bondiola with sweet potato and prunes, finished in the oven in a sealed casserole


Heads-and-tails flan

No big secrets here; the key is to make a good flan. With eggs, sugar, milk and so on, not some powdered-mix junk! Nice and tasty, the way Moni always makes it.

Serve it with your favourite dulce de leche, and real chantilly cream (again, none of that aerosol-can junk), whipped a while beforehand so it gets some fridge time and sets just right (mine came out a little runny, I must admit).

Homemade, truly homemade flan with dulce de leche (a gift from my sister) and chantilly cream


The drinks

People arrived on time, around an hour and a bit before the first bruschettas came out.

We started with a refreshing aperitif. Since we felt like playing around, Lucio prepared something nobody quite knew how to make, but based on ingredients we had chosen beforehand (Peruvian pisco, tonic water, lemon).

With the bruschettas we opened a white wine, nicely chilled, which was ideal for the first two batches. Beer was drunk as well.

For the third batch of bruschettas, and especially for the main course, red wine is best. What I did here was offer six or seven different bottles and let people pick whatever they preferred: we started with a Trumpeter Malbec and moved on to a Gascón reserve Cabernet Sauvignon... but the idea is to choose somewhat according to the guests' tastes.

To finish, a nice little coffee :). Enjoy!

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20150721 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kt-meeting.txt


Status: Wily Development Kernel

We have rebased the master and master-next branch of our Wily repo to
4.1 and uploaded to the archive. We’ll move master-next to start
tracking 4.2.
—–
Important upcoming dates:

  • https://wiki.ubuntu.com/WilyWerewolf/ReleaseSchedule
    Thurs July 30 – Alpha 2 (~1 weeks away)
    Thurs Aug 6 – 14.04.3 (~2 weeks away)
    Thurs Aug 20 – Feature Freeze (~4 weeks away)
    Thurs Aug 27 – Beta 1 (~5 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kernel-cves.html


Status: Stable, Security, and Bugfix Kernel Updates – Precise/Trusty/Utopic/Vivid

Status for the main kernels, until today:

  • Precise – Verification & Testing
  • Trusty – Verification & Testing
  • Utopic – Verification & Testing
  • Vivid – Verification & Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html
    For SRUs, SRU report is a good source of information:
  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 04-Jul through 25-Jul
    ====================================================================
    03-Jul Last day for kernel commits for this cycle
    05-Jul – 11-Jul Kernel prep week.
    12-Jul – 25-Jul Bug verification; Regression testing; Release
    ** NOTE: This cycle produces the kernel that will be in the 14.04.3
    point release.


Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

Read more
Prakash

Google has become the biggest name yet to back the open source cloud system OpenStack. Specifically, Google will help integrate its own open source container management software, Kubernetes, with OpenStack.

This may seem like in-the-enterprise-weeds news, but it represents another significant step as Google tries to make up ground against Amazon’s wildly popular AWS suite of cloud products.

Read More: http://www.wired.com/2015/07/google-backs-open-source-system-cloud-battle-amazon/

Read more
Dustin Kirkland


As you probably remember from grade school math class, primes are numbers that are only divisible by 1 and themselves.  2, 3, 5, 7, and 11 are the first 5 prime numbers, for example.

Many computer operations, such as public-key cryptography, depend entirely on prime numbers.  In fact, RSA encryption, invented in 1978, uses a modulus that is the product of two very large primes for encryption and decryption.  The security of asymmetric encryption is tightly coupled with the computational difficulty of factoring large numbers.  I actually use prime numbers as the status update intervals in Byobu, in order to improve performance and distribute the update spikes.

Euclid proved that there are infinitely many prime numbers around 300 BC.  But the Prime Number Theorem (proven in the 19th century) says that the probability that a given number is prime is inversely proportional to its number of digits.  That means that larger prime numbers are notoriously harder to find, and it gets harder as they get bigger!
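
Stated a little more formally (a standard statement of the theorem, included here for reference), the prime-counting function grows like

\pi(n) \sim \frac{n}{\ln n}
\quad\Longrightarrow\quad
\Pr[\,n \text{ is prime}\,] \approx \frac{1}{\ln n} \approx \frac{1}{2.3 \times (\text{decimal digits of } n)}
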
What's the largest known prime number in the world?

Well, it has 17,425,170 decimal digits!  If you wanted to print it out, size 11 font, it would take 6,543 pages -- or 14 reams of paper!

That number is actually one less than a very large power of 2: 2^57,885,161 - 1.  It was discovered by Curtis Cooper on January 25, 2013, on an Intel Core2 Duo.
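
That digit count can be checked with a quick back-of-the-envelope calculation:

\text{digits}\left(2^{57{,}885{,}161} - 1\right)
  = \left\lfloor 57{,}885{,}161 \cdot \log_{10} 2 \right\rfloor + 1
  = \lfloor 17{,}425{,}169.76\ldots \rfloor + 1
  = 17{,}425{,}170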

Actually, each of the last 14 record largest prime numbers discovered (between 1996 and today) has been of that form, 2^P-1.  Numbers of that form are called Mersenne Prime Numbers, named after Friar Marin Mersenne, a French priest who studied them in the 1600s.


Friar Mersenne's work continues today in the form of the Great Internet Mersenne Prime Search, and the mprime program, which has been used to find those 14 huge prime numbers since 1996.

mprime is a massively parallel, CPU-scavenging utility, much like SETI@home or the Protein Folding Project.  It runs in the background, consuming resources, working on its little piece of the problem.  mprime is open source code, and also distributed as a statically compiled binary.  And it will make a fine example of how to package a service into a Docker container, a Juju charm, and a Snappy snap.


Docker Container

First, let's build the Docker container, which will serve as our fundamental building block.  You'll first need to download the mprime tarball from here.  Extract it, and the directory structure should look a little like this (or you can browse it here):

├── license.txt
├── local.txt
├── mprime
├── prime.log
├── prime.txt
├── readme.txt
├── results.txt
├── stress.txt
├── undoc.txt
├── whatsnew.txt
└── worktodo.txt

And then, create a Dockerfile, that copies the files we need into the image.  Here's our example.

FROM ubuntu
MAINTAINER Dustin Kirkland email@example.com
COPY ./mprime /opt/mprime/
COPY ./license.txt /opt/mprime/
COPY ./prime.txt /opt/mprime/
COPY ./readme.txt /opt/mprime/
COPY ./stress.txt /opt/mprime/
COPY ./undoc.txt /opt/mprime/
COPY ./whatsnew.txt /opt/mprime/
CMD ["/opt/mprime/mprime", "-w/opt/mprime/"]

Now, build your Docker image with:

$ sudo docker build .
Sending build context to Docker daemon 36.02 MB
Sending build context to Docker daemon
Step 0 : FROM ubuntu
...
Successfully built de2e817b195f

Then publish the image to Dockerhub.

$ sudo docker push kirkland/mprime

You can see that image, which I've publicly shared here: https://registry.hub.docker.com/u/kirkland/mprime/



Now you can run this image anywhere you can run Docker.

$ sudo docker run -d kirkland/mprime

And verify that it's running:

$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c9233f626c85 kirkland/mprime:latest "/opt/mprime/mprime 24 seconds ago Up 23 seconds furious_pike

Juju Charm

So now, let's create a Juju Charm that uses this Docker container.  Actually, we're going to create a subordinate charm.  Subordinate services in Juju are often monitoring and logging services, things that run alongside primary services.  Something like mprime is a good example of something that could be a subordinate service, attached to one or many other services in a Juju model.

Our directory structure for the charm looks like this (or you can browse it here):

└── trusty
    └── mprime
        ├── config.yaml
        ├── copyright
        ├── hooks
        │   ├── config-changed
        │   ├── install
        │   ├── juju-info-relation-changed
        │   ├── juju-info-relation-departed
        │   ├── juju-info-relation-joined
        │   ├── start
        │   ├── stop
        │   └── upgrade-charm
        ├── icon.png
        ├── icon.svg
        ├── metadata.yaml
        ├── README.md
        └── revision

3 directories, 15 files

The three key files we should look at here are metadata.yaml, hooks/install and hooks/start:

$ cat metadata.yaml
name: mprime
summary: Search for Mersenne Prime numbers
maintainer: Dustin Kirkland
description: |
  A Mersenne prime is a prime of the form 2^P-1.
  The first Mersenne primes are 3, 7, 31, 127
  (corresponding to P = 2, 3, 5, 7).
  There are only 48 known Mersenne primes, and
  the 13 largest known prime numbers in the world
  are all Mersenne primes.
  This charm uses a Docker image that includes the
  statically built, 64-bit Linux binary mprime
  which will consume considerable CPU and Memory,
  searching for the next Mersenne prime number.
  See http://www.mersenne.org/ for more details!
tags:
  - misc
subordinate: true
requires:
  juju-info:
    interface: juju-info
    scope: container

And:

$ cat hooks/install
#!/bin/bash
apt-get install -y docker.io
docker pull kirkland/mprime

And:

$ cat hooks/start
#!/bin/bash
service docker restart
docker run -d kirkland/mprime

Now, we can add the mprime service to any other running Juju service.  As an example here, I'll --bootstrap, deploy the Apache2 charm, and attach mprime to it.

$ juju bootstrap
$ juju deploy apache2
$ juju deploy cs:~kirkland/mprime
$ juju add-relation apache2 mprime

Looking at our services, we can see everything deployed and running here:

$ juju status
services:
  apache2:
    charm: cs:trusty/apache2-14
    exposed: false
    service-status:
      current: unknown
      since: 20 Jul 2015 11:55:59-05:00
    relations:
      juju-info:
      - mprime
    units:
      apache2/0:
        workload-status:
          current: unknown
          since: 20 Jul 2015 11:55:59-05:00
        agent-status:
          current: idle
          since: 20 Jul 2015 11:56:03-05:00
          version: 1.24.2
        agent-state: started
        agent-version: 1.24.2
        machine: "1"
        public-address: 23.20.147.158
        subordinates:
          mprime/0:
            workload-status:
              current: unknown
              since: 20 Jul 2015 11:58:52-05:00
            agent-status:
              current: idle
              since: 20 Jul 2015 11:58:56-05:00
              version: 1.24.2
            agent-state: started
            agent-version: 1.24.2
            upgrading-from: local:trusty/mprime-1
            public-address: 23.20.147.158
  mprime:
    charm: local:trusty/mprime-1
    exposed: false
    service-status: {}
    relations:
      juju-info:
      - apache2
    subordinate-to:
    - apache2


Snappy Ubuntu Core Snap

Finally, let's build a Snap.  Snaps are applications that run in Ubuntu's transactional, atomic OS, Snappy Ubuntu Core.

We need the simple directory structure below (or you can browse it here):

├── meta
│   ├── icon.png
│   ├── icon.svg
│   ├── package.yaml
│   └── readme.md
└── start.sh
1 directory, 5 files

The package.yaml describes what we're actually building, and what capabilities the service needs.  It looks like this:

name: mprime
vendor: Dustin Kirkland
architecture: [amd64]
icon: meta/icon.png
version: 28.5-11
frameworks:
  - docker
services:
  - name: mprime
    description: "Search for Mersenne Prime Numbers"
    start: start.sh
    caps:
      - docker_client
      - networking

And the start.sh launches the service via Docker.

#!/bin/sh
PATH=$PATH:/apps/docker/current/bin/
docker rm -v -f mprime
docker run --name mprime -d kirkland/mprime
docker wait mprime

Now, we can build the snap like so:

$ snappy build .
Generated 'mprime_28.5-11_amd64.snap' snap
$ ls -halF *snap
-rw-rw-r-- 1 kirkland kirkland 9.6K Jul 20 12:38 mprime_28.5-11_amd64.snap

First, let's install the Docker framework, upon which we depend:

$ snappy-remote --url ssh://snappy-nuc install docker
=======================================================
Installing docker from the store
Installing docker
Name Date Version Developer
ubuntu-core 2015-04-23 2 ubuntu
docker 2015-07-20 1.6.1.002
webdm 2015-04-23 0.5 sideload
generic-amd64 2015-04-23 1.1
=======================================================

And now, we can install our locally built Snap.
$ snappy-remote --url ssh://snappy-nuc install mprime_28.5-11_amd64.snap
=======================================================
Installing mprime_28.5-11_amd64.snap from local environment
Installing /tmp/mprime_28.5-11_amd64.snap
2015/07/20 17:44:26 Signature check failed, but installing anyway as requested
Name Date Version Developer
ubuntu-core 2015-04-23 2 ubuntu
docker 2015-07-20 1.6.1.002
mprime 2015-07-20 28.5-11 sideload
webdm 2015-04-23 0.5 sideload
generic-amd64 2015-04-23 1.1
=======================================================

Alternatively, you can install the snap directly from the Ubuntu Snappy store, where I've already uploaded the mprime snap:

$ snappy-remote --url ssh://snappy-nuc install mprime.kirkland
=======================================================
Installing mprime.kirkland from the store
Installing mprime.kirkland
Name Date Version Developer
ubuntu-core 2015-04-23 2 ubuntu
docker 2015-07-20 1.6.1.002
mprime 2015-07-20 28.5-11 kirkland
webdm 2015-04-23 0.5 sideload
generic-amd64 2015-04-23 1.1
=======================================================

Conclusion

How long until this Docker image, Juju charm, or Ubuntu Snap finds a Mersenne Prime?  Almost certainly never :-)  I want to be clear: that was never the point of this exercise!

Rather I hope you learned how easy it is to run a Docker image inside either a Juju charm or an Ubuntu snap.  And maybe learned something about prime numbers along the way ;-)

Join us in #docker, #juju, and #snappy on irc.freenode.net.

Cheers,
Dustin

Read more
bmichaelsen

They sentenced me to twenty years of boredom
For trying to change the system from within
— Leonard Cohen, I’m your man, First we take Manhattan

Advance warning: This blog post talks about C++ coding style, and given the “expressiveness” (aka a severe infection with TimTowTdi) this is bound to contain significant amounts of bikeshedding, personal opinion/preference. As such, be invited to ignore all this as the ramblings of a raging lunatic.

Anyone who observed me spotting a Pimpl in code will know that I am not a fan of this idiom. Its intent is to reduce build times by using a design pattern to move implementation details out of headers — a workaround for C++'s misfeature of by default needing a recompile even when changing implementation details only, without changing the public interface. Now I personally always thought a pure abstract base class to be a more “native” and less ugly way to tell this to the compiler. However, without real testing, such gut feelings are rarely good advisors in a complex language like C++.

So I did some testing on the real-life performance of a pure abstract base class vs. a Pimpl (each of course in a different compilation unit, to prevent the compiler from optimizing away what we want to measure) — and, for reference, a class with functions that can be completely inlined. These are the three test implementations, starting with the inline one:

-- header (hxx) --
class InlineClass final
{
	public:
		InlineClass(int nFirst, int nSecond)
			: m_nFirst(nFirst), m_nSecond(nSecond), m_nResult(0)
		{};
		void Add()
			{ m_nResult = m_nFirst + m_nSecond; };
		int GetResult() const
			{ return m_nResult; };
	private:
		const int m_nFirst;
		const int m_nSecond;
		int m_nResult;
};

Pimpl, as suggested by Effective Modern C++ when using C++11, but not C++14:

-- header (hxx) --
#include <memory>
class PimplClass final
{
	public:
		PimplClass(int nFirst, int nSecond);
		~PimplClass();
		void Add();
		int GetResult() const;
	private:
		struct Impl;
		std::unique_ptr<Impl> m_pImpl;
};
-- implementation (cxx) --
#include "pimpl.hxx"
struct PimplClass::Impl
{
	Impl(int nFirst, int nSecond)
		: m_nFirst(nFirst), m_nSecond(nSecond), m_nResult(0)
	{};
	const int m_nFirst;
	const int m_nSecond;
	int m_nResult;
};
PimplClass::PimplClass(int nFirst, int nSecond)
	: m_pImpl(std::unique_ptr<Impl>(new Impl(nFirst, nSecond)))
{}
PimplClass::~PimplClass()
	{}
void PimplClass::Add()
	{ m_pImpl->m_nResult = m_pImpl->m_nFirst + m_pImpl->m_nSecond; }
int PimplClass::GetResult() const
	{ return m_pImpl->m_nResult; }

Pure abstract base class:

-- header (hxx) --
#include <memory>
struct AbcClass
{
	static std::shared_ptr<AbcClass> Create(int nFirst, int nSecond);
	virtual ~AbcClass() {};
	virtual void Add() =0;
	virtual int GetResult() const =0;
};
-- implementation (cxx) --
#include "abc.hxx"
#include <memory>
struct AbcClassImpl final : public AbcClass
{
	AbcClassImpl(int nFirst, int nSecond)
		: m_nFirst(nFirst), m_nSecond(nSecond)
	{};
	virtual void Add() override
		{ m_nResult = m_nFirst + m_nSecond; };
	virtual int GetResult() const override
		{ return m_nResult; };
	const int m_nFirst;
	const int m_nSecond;
	int m_nResult;
};
std::shared_ptr<AbcClass> AbcClass::Create(int nFirst, int nSecond)
	{ return std::shared_ptr<AbcClass>(new AbcClassImpl(nFirst, nSecond)); }

Comparing these we find:

implementation   lines added for GetResult()   source entropy   added source entropy for GetResult()   runtime
inline           2                             187              17                                      100%
Pimpl            3                             316              26                                      168% (174%)
pure ABC         3                             295 (273)        19 (16)                                 158%

So the abstract base class has less complex source code (entropy)[1], needs less additional entropy to expand and is still faster in the end on common hardware (Intel i5-4200U) with common compiler optimization switches (-O2)[2].

Additionally, in a non-trivial code base you might actually need to use virtual functions for your implementation anyway, as you are deriving from or implementing an existing interface. In the Pimpl case, this means using two indirections (resolving the virtual function and then, on top of that, resolving the m_pImpl pointer in that function). In the abstract base class case that's not happening, and in addition it means that you can spare yourself the pure virtual declarations in the *.hxx (the virtual ... =0 ones), as those are already declared in the class derived from. In LibreOffice, this is true for any class implementing UNO interfaces. So the first numbers are actually biased against an abstract base class for real-world code bases — the numbers in parentheses show the results when an interface is already defined elsewhere.

So unless the synthetic example used here is some kind of weird cornercase, this suggests abstract base classes being the better alternative over a Pimpl once the class goes beyond being a plain value type with completely inlineable accessor member functions.

Thanks for bearing with me on this rant about one of my personal pet peeves here!

[1] Entropy is measured as cat abc.[hc]xx|gzip|wc -c or cat pimpl.[hc]xx|sed -e 's/Pimpl/Abc/g'|gzip|wc -c.
[2] Here is the code run for that comparison:

constexpr int repeats = 100000;

int pimplrun(long count)
//int abcrun(long count)
{
        std::vector< std::shared_ptr<PimplClass /* AbcClass */ > > vInstances;
        vInstances.reserve(count);
        while(--count)
                vInstances.emplace_back(std::make_shared<PimplClass>(4711, 4711));
                //vInstances.emplace_back(AbcClass::Create(4711, 4711));
        int result(0);
        count = vInstances.size();
        while(--count)
                for(auto pInstance : vInstances)
                {
                        pInstance->Add();
                        result += pInstance->GetResult();
                }
        return result;
}

Instances are stored in shared pointers as anything that a Pimpl is considered for would be “heavy” enough to be handled by reference instead of by value.

Update 1: Out of curiosity, I looked a bit deeper at this with callgrind. This is what I found for running the above (with 1000 repeats) and --cache-sim=yes:

I1 cache: 32768 B, 64 B, 8-way
D1 cache: 32768 B, 64 B, 8-way
LL cache: 3145728 B, 64 B, 12-way

event inline ABC Pimpl
Ir 23,356,163 38,652,092 38,620,878
Dr 5,066,041 14,109,098 12,107,992
Dw 3,060,033 5,094,790 5,099,991
I1ir 34 127 29
D1mr 499,952 253,006 999,013
D1mw 501,636 998,312 500,097
ILmr 28 126 24
DLmr 2 845 0
DLmw 0 1,285 250

I don't know exactly what to derive from that, but what is clear is that this cannot be explained purely by the instruction counts (Ir). So you need --cache-sim=yes, which gives the additional event counts. Actually Pimpl looks slightly better on most stats, so as it is slower in real life, the misses on the first-level data cache (D1mr) might have quite an impact?

Update 2: This post made it to reddit, so I looked into some of the feedback from there. A common suggestion was to use for(auto& pInstance : vInstances) instead of for(auto pInstance : vInstances) in the benchmarking function. This had no significant impact on walltime measurements nor made it callgrind event counts show some clearer picture. I also played around with the order of linked objects to see if it has any impact (via cache locality etc.). While runtime measurements fluctuated quite a bit (even when using the same binary), the order was always the same: inlining quickest, then abstract base class and pimpl slowest.


Read more
Michael

I was recently in the situation of wanting to transition traffic gradually from an old deployment to a new deployment. It’s a large production system, so rather than just switching the DNS entries to point at the new deployment, I wanted to be able to shift the traffic over in a couple of controlled steps.

It turns out Apache’s mod_proxy makes this relatively straightforward. You can choose the resource for which you want to move traffic, and easily update the proportion of traffic for that resource which should go through to the new environment. It might be old news to some, but not having needed this before, I was quite impressed by Apache2’s configurability:

# Pass any requests for specific-url through to the balancer (defined below)
# to transition traffic from the old to new system.
ProxyPass /myapp/specific-url/ balancer://transition-traffic/myapp/specific-url/
ProxyPassReverse /myapp/specific-url/ balancer://transition-traffic/myapp/specific-url/

# Send all other requests straight to the backend for the old system.
ProxyPass /myapp/ http://old.backend.ip:1234/myapp/
ProxyPassReverse /myapp/ http://old.backend.ip:1234/myapp/

# Send 50% of the traffic to the old backend, and divide the rest between the
# two new frontends.
<Proxy balancer://transition-traffic>
    BalancerMember http://old.backend.ip:1234 timeout=60 loadfactor=2
    BalancerMember http://new.frontend1.ip:80 timeout=60 loadfactor=1
    BalancerMember http://new.frontend2.ip:80 timeout=60 loadfactor=1
    ProxySet lbmethod=byrequests
</Proxy>

Once the stats verify that the new environment isn’t hitting any firewall or load issue, the loadfactor values can be updated (only a graceful Apache reload is needed) to ramp up traffic so that everything is hitting the new environment. Of course, it adds one extra hop for serving requests, but it’s then much safer to switch the DNS entries when you *know* your new system is already handling the production traffic.
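
For example (the weights below are illustrative only), a later step of the transition might flip the load factors so that roughly 80% of requests hit the new frontends:

# Later transition step: ~20% to the old backend, ~40% to each new frontend.
<Proxy balancer://transition-traffic>
    BalancerMember http://old.backend.ip:1234 timeout=60 loadfactor=1
    BalancerMember http://new.frontend1.ip:80 timeout=60 loadfactor=2
    BalancerMember http://new.frontend2.ip:80 timeout=60 loadfactor=2
    ProxySet lbmethod=byrequests
</Proxy>

A graceful reload (e.g. sudo apachectl graceful) then applies the new weights without dropping in-flight requests.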



Read more
Steph Wilson

We have given our monochromatic icons a small facelift to make them more elegant, lighter and consistent across the platform by incorporating our Suru language and font style.

The rationale behind the new designs is similar to that of our old guidelines, where we have kept to our recurring font patterns but made them more streamlined and legible with lighter strokes, negative spaces, and a minimal solid shape.

What we have changed:

  • Reduced and standardized the stroke width from 6 or 8 pixels to 4.
  • Less solid shapes and more outlines.
  • The curvature radius of rectangles and squares has been slightly reduced (e.g. the message icon) to make them less ‘clumsy’.
  • A few outlines are ‘broken’ (e.g. bookmark, slideshow, contact, copy, paste, delete) for more personality. This negative space can also represent a cast shadow.

 

Less solid shapes

Before

Screenshot 2015-07-15 16.39.59

After

Screenshot 2015-07-15 16.38.19

Lighter strokes

 

Before

Screenshot 2015-07-15 17.27.20

After

Screenshot 2015-07-15 17.26.34

Negative spaces

 

Before

Screenshot 2015-07-15 17.30.01

 

After

Screenshot 2015-07-15 17.50.50

 

Font patterns 

Oblique lines are slightly curved

Screenshot 2015-07-16 13.39.03

Arcs are not perfectly rounded but rather curved

 

Screenshot 2015-07-15 16.38.19

Uppercase letters use right or sharp angles

Screenshot 2015-07-16 13.42.56

Vertical lines have oblique upper terminations.

Screenshot 2015-07-15 17.26.34

Nice soft curves

Screenshot 2015-07-16 13.44.16

 

Action

blogpost-actions

Devices

blogpost-devices

Indicators

blogpost-indicators

Weather

blogpost-weather

Read more
Prakash

Last month, LinuxGizmos.com and the Linux Foundation’s Linux.com community website sponsored a 10-day SurveyMonkey survey that asked readers of both sites to choose their favorite three Linux- or Android-based open-spec single-board computers. This year, 1,721 respondents — more than twice the number from the 2014 survey — selected their favorites from a list of 53 SBCs, compared to last year’s 32.

2015sbcsurvey_sbc_pref_scores

Read More: http://linuxgizmos.com/raspberry-pi-stays-sky-high-in-2015-hacker-sbc-survey/



Read more