Canonical Voices

Posts tagged with 'uncategorized'

Brandon Schaefer

Mir windowing system in Kodi

Here is a video of Kodi running on Mir [1]. The source for the port lives in [2] (xbmc/xbmc/windowing/mir). My plan is to upstream the changes and maintain the port [6]. There are some more things to fix up, but the main part is ready to upstream.

It will be compile-time support, meaning you’ll need to compile Kodi with Mir support specifically, as Kodi does not support runtime windowing selection. This also means the port carries a low risk to the main code base. It supports both OpenGL and OpenGL ES. I also need to test it out on an embedded device (such as a Raspberry Pi or DragonBoard). Creating a nice kiosk Kodi media center running on Mir would be awesome!

I have also snapped up Kodi with Mir support [3] with help from [4] and Alberto Aguirre [5]. This is an amazing use case for a snap: having Kodi in a kiosk-type system. A main goal is also to get this working on an embedded device, so you install an Ubuntu Core image, then just install this snap through the store and you’ve got a fully working media center!

If you’ve any questions, feel free to poke me (bschaefer) in #ubuntu-mir on freenode anytime, or email me at brandon.schaefer@canonical.com.

Read more
Brandon Schaefer

Hello world!

Welcome to Canonical Voices. This is your first post. Edit or delete it, then start blogging!

Read more
Alan Griffiths

MirAL-0.3

There’s a new MirAL release (0.3.0) available in ‘Zesty Zapus’ (Ubuntu 17.04). MirAL is a project aimed at simplifying the development of Mir servers and particularly providing a stable ABI and sensible default behaviors.

Unsurprisingly, given the project’s original goal, the ABI is unchanged. The changes in 0.3.0 fall into three categories:

  1. bugfixes;
  2. enabling keymap in newer Mir versions; and,
  3. additional features for shell developers to use.

Bugfixes

#1631958 Crash after closing Qt dialog

#1632325, #1633052 tooltips positioned wrong with mir-0.24

#1625849 [Xmir] Alt+` switch between different X11-apps not just windows.

Added miral-xrun as a better way to use Xmir

(unnumbered) miral-shell splash screen should be fullscreen.

(unnumbered) deduplicate logging of WindowSpecification::top_left

Enabling Keyboard Map in newer Mir versions

A new class miral::Keymap allows the keyboard map to be specified either programmatically or (by default) on the command line. Being in the UK I can get a familiar keyboard layout like this:

miral-shell --keymap gb

The class is also provided on Mir versions prior to 0.24.1, but does nothing.
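
For programmatic use, here is a minimal sketch against the MirAL 0.3 headers (a real shell would add a window management policy and other options to run_with(); error handling elided):

#include <miral/runner.h>
#include <miral/keymap.h>

// A minimal MirAL server that forces a UK layout, equivalent to
// passing "--keymap gb" on the command line.
int main(int argc, char const* argv[])
{
    miral::MirRunner runner{argc, argv};
    return runner.run_with({miral::Keymap{"gb"}});
}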

Additional Features For Shell Developers To Use

#1629349 Shell wants way to associate initially requested window creation state with the window later created.

Shell code can now set a userdata property on the WindowSpecification in place_new_surface() and this is transferred to the WindowInfo.
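
For illustration, a hedged sketch of how a policy might use this (the optional_value accessor style is assumed from the rest of the WindowSpecification API):

auto MyPolicy::place_new_surface(
    miral::ApplicationInfo const& /*app_info*/,
    miral::WindowSpecification const& request)
    -> miral::WindowSpecification
{
    auto spec = request;
    // Remember the client's initial request; MirAL transfers this
    // shared_ptr<void> userdata to the WindowInfo of the window that
    // is eventually created.
    spec.userdata() = std::make_shared<miral::WindowSpecification>(request);
    return spec;
}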

Added miral/version.h to permit compile-time feature detection. If you want to detect different versions of MirAL at compile time you can, for example, write:

#include <miral/version.h>
#if MIRAL_VERSION >= MIR_VERSION_NUMBER(0, 3, 0)
#include <miral/keymap.h>
#endif

A convenient overload of WindowManagerTools::modify_window() that doesn’t require the WindowInfo.
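
A hedged sketch of what the overload saves (assuming it simply resolves the WindowInfo internally):

// Before: look up the WindowInfo yourself.
tools.modify_window(tools.info_for(window), modifications);

// With the new overload:
tools.modify_window(window, modifications);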

Read more
Victor Palau

Ok, ok… sorry for the click-bait headline, but it is mainly true. I recently got a Nextcloud box; it was pretty easy to set up, and here are some great instructions.

But this box is not just a Nextcloud box, it is a box of unlimited possibilities. In just a few hours I added a WIFI access point and a chat server to my personal cloud. So here are some amazing facts you should know about Ubuntu and snaps:

Amazing fact #1 – One box, many apps

With snaps you can transform your single-function device into a box of tricks. You can add software to extend its functionality after you have made it. In this case I created a WIFI access point and added a Rocketchat server to it.

You can release a drone without autonomous capabilities, and once you are sure that you have nailed it, you can publish a new app for it… or even sell a pro-version autopilot snap.

You can add an inexpensive Zigbee and Bluetooth module to your home router, and partner with a security firm to provide home surveillance services. The possibilities are endless.

Amazing fact #2 – Many boxes, One heart

Maybe an infinite box of tricks is attractive to a geek like me, but what is interesting to product makers is: make one piece of hardware, ship many products.

Compute parts (CPU, memory, storage) make up a large part of the bill of materials of any smart device. So does validation and integration of these components with your software base… and then you need to provide updates for the OS and the kernel for years to come.

What if I told you that you could build (or buy) a single multi-function core, pre-integrated with a Linux OS, and use it to make drones, home routers, digital advertisement signs, industrial and home automation hubs, base stations, DSLAMs, top-of-rack switches,…

This is the real power of Ubuntu Core: with the OS and kernel being their own snaps, you can be sure that nothing has changed in them across these devices, and that you can reliably update them. Not only are you able to share validation and maintenance costs across multiple projects, you can also increase the volume of your part order and get a better price.


How the box of tricks was made:

Ingredients for the WIFI ap:


I also installed the Rocketchat server snap from the store.
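
For reference, a hedged sketch of the installs (the snap names are assumptions from what was published at the time; check snap find for the current equivalents):

sudo snap install wifi-ap
sudo snap install rocketchat-server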



Read more
kevin gunn

Hey, just a very quick update. I had some more time to play around today, and touch is working after all (the only difference is that I left my USB keyboard disconnected today, so maybe it was getting confused).

Anyhow, here are videos of Qt clocks with touch and Qt samegame with touch.

Read more
Alan Griffiths

--window-management-trace

A feature added to the lp:miral trunk yesterday is making life a lot easier for developers working on MirAL-based servers and the toolkit extensions that support Mir. The feature is:

miral-shell --window-management-trace

Actually, the --window-management-trace switch will work with any MirAL-based server (so miral-kiosk and egmde support it too).

What this does is cause the server to log every interaction with the window management policy – all the notifications it receives and all the calls it makes to the tools as a result.

This means that it is easy to find out that, for example, a “modal” gtk based dialog window is being created without specifying a parent. (Which is why the behaviour isn’t quite as expected – Mir servers will treat it as non-modal.)

To use this feature before the next MirAL release you do need to build it yourself, but this only takes a minute (depending on your kit). It really is the easiest way to see exactly what is going on and why.
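
Until then, a hedged sketch of building the trunk (assuming the usual CMake workflow for the lp:miral branch; you will also need the Mir development packages for your release):

bzr branch lp:miral
cd miral
mkdir build && cd build
cmake ..
make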

Read more
Victor Palau

In order to test k8s you can always deploy a single-node setup locally using minikube; however, it is a bit limited if you want to test interactions that require your services to be externally accessible from a mobile or web front-end.

For this reason, I created a basic k8s setup for a CoreOS single node in Azure using https://coreos.com/kubernetes/docs/latest/getting-started.html . Once I did this, I decided to automate its deployment via script.

It requires a running CoreOS instance; connect to it and run:

git clone https://github.com/vtuson/k8single.git k8
cd k8
./kubeform.sh [myip-address]  # the IP associated with eth; you can find it using ifconfig

This will deploy k8s onto a single node; it sets up kubectl on the node and deploys the SkyDNS add-on.

It also includes a busybox node file that can be deployed by:

kubectl create -f files/busybox

This might come in useful to debug issues with the setup. To execute commands in busybox, run:
kubectl exec busybox -- [command]

The script and config files can be accessed at https://github.com/vtuson/k8single

If you hit any issues while deploying k8s in a single node a few things worth checking are:


sudo systemctl status etcd
sudo systemctl status flanneld
sudo systemctl status docker

It is also worth checking which docker containers are running and, if necessary, checking the logs:

docker ps -a
docker logs [container-id]


Read more
Victor Palau

I recently blogged about deploying kubernetes in Azure.  After doing so, I wanted to keep an eye on usage of the instances and pods.

Kubernetes recommends Heapster as a cluster aggregator to monitor usage of nodes and pods. It is very handy if you are deploying in Google Compute Engine (GCE), as it has a pre-built dashboard to hook it to.

Heapster runs on each node, collects statistics about the system and pods, and pipes them to a storage backend of your choice. A very handy part of Heapster is that it exports user labels as part of the metadata, which I believe can be used to create custom reports on services across nodes.

[Figure: monitoring-architecture]

If you are not using GCE or just don’t want to use their dashboard, you can deploy a combo of InfluxDB and Grafana as a DIY solution. While this seems promising, the documentation, as usual, is pretty short on details…

Start by using the “detailed” guide to deploy the add-on, which basically consists of:

**Wait! Don’t run this until you have finished reading the article.**

git clone https://github.com/kubernetes/heapster.git
cd heapster
kubectl create -f deploy/kube-config/influxdb/

These steps expose Grafana and InfluxDB via the API proxy; you can see them in your deployment by doing:

kubectl cluster-info

This didn’t quite work for me, and while rummaging in the yamls, I found out that this is not really the recommended configuration for live deployments anyway…

So here is what I did:

  1. Remove the env variables from influxdb-grafana-controller.yaml
  2. Expose the service as NodePort or LoadBalancer, depending on your preference, in grafana-service.yaml. E.g. under the spec section add: type: NodePort (see the sketch after this list)
  3. Now run: kubectl create -f deploy/kube-config/influxdb/
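
As an illustration of step 2, here is a hedged fragment of grafana-service.yaml; only the type line is the actual change, and the port numbers are assumptions based on the add-on’s defaults at the time:

spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 3000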

You can see the exposed port for Grafana by running:
kubectl --namespace=kube-system describe service grafana-service

In this deployment, all the services, rc and pods are added under the kube-system namespace, so remember to add the --namespace flag to your kubectl commands.

Now you should be able to access Grafana on any external ip or dns on the port listed under NodePort. But I was not able to see any data.

Log in to Grafana as admin (admin:admin by default), select DataSources > influxdb-datasource and test the connection. The connection is set up as http://monitoring-influxdb:8086; this failed for me.

Since InfluxDB and Grafana are both in the same pod, you can use localhost to access the service. So change the URL to http://localhost:8086, save and test the connection again. This worked for me, and a minute later I was getting realtime data from nodes and pods.

Proxying Grafana

I run an nginx proxy that terminates https requests for my domain, and I created an https://mydomain/monitoring/ endpoint as part of it.

For some reason, Grafana needs to know the root-url format from which it is being accessed in order to work properly. This is defined in a config file… while you could change it and rebuild the image, I preferred to override it via an environment variable in the influxdb-grafana-controller.yaml kubernetes file. Just add to the Grafana container section:

env:
  - name: GF_SERVER_ROOT_URL
    value: "%(protocol)s://%(domain)s:%(http_port)s/monitoring"

You can do this with any of the Grafana config values, which allows you to reuse the official Grafana docker image straight from the main registry.


Read more
Victor Palau

I recently looked to do my first live deployment of kubernetes, after having played successfully with minikube.

When trying to deploy kubernetes in a public cloud, there are a couple of base options. You could start from scratch or use one of the turnkey solutions.

You have two turnkey solutions for Azure, Flannel or Weave based. Basically these are two different networking solutions, but the actual turnkey solutions differ by more than just the networking layer. I tried both and had issues with both, yay!! However, I liked the Flannel solution over Weave’s straight away. Flannel’s seems able to configure and use Azure better. For example, it uses VM scale sets for the slave nodes, and configures external IPs and security groups. This might be because the Flannel solution is sponsored by Microsoft, so I ended up focusing on it over Weave’s.

The documentation is not bad, but a bit short on some basic details. I did the deployment on both Ubuntu 14.04 and OS X 10 and it worked on both. The documentation details jq and docker as the main dependencies. I found issues with the older versions of jq that are part of the Ubuntu 14.04 archive, so make sure to install the latest version from the jq website.

Ultimately, kube-up.sh seems to be a basic configuration wrapper around azkube; a link to it is buried at the end of the kubernetes doc. Cole Mickens is the main developer of azkube and the turnkey solution. While looking around his github profile, I found this very useful link on the status of support for Kubernetes in Azure. I would hope this eventually lands on the main kubernetes doc site.

As part of the first install instructions, you will need to provide the subscription and tenant id. I found the subscription id easily enough from the web console, but the tenant id was a bit more elusive. Although the tenant id is not required for installations of 1.3, the script failed to execute without it. It seems like the best way to find it is the Azure CLI tool, which you can get via node.js:


npm install -g azure-cli
azure login
azure account show

This will give you all the details that you need to set it up. You can then just go ahead, or you can edit details in cluster/azure/config-default.sh.

You might want to edit the number of VMs that the operation will create. Once you run kube-up.sh, you should hopefully get a working kubernetes deployment.

If for any reason you would like to change the version to be installed, you will need to edit the file called “version” under the kubernetes folder set up by the first installation step.

The deployment comes with a ‘utils’ script that makes it very simple to do a few things. One is to copy to the master the ssh key that will give you access to the slaves.

$ ./util.sh copykey

From the master, you just need to access the internal ip using the “kube” username and specify your private key for authentication.

Next, I would suggest configuring your local kubectl and deploying the SkyDNS addon. You will really need this to easily access services.

$ ./util.sh configure-kubectl
$ kubectl create -f https://raw.githubusercontent.com/colemickens/azkube/v0.0.5/templates/coreos/addons/skydns.yaml

And that is it: if you run kubectl get nodes, you will be able to see the master and the slaves.

Since Azure does not have direct integration for load balancers, any services that you expose will need to be configured with a self-deployed solution. But it seems that version 1.4 of Kubernetes is coming with equivalent support for Azure to what the current versions boast for AWS and co.


Read more
Victor Palau

First of all, I wanted to recommend the following recipe from Digital Ocean on how to roll out your own Docker Registry on Ubuntu 14.04. As with most of their stuff, it is super easy to follow.

I also wanted to share a small improvement on the recipe to include a UI front-end to the registry.

Once you have completed the recipe and have a repository secured and running, you can extend your docker-compose file to look like this:

nginx:
  image: "nginx:1.9"
  ports:
    - 443:443
    - 8080:8080
  links:
    - registry:registry
    - web:web
  volumes:
    - ./nginx/:/etc/nginx/conf.d:ro

web:
  image: hyper/docker-registry-web
  ports:
    - 8000:8080
  links:
    - registry
  environment:
    REGISTRY_HOST: registry

registry:
  image: registry:2
  ports:
    - 127.0.0.1:5000:5000
  environment:
    REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
  volumes:
    - ./data:/data

You will also need to include a configuration file for web in the nginx folder.

file: ~/docker-registry/nginx/web.conf

upstream docker-registry-web {
  server web:8080;
}

server {
  listen 8080;
  server_name [YOUR DOMAIN];

  # SSL
  ssl on;
  ssl_certificate /etc/nginx/conf.d/domain.crt;
  ssl_certificate_key /etc/nginx/conf.d/domain.key;

  location / {
    # To add basic authentication to v2 use auth_basic setting plus add_header
    auth_basic "registry.localhost";
    auth_basic_user_file /etc/nginx/conf.d/registry.password;

    proxy_pass http://docker-registry-web;
    proxy_set_header Host $http_host;         # required for docker client's sake
    proxy_set_header X-Real-IP $remote_addr;  # pass on real client's IP
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_read_timeout 900;
  }
}

Run docker-compose up and you should have an SSL-secured UI front-end on port 8080 (https://yourdomain:8080/).
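
As a hedged sanity check of the result (assuming the registry itself is still published on port 443 as in the Digital Ocean recipe; yourdomain and the image name are placeholders):

docker login https://yourdomain
docker tag ubuntu:14.04 yourdomain/my-ubuntu
docker push yourdomain/my-ubuntu
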
If you have any improvement tips I am all ears!


Read more
Craig Bender

Hello world!

Welcome to Canonical Voices. This is your first post. Edit or delete it, then start blogging!

Read more
Robie Basak

Meeting information

Agenda

Minutes

Review ACTION points from previous meeting

The discussion about “Review ACTION points from previous meeting” started at 16:00.

  • No actions from previous meeting.

Development Release

The discussion about “Development Release” started at 16:00.

Assigned merges/bugwork (rbasak)

The discussion about “Assigned merges/bugwork (rbasak)” started at 16:02.

  • rbasak has triaged some bugs, tagging some “bitesize”, and advised prospective Ubuntu developers to take a look at these.

Server & Cloud Bugs (caribou)

The discussion about “Server & Cloud Bugs (caribou)” started at 16:05.

  • Squid, Samba and MySQL bugs noted; all are in progress. Nothing else in particular to bring up.

Weekly Updates & Questions for the QA Team

The discussion about “Weekly Updates & Questions for the QA Team” started at 16:06.

  • No questions to or from the QA team
  • jgrimm noted that the Canonical Server Team currently have an open QA position and invited applications and recommendations.

Weekly Updates & Questions for the Kernel Team (smb, sforshee, arges)

The discussion about “Weekly Updates & Questions for the Kernel Team (smb, sforshee, arges)” started at 16:07.

  • No questions to/from the kernel team

Upcoming Call For Papers

The discussion about “Upcoming Call For Papers” started at 16:10.

  • No upcoming CfPs of note

Ubuntu Server Team Events

The discussion about “Ubuntu Server Team Events” started at 16:11.

  • No upcoming events of note

Open Discussion

The discussion about “Open Discussion” started at 16:11.

Announce next meeting date, time and chair

The discussion about “Announce next meeting date, time and chair” started at 16:18.

  • The next meeting will be at Tue 17 May 16:00:00 UTC 2016. jamespage will chair.

Generated by MeetBot 0.1.5 (http://wiki.ubuntu.com/meetingology)

Read more
David Henningsson

So: assume that you have some new hardware that works for the most part, but you have some problems with your built-in sound card. The problem has been fixed upstream, but if you start using that particular upstream kernel only, you will lose Ubuntu kernel security updates. In some cases, bug fixes will come to Ubuntu kernels too – after some time – but in other cases these fixes won’t, for a variety of reasons.

You want to run a standard Ubuntu kernel, except for your sound driver (or some other driver), which you want to be something different. This actually happens quite often when our team enables hardware that isn’t yet on the market and therefore lacks full support in already released kernels.

DKMS

To the rescue comes DKMS (short for Dynamic Kernel Module Support), which installs the source of the actual driver on the machine and, whenever the Ubuntu kernel is upgraded, automatically recompiles the driver to fit the new kernel. The compiled modules are installed into the right directory for them to be used at next boot. We’ve used this tool for several years, and found it to be incredibly useful.
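
To give a feel for the mechanics, here is the typical DKMS lifecycle for a hypothetical driver whose source sits in /usr/src/mydriver-1.0 (name and version are placeholders):

sudo dkms add -m mydriver -v 1.0
sudo dkms build -m mydriver -v 1.0
sudo dkms install -m mydriver -v 1.0
dkms status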

Launchpad automation

Launchpad has a feature called recipes, which combines one or more bzr branches and automatically makes a source package whenever one of the source branches changes. The source package is then uploaded to a PPA, which builds a binary package from the source package.

What, then, is the result of all this well-oiled machinery? That every day, you have the latest sound driver ready to install and use to see if it fixes your sound issues – and because it’s packaged as a normal Debian package, uninstallation is easy in case it does not work. We have had this up and running for the Intel HDA driver for several years now, and it’s been useful for both Canonical and the Ubuntu community.

Details

That’s the bird’s-eye overview. In practice, things are a bit more complicated. Get ready for the mandatory boxes-and-arrows picture:

[Figure: hda-build-flow2]

Preparing for import

Our main source is the master branch of sound.git, maintained by Takashi Iwai. However, Launchpad does not yet support git in recipe builds; therefore, a machine somewhere in the cloud runs a preparation script. This script checks the git branch for updates every hour and, if there is one, starts by filtering out the “sound” directory (this is a simple optimization, because kernel trees are huge). The result is added to a bzr branch.

Actually, this cloud machine does one more thing, but it’s more of a bonus: it runs some hda-emu based testing. Hda-emu is a tool for emulating an HD-audio codec, and takes alsa-info as input. We contributed a lot of alsa-infos from machines Canonical enables to the upstream hda-emu repository, along with some scripts to run emulation tests on all of them. So, in case something breaks, we get an early warning before the code reaches more people. The most common case for the test to break, however, is not an actual bug, but that the hda-emu tool needs updating to handle changes in the kernel driver. Therefore, the script does not stop when this happens; it just puts a warning message in the bzr commit log.

The cloud machine runs a bzr server, which Launchpad then checks a few times per day for updates, and imports changes into a Launchpad hosted mirror branch.

Making a DKMS package

As our Launchpad recipe detects that the bzr branch has changed, it re-runs the recipe. The recipe is quite simple – it only copies files from different branches into some directory, creates a source package out of the result, and uploads that package to a PPA. That’s where we combine the upstream source with our DKMS configuration. There is some scripting involved to e.g. figure out the names of the built kernel modules – if you’re making your own DKMS package, it will probably be easier to write that file by hand.
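
The file in question is dkms.conf. A minimal hand-written sketch for a hypothetical out-of-tree module might look like this (name, version and module are placeholders):

PACKAGE_NAME="mydriver"
PACKAGE_VERSION="1.0"
BUILT_MODULE_NAME[0]="mydriver"
DEST_MODULE_LOCATION[0]="/updates/dkms"
AUTOINSTALL="yes"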

Unfortunately, compiling a new driver on an older kernel can be challenging when the driver starts relying on features only present in the new kernel. Therefore, we regularly need to manually add patches to the new driver to make it compile on the older kernel.

Launchpad build

This part is just like any other build on a Launchpad PPA – it takes a source package and builds a binary package. This is where the backport patches actually get applied to the source. Please note that even though this is a binary package, what’s inside the package is not compiled binaries but the source code for the driver. This is because the final compilation occurs on the end user’s machine.

(A funny quirk: when DKMS is invoked, it creates a .deb file by itself, but for some reason Launchpad wouldn’t accept this .deb file. I never really figured out why, but instead worked around it by manually unpacking DKMS’s .deb, then repacking it again using low-level dpkg-gencontrol and dpkg-deb tools.)

The binary package is then published in the PPA, downloaded/copied by the end user to his/her machine, and installed just like any other Debian package.

On the end user machine

The final step, where the driver source is combined with a standard Ubuntu kernel, is done on the end user’s machine. DKMS itself installs triggers on the end user machine that are called every time a kernel is installed, upgraded or removed.

On installation of a new kernel, DKMS will verify that the relevant kernel header package is also installed, then use these headers to recompile all installed DKMS binary packages against the new kernel. The resulting files are copied into /lib/modules/<kernel>/updates/dkms. On installation of a new DKMS binary package, the default is to recompile the new package against the latest kernel and the currently running kernel.

DKMS also runs depmod to ensure the kernel will pick up the newly compiled modules.

Final remarks

There are some caveats which might be worth mentioning.

First, if you combine the regular Ubuntu kernel (with security updates) with a DKMS driver, you will get security updates for the entire kernel except that specific driver, so in theory, you could be left with a security issue if the vulnerability is in the specific driver you use DKMS for. However, in practice the vast majority of security bugs are in userspace-facing code rather than deep down in hardware-specific drivers.

Second, on every Ubuntu kernel release there is a potential risk of breakage: e.g. if the DKMS driver calls a function in the kernel and that function changes its signature, then the DKMS driver will fail to compile and install on the new kernel. Or, even worse, the function changes behavior without changing its signature, so that the DKMS driver will compile just fine but break in some way when the driver runs. All I can say about that is that, to my knowledge, if this can happen then it happens very rarely – I’ve never seen it cause any problems in practice.

Read more
Lorn Potter

Hello world!

Welcome to Canonical Voices. This is your first post. Edit or delete it, then start blogging!

Read more
David Henningsson

2.1 surround sound is (by a very unscientific measure) the third most popular surround speaker setup, after 5.1 and 7.1. Yet ALSA and PulseAudio have long supported more unusual setups such as 4.0 and 4.1, but not 2.1. It took until 2015 to get all the pieces in the stack ready for 2.1 as well.

The problem

So what made adding 2.1 surround more difficult than other setups? Well, first and foremost, because ALSA used to have a fixed mapping of channels. The first six channels were decided to be:

1. Front Left
2. Front Right
3. Rear Left
4. Rear Right
5. Front Center
6. LFE / Subwoofer

Thus, a four-channel stream would default to the first four, which would then be a 4.0 stream, and a three-channel stream would default to the first three. The only way to send a 2.1 channel stream would then be to send a six-channel stream with three channels being silence.

This was not good enough, because some cards, including laptops with internal subwoofers, would only support streaming four channels maximum.

(To add further confusion, it seemed some cards wanted the subwoofer signal on the third channel of four, and others wanted the same signal on the fourth channel of four instead.)

ALSA channel map API

The first part of the solution was a new alsa-lib API for channel mapping, allowing drivers to advertise what channel maps they support, and alsa-lib to expose this information to programs (see snd_pcm_query_chmaps, snd_pcm_get_chmap and snd_pcm_set_chmap).
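
As a rough illustration, here is a hedged sketch of listing a device’s channel maps through this API (error handling mostly elided):

#include <alsa/asoundlib.h>
#include <stdio.h>

/* Print every channel map the "default" PCM advertises. */
int main(void)
{
    snd_pcm_t *pcm;
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    snd_pcm_chmap_query_t **maps = snd_pcm_query_chmaps(pcm);
    if (maps) {
        for (snd_pcm_chmap_query_t **q = maps; *q; ++q) {
            char name[128];
            if (snd_pcm_chmap_print(&(*q)->map, sizeof name, name) > 0)
                printf("supported: %s\n", name);
        }
        snd_pcm_free_chmaps(maps);
    }
    snd_pcm_close(pcm);
    return 0;
}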

The second step was for the alsa-lib route plugin to make use of this information. With that, alsa-lib could itself determine whether the hardware was 5.1 or 2.1, and change the number of channels automatically.

PulseAudio bass / treble filter

With the alsa-lib additions, just adding another channel map was easy. However, there was another problem to deal with. When listening to stereo material, we would like the low frequencies, and only those, to be played back from the subwoofer. These frequencies should also be removed from the other channels. In some cases, the hardware would have a built-in filter to do this for us, so then it was just a matter of setting enable-lfe-remixing in daemon.conf. In other cases, this needed to be done in software.

Therefore, we’ve integrated a crossover filter into PulseAudio. You can configure it by setting lfe-crossover-freq in daemon.conf.
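
For example, in /etc/pulse/daemon.conf (the frequency is an illustrative value):

; let PulseAudio handle the LFE crossover in software
enable-lfe-remixing = yes
lfe-crossover-freq = 120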

The hardware

If you have a laptop with an internal subwoofer, chances are that it – with all these changes to the stack – still does not work. This is because the HDA standard (which is what your laptop very likely uses for analog audio) does not have much of a channel mapping standard either! So vendors might decide to do things differently, which means that every single hardware model might need a patch in the kernel.

If you don’t have an internal subwoofer but a separate external one, you might be able to use hdajackretask to reconfigure your headphone jack to an “Internal Speaker (LFE)” instead. But the downside of that is that you then can’t use the jack as a headphone jack…

Do I have it?

In Ubuntu, it’s been working since the 15.04 release (vivid). If you’re not running Ubuntu, you need alsa-lib 1.0.28, PulseAudio 7, and a kernel from, say, mid 2014 or later.

Acknowledgements

Takashi Iwai wrote the channel mapping API, and also provided help and fixes for the alsa-lib route plugin work.

The crossover filter code was imported from CRAS (but after refactoring and cleanup, there was not much left of that code).

Hui Wang helped me write and test the PulseAudio implementation.

PulseAudio upstream developers, especially Alexander Patrakov, did a thorough review of the PulseAudio patch set.

Read more
Robie Basak

  • rbasak listed areas that he thinks need looking at before Xenial feature freeze on 18 Feb. hallyn pointed out that this should be in a blueprint, so rbasak agreed to take an action to create one. Some work item assignments were made for the blueprint.
  • No other discussion was required for the other standing agenda items.
  • Meeting actions assigned:
    • rbasak look at
      https://code.launchpad.net/~psivaa/ubuntu-test-cases/lvm-grub-preseed-fix/+merge/258620 and https://code.launchpad.net/~om26er/ubuntu-test-cases/fix_minimal_image_size_test/+merge/235298
    • rbasak to create blueprint for Xenial feature work
    • rbasak to find kickinz1 a merge to do
  • The next meeting will be on Tue Nov 24 16:00:00 UTC 2015 in #ubuntu-meeting.

Full agenda and log

Read more
Mark W Wenning

Hello world!

Welcome to Canonical Voices. This is your first post. Edit or delete it, then start blogging!

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20151027 Meeting Agenda



Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kt-meeting.txt



Status: Xenial Development Kernel

Our Xenial kernel is open for development. The repos have been opened in LP:
git://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux/+git/xenial
git://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux-meta/+git/xenial
git://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux-signed/+git/xenial
Our Xenial master branch is still tracking Wily’s v4.2 based kernel.
However, Xenial master-next is currently rebased to v4.3-rc7.
—–
Important upcoming dates:

  • https://wiki.ubuntu.com/XenialXerus/ReleaseSchedule
    Thurs Dec 31 – Alpha 1 (~ weeks away)



Status: CVE’s

The current CVE status can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kernel-cves.html



Status: Stable, Security, and Bugfix Kernel Updates – Precise/Trusty/lts-utopic/Vivid/Wily

Status for the main kernels, until today:

  • Precise – Testing and Verification
  • Trusty – Testing and Verification
  • lts-Utopic – Testing and Verification
  • Vivid – Testing and Verification
  • Wily – Testing and Verification

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html
    For SRUs, SRU report is a good source of information:
  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    Current cycle: 18-Oct through 07-Nov
    ====================================================================
    16-Oct Last day for kernel commits for this cycle
    18-Oct – 24-Oct Kernel prep week.
    25-Oct – 31-Oct Bug verification & Regression testing.
    01-Nov – 07-Nov Regression testing & Release to -updates.

    Next cycle: 08-Nov through 28-Nov
    ====================================================================
    06-Nov Last day for kernel commits for this cycle
    08-Nov – 14-Nov Kernel prep week.
    15-Nov – 21-Nov Bug verification & Regression testing.
    22-Nov – 28-Nov Regression testing & Release to -updates.



Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more
Ryan Finnie

2ping 3.0.0 has been released. It is a total rewrite, with the following features:

  • Total rewrite from Perl to Python.
  • Multiple hostnames/addresses may be specified in client mode, and will be pinged in parallel.
  • Improved IPv6 support:
    • In most cases, specifying -4 or -6 is unnecessary. You should be able to specify IPv4 and/or IPv6 addresses and it will "just work".
    • IPv6 addresses may be specified without needing to add -6.
    • If a hostname is given in client mode and the hostname provides both AAAA and A records, the AAAA record will be chosen. This can be forced to one or another with -4 or -6.
    • If a hostname is given in listener mode with -I, it will be resolved to addresses to bind as. If the hostname provides both AAAA and A records, they will both be bound. Again, -4 or -6 can be used to restrict the bind.
    • IPv6 scope IDs (e.g. fe80::213:3bff:fe0e:8c08%eth0) may be used as bind addresses or destinations.
  • Better Windows compatibility.
  • ping(8)-compatible superuser restrictions (e.g. flood ping) have been removed, as 2ping is a scripted program using unprivileged sockets, and restrictions would be trivial to bypass. Also, the concept of a "superuser" is rather muddied these days.
  • Better timing support, preferring high-resolution monotonic clocks whenever possible instead of gettimeofday(). On Windows and OS X, monotonic clocks should always be available. On other Unix platforms, monotonic clocks should be available when using Python 2.7.
  • Long option names for ping(8)-compatible options (e.g. adaptive mode can be called as --adaptive in addition to -A). See 2ping --help for a full option list.

Because of the IPv6 improvements, there is a small breaking functionality change. Previously, to listen on both IPv4 and IPv6 addresses, you needed to specify -6, e.g. 2ping --listen -6 -I 127.0.0.1 -I ::1. Now that -6 restricts binds to IPv6 addresses, that invocation will just listen on ::1. Simply remove -6 to listen on both IPv4 and IPv6 addresses.

This is a total rewrite in Python, and the original Perl code was not used as a basis; instead, the new version was written from the 2ping protocol specification. (The original Perl version was a bit of a mess, and I didn't want to pick up any of its habits.) As a result of rewriting from the specification, I discovered the Perl version's implementation of the checksum algorithm was not even close to the specification (and when it comes to checksums, "almost" is the same as "not even close"). As the Perl version is the only known 2ping implementation in the wild which computes/verifies checksums, I made a decision to amend the specification with the "incorrect" algorithm described in pseudocode. The Python version's checksum algorithm matches this in order to maintain backwards compatibility.

This release also marks the five year anniversary of 2ping 1.0, which was released on October 20, 2010.

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20151020 Meeting Agenda



Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kt-meeting.txt



Status: Wily Development Kernel

We release Wily 15.10 in 2 days, this Thurs Oct 22. Any kernel patches submitted for Wily will now be queued for SRU and must adhere to SRU policy.
—–
Important upcoming dates:

  • https://wiki.ubuntu.com/WilyWerewolf/ReleaseSchedule
    Thurs Oct 22 – 15.10 Release (~2 days away)



Status: CVE’s

The current CVE status can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kernel-cves.html



Status: Stable, Security, and Bugfix Kernel Updates – Precise/Trusty/lts-utopic/Vivid

Status for the main kernels, until today:

  • Precise – Kernel Prep
  • Trusty – Kernel Prep
  • lts-Utopic – Kernel Prep
  • Vivid – Kernel Prep

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html
    For SRUs, SRU report is a good source of information:
  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 18-Oct through 07-Nov
    ====================================================================
    16-Oct Last day for kernel commits for this cycle
    18-Oct – 24-Oct Kernel prep week.
    25-Oct – 31-Oct Bug verification & Regression testing.
    01-Nov – 07-Nov Regression testing & Release to -updates.

    Note: Oct. 22 is release day



Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more