Canonical Voices

Ryan Beisner

Agenda

  • Review ACTION points from previous meeting
  • rbasak to review mysql-5.6 transition plans with ABI breaks with infinity
  • blueprint updating
  • U Development
  • Server & Cloud Bugs (caribou)
  • Weekly Updates & Questions for the QA Team (psivaa)
  • Weekly Updates & Questions for the Kernel Team (smb, sforshee)
  • Ubuntu Server Team Events
  • Open Discussion
  • Announce next meeting date, time and chair
  • ACTION: meeting chair (of this meeting, not the next one) to carry out post-meeting procedure (minutes, etc) documented at https://wiki.ubuntu.com/ServerTeam/KnowledgeBase

Minutes

  • REVIEW ACTION POINTS FROM PREVIOUS MEETING
    • Regarding the mysql-5.6 transition / ABI infinity action, rbasak noted that we decided to defer the 5.6 move for this cycle, as we felt it was too late given the ABI concerns.
  • UTOPIC DEVELOPMENT
    • LINK: https://wiki.ubuntu.com/UtopicUnicorn/ReleaseSchedule
    • LINK: http://reqorts.qa.ubuntu.com/reports/rls-mgr/rls-u-tracking-bug-tasks.html#ubuntu-server
    • LINK: http://status.ubuntu.com/ubuntu-u/group/topic-u-server.html
    • LINK: https://blueprints.launchpad.net/ubuntu/+spec/topic-u-server
  • SERVER & CLOUD BUGS (CARIBOU)
    • Nothing to report.
  • WEEKLY UPDATES & QUESTIONS FOR THE QA TEAM (PSIVAA)
    • Nothing to report.
  • WEEKLY UPDATES & QUESTIONS FOR THE KERNEL TEAM (SMB, SFORSHEE)
    • smb reports that he is digging into a potential race between libvirt and xen init.
  • UBUNTU SERVER TEAM EVENTS
    • None to report.
  • OPEN DISCUSSION
    • Pretty quiet. Not even any bad jokes. Back to crunch time!
  • ANNOUNCE NEXT MEETING DATE AND TIME
    • Next meeting will be Tue Oct 7 16:00:00 UTC 2014; the chair will be lutostag.
  • MEETING ACTIONS
    • ACTION: all to review blueprint work items before next week's meeting.

People present (lines said)

  • beisner (54)
  • smb (8)
  • meetingology (4)
  • smoser (3)
  • rbasak (3)
  • kickinz1 (3)
  • caribou (2)
  • gnuoy (1)
  • matsubara (1)
  • jamespage (1)
  • arges (1)
  • hallyn (1)

IRC Log

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140930 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel remains rebased on the v3.16.3 upstream stable
kernel. The latest uploaded to the archive is 3.16.0-19.26. Please
test and let us know your results.
Also, Utopic Kernel Freeze is next week on Thurs Oct 9. Any patches
submitted after kernel freeze are subject to our Ubuntu kernel SRU
policy.
-----
Important upcoming dates:
Thurs Oct 9 – Utopic Kernel Freeze (~1 week away)
Thurs Oct 16 – Utopic Final Freeze (~2 weeks away)
Thurs Oct 23 – Utopic 14.10 Release (~3 weeks away)


Status: CVEs

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Precise/Lucid

Status for the main kernels, until today (Sept. 30):

  • Lucid – Verification and Testing
  • Precise – Verification and Testing
  • Trusty – Verification and Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 19-Sep through 11-Oct
    ====================================================================
    19-Sep Last day for kernel commits for this cycle
    21-Sep – 27-Sep Kernel prep week.
    28-Sep – 04-Oct Bug verification & Regression testing.
    05-Oct – 11-Oct Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more
Louis

I have seen this setup documented in a few places, but not for Ubuntu, so here it goes.

I have used this setup many times to verify or diagnose Device Mapper Multipath (DM-MPIO), since it is rather easy to fail a path by switching off one of the network interfaces. Nowadays, I use two KVM virtual machines with two NICs each.

Those steps have been tested on Ubuntu 12.04 (Precise) and Ubuntu 14.04 (Trusty). The DM-MPIO section is mostly a cut-and-paste of the Ubuntu Server Guide.

The virtual machine that will act as the iSCSI target provider is called PreciseS-iscsitarget. The VM that will connect to the target is called PreciseS-iscsi. Each one is configured with two network interfaces (NICs) that get their IP addresses from DHCP. Here is an example of the network configuration file:

$ cat /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp
#
auto eth1
iface eth1 inet dhcp

The second NIC resolves to the same hostname with a "2" appended (i.e. PreciseS-iscsitarget2 and PreciseS-iscsi2).
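If your DHCP server does not register those names in DNS, hypothetical /etc/hosts entries would look like this (the addresses are examples only):

192.168.1.193   PreciseS-iscsitarget
192.168.1.43    PreciseS-iscsitarget2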

Setting up the iSCSI Target VM

This is done by installing the following packages:

$ sudo apt-get install iscsitarget iscsitarget-dkms

Edit /etc/default/iscsitarget and change the following line to enable the service:

ISCSITARGET_ENABLE=true

We now proceed to create an iSCSI target (aka disk). This is done by creating a 50 GB sparse file that will act as our disk:

$ sudo dd if=/dev/zero of=/home/ubuntu/iscsi_disk.img count=0 obs=1 seek=50G

This container is used in the definition of the iSCSI target. Edit the file /etc/iet/ietd.conf. At the bottom, add:

Target iqn.2014-09.PreciseS-iscsitarget:storage.sys0
        Lun 0 Path=/home/ubuntu/iscsi_disk.img,Type=fileio,ScsiId=lun0,ScsiSN=lun0

The iSCSI target service must be restarted for the new target to be accessible:

$ sudo service iscsitarget restart


Setting up the iSCSI initiator

To be able to access the iSCSI target, only one package is required:

$ sudo apt-get install open-iscsi

Edit /etc/iscsi/iscsid.conf changing the following:

node.startup = automatic

This will ensure that the iSCSI targets that we discover are enabled automatically upon reboot.

Now we will proceed to discover and connect to the device that we set up in the previous section:

$ sudo iscsiadm -m discovery -t st -p PreciseS-iscsitarget
$ sudo iscsiadm -m node --login
$ dmesg | tail
[   68.461405] iscsid (1458): /proc/1458/oom_adj is deprecated, please use /proc/1458/oom_score_adj instead.
[  189.989399] scsi2 : iSCSI Initiator over TCP/IP
[  190.245529] scsi 2:0:0:0: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
[  190.245785] sd 2:0:0:0: Attached scsi generic sg0 type 0
[  190.249413] sd 2:0:0:0: [sda] 104857600 512-byte logical blocks: (53.6 GB/50.0 GiB)
[  190.250487] sd 2:0:0:0: [sda] Write Protect is off
[  190.250495] sd 2:0:0:0: [sda] Mode Sense: 77 00 00 08
[  190.251998] sd 2:0:0:0: [sda] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[  190.257341]  sda: unknown partition table
[  190.258535] sd 2:0:0:0: [sda] Attached SCSI disk

We can see in the dmesg output that the new device /dev/sda has been discovered. Format the new disk & create a file system. Then verify that everything is correct by mounting and unmounting the new file system.

$ fdisk /dev/sda
n
p
1
<ret>
<ret>
w
$  mkfs -t ext4 /dev/sda1
$ mount /dev/sda1 /mnt
$ umount /mnt

 

Setting up DM-MPIO

Since each of our virtual machines has been configured with two network interfaces, it is possible to reach the iSCSI target through the second interface:

$ iscsiadm -m discovery -t st -p PreciseS-iscsitarget
192.168.1.193:3260,1 iqn.2014-09.PreciseS-iscsitarget:storage.sys0
192.168.1.43:3260,1 iqn.2014-09.PreciseS-iscsitarget:storage.sys0
$ iscsiadm -m node -T iqn.2014-09.PreciseS-iscsitarget:storage.sys0 --login

Now that we have two paths toward our iSCSI target, we can proceed to setup DM-MPIO.

First of all, a /etc/multipath.conf file must exist. Then we install the needed package:

$ sudo -s
# cat << EOF > /etc/multipath.conf
defaults {
        user_friendly_names yes
}
EOF
# exit
$ sudo apt-get -y install multipath-tools

Two paths to the iSCSI device created previously need to exist for the multipath device to be seen.

# multipath -ll
mpath0 (149455400000000006c756e30000000000000000000000000) dm-2 IET,VIRTUAL-DISK
size=50G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 4:0:0:0 sda 8:0   active ready  running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:0 sdb 8:16  active ready  running

The two paths are indeed visible. We can move forward and verify that the partition table created previously is accessible:

$ sudo fdisk -l /dev/mapper/mpath0

Disk /dev/mapper/mpath0: 53.7 GB, 53687091200 bytes
64 heads, 32 sectors/track, 51200 cylinders, total 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0e5e5db1

              Device Boot      Start         End      Blocks   Id  System
/dev/mapper/mpath0p1            2048   104857599    52427776   83  Linux

All that remains is to add an entry to the /etc/fstab file so the file system that we created is mounted automatically at boot. Notice the _netdev option: it is required, otherwise the iSCSI device will not be mounted.

$ sudo -s
# cat << EOF >> /etc/fstab
/dev/mapper/mpath0-part1        /mnt    ext4    defaults,_netdev        0 0
EOF
# exit
$ sudo mount -a
$ df /mnt
Filesystem               1K-blocks   Used Available Use% Mounted on
/dev/mapper/mpath0-part1  51605116 184136  48799592   1% /mnt

Read more
Dustin Kirkland

A StackExchange question back in February of this year inspired a new feature in Byobu that I had been thinking about for quite some time:

Wouldn't it be nice to have a hot key in Byobu that would send a command to multiple splits (or windows)?
This feature was added and is available in Byobu 5.73 and newer (in Ubuntu 14.04 and newer, and available in the Byobu PPA for older Ubuntu releases).

I actually use this feature all the time, to update packages across multiple computers.  Of course, Landscape is a fantastic way to do this as well.  But if you don't have access to Landscape, you can always do this very simply with Byobu!

Create some splits, using Ctrl-F2 and Shift-F2, and in each split, ssh into a target Ubuntu (or Debian) machine.

Now, use Shift-F9 to open up the purple prompt at the bottom of your screen.  Here, you enter the command you want to run on each split.  First, you might want to run:

sudo true

This will prompt you for your password, if you don't already have root or sudo access.  You might need to use Shift-Up, Shift-Down, Shift-Left, Shift-Right to move around your splits, and enter passwords.

Now, update your package lists:

sudo apt-get update

And now, apply your updates:

sudo apt-get dist-upgrade

Here's a video to demonstrate!


In a related note, another user-requested feature has been added, to simultaneously synchronize this behavior among all splits.  You'll need the latest version of Byobu, 5.87, which will be in Ubuntu 14.10 (Utopic).  Here, you'll press Alt-F9 and just start typing!  Another demonstration video here...




Cheers,
Dustin

Read more
Ben Howard

Cloud Images and Bash Vulnerabilities

The Ubuntu Cloud Image team has been monitoring the bash vulnerabilities. Due to the scope, impact and high-profile nature of these vulnerabilities, we have published new images. New cloud images to address the latest bash USN-2364-1 [1, 8, 9] are being released with a build serial of 20140927. These images include code to address all prior CVEs, including CVE-2014-6271 [6] and CVE-2014-7169 [7], and supersede images published in the past week which addressed those CVEs.

Please note: Securing Ubuntu Cloud Images requires users to regularly apply updates[5]; using the latest Cloud Images is insufficient.

Addressing the full scope of the Bash vulnerability has been an iterative process. The security team has worked with the upstream bash community to address multiple aspects of the bash issue. As these fixes have become available, the Cloud Image team has published new daily images[2]. New released images[3] have been made available at the request of the Ubuntu Security team.

Canonical has been in contact with our public Cloud Partners to make these new builds available as soon as possible.

Cloud image update timeline

Daily image builds are automatically triggered when new package versions become available in the public archives. New releases for Cloud Images are triggered automatically when a new kernel becomes available. The Cloud Image team will manually trigger new released images when either requested by the Ubuntu Security team or when a significant defect requires it.

Please note: Securing Ubuntu cloud images requires that security updates be applied regularly [5]; using the latest available cloud image is not sufficient in itself. Cloud Images are built only after updated packages are made available in the public archives. Since it takes time to build the images, test/QA and finally promote them, there is a delay (sometimes considerable) between public availability of a package and updated Cloud Images. Users should factor this timing into their update strategy.
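For example, to check which image an instance was built from and bring it up to date (a minimal sketch; /etc/cloud/build.info is present on Ubuntu cloud images):

$ cat /etc/cloud/build.info    # shows the image build serial, e.g. 20140927
$ sudo apt-get update
$ sudo apt-get dist-upgrade    # applies pending updates, including security fixes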

[1] http://www.ubuntu.com/usn/usn-2364-1/
[2] http://cloud-images.ubuntu.com/daily/server/
[3] http://cloud-images.ubuntu.com/releases/
[4] https://help.ubuntu.com/community/Repositories/Ubuntu/
[5] https://wiki.ubuntu.com/Security/Upgrades/
[6] http://people.canonical.com/~ubuntu-security/cve/2014/CVE-2014-6271.html
[7] http://people.canonical.com/~ubuntu-security/cve/2014/CVE-2014-7169.html
[8] http://people.canonical.com/~ubuntu-security/cve/2014/CVE-2014-7187.html
[9] http://people.canonical.com/~ubuntu-security/cve/2014/CVE-2014-7186.html

Read more
Stéphane Graber

I often have to deal with VPNs, either to connect to the company network, my own network when I’m abroad or to various other places where I’ve got servers I manage.

All of those VPNs use OpenVPN, all with a similar configuration and unfortunately quite a lot of them with overlapping networks. That means that when I connect to them, parts of my own network are no longer reachable or it means that I can’t connect to more than one of them at once.

Those, I suspect, are all pretty common issues for VPN users, especially those working with or for companies which over the years ended up using most of the RFC 1918 subnets.

So I thought: I’m working with containers every day, and nowadays we have those cool namespaces in the kernel which let you run crazy things as a regular user, including getting your own, empty network stack. So why not use that?

Well, that’s what I ended up doing and so far, that’s all done in less than 100 lines of good old POSIX shell script :)

That gives me fully unprivileged, non-overlapping VPNs! OpenVPN and everything else run as my own user, and nobody other than the user spawning the container can possibly get access to the resources behind the VPN.

The code is available at: git clone git://github.com/stgraber/vpn-container

Then it’s as simple as: ./start-vpn VPN-NAME CONFIG

What happens next is that the script calls socat to proxy the VPN TCP socket to a UNIX socket, then a user namespace, network namespace, mount namespace and UTS namespace are all created for the container. Your user is root in that namespace and so can start openvpn and create network interfaces and routes. With careful use of some bind-mounts, resolvconf and byobu are also made to work, so DNS resolution is functional and we can start byobu to easily allow as many shells as you want in there.
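For a rough idea of the namespace part alone, here is a minimal sketch using unshare from a recent util-linux (the actual script additionally handles the socat proxying, bind-mounts and byobu):

$ unshare --user --map-root-user --net --mount --uts /bin/sh
# hostname vpn-sandbox        # our own UTS namespace
# ip link set lo up           # our own, empty network stack
# ip addr show                # only lo is there; openvpn would add tun0 here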

In the end it looks like this:

stgraber@dakara:~/vpn$ ./start-vpn stgraber.net ../stgraber-vpn/stgraber.conf 
WARN: could not reopen tty: No such file or directory
lxc: call to cgmanager_move_pid_abs_sync(name=systemd) failed: invalid request
Fri Sep 26 17:48:07 2014 OpenVPN 2.3.2 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [EPOLL] [PKCS11] [eurephia] [MH] [IPv6] built on Feb  4 2014
Fri Sep 26 17:48:07 2014 WARNING: No server certificate verification method has been enabled.  See http://openvpn.net/howto.html#mitm for more info.
Fri Sep 26 17:48:07 2014 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
Fri Sep 26 17:48:07 2014 Attempting to establish TCP connection with [AF_INET]127.0.0.1:1194 [nonblock]
Fri Sep 26 17:48:07 2014 TCP connection established with [AF_INET]127.0.0.1:1194
Fri Sep 26 17:48:07 2014 TCPv4_CLIENT link local: [undef]
Fri Sep 26 17:48:07 2014 TCPv4_CLIENT link remote: [AF_INET]127.0.0.1:1194
Fri Sep 26 17:48:09 2014 [vorash.stgraber.org] Peer Connection Initiated with [AF_INET]127.0.0.1:1194
Fri Sep 26 17:48:12 2014 TUN/TAP device tun0 opened
Fri Sep 26 17:48:12 2014 Note: Cannot set tx queue length on tun0: Operation not permitted (errno=1)
Fri Sep 26 17:48:12 2014 do_ifconfig, tt->ipv6=1, tt->did_ifconfig_ipv6_setup=1
Fri Sep 26 17:48:12 2014 /sbin/ip link set dev tun0 up mtu 1500
Fri Sep 26 17:48:12 2014 /sbin/ip addr add dev tun0 172.16.35.50/24 broadcast 172.16.35.255
Fri Sep 26 17:48:12 2014 /sbin/ip -6 addr add 2001:470:b368:1035::50/64 dev tun0
Fri Sep 26 17:48:12 2014 /etc/openvpn/update-resolv-conf tun0 1500 1544 172.16.35.50 255.255.255.0 init
dhcp-option DNS 172.16.20.30
dhcp-option DNS 172.16.20.31
dhcp-option DNS 2001:470:b368:1020:216:3eff:fe24:5827
dhcp-option DNS nameserver
dhcp-option DOMAIN stgraber.net
Fri Sep 26 17:48:12 2014 add_route_ipv6(2607:f2c0:f00f:2700::/56 -> 2001:470:b368:1035::1 metric -1) dev tun0
Fri Sep 26 17:48:12 2014 add_route_ipv6(2001:470:714b::/48 -> 2001:470:b368:1035::1 metric -1) dev tun0
Fri Sep 26 17:48:12 2014 add_route_ipv6(2001:470:b368::/48 -> 2001:470:b368:1035::1 metric -1) dev tun0
Fri Sep 26 17:48:12 2014 add_route_ipv6(2001:470:b511::/48 -> 2001:470:b368:1035::1 metric -1) dev tun0
Fri Sep 26 17:48:12 2014 add_route_ipv6(2001:470:b512::/48 -> 2001:470:b368:1035::1 metric -1) dev tun0
Fri Sep 26 17:48:12 2014 Initialization Sequence Completed


To attach to this VPN, use: byobu -S /home/stgraber/vpn/stgraber.net.byobu
To kill this VPN, do: byobu -S /home/stgraber/vpn/stgraber.net.byobu kill-server
or from inside byobu: byobu kill-server

After that, just copy/paste the byobu command and you’ll get a shell inside the container. Don’t be alarmed by the fact that you’re root in there. root is mapped to your user’s uid and gid outside the container so it’s actually just your usual user but with a different name and with privileges against the resources owned by the container.

You can now use the VPN as you want without any possible overlap or conflict with any route or VPN you may be running on that system and with absolutely no possibility that a user sharing your machine may access your running VPN.

This has so far been tested with 5 different VPNs, on a regular Ubuntu 14.04 LTS system with all VPNs being TCP based. UDP based VPNs would probably just need a couple of tweaks to the socat unix-socket proxy.

Enjoy!

Read more
niemeyer

The qml package is right now one of the best choices for creating graphic applications under the Go language. Part of the reason why this is true comes from the convenience of QML, a high-level domain-specific language that allows describing visual components, events, animations, and content in general in a succinct and pleasing way. The integration of such a language with Go means having both a good mechanism for describing visual content, and a good platform for doing general development under, which can range from simple data manipulation to involved OpenGL content rendering.

On the practical side, one of the implications of using such a language partnership is that every Go qml application will have some sort of resource content to deal with, carrying the QML logic. Such content may be loaded either from files on disk, or from strings in memory. Loading from a file means the content may be organized in multiple files that directly reference each other without changing the Go application, and may be updated and tested without rebuilding. Loading from a string in memory means the content needs to be self-contained, but results in a standalone binary (linking aside – still depends on Qt libraries).

There’s a well known trick to have both benefits at once, though, and the basic solution has already been made available in multiple packages: have the content on disk, and use an external tool to pack it up into a Go file that is built into the binary when the content is updated. Unfortunately, this trick alone is not enough with the qml package, because the QML engine needs to know what resources are available and where so that the right thing happens when it sees a directory being imported or an image path being referenced.

To solve that problem, the qml package has been enhanced with functionality that leverages the existing Qt resource system to pack content into the binary itself. Rather than using the upstream C++ and XML-based resource compiler, though, a new resource packer was implemented inside the qml package and made available both under a friendly Go API, and as a tool that follows common Go idioms.

The help text for the genqrc tool describes it in detail:

Usage: genqrc [options] <subdir1> [<subdir2> ...]

The genqrc tool packs all resource files under the provided subdirectories into
a single qrc.go file that may be built into the generated binary. Bundled files
may then be loaded by Go or QML code under the URL "qrc:///some/path", where
"some/path" matches the original path for the resource file locally.

Starting with Go 1.4, this tool may be conveniently run by the "go generate"
subcommand by adding a line similar to the following one to any existent .go
file in the project (assuming the subdirectories ./code/ and ./images/ exist):

    //go:generate genqrc code images

Then, just run "go generate" to update the qrc.go file.

During development, the generated qrc.go can repack the filesystem content at
runtime to avoid the process of regenerating the qrc.go file and rebuilding the
application to test every minor change made. Runtime repacking is enabled by
setting the QRC_REPACK environment variable to 1:

    export QRC_REPACK=1

This does not update the static content in the qrc.go file, though, so after
the changes are performed, genqrc must be run again to update the content that
will ship with built binaries.

The tool may be installed via go get as usual:

go get gopkg.in/qml.v1/cmd/genqrc

and once the qrc.go file has been generated, the main qml file may be
loaded with logic equivalent to:

component, err := engine.LoadFile("qrc:///path/to/file.qml")

The loaded file can in turn reference any other content that was bundled
into the Go binary.
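Putting it together, here is a minimal sketch of a main package using the bundled content (the ./qml subdirectory and main.qml name are assumptions for the example):

package main

import (
        "fmt"
        "os"

        "gopkg.in/qml.v1"
)

//go:generate genqrc qml

func main() {
        if err := qml.Run(run); err != nil {
                fmt.Fprintln(os.Stderr, "error:", err)
                os.Exit(1)
        }
}

func run() error {
        engine := qml.NewEngine()
        // qrc.go, generated by genqrc, makes this path available to the engine.
        component, err := engine.LoadFile("qrc:///qml/main.qml")
        if err != nil {
                return err
        }
        window := component.CreateWindow(nil)
        window.Show()
        window.Wait()
        return nil
}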

For a better picture, this example demonstrates the use of the tool.

Read more
David Callé

The SDK and Unity teams are constantly looking for ways to improve the overall performance and battery life of Ubuntu. Your app should be written in the same spirit: lightweight and fast.

The Ubuntu SDK performance overlay.

This article will show you how to measure performance in your QML app and give you some tips on avoiding common pitfalls and resource hogs. Read…

Read more
Nicholas Skaggs

Final Beta testing for Utopic

Can you believe final beta is here for Utopic already? Where has the summer gone? The milestone and images are already prepared for final beta testing. This is the first round of image testing for Ubuntu this cycle. A final image will also be tested next month, but now is the time to try out the image on your system. Be sure to report any bugs you may find, to help ensure there is time to fix them before the release images are built.

To help make sure the final utopic image is in good shape, we need your help and test results! Please, head over to the milestone on the isotracker, select your favorite flavor and perform the needed tests against the images.

If you've never submitted test results for the ISO tracker, check out the handy links on top of the isotracker page detailing how to perform an image test, as well as a little about how the qatracker itself works. If you still aren't sure or get stuck, feel free to contact the QA community or me for help. Happy Testing!

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140923 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel has been rebased to the v3.16.3 upstream stable kernel
and uploaded to the archive, i.e. 3.16.0-17.23. Please test and let us
know your results.
Also, we’re approximately 2 weeks away from Utopic Kernel Freeze on
Thurs Oct 9. Any patches submitted after kernel freeze are subject to
our Ubuntu kernel SRU policy.
-----
Important upcoming dates:
Thurs Sep 25 – Utopic Final Beta (~2 days away)
Thurs Oct 9 – Utopic Kernel Freeze (~2 weeks away)
Thurs Oct 16 – Utopic Final Freeze (~3 weeks away)
Thurs Oct 23 – Utopic 14.10 Release (~4 weeks away)


Status: CVEs

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Precise/Lucid

Status for the main kernels, until today (Sept. 23):

  • Lucid – Kernel prep
  • Precise – Kernel prep
  • Trusty – Kernel prep

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 19-Sep through 11-Oct
    ====================================================================
    19-Sep Last day for kernel commits for this cycle
    21-Sep – 27-Sep Kernel prep week.
    28-Sep – 04-Oct Bug verification & Regression testing.
    05-Oct – 11-Oct Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

Read more
niemeyer

There were a few long-standing issues in the yaml.v1 package whose fixes were being postponed so that they could be done at once in a single incompatible change, and the time has come: yaml.v2 is now available.

Besides these incompatible changes, other compatible fixes and improvements were performed in that push, and those were also applied to the existing yaml.v1 package so that dependent applications benefit immediately and without modifications.

The subtopics below outline exactly what changed, and how to adapt existing code when necessary.

Type errors

With yaml.v1, decoding a YAML value that is not compatible with the Go value being unmarshaled into would silently drop the offending value. In many cases continuing with degraded behavior by ignoring the problem is intended, but this was the one and only option.

In yaml.v2, these problems will cause a *yaml.TypeError to be returned, containing helpful information about what happened. For example:

yaml: unmarshal errors:
  line 3: cannot unmarshal !!str `foo` into int

On such errors the decoding process still continues until the end of the YAML document, so ignoring the TypeError will produce logic equivalent to the old yaml.v1 behavior.
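For illustration, a minimal sketch of checking for such an error (the struct and document are made up for the example):

package main

import (
        "fmt"

        "gopkg.in/yaml.v2"
)

type Config struct {
        Port int    `yaml:"port"`
        Name string `yaml:"name"`
}

func main() {
        var c Config
        // "port" carries a string, so decoding it into an int fails...
        err := yaml.Unmarshal([]byte("port: foo\nname: demo\n"), &c)
        if terr, ok := err.(*yaml.TypeError); ok {
                fmt.Println("type errors:", terr.Errors)
        }
        // ...but decoding continued, so the valid fields were still set.
        fmt.Println("name:", c.Name)
}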

New Marshaler and Unmarshaler interfaces

The way that yaml.v1 allowed custom types to implement marshaling and unmarshaling of YAML content was slightly confusing and troublesome. For example, considering a CustomType with a keys field:

type CustomType struct {
        keys map[string]int
}

and supposing the goal is to unmarshal this YAML map into it:

custom:
    a: 1
    b: 2
    c: 3

With yaml.v1, one would need to implement logic similar to the following for that:

func (v *CustomType) SetYAML(tag string, value interface{}) bool {
        if tag == "!!map" {
                m := value.(map[interface{}]interface{})
                // ... iterate/validate/convert key/value pairs 
        }
        return goodValue
}

This is too much trouble when the package can easily do those conversions internally already. To fix that, in yaml.v2 the Getter and Setter interfaces are both gone and were replaced by the Marshaler and Unmarshaler interfaces.

Using the new mechanism, the example above would be implemented as follows:

func (v *CustomType) UnmarshalYAML(unmarshal func(interface{}) error) error {
        return unmarshal(&v.keys)
}

Custom-ordered maps

By default both yaml.v1 and yaml.v2 will marshal keys in a stable order which is increasing within the same type and arbitrarily defined across types. So marshaling is already performed in a sensible order, but it cannot be customized in yaml.v1, and there’s also no way to tell which order the map was originally in, as some applications require.

To fix that, there is a new pair of types that support preserving the order of map keys both when marshaling and unmarshaling: MapSlice and MapItem.

Such an ordered map literal would look like:

m := yaml.MapSlice{{"c", 3}, {"b", 2}, {"a", 1}}

The MapSlice type may be used for variables going in and out of the yaml package, or in struct fields, map values, or anywhere else sensible.
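As a minimal sketch (keys and values made up for the example), marshaling such a value preserves the given order:

package main

import (
        "fmt"

        "gopkg.in/yaml.v2"
)

func main() {
        m := yaml.MapSlice{{Key: "c", Value: 3}, {Key: "b", Value: 2}, {Key: "a", Value: 1}}
        out, err := yaml.Marshal(m)
        if err != nil {
                panic(err)
        }
        fmt.Print(string(out)) // prints c: 3, b: 2, a: 1, one per line
}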

Binary values

Strings in YAML must be valid UTF-8 or UTF-16 (with a byte order mark for the latter), and for binary data the specification defines a standard !!binary tag which represents the raw data encoded as base64. This is now supported both in yaml.v1 and yaml.v2, transparently. That is, any string value that is not valid UTF-8 will be base64-encoded and appropriately tagged so that it roundtrips as the same string. Short strings are inlined, while long ones are automatically broken into several lines and represented in a proper style. For example:

one: !!binary gICA
two: !!binary |
  gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgI
  CAgICAgICAgICAgICAgICAgICAgICAgICAgICA

Multi-line strings

Any string that contains new-line characters (‘\n’) will now be encoded using the literal style. For example, a value that would before be encoded as:

key: "line 1\nline 2\nline 3\n"

is now encoded by both yaml.v1 and yaml.v2 as:

key: |
  line 1
  line 2
  line 3

Other improvements

Besides these major changes, some assorted minor improvements were also performed:

  • Better handling of top-level “null”s (issue #35)
  • Marshal base 60 floats quoted for YAML 1.1 compatibility (issue #34)
  • Better error on invalid map keys (issue #25)
  • Allow non-ASCII characters in plain strings (issue #11)
  • Do not catch unrelated panics by mistake (commit a6dc653f)

For obtaining the yaml.v1 improvements:

go get -u gopkg.in/yaml.v1

For updating to yaml.v2, adapt the code as necessary given the points above, replace the import path, and run:

go get -u gopkg.in/yaml.v2

Read more
David Owen

Working with wrap-around

Sometimes we need to work with numbers that wrap around, such as angles or indexes into a circular array. There are some tricks for dealing with these cases.
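One of the usual tricks, as a minimal Go sketch (the article goes further): a double modulo keeps even negative indexes in range:

// wrap maps any integer i, including negative ones, into [0, n).
func wrap(i, n int) int {
        return ((i % n) + n) % n
}

// wrap(9, 8) == 1, wrap(-1, 8) == 7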
Continue reading "Working with wrap-around"

Read more

Following are my long-form notes for a short presentation I gave to the team here at Canonical.


We are all aware that the Internet is truly today's information superhighway.

So much of the world's information today is written in HTML that it's almost synonymous with "information".

HTML is the basic component of the Internet. We all use the Internet. If you take away CSS and JavaScript, you're left with just a whole bunch of HTML.

Understanding the interplay between markup and the Internet is important for anyone who writes content for the Internet.

Simplicity and accessibility

Openness

We write JavaScript, CSS and back-end code for simplicity and clarity just so other developers, and probably only the developers in our team, can easily read and work on the code.

HTML is always the most public and central part of all our information, so it is the most important thing to make as simple and intuitive as possible. Our HTML might be downloaded, viewed or hacked around with by anyone. They don't need to be a developer by trade. Anyone who knows how to "view source" can read our markup. Anyone who knows how to click "save web page" can hack around with it.

Good writing

I'd like to suggest that anyone who writes professionally, in today's world, should have some understanding of how markup works.

People in more and more areas have to write markup sometimes. Anyone who writes blogs in WordPress has probably had to edit the raw markup at some point. But also, anyone who writes in any medium that might be converted into markup at any point in the future should be aware of some of the ways it works.

I would therefore posit that using the correct tag to mark up your information is as important as choosing how to lay out your Word document (headings, bullet-points etc.).

If you're ever writing markup, go and familiarise yourself with the new HTML elements in HTML5. And if you have something new to mark up (e.g. a pull-quote, a code-block or a graph), give it a Google and see what best practice is.

Accessibility

A tempting attitude to take to writing markup is to focus on the average user, or maybe at least users within the inter-quartile range. If you look at Google Analytics, you will see that almost all visits to our sites are from people with modern, HTML5 & ECMAScript 5 capable browsers. As long as things look good on that setup, it's not so important to cover the edge-cases.

I would say that there are likely many flaws in this analysis. One is that maybe instead of hurting 1% of people by not worrying about the edge-cases, we're hurting 50% of the people, 2% of the time. Which, in terms of public opinion, is worse.

For example, if I try to load a website on the train (which I do more often than most, but many people do occasionally), there is a high likelihood that my connection will drop half-way through and I'll get a partially loaded page. At this point, since I will have downloaded the markup first, it is paramount that the markup looks sensible and contains all the relevant information.

Fortunately, there's a simple formula: if you understand the basic components of the web and write in them as simply and straightforwardly as you can, then most things will just work.

One of the beautiful things about the web is it's actually impossible to predict exactly how people are going to want to use it. But simplicity and directness are your friends.

Referencing

The Internet is a collection of links. The real genius of HTML is its extremely light referencing system.

Referencing has been a core component of scientific work forever, but HTML and the Internet bring that scientific process into the commons.

Not only that, but the whole structure of the Internet depends on references. Good linking makes documents more understandable - it's easy to follow a link to find out more about a base concept you don't properly understand.

People follow links to discover new content, but more importantly, search engines use these links to find new content and to categorise it for searching. The quantity, specificity and wording of your links contribute to the strength of the Internet.

This is where an understanding matters not just to people who write in HTML, but to anyone who writes content for the Internet.

When you're writing, especially if you're explaining a concept, if ever you use a term which you think could be described in more depth, find a link for it. People will thank you.

Rather than just adding the full link into the page's text (e.g. "see: www.example.com"), or writing "click here", add the link to a relevant part of your sentence. This is important because search engines will use your link text to help describe what that link is about.

It's also helpful if your link text is not exactly the same as simply the title of the post you're linking to. This is because it's helpful for that page to be described in many different ways, organically, by people linking to it.

IDs and anchors

Your readers will thank you for specific linking. If the topic you're trying to cover with your link is under a sub-heading half way down the document, see if you can find an anchor which will take them straight there (example.com#heading3).

On the development side, I believe that responsible HTML will contain IDs for this reason. Each heading, sub-heading or useful document section should ideally have an ID set on it, so people can link directly to that section if they need to.
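For example (illustrative markup only):

<h2 id="referencing">Referencing</h2>
...
<a href="https://example.com/writing-guide#referencing">how referencing works on the web</a>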

Thank you

You're not going to do most of what I've said above, most of the time. But I think just keeping it in mind will make a difference. Learning how to write responsibly for the web is a creative and infinite journey. But every time you publish anything, and even better if you make an extra link or find a new more specific markup tag, you're strengthening the Internet. Thank you.

Read more
Prakash Advani

The main thing the Tox team is trying to do, besides provide encryption, is create a tool that requires no central servers whatsoever—not even ones that you would host yourself. It relies on the same technology that BitTorrent uses to provide direct connections between users, so there’s no central hub to snoop on or take down.

There are other developers trying to build secure, peer-to-peer messaging systems, including Briar and Invisible.im, a project co-created by HD Moore, the creator of the popular security testing framework Metasploit. And there are other security-centric voice calling apps, including those from Whisper Systems and Silent Circle, which encrypt calls made through the traditional telco infrastructure. But Tox is trying to roll both peer-to-peer and voice calling into one.

Actually, it’s going a bit further than that. Tox is actually just a protocol for encrypted peer-to-peer data transmission. “Tox is just a tunnel to another node that’s encrypted and secure,” says David Lohle, a spokesperson for the project. “What you want to send over that pipe is up to your imagination.” For example, one developer is building an e-mail replacement with the protocol, and Lohle says someone else is building an open source alternative to BitTorrent Sync.

 

Read More: http://www.wired.com/2014/09/tox/

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140916 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel remains based on a v3.16.2 upstream stable kernel and
is uploaded to the archive, i.e. linux-3.16.0-15.21. Please test and let
us know your results.
I’d also like to point out that our Utopic kernel freeze date is about 3
weeks away on Thurs Oct 9. Please don’t wait until the last minute to
submit patches needing to ship in the Utopic 14.10 release.
-----
Important upcoming dates:
Mon Sep 22 – Utopic Final Beta Freeze (~1 week away)
Thurs Sep 25 – Utopic Final Beta (~1 week away)
Thurs Oct 9 – Utopic Kernel Freeze (~3 weeks away)
Thurs Oct 16 – Utopic Final Freeze (~4 weeks away)
Thurs Oct 23 – Utopic 14.10 Release (~5 weeks away)


Status: CVEs

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Precise/Lucid

Status for the main kernels, until today (Sept. 16):

  • Lucid – verification & testing
  • Precise – verification & testing
  • Trusty – verification & testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 29-Aug through 20-Sep
    ====================================================================
    29-Aug Last day for kernel commits for this cycle
    31-Aug – 06-Sep Kernel prep week.
    07-Sep – 13-Sep Bug verification & Regression testing.
    14-Sep – 20-Sep Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more
David Owen

A circular sector is the part of a circle bounded by two radii and an arc connecting them. Determining whether a point lies in a sector is not quite as easy as rectangular bounds-checking. There are a few different approaches: θ-comparison, line-sides, and dot-products.
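As a taste of the θ-comparison approach, a minimal Go sketch, assuming the sector runs counter-clockwise from theta0 to theta1 with theta0 ≤ theta1 ≤ theta0 + 2π:

import "math"

// inSector reports whether (px, py) lies within the sector of the circle
// centred at (cx, cy) with radius r, between angles theta0 and theta1.
func inSector(px, py, cx, cy, r, theta0, theta1 float64) bool {
        dx, dy := px-cx, py-cy
        if dx*dx+dy*dy > r*r { // outside the circle entirely
                return false
        }
        a := math.Atan2(dy, dx)
        for a < theta0 { // normalize the angle into [theta0, theta0+2π)
                a += 2 * math.Pi
        }
        return a <= theta1
}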
Continue reading "Bounds-checking with a circular sector"

Read more
Michael Hall

Last week I attended FOSSETCON, a new open source convention here in central Florida, and I had the opportunity to give a couple of presentations on Ubuntu phones and app development. Anybody who knows me knows that I love talking about these things, but a lot fewer people know that doing it in front of a room of people I don’t know still makes me extremely nervous. I’m an introvert, and even though I have a public-facing job and work with the wider community all the time, I’m still an introvert.

I know there are a lot of other introverts out there who might find the idea of giving presentations to be overwhelming, but they don’t have to be.  Here I’m going to give my personal experiences and advice, in the hope that it’ll encourage some of you to step out of your comfort zones and share your knowledge and talent with the rest of us at meetups and conferences.

You will be bad at it…

Public speaking is like learning how to ride a bicycle: everybody falls their first time. Everybody falls a second time, and a third. You will fidget and stutter, you will lose your train of thought, your voice will sound funny. It’s not just you; everybody starts off being bad at it. Don’t let that stop you though: accept that you’ll have bruises and scrapes, and keep getting back on that bike. Coincidentally, accepting that you’re going to be bad at the first ones makes it much less frightening going into them.

… until you are good at it

I read a lot of things about how to be a good and confident public speaker; the advice was all over the map, and a lot of it felt like pure BS.  I think a lot of people try different things and, when they finally feel confident in speaking, they credit whatever their latest thing was with giving them that confidence. In reality, you just get more confident the more you do it.  You’ll be better the second time than the first, and better the third time than the second. So keep at it, you’ll keep getting better. No matter how good or bad you are now, you will keep getting better if you just keep doing it.

Don’t worry about your hands

You’ll find a lot of suggestions about how to use your hands (or not use them), how to walk around (or not walk around) or other suggestions about what to do with yourself while you’re giving your presentation. Ignore them all. It’s not that these things don’t affect your presentation, I’ll admit that they do, it’s that they don’t affect anything after your presentation. Think back about all of the presentations you’ve seen in your life, how much do you remember about how the presenter walked or waved their hands? Unless those movements were integral to the subject, you probably don’t remember much. The same will happen for you, nobody is going to remember whether you walked around or not, they’re going to remember the information you gave them.

It’s not about you

This is the one piece of advice I read that actually has helped me. The reason nobody remembers what you did with your hands is because they’re not there to watch you, they’re there for the information you’re giving them. Unless you’re an actual celebrity, people are there to get information for their own benefit, you’re just the medium which provides it to them.  So don’t make it about you (again, unless you’re an actual celebrity), focus on the topic and information you’re giving out and what it can do for the audience. If you do that, they’ll be thinking about what they’re going to do with it, not what you’re doing with your hands or how many times you’ve said “um”. Good information is a good distraction from the things you don’t want them paying attention to.

It’s all just practice

Practicing your presentation isn’t nearly as stressful as giving it, because you’re not worried about messing up. If you mess up during practice you just correct it, make a note not to make the same mistake next time, and carry on. Well, if you plan on doing more public speaking there will always be a next time, which means this time is your practice for that one. Keep your eye on the presentation after this one: if you mess up now, you can correct it for the next one.

 

All of the above are really just different ways of saying the same thing: just keep doing it and worry about the content not you. You will get better, your content will get better, and other people will benefit from it, for which they will be appreciative and will gladly overlook any faults in the presentation. I guarantee that you will not be more nervous about it than I was when I started.

Read more
pitti

Last week’s autopkgtest 3.5 release (in Debian sid and Ubuntu Utopic) brings several new features which I’d like to announce.

Tests that reboot

For testing low-level packages like init or the kernel it is sometimes desirable to reboot the testbed in the middle of a test. For example, I added a new boot_and_services systemd autopkgtest which configures grub to boot with systemd as pid 1, reboots, and then checks that the most important services like lightdm, D-BUS, NetworkManager, and cron come up as expected. (This test will be expanded a lot in the future to cover other areas like the journal, logind, etc.)

In a testbed which supports rebooting (currently only QEMU) your test will now find an “autopkgtest-reboot” command which the test calls with an arbitrary “marker” string. autopkgtest will then reboot the testbed, save/restore any files it needs to (like the test’s file tree or previously created artifacts), and then re-run the test with ADT_REBOOT_MARK=mymarker.

The new “Reboot during a test” section in README.package-tests explains this in detail with an example.
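As a rough sketch of the shape such a test takes (hypothetical script; the README section above has the authoritative example):

#!/bin/sh
# Runs twice: first with ADT_REBOOT_MARK unset, then again after the reboot.
set -e
case "$ADT_REBOOT_MARK" in
    "")
        echo "first boot: setting things up, then rebooting"
        autopkgtest-reboot mymarker    # reboots the testbed, re-runs this test
        ;;
    mymarker)
        echo "back up: checking that the boot completed"
        pidof systemd || pidof init    # hypothetical post-reboot check
        ;;
esac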

Implicit test metadata for similar packages

The Debian pkg-perl team recently discussed how to add package tests to the ~3,000 Perl packages. For most of these the test metadata looks pretty much the same, so they created a new pkg-perl-autopkgtest package which centralizes the logic. autopkgtest 3.5 now supports an implicit debian/tests/control file to avoid having to modify several thousand packages with exactly the same file.

An initial run already looked quite promising: 65% of the packages pass their tests. There will now be a few iterations to identify common failures and fix those in pkg-perl-autopkgtest and autopkgtest itself.

There is still some discussion about how implicit test control files go together with the DEP-8 specification, as other runners like sadt do not support them yet. Most probably we’ll declare those packages XS-Testsuite: autopkgtest-pkg-perl instead of the usual autopkgtest.

In the same vein, Debian’s Ruby maintainer (Antonio Terceiro) added implicit test control support for Ruby packages. We haven’t done a mass test run with those yet, but their structure will probably look very similar.

Read more
facundo


How did you come to read this? At some point you pressed a button and the computer turned on. Then you made a click and the browser opened. You clicked again, or typed something, and you landed on my blog.

Those are examples of causes and consequences. We are very used to living in a world where causes and consequences are firmly tied together. We see everything that way, even if we are not reasoning it out all the time. Example: we see a leaf on the ground. We know the leaf came from a tree, even though we do not reason through the whole sequence: the leaf was on the tree, the leaf came loose, it fell and drifted under the effect of gravity and the air, until it landed where we see it.

We could even run the reasoning backwards: we see the leaf, it is there because it fell from the tree, it fell because it came loose, and so on. In general, though, we do not do this reasoning consciously.

All of this is ordinary to you, and holds no great surprise, right? That is because we are used to cause and consequence; it is part of our experience as humans, it is the way our brain interprets everything that happens to us. From the moment we are born we are exposed to what our sensors capture (eyes, ears, skin, etc.), and we form an image of reality based on that information.

But the reality we perceive, and which is common to us (in the sense that we live it all the time, and that it is the same one the rest of humankind lives), is only part of everything that actually exists. That is, we interpret only part of reality, we sense only part of what really exists. And everything that lies outside our experience is very hard to understand, because our brain is not used to processing it.

One of those things is time. I am not talking about whether it rains or tomorrow will be sunny (that is, the weather), but about time as what passes between "before", "now" and "after". We believe we understand what happens with time, because with respect to that physical variable we are generally exposed to the same thing over and over. We always sense the part of reality in which the arrow of time is reversible. That is why, from the situation now, we can deduce the situation afterwards. Or even, knowing the current state, we can tell what state the system was in before.

Let's take an example to understand this better: let's release a ball in mid-air...

The ball, before letting it go

At the moment we release the ball, it is still and at a certain height. If I ask you what happens after the ball is released, you would answer easily. Obviously, moments later, the ball is lower, and falling at a certain speed...

The ball, a moment later

Also, if instead of showing you the two images at the same time I show you only the second one, you can imagine the first. It is like the case of the tree's leaf on the ground: you know that before, it was on a branch.

Implicit in all of this is the reversibility of time. Looking at the first image (taken at "time zero") we can imagine time advancing and predict what will happen afterwards (at time t₂, obviously greater than zero), or looking at the second image we can predict what would happen if time ran backwards, and imagine the first image.

We even see it in the equations that describe this simple model. Take the positions, for example... the equation is:

  e = ½·a·t²

That is: the distance traveled (h₁ - h₂ in the drawing) is half the acceleration multiplied by the square of the elapsed time. If 2 seconds passed between the first and second images, with the acceleration being 9.81 m/s² (more or less, here on Earth), the distance is 19.62 m. In other words, looking at the first image we can deduce that 2 seconds later the ball will be almost 20 meters lower... and looking at the second image, we can deduce that 2 seconds earlier (that is, using t = -2 s in the equation) the ball was that same distance higher (we traverse the distance in the opposite direction, because the result of the equation came out negative).
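In compact form, the forward calculation reads:

  e = ½·a·t² = ½ · 9.81 m/s² · (2 s)² = 19.62 m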

Beyond the mathematical complexity (?), nothing I have told you so far seems too crazy, right? No. But beware... not everything is so simple in this life (well, in this Universe).

And it is not so simple because this business of going forwards and backwards in time, in our mind, in our reasoning, surprisingly does not always hold!

Let's try another simple example (though a bit harder to build)... look at the following drawing.

Crazy photon, outbound trip

That is a lamp that emits one photon at a time (the minimal unit of light, its elementary particle), a half-silvered surface (explained below), and a photon detector (which will tell us whether the photon arrived there).

The half-silvered surface is an optical instrument with the following effect: it lets half of the light through, and reflects the other half. That is, if we shine a million photons on it, half go straight through and the other half bounce off. In our experiment, where we fire a single photon at a time, we can say that the photon has half a chance of being reflected, and half a chance of going straight through. [0]

So, let's see what happens when the lamp emits a photon. It goes straight to the mirror (path A-B), and as we said it can continue on its way or be reflected towards the wall. We can say that the path A-B-C has a 50% probability of happening, and the path A-B-D has the other 50%. Think of it as the two stages of the previous example, the one with the ball: taking the photon leaving the lamp as the initial state, you can imagine what will happen afterwards (that is, moving forward in time): the photon hits the detector, or it hits the wall.

But now let's ask the inverse question: start from the second image, and try to deduce the first. That is, let's try to imagine what happened before (going backwards in time), starting our visualization from the photon hitting the detector. For that, I will draw the same experiment, but with other paths...

Crazy photon, return trip

How should we read this new drawing? As I said, we have to think backwards. If we know the detector received a photon, the trajectory B-C certainly happened; so the photon reached point B of the half-mirror from one of two possible sides: either from the lamp at A (passing through the half-mirror), or from a new point E (reflecting off the half-mirror).

Here you will tell me I am getting it conceptually wrong... how can the photon leave from point E, starting out from a wall? Quite so: the photon can never come from there! That makes the trajectory E-B-C not actually possible. In other words, the photon definitely left the lamp: the trajectory A-B-C was followed for sure (a 100% probability).

Do not feel frustrated if you have to read the explanation two or three times to understand what is going on; it is fairly advanced physics. The first time, I had to read it about five times ;). One of the reasons it is hard to understand, and even to accept, is precisely that all of this lies outside what we sense of the universe; it is not part of our experience.

In short, summarizing the two analyses: if we let time run forwards, starting with the photon at A, we see that it can follow two trajectories, A-B-C or A-B-D, with a 50% chance each. But if we let time run backwards, starting with the photon at C, we find that it could only have followed one path: A-B-C.

This is a demonstration that the photon crossing the half-mirror does not behave reversibly in time! Having two possible paths when we run time forwards, and only one when we run it backwards, is completely different from what we saw in the first experiment, and completely different from the way we normally see our surroundings, from the way we experience the Universe.

The reason for this behaviour is explained in the foundations of quantum mechanics, where one sees and understands that there is a whole model explaining our Universe with reversible time, but also a whole part where time is not reversible. That is, there is an entire branch of physics where, in the equations, we cannot cheerfully flip the sign of t.

Breathe easy, I am not going to get into that whole explanation ;) [1]. But what I wanted to show you is that out there, even if we do not see it, even if it is not part of our experience as humans, there is a whole Universe we can only reach with the power of our brains and their capacity for abstract thought. It is a marvellous tool; we must train it more and better!

[0] To use proper terminology, we should say that there is an amplitude of one over the square root of two for the photon to be on one side, and the same amplitude for it to be on the other... that is speaking of the distribution of amplitudes with respect to positions... the squared modulus of that gives the probability of the photon being at one point or the other, which is 0.5 in each case.

[1] But if you are interested, there is a book that is BRILLIANT and covers this in three or four of its five hundred pages: The Emperor's New Mind, by Roger Penrose.

Read more
Jane Silber

Cycling in London

As the CEO of Canonical, I am proud of the growth of the team in London. From a team of 5 around a kitchen table in London 10 years ago, the business has grown to 650 employees globally, of which over 100 are based in London.

Like many businesses in London, one of the most popular modes of transport to the office is cycling, and an even larger proportion of the team would cycle to the office if they felt it was safer than it is now.

We value employee satisfaction, health and freedom and firmly endorse the Mayor’s Vision for Cycling in London. We specifically support the cross London plans from City Hall to create new segregated routes through the heart of the city.

These plans are good for London and Londoners, making it a more attractive and productive city in which we can build a business and serve customers.

Proposed Farringdon Road route. Image from Transport For London 2014.

 

I encourage everyone to respond directly to TFL about these proposals. This particularly applies to businesses whose support for cycling is often not registered.

I know that there are many business leaders like me who feel the same and will be speaking up over the coming days.

Read more