Wow, I'm still in a daze. Atlanta LinuxFest (ALF) was an incredible event. It was held at the IBM facility in Atlanta on Saturday.
Wanted to drop a quick note about ALF...
I'll be speaking at the Atlanta LinuxFest, talking about the Ubuntu Kernel. The talk will cover the team and how we develop & maintain the Ubuntu Kernel.
In fact there are several Canonical folks talking at ALF.
It's been a bit since I last blogged. Most of that is due to moving. We moved from Raleigh, NC to a small town in the western part of NC called Union Mills. The good thing is we are on a farm; we have lots of land, space, fresh air... however the Net connection just plain sucks.
I'm 2.3 miles from the CO, so the best I can get with DSL is a 3m/384k connection thru AT&T. So I called AT&T commercial and was assured I could get 2x bonded ADSL lines that would give me effectively a 6mb connection. Guess what? They fibbed. In the end I had to settle for 2x ADSL lines, not bonded, and I elected for the non-commercial option since it was far cheaper.
Now came the big question: how to best use 'em. After hitting up Google I found lots of interesting solutions. Most were very complex, routing various protocols out this interface or that... I wanted something that would get me as close to a load-balanced connection as possible.
I'm using an old HP desktop with 3 network cards as my gateway router, running Jaunty 9.04 Server Edition.
Below is what I came up with using ip route and iptables. I put in bogus IPs but kept it otherwise as real as possible.
First I added two routing tables to /etc/iproute2/rt_tables.
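In case it helps anyone, the additions amount to appending two lines to /etc/iproute2/rt_tables (the names T1 and T2 match what the script below expects; the numbers just need to be unused table IDs):

```
# appended to /etc/iproute2/rt_tables
1 T1
2 T2
```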
Then I found a script on the net, used it as a template, and hacked it up like so:
# DSL Lines are IF0 & IF1, IF2 is local net.
IF0=eth0
IF1=eth1
IF2=eth2
# IP Addr on Gateway matching interfaces above (bogus example values)
IP0=192.168.100.2
IP1=192.168.200.2
# DSL IPs (the provider-side gateways)
P0=192.168.100.1
P1=192.168.200.1
# Network addresses
P0_NET=192.168.100.0/24
P1_NET=192.168.200.0/24
# Routing table entries
T1=T1
T2=T2
# Set up routes
ip route add $P0_NET dev $IF0 src $IP0 table $T1
ip route add default via $P0 table $T1
ip route add $P1_NET dev $IF1 src $IP1 table $T2
ip route add default via $P1 table $T2
ip route add $P0_NET dev $IF0 src $IP0
ip route add $P1_NET dev $IF1 src $IP1
# Set up default route to balance between both interfaces
ip route add default scope global nexthop via $P0 dev $IF0 weight 1 \
nexthop via $P1 dev $IF1 weight 1
# Add the rules for the routing tables
ip rule add from $IP0 table $T1
ip rule add from $IP1 table $T2
# Now for the masq bits
iptables -t nat -F
iptables -t nat -X
iptables -t filter -F
iptables -t filter -X
#iptables -t nat -A POSTROUTING -o $IF0 -j MASQUERADE
#iptables -t nat -A POSTROUTING -o $IF1 -j MASQUERADE
iptables -t nat -I POSTROUTING -s 172.31.0.0/24 -j MASQUERADE
# Turn on ip_forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -A FORWARD -i $IF2 -s 172.31.0.0/24 -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
# Flush routing every four hours
echo 14400 > /proc/sys/net/ipv4/route/secret_interval
It works like this... Each time a connection is established outbound, the kernel decides which interface to route it out of; this should be a rough 50/50 split. It decides based on congestion and a few other factors. Connections to the same destination will only ever be established over the same route, since the routing decision gets cached.
If I start a download, it will only ever utilize a single connection; another download from the same IP will also utilize the same connection. This is due to the kernel having already cached the route. If a third download to a new server were started, it would likely be established over the second connection, due to the first route being congested while the second route is idle.
The route will be flushed four hours after the first download completes, and then a new decision will be made on how to contact the original server.
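As a side note, the kernel's route/secret_interval sysctl is specified in seconds, so the four-hour flush interval works out like this:

```shell
# four hours expressed in seconds, the unit secret_interval expects
interval=$((4 * 60 * 60))
echo "$interval"   # prints 14400
```

That's where the value written to /proc/sys/net/ipv4/route/secret_interval comes from.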
I've been using this for a bit now and it seems to do what I need it to do: give me a faster response time when I have intensive net operations going on, like Vonage & rsyncs to offsite machines.
By no means am I an iptables or routing guru. If anyone has any other suggestions or better ways to do it, I'd love to hear them.
This weekend I took the family to the mountains of western North Carolina. That is where my wife was born and raised, and we will be moving there when the kids get out of school in June.
The weather was a bit crappy today, and that gave my wife lots of time to continue her investigation of Ubuntu. If you want to read it, it's here: http://amber.redvoodoo.org
I find reading it very enlightening. She has not been asking for my help, and I have to deliberately stay away so I don't volunteer. One thing I found very informative is the Ubuntu help; to be honest I never bothered to read it. Watching her use it, she was grumbling about sudo. That caught my attention so I listened more... "Why do I care what sudo does? Why do I care about a command line?" were some of the statements I heard her utter. The one that really struck me was "on my Mac I *never* use the command line..." Hmmmm, it was at that point I realized we (techies) assume everyone will need to sudo and use a terminal; if we did a better job of designing interfaces they wouldn't need to. In fact you have to hunt for the terminal application on a Mac. We have it in the Accessories menu. Something to think about.
At Red Hat I managed the Base OS group, which dealt primarily with userspace & plumbing, so I never really thought about how to make the desktop better. At Canonical I manage the Kernel Team, and again I don't give the desktop much thought. I have been using Linux so long I remember when you had to configure FVWM to launch your applications. Anything easier than that has been a big win to me. I just take it for granted that you need to do things differently than Windows & Mac users do. Watching Amber struggle to understand things has given me a whole new appreciation of the work we as a community need to do.
Amber managed to get on Freenode, join #ubuntu-women, and join the ubuntu-women mailing list (her first mailing list subscription ever!). The folks in the channel were very patient and supportive of her endeavour with Linux. She is very much enjoying the community aspect of it all.
I had no sooner gotten home from Europe than my wife Amber had the idea that she wants to use Ubuntu. (That's what I get for getting her an Ubuntu T-shirt with "Linux for Human Beings" on the back.)
My wife was a die-hard Windows user for years. When I got tired of being her personal admin, we moved to a Mac. For me a Mac was close enough to Linux in that I could ssh in and do most things remotely... Up until yesterday I couldn't have pried a Mac out of her cold dead hands...
Over the years I had tried to move her to various Red Hat flavors since I worked at Red Hat; we tried RHL, Fedora, RHEL WS, and every time it met with abysmal failure. Usually it manifested itself as some sort of failure while I was out of town: she couldn't print, WiFi wouldn't work, or just her plain impatience when it comes to button clicking. If something doesn't immediately return she keeps clicking, selecting more and more menu choices until the computer grinds to a halt with 10,000 dialog boxes all over the screen with wording she does not understand.
So I'm back aboard the "convert the wife to Linux" train yet again. This time I don't get to help, advise or otherwise participate. I gave her an old laptop and the Ubuntu 8.10 Intrepid CD, and showed her how to "press F12" to get to the boot selection menu.
She is blogging about it; if you want to follow the saga you can here: http://amber.redvoodoo.org
Cuz God knows I'm living it... *sigh*
It has been a long few weeks, with that said I'll try and recap some of the more interesting things that have been going on with Ubuntu, specifically the kernel happenings in the Jaunty Jackalope release.
The Platform Team met in Berlin for the Jaunty Platform Sprint for the week of 2-6 Feb. This was an incredible event with the vast majority of the Canonical Engineering teams. We had both cross team and individual sub-team tracks. The kernel track covered all of the release roadmap items and administrative topics.
I'll talk about some of the roadmap items and the most interesting highlights...
The Jaunty kernel version will be 2.6.28. We considered 2.6.29, but it was not selected due to the large number of new features scheduled to land in it. Regression of functionality is a large concern, and there would be a good chance of that happening given the estimated date that Linus will declare it baked. Unfortunately it just doesn't line up with the Jaunty release cycle. On the bright side, for Jaunty+1 we will have time to shake out any issues and are looking towards 2.6.30 or .31.
Suspend & Resume
We are making suspend & resume one of our top priorities for this cycle. We ran a suspend and resume workshop with every notebook at the sprint.
Surprisingly we had only a small number of failures. Most of them were on resume with NVidia video. We did not test the proprietary drivers, only the free ones. Out of 65 machines tested (various models) there were 12 failures.
We will be issuing a Call For Testing at the Beta release, however for those of you that want to play along at home early you can visit the Suspend/Resume wiki here: https://wiki.ubuntu.com/KernelTeam/SuspendResumeTesting and some more of the background material is here: https://wiki.ubuntu.com/KernelTeam/SuspendResume
Some other notable suspend and resume news for Jaunty...
As the Ubuntu Intrepid release came down to the wire we ended up having a serious bug (LP #264019). This bug was very difficult due to the way it manifested itself. First, some background.
For a number of years the Linux kernel has had something called TCP timestamping. In the 2.6.27-rc1 timeframe upstream made some TCP stack fixes, and one of these broke some very old consumer-grade DSL modems and routers. Keep in mind the fixes in question are technically correct; they follow the requisite IETF RFCs. It was this old consumer-grade equipment that was at fault. All this is documented in kernel bugzilla bug #11721. In the end a patch was developed that restored the TCP option ordering.
Ok, after all this, why is it such a big deal? Timing, and the nature of the bug. A user reported that without this patch they could not connect to our archive servers over the Internet. This posed a problem for any user that had the old hardware: they would be unable to get the fix via the normal update method. Not a good thing.
So the next question would be: why not just add the kernel patch? That's where the timing issue comes in. We were at a point in the release cycle where spinning, testing and validating a new kernel would have delayed the release by up to a week.
We decided to go with a temporary workaround that would allow the affected users to get the fixed kernel. In parallel we prepared a security kernel that was ready in the archive by the time the Intrepid images hit the mirrors. The security kernel turns off the workaround we put into the procps package, and contains only the patch to fix this issue.
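For those curious what such a userspace workaround can look like: not sending the TCP timestamp option at all sidesteps the option-reordering quirk in the old modems. To be clear, this sketch is my illustration of the approach; the exact setting the Intrepid procps workaround shipped may have differed:

```
# Illustration only -- not necessarily the exact setting Ubuntu shipped.
# Dropping the TCP timestamp option avoids the option-ordering quirk
# in old consumer-grade modems/routers.
net.ipv4.tcp_timestamps = 0
```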
Decisions like this are made all the time by distribution vendors. It's walking a fine line between what's best for the users and the amount of work, cost and end-user expectation. We don't take issues like this lightly; all parts of the Ubuntu team and the highest levels of Canonical management are involved.
I hope this helps clarify things for people.
P.S. It's currently 14:43 London time as I write this and the security kernel has not yet hit the archive. Don't worry, it's in the process of being published and should hit the archive shortly.
I received a comment on my blog about the Kernel Team requiring an LP bug prior to committing a patch to the tree. The person commenting called it "bureaucracy". I thought this would be the common reaction, so I wanted to raise it here. My response is below...
It's not about bureaucracy, it's about accountability. We add numerous patches during a cycle. While most do have an LP entry, there are quite a few that don't. The problem manifests itself when someone can't remember why they added a patch. Obviously it was intended to fix a problem, but there have been occasions where a patch, while fixing one bug, introduced a much bigger one. When going back through the history trying to figure out why we added it, the usual answer is "it seemed like a good idea at the time".
We are striving to stick as closely as possible to upstream, and every patch we add, whether a backport from a newer kernel or a sauce patch, needs to have a bug attached. This is a common "change control" measure. If it's worth adding, it should have a valid bug attached.
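To give a feel for what that looks like in practice, the link back to the bug lives right in the commit message. The subject line and bug number below are made up for illustration:

```
UBUNTU: SAUCE: fix frobnication on resume

Hypothetical example commit; the BugLink line is what ties the
patch back to its Launchpad bug for later archaeology.

BugLink: http://bugs.launchpad.net/bugs/123456
```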
For a job that is "Work from home" I'm sure not home a whole lot. Looks like I'll be in London in Aug (14th - 22nd). I'm soliciting suggestions on what to do over the weekend. Thoughts anyone?
I'll be traveling every month up to the end of the year. In Sep, the Linux Plumbers Conference; in Oct, a visit to our office in Taiwan; in Nov, another conference, the Linux End User Collaboration Summit; and in Dec, the Ubuntu Developer Summit in CA. That's only the ones I know about; I'm sure a few others will get squeezed in.
I've been negligent in posting to the blog. Since starting with Canonical I've had lots of questions about the company, the Ubuntu community, culture, etc... I'm planning on taking some time and doing a detailed posting in the near future. So if you're interested, stay tuned.
Since my last posting I've received numerous emails, IMs & text messages about going to work for Canonical. All in all, the majority have been good wishes and "glad to have you back in the community" messages. However I have had numerous questions, so I thought I'd answer here since my blog is where it all originated.
Q. What are you going to be doing?
A. Ubuntu Kernel Manager. (The following is more for my family since the title means nothing to them). Here is the job description:
Job Title: Ubuntu Kernel Team Manager
Job Location: Your home, given appropriate facilities including broadband Internet
Reports to: Ubuntu CTO (Matt Zimmerman)
Job Summary: Drive the leading edge of desktop and server OS technology based on the Linux kernel, open source methodology, and a supportive community of users and developers.
Key responsibilities and accountabilities:
• Lead a team of engineers responsible for the development and maintenance of the Ubuntu branch of the Linux kernel
• Take overall responsibility for day to day kernel development
• Manage project plans and schedules
• Encourage and enable community participation in accordance with the unique philosophies and practices of Ubuntu
• Ensure world-class hardware compatibility for Ubuntu by working with vendor and OEM partners to deliver driver support for their components and systems
• Provide direct line management for a fast-moving team of 5+ individuals
• Provide regular updates on program results and provide feedback and new action plans if necessary
• Lead and participate in regular development “sprints” involving international travel, 4+ week-long trips per year
Required skills and experience:
• Proven track record in project management and management of small-to-medium sized teams at a global level
• 3 to 5 years experience in technical project management, Linux and open source focus strongly preferred
• Solid knowledge of software/software industry trends, particularly open source software
• Strong English language communication skills, especially in online environments such as mailing lists and IRC
• Fundamental technical understanding of the Linux kernel and its development, and of the architectural principles of Linux distributions (including packaging)
• Ability to effectively interact with a diverse group of people (technical, non-technical); multi-task when necessary
• Ability to be productive in a globally distributed team through self-discipline and self-motivation, delivering according to a schedule.
• Self-driven, results-oriented with a positive outlook, detail-oriented, responsive, proactive
Short and sweet... more details to follow. This was more to save me the multiple explanations.
I decided to leave HP and have taken a position with Canonical as the Ubuntu Kernel Manager.
Why leave HP? The job I was doing was not working with Linux. Yea yea, I knew that going in, but I wasn't prepared for the Linux withdrawal. The offer from Canonical was very interesting and I just couldn't refuse.
I will be working out of my home in NC, and as such I've been busy getting the upstairs guest bedroom ready as an office. As part of the exercise I've managed to gather up about 3 large totes of misc cables, parts and computer bits that I need to get rid of. Becca decided she wants to have a yard sale and keep the money she makes from it. Like dad would ever have a say? It is good to get rid of the junk.
We'll be going to the mountains for the Memorial Day weekend, hopefully hooking up with Greg & Tammy and mucho brews.
More to follow...