Canonical Voices

What Certifiably (Brendan Donegan's Ubuntu Blog) talks about


brendandonegan

The inaugural online UDS (or vUDS, as it’s becoming known) is underway. This brings with it a number of new challenges when it comes to running a good session. Having sat in on a few sessions yesterday, and having been the session lead for several sessions at physical UDSes going back nearly two years now, I thought I’d jot down a few tips on how to run a good session.

Prepare

Regardless of whether the session is physical or virtual, it’s always important to prepare. The purpose of a UDS session is to get feedback on some proposed plan of work (even if the plan is extremely nebulous at the time of the session). Past experience shows that sessions are more productive when most of the plan is already fleshed out beforehand, so that the session basically functions as a review/comments meeting. This depends on your particular case, of course, since the thing you are planning may not be possible to flesh out in much detail without feedback, but I personally find that is rarely the case.

Be punctual

UDS runs on a tight schedule, at least in the physical version, and I don’t see any good reason why this should change for vUDS. Punctuality is therefore important, not just as good manners but from a practical point of view: you need time to compose yourself, find your notes and make sure everything is set up. For a physical UDS this meant checking that microphones were working and projectors were projecting. For a vUDS, in my brief experience, it means making sure that everyone who needs to be in the hangout has been invited, that the etherpad is up and that the video feed is working on the session page.

Delegate

As the session lead it is your responsibility to run a good session, but it will be impossible for you to perform all the tasks required to achieve this on your own. Someone needs to be taking notes on the Etherpad and someone needs to be monitoring IRC. You should be looking out for questions yourself as well, but since you may be concentrating on conveying information and answering other questions, you will need help with this.

Avoid going off track

Time is limited in a UDS session and you may have a lot of points to get through. Be wary of getting distracted from the point of the session and discussing things that are not entirely relevant. Don’t be afraid to cut people short: if the question is important to them, you can follow up offline later.

Manage threads of communication

This one is quite vUDS-specific, but especially now that audiovisual participation is limited, it is important that all of the conversation takes place in one spot, particularly for the sake of people catching up with the video streams later on. Don’t allow a parallel conversation to develop on IRC if you can help it. If someone asks a question on IRC, repeat it to the video audience and answer it in the hangout, not on IRC. If someone is talking a lot on IRC, and even answering questions there, do invite them into the hangout so that what they’re saying can be recorded. It may not be possible to avoid parallel discussion entirely, but as session lead you need to do your best to mitigate it.

Follow up

Not so much a tip for running a good session as for getting the best from a good session: remember to read the notes from the session and rewatch the video, so that you can use the feedback to adapt your plan and find places to follow up.

That’s all there is to say. I really hope this first virtual UDS goes well and that the sessions are productive for everyone involved.


brendandonegan

I find that sometimes the Network Manager applet in Ubuntu can be a little temperamental (apologies to the maintainer, cyphermox, if he’s reading this, but such is the nature of software). Sometimes it won’t show the available routers, and in that case I’ve established a little workaround that I’m sharing mainly because it involves a script I wrote that lives in a somewhat obscure place in Ubuntu.

Step one of the workaround is only needed if you don’t know in advance which networks are available. If you’re sitting at home you’ll probably not need it, since most people know their router’s SSID. If you don’t, you can scan using:

nmcli dev wifi list

In my experience this is very reliable: as long as your WiFi hardware is working, it will list the networks in range.

The second step is to take the SSID and create the connection with the script I wrote:

sudo /usr/share/checkbox/scripts/create_connection $SSID --security=wpa --key=$WPA_KEY

If the router doesn’t use any security (which nmcli dev wifi list will tell you) then you don’t need --security or --key. If the router doesn’t use WPA2 (maybe it uses WEP), then you’re out of luck, and deservedly so. Change the router’s security settings if you can!
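Putting the two steps together, the whole workaround looks something like this (the SSID and key here are placeholders for your own values):

nmcli dev wifi list                   # find the SSID if you don't already know it
SSID='MyHomeWifi'                     # placeholder: your router's SSID
WPA_KEY='s3cr3t'                      # placeholder: your WPA2 passphrase
sudo /usr/share/checkbox/scripts/create_connection "$SSID" --security=wpa --key="$WPA_KEY"
# for an open (unsecured) router, just drop the options:
# sudo /usr/share/checkbox/scripts/create_connection "$SSID"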


brendandonegan

Preparing for UDS P

With the release of Oneiric Ocelot just around the corner and the archives firmly in freeze mode, my main focus has turned to preparing topics for UDS P, which is taking place in Orlando at the end of this month. As you might know from reading my blog, one of my main roles within the team is co-ordinating testing of SRU kernels by Hardware Certification. The next cycle of development is going to take on a strong flavour of SRU testing. Personally, I’ll be hosting two sessions at UDS.

The first one is titled ‘Improving automated certification testing of Kernel SRUs’ and is based around increasing the overall scope and coverage of the test suite used when testing the SRU kernels. Recently I took the time to document the test suite we use during SRU testing, and if you read through it you can see that it’s really quite basic and hasn’t been especially good at picking up regressions so far. I’m quite excited at the prospect of improving this, and my definition of success will be a test suite that starts detecting real problems early. Linked at the bottom of the blueprint are some notes that were brainstormed together on the #ubuntu-kernel channel on FreeNode last week, which will form a foundation for the discussion. If you’re interested in ensuring that kernel updates don’t break your system and will be at UDS P, feel free to subscribe to the blueprint (of course, you’re also free to send your feedback via this blog).

The second topic is titled ‘Image creation and publishing for kernel SRU testing’ and has a less broadly interesting premise, but it will be important for us nonetheless. At the moment we use quite a complicated lab infrastructure to install all the necessary pieces for SRU testing over the network, and it prevents us from easily allowing external parties to perform the same testing themselves. If we can easily automate the creation of images which include everything required for testing, then we can get rid of this barrier. If the subject matter interests you then, again, either subscribe or leave feedback here.


brendandonegan

Every well-seasoned tester knows the advantages and disadvantages that manual and semi-manual tests have compared with automated ones. A manual test is easy to create: just a few simple words and you have your test. Automated tests let you (almost) fire and forget, with your only concern being the PASS/FAIL at the end. A semi-manual test is a funny hybrid of the two, usually only used where a fully automated test is almost physically impossible (e.g. verifying screenshots, or tests involving peripherals). Manual tests are not good in situations where the same test must be run many times across a large number of configurations, and this is exactly what we have in hardware certification, where we must run tests across ~100 systems on a very regular basis. To this end we’ve been taking the opportunity this development cycle to update some of our older tests to be more automated.

One of the tests I updated would cycle through the available resolutions on the system (using the xrandr tool) and ask the tester to verify that they all looked okay, with no graphical corruption. This sort of test is fine when someone is running it on a one-off basis; it’s not so good when one tester needs to supervise 50+ systems during a certification run. One of the main problems is that it causes too much context switching, with the tester constantly needing to keep an eye on all the systems to see if they’ve reached this test yet. It being a graphical test, fully automated verification is difficult, so a compromise needed to be reached. The solution I came up with was to integrate screen capture into the test and then upload the captured screens as a tgz attachment with the test submission. All going well, the tester can sit down at their own computer afterwards and go through the screens to confirm they’re okay. In fact, the person verifying the screens doesn’t even need to be in the lab: the task can be distributed amongst any number of people, anywhere in the world.
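I won’t reproduce the actual test here, but a minimal sketch of the idea, assuming ImageMagick’s import is available for the screen capture and parsing xrandr’s output naively, might look like:

OUTPUT=$(xrandr | awk '/ connected/ {print $1; exit}')  # first connected output
SCREENS=$(mktemp -d)                                    # somewhere to collect the captures
for MODE in $(xrandr | awk '/^   /{print $1}'); do      # mode lines are indented
    xrandr --output "$OUTPUT" --mode "$MODE"
    sleep 3                                             # give the display time to settle
    import -window root "$SCREENS/$MODE.png"            # grab the whole screen
done
tar czf xrandr_screens.tgz -C "$SCREENS" .              # bundle for the test submission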

Another test that looked like a prime candidate for automation was one which tested the functioning of the wireless card before and after suspending the system. Previously the test case was:

- Disconnect the wireless interface.
- Reconnect and ensure you’re online.
- Suspend the system.
- Repeat the first two steps.

All of this was specified to be done manually. I am currently updating the test to use nmcli to make sure a connection can be made, disconnecting and reconnecting just as would happen if the tester performed the steps manually using nm-applet. The one thing I haven’t got down pat yet is connecting to a wireless network where a connection didn’t exist before. This step may be optional, since we can expect the tester to connect manually at some point while setting up the tests, and so trust that a connection is already available. This means the test will have gone from manual to fully automated, which should shave a significant number of minutes off the whole test run!
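As a rough sketch, the automated flow might look like the following, assuming a connection profile already exists (the connection name is a placeholder, and pm-suspend stands in for however the suspend is actually triggered):

CONN='MyHomeWifi'                 # placeholder: an existing connection profile
check_online() { ping -c 3 -W 5 ubuntu.com > /dev/null 2>&1; }

nmcli con down id "$CONN"         # disconnect the wireless interface
nmcli con up id "$CONN"           # reconnect
check_online || { echo 'FAIL: offline before suspend'; exit 1; }

sudo pm-suspend                   # suspend the system

nmcli con down id "$CONN"         # repeat the first two steps
nmcli con up id "$CONN"
check_online || { echo 'FAIL: offline after suspend'; exit 1; }
echo PASS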

Saving time on our existing tests will allow us to introduce new tests where appropriate, so we’re able to provide even more thorough certification testing.


brendandonegan

My favourite aliases…

Something I (embarrassingly) only recently discovered is that bash supports the concept of aliases, which are shorthand for commonly used commands. Ubuntu comes with a few by default in your .bashrc, e.g. ‘ll’ for ‘ls -alF’ (a long listing). You’re free, of course, to add your own to .bashrc, so here I present some of the ones I use:

alias chx='chmod +x'
alias rvim='sudo vim'    # if you use vim, that is ;)
alias sagi='sudo apt-get install -y'
alias sagr='sudo apt-get remove'
alias sagu='sudo apt-get update'
alias saar='sudo add-apt-repository'

I find that the apt ones especially save a lot of typing. Hope you find them useful!

(Oh yeah: just put the lines in your ~/.bashrc and run ‘source ~/.bashrc’.)
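Once sourced, they behave exactly like the commands they stand for, e.g. (htop is just an example package):

sagi htop    # runs: sudo apt-get install -y htop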


brendandonegan

As discussed at last month’s Ubuntu Developer Summit in the session ‘ARM and other architectures certification program’, there’s a plan to start certifying ARM hardware, or at least to start investigating how we’ll do it. To this end I’ve received on loan a TI OMAP4 Pandaboard from Canonical’s ARM QA team. I’ve actually had it here in the office for quite a few weeks now, but for one reason or another I haven’t got around to blogging about it yet!

So, without further ado, here are a couple of shots of my setup:

I like it because it’s really compact and smacks of geekiness, with all the exposed circuits, yet it’s really quite easy to use in a lot of ways. The monitor is plugged in via the HDMI port on the right-hand side (because of an issue with my monitor I can only get 640×480 out of it, so everything is very squeezed on the screen), and the wireless desktop receiver which handles my mouse and keyboard plugs right into one of the two full-sized USB 2.0 ports. The whole thing is powered by my laptop (even when it’s suspended) via the 5V USB power connector, also on the right-hand side.

It’s running Natty/Unity 2D, installed on the 8GB SDHC card on the left of the board. This means the whole setup would have cost just under $200 (had I paid for it rather than borrowed it). The white-labelled chip on the top left-hand side of the board is the WiFi/Bluetooth chip, and that works *perfectly* out of the box, often picking up a better signal than the laptop sitting right next to it. I also have the option of plugging my USB headset into the same USB hub as the wireless receiver (it’s a tight squeeze but it just about fits), and that too works perfectly.

The cons are that I don’t have a USB HDD, so Ubuntu is running on flash memory (notoriously poor performance), and that if I decide to power down my laptop but forget the Pandaboard has some task running on it, then all is lost :( Overall, though, it’s a really nice piece of equipment, and because of all the good work that has been done around it I could recommend one to anyone with a bit of technical know-how (no ARM experience required!).


brendandonegan

In my travels around Launchpad looking for bugs to triage, I came across an old one that I had noticed (though not before others, apparently) in the Alpha 1 release of Oneiric Ocelot. This was a problem with update-manager not ‘seeing’ that network-manager had a connection, because the new version of network-manager (0.9) uses different codes to express ‘connected’.

This issue was bugging me, so I decided to take it upon myself to patch it up. Someone had made a similar patch in software-center, so I already had all of the knowledge I needed right there (i.e. what the new codes are). I jumped into my Oneiric VM, branched the update-manager code and hacked away at a couple of Python modules; I tweaked, buffed and polished until, lo and behold, on starting update-manager it picked up the connection! A few command lines (bzr stat, bzr commit, bzr push) and a few clicks in Launchpad later, my merge request was with the update-manager project maintainer (Michael Vogt, aka mvo). Minutes later it was merged, and the next day, with the help of my patched version of update-manager :), I was able to update update-manager with the patch.
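For the curious, the gist of the change: NetworkManager 0.8 reported a single ‘connected’ state code, whereas 0.9 splits connectivity into site-only and global states. You can inspect the state yourself over D-Bus; the numeric values in the comments are from the 0.8/0.9 headers as I remember them, so treat them as illustrative:

dbus-send --system --print-reply \
    --dest=org.freedesktop.NetworkManager \
    /org/freedesktop/NetworkManager \
    org.freedesktop.DBus.Properties.Get \
    string:org.freedesktop.NetworkManager string:State
# NM 0.8: 3 meant NM_STATE_CONNECTED
# NM 0.9: 60 is NM_STATE_CONNECTED_SITE, 70 is NM_STATE_CONNECTED_GLOBAL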

Looking at my own name there in update-manager’s description of the change, I couldn’t help but think how awesome it is that I’m able to do this with my favourite operating system. That’s what makes OSS magic for me…

