Canonical Voices

Posts tagged with 'intel'

Colin Ian King

I've been fortunate to get my hands on an Intel ® 520 2.5" 240GB Solid State Drive so I thought I'd put it through some relatively simple tests to see how well it performs.

Power Consumption


My first round of tests involved seeing how well it performs in terms of power consumption compared to a typical laptop spinny Hard Disk Drive.  I rigged up a Lenovo X220i (i3-2350M @ 2.30GHz) running Ubuntu Precise 12.04 LTS (x86-64) to a Fluke 8846A precision digital multimeter and then compared the SSD against a 320GB Seagate ST320LT020-9YG142 HDD in some simple I/O tests.  Each test scenario was run 5 times and I based my results on the average of these 5 runs.

The Intel ® 520 2.5" SSD fits into conventional drive bays but comes with a black plastic shim attached to one side that first has to be removed to reduce the height so that it can be inserted into the Lenovo X220i's low profile drive bay. This is a trivial exercise and takes just a few moments with a suitable Phillips screwdriver.  (As a bonus, the SSD also comes with a 3.5" adapter bracket and SATA 6.0 signal and power cables, allowing it to be easily added to a desktop too).

In an idle state, the HDD pulled ~25mA more than the SSD, so in overall power consumption terms the SSD saves ~5%, (e.g. adds ~24 minutes life to an 8 hour battery).

I then exercised the ext4 file system with Bonnie++ and measured the average current drawn during the run and, using the idle "baseline", calculated the power consumed for the duration of the test.  The SSD draws more current than the HDD; however, it ran the Bonnie++ test ~4.5 times faster, so the total power consumed to get the same task completed was less, typically 1/3 of the power of the HDD.
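
The measurement itself reduces to a simple piece of arithmetic. The sketch below uses entirely made-up numbers (supply voltage, currents and duration are all hypothetical) just to show the calculation; the commented Bonnie++ invocation uses standard flags from its man page but should be checked against your version.

```shell
# Run Bonnie++ on the mounted test file system, e.g.:
#   bonnie++ -d /mnt/test -u nobody
# then convert the extra current drawn into energy used.
# Energy above idle (joules) = supply volts * (run current - idle current) * seconds
awk 'BEGIN {
    v = 12.0        # hypothetical supply voltage
    i_run = 0.450   # hypothetical average current during the run (A)
    i_idle = 0.300  # hypothetical idle baseline current (A)
    t = 600         # hypothetical run duration (s)
    printf "%.1f J above idle\n", v * (i_run - i_idle) * t
}'
```

With those stand-in numbers the extra energy works out at 1080.0 J; repeating the same calculation for each drive is what allows a "power consumed per completed task" comparison rather than a raw current one.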

Using dd, I next wrote 16GB to the devices and found the SSD was ~5.3 times faster than the HDD and consumed ~ 1/3 the power of the HDD.    For a 16GB read, the SSD was ~5.6 times faster than the HDD and used about 1/4 the power of the HDD.
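
The dd runs look roughly like the following. The destructive raw-device commands are left commented out (/dev/sdX is a placeholder for the drive under test); the live part of the sketch is a scaled-down dry run against an ordinary file so it is safe to execute anywhere.

```shell
# DESTRUCTIVE raw-device version of the 16GB test (placeholder device name):
#   dd if=/dev/zero of=/dev/sdX bs=1M count=16384 oflag=direct   # 16GB write
#   dd if=/dev/sdX of=/dev/null bs=1M count=16384 iflag=direct   # 16GB read
# Safe scaled-down dry run against a file:
dd if=/dev/zero of=ddtest.img bs=1M count=16 conv=fdatasync 2>/dev/null
stat -c %s ddtest.img    # prints 16777216 (16MB written)
```

dd prints the elapsed time and transfer rate on completion, which is where the speed-up figures come from; the multimeter supplies the current readings for the power side of the comparison.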

Finally, using tiobench I calculated that the SSD was ~7.6 times faster than the HDD and again used about 1/4 the power of the HDD.

So, overall, very good power savings.  The caveat is that since the SSD consumes more power per second than the HDD (but gets far more I/O completed), one can use more power with the SSD under continuous, sustained I/O.  You do more and it costs more; but you get it done faster, so like for like the SSD wins in terms of reducing power consumption.

 

Boot Speed


Although ureadahead tries hard to optimize the inode and data reads during boot, the HDD is always going to perform badly because of seek latency and slow data transfer rates compared to any reasonable SSD.  Using bootchart over five runs, the average time to boot was ~7.9 seconds for the SSD and ~25.8 seconds for the HDD, so the SSD improved boot times by a factor of about 3.2.  Read rates were topping ~420 MB/sec, which was good, but could have been higher for some as-yet-unknown reason.

 

Palimpsest Performance Test


Palimpsest (aka "Disk Utility") has a quick and easy to use drive benchmarking facility that I used to measure the SSD read/write rates and access times.  Since writing to the drive destroys the file system I rigged the SSD up in a SATA3 capable desktop as a 2nd drive and then ran the tests.  Results are very impressive:

Average Read Rate: 535.8 MB/sec
Average Write Rate: 539.5 MB/sec
Average Access Time: sub 0.1 milliseconds.

This is ~7x faster in read/write speed and ~200-300x faster in access time compared to the Seagate HDD.

File System Benchmarks


So which file system performs best on the SSD?  Well, it depends on the use case. There are many different file system benchmarking tools available and each one addresses different types of file system behaviour.  Whichever test I use, it most probably won't match your use case(!)  Since SSDs have very small latency overhead, it is worth exercising various file systems with multi-threaded I/O reads/writes to see how well these perform.  I rigged up the threaded I/O benchmarking tool tiobench to exercise ext2, ext3, ext4, xfs and btrfs while varying the number of threads from 1 to 128 in powers of 2.  In theory the SSD can do multiple random seeks very efficiently, so this type of testing should show the point where the SSD has optimal performance with multiple I/O requests.
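
A sweep like that is easy to script. The loop below generates the 1 to 128 powers-of-2 thread counts; the tiobench invocation itself is commented out because its flags vary between versions (and it needs a scratch file system to chew on), so treat it as an approximation.

```shell
# Sweep thread counts 1,2,4,...,128; uncomment the tiobench line on a test rig.
t=1
while [ "$t" -le 128 ]; do
    echo "tiobench run with $t threads"
    # tiobench --threads "$t" --size 1024 --dir /mnt/test   # flags may differ by version
    t=$((t * 2))
done | tee tiobench-runs.log
```

Each file system under test gets freshly created on the drive, mounted, swept with this loop, and the per-thread-count throughput and latency figures collected from tiobench's output.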

 

Sequential Read Rates

Throughput peaks at 32-64 threads, and xfs performs best followed by ext4; both are fairly close to the maximum device read rate.  Interestingly, btrfs performance is almost level throughout.

Sequential Write Rates


xfs is consistently best, whereas btrfs performs badly at low thread counts.

 

Sequential Read Latencies



These scale linearly with the number of threads and all file systems follow the same trend.

 

Sequential Write Latencies



Again, linear scaling of latencies with number of threads.

Random Read Rates


Again, the best transfer rates seem to occur with 32-64 threads, and btrfs does not seem to perform that well compared to ext2, ext3, ext4 and xfs.

Random Write Rates



Interestingly, ext2 and ext3 fare well, with ext4 and xfs performing very similarly and btrfs performing worst again.

 

Random Read Latencies



Again we see linear scaling of latency as the thread count increases, with very similar performance between all file systems.  In this case, btrfs performs best.

Random Write Latencies


With random writes the latency is consistently flat, apart from the final data point for ext4 at 128 threads, which could just be an anomaly.

Which I/O scheduler should I use?

 

Anecdotal evidence suggests using the noop scheduler should be best for an SSD.  In this test I exercised ext4, xfs and btrfs with Bonnie++ using the CFQ, Noop and Deadline schedulers.   The tests were run 5 times and below are the averages of the 5 test runs.
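
Switching schedulers for these runs is a one-line write into sysfs. The helper below is a minimal sketch (device name is a placeholder, and the real write needs root); the live part runs against a stand-in file so nothing on the system is touched.

```shell
# Minimal helper: write a scheduler name into a queue/scheduler file.
set_iosched() {
    echo "$2" > "$1"
}
# Real use (as root):  set_iosched /sys/block/sda/queue/scheduler noop
# Reading the file back shows the available schedulers, active one in brackets.
# Dry run against a stand-in file:
printf '[cfq] noop deadline\n' > fake_scheduler
set_iosched fake_scheduler noop
cat fake_scheduler    # prints: noop
```

The scheduler was switched this way before each Bonnie++ run, with the change verified by reading the sysfs file back.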

ext4:




                                    CFQ       Noop      Deadline
Sequential Block Write (K/sec):     506046    513349    509893
Sequential Block Re-Write (K/sec):  213714    231265    217430
Sequential Block Read (K/sec):      523525    551009    508774


So for ext4 on this SSD, Noop is a clear winner for sequential I/O.

xfs:




                                    CFQ       Noop      Deadline
Sequential Block Write (K/sec):     514219    514367    514815
Sequential Block Re-Write (K/sec):  229455    230845    252210
Sequential Block Read (K/sec):      526971    550393    553543


It appears that Deadline for xfs seems to perform best for sequential I/O.

 

btrfs:




                                    CFQ       Noop      Deadline
Sequential Block Write (K/sec):     511799    431700    430780
Sequential Block Re-Write (K/sec):  252210    253656    242291
Sequential Block Read (K/sec):      629640    655361    659538


And for btrfs, Noop is marginally better for sequential writes and re-writes but Deadline is best for reads.

So it appears that for sequential I/O operations, CFQ is the least optimal choice, with Noop being a good choice for ext4, Deadline for xfs and either for btrfs.  However, this is just based on sequential I/O testing and we should explore random I/O testing before drawing any firm conclusions.

Conclusion

 

As can be seen from the data, SSDs provide excellent transfer rates and incredibly short latencies, as well as reduced power consumption.  At the time of writing the cost per GB for an SSD is typically slightly more than £1, which is around 5-7 times more expensive than a HDD.  Since I travel quite frequently and have damaged a couple of HDDs in the last few years, the shock resistance, performance and power savings of the SSD are worth paying for.

Read more
Colin Ian King

The Ubuntu Kernel Team has uploaded a new kernel (3.2.0-17.27) which contains an additional fix to resolve the remaining issues seen with the RC6 power saving enabled. We would appreciate it if users with Sandy Bridge based hardware could run the tests described on https://wiki.ubuntu.com/Kernel/PowerManagementRC6 and add their results to that page.

Read more
Colin Ian King

The Ubuntu Kernel Team has released a call for testing for a set of RC6 power saving patches for Ubuntu 12.04 Precise Pangolin LTS. Quoting Leann Ogasawara's email to the ubuntu kernel team and ubuntu-devel mailing lists:

"Hi All,

RC6 is a technology which allows the GPU to go into a very low power consumption state when the GPU is idle (down to 0V). It results in considerable power savings when this stage is activated. When comparing under idle loads with machine state where RC6 is disabled, improved power usage of around 40-60% has been witnessed [1].

Up until recently, RC6 was disabled by default for Sandy Bridge systems due to reports of hangs and graphics corruption issues when RC6 was enabled. Intel has now asserted that RC6p (deep RC6) is responsible for the RC6 related issues on Sandy Bridge. As a result, a patch has recently been submitted upstream to disable RC6p for Sandy Bridge [2].

In an effort to provide more exposure and testing for this proposed patch, the Ubuntu Kernel Team has applied this patch to 3.2.0-17.26 and newer Ubuntu 12.04 Precise Pangolin kernels. We have additionally enabled plain RC6 by default on Sandy Bridge systems so that users can benefit from the improved power savings by default.

We have decided to post a widespread call for testing from Sandy Bridge owners running Ubuntu 12.04. We hope to capture data which supports the claims of power saving improvements and therefore justify keeping these patches in the Ubuntu 12.04 kernel. We also want to ensure we do not trigger any issues due to plain RC6 being enabled by default for Sandy Bridge.

If you are running Ubuntu 12.04 (Precise Pangolin) and willing to test and provide feedback, please refer to our PowerManagementRC6 wiki for detailed instructions [3]. Additionally, instructions for reporting any issues with RC6 enabled are also noted on the wiki. We would really appreciate any testing and feedback users are able to provide.

Thanks in advance,
The Ubuntu Kernel Team"

So please contribute to this call for testing by visiting https://wiki.ubuntu.com/Kernel/PowerManagementRC6 and following the instructions.  Thank you!

Read more

At Ohio LinuxFest I had lunch with Carl from System76 and Chase Douglas, who has been working on bringing multitouch to Ubuntu. Since we’re nerds the subject of hardware came up, and I got a glimpse of the amount of effort S76 puts into getting quality parts that are known-good Linux compatible components and some of the challenges they face. They have a budget box that boots in 6 seconds, if you get the SSD option. So, speaking about SSDs …

I had a first generation Intel SSD, and as most Intel SSD owners will tell you, there's really nothing like it. But it can get expensive, especially on a nice home machine where you want lots of room. On a laptop you can compromise with a hybrid drive, like this one, which I put in my new netbook and is a nice middle ground. However, if you've got room in your PC case there's a great compromise that I've been rolling with at home. Life is too short to worry about partitioning, but a 40gb SSD is about one hundred bucks and a worthy addition to your existing PC.

“But it’s only 40gb!”

Yes. You will get this, and then put / on it. /home will go on your normal 1tb drive or whatever. So your OS is on the SSD, and all the stuff you need space for will be on the big disk.
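
The split looks like this in /etc/fstab. This is only a sketch of the layout, not the author's actual config; the UUIDs are placeholders (find your real ones with blkid), and the mount options are common SSD choices rather than requirements.

```shell
# Write out an example fstab showing the SSD-root / HDD-home split.
cat > fstab.example <<'EOF'
# 40gb SSD holds the OS (placeholder UUID; use blkid to find yours)
UUID=SSD-ROOT-UUID  /      ext4  noatime,discard  0  1
# big spinning disk holds your data
UUID=HDD-HOME-UUID  /home  ext4  defaults         0  2
EOF
cat fstab.example
```

At install time you get the same effect by pointing the partitioner's / mount at the SSD and /home at the big disk.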

“Is it worth the hundred bucks?”

Yes, because instead of spending $250 to get a 200mhz microbump or another 2 cores on the CPU, you will get the mid-priced CPU option, buy this, and come out on top by a mile. Or you will put this in your existing PC and realize that your existing computing needs are just fine once you get rid of the drive bottleneck.

“Aha, but what about stuff in /home, that’s still on spinning platters!”

Login time is about the sameish, since you're reading a bunch of junk from .gconf, but the rest of the boot is so fast you won't mind the compromise. Apps will launch very quickly. Your data will still be on disk, so copying stuff around will be normal, etc. You can also make a directory under / and symlink things there that are important to you (like your Firefox profile, trust me on that one). And there's enough room on the drive to pop into /tmp if you want to build something and want the SSD speed.
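
The symlink trick sketched in shell, run in a sandbox so it is safe to try anywhere. The paths are illustrative stand-ins (ssd/ plays the SSD-backed directory, home/ plays your home directory); in real life you would mv the profile first and link from your actual home.

```shell
# Create stand-in directories for the SSD and the home directory.
mkdir -p ssd/profiles/firefox home/.mozilla
# Real life: mv ~/.mozilla/firefox /ssd/profiles/ first, then link it back:
ln -s ../../ssd/profiles/firefox home/.mozilla/firefox
# Firefox follows the symlink, so the profile now lives on the fast drive.
ls -l home/.mozilla/firefox
```

Anything that hammers small files at startup (browser profiles, caches) is a good candidate for the same treatment.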

On a related note ZaReason does offer the X25-v and dual drive setups, though I have no idea if they partition it for you how you would expect. If anyone is familiar with this leave a comment!

Read more
Canonical

A few weeks ago Dustin Kirkland and I had the privilege of travelling to the Intel facility in Hillsboro, Oregon to work with Billy Cox, Rekha Raghu, Paul Guermonprez, Trevor Cooper and Kamal Natesan of Intel and Dan Nurmi and Neil Soman of Eucalyptus Systems and a few others on developing a proof of concept whitepaper on the use of Ubuntu Enterprise Cloud on Intel Xeon processors (Nehalem).

The whitepaper is published today on the Intel site (registration required) so it seems like a good time to talk about why we collaborated.

The Intel Cloud Builder program is intended to develop some best practice information for businesses and institutions looking to take advantage of the promise of cloud computing. As we do consistently with UEC, we are being specific when we talk about cloud as the ability to build Infrastructure as a Service behind a corporate firewall – that is on your own systems, protected by your own security protocols.

In Portland we had access to some great hardware and, as an ex-Intel man, it was good to mess directly with the metal again. Intel defined a number of use and test cases, and the guys from Intel, Eucalyptus and I were able to have some fun putting UEC through its paces. And the results were good. We documented them, and the whitepaper gives numerous code and scenario examples to help anyone new to cloud get up to speed really quickly and make the most of the capabilities of the Xeon processor in supporting an internal IaaS infrastructure. You can find out how to get started on UEC with existing documentation, but this whitepaper takes it to the next stage.

Being able to test the software as part of the Intel Cloud Builder program and jointly publish this whitepaper is a great endorsement of what is still a young technology. And I hope it will give users confidence to start building their own UEC deployment on x86 technology.

Nick Barcet, Ubuntu Server Product Manager

Read more