So here we are, generating load for a JackRabbit test. A prospective partner wants to know what it can handle. Fair enough; we would like to know if we can push it to its limits.
Basic test: a 4-way gigabit channel bond with an NFS export. Four client machines mount this, all generating load via IOzone. IOzone run like this:
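The original command listing didn't survive here, but a representative multi-client invocation over the NFS mount might look like the following. The record size, file size, and mount path are my assumptions, not the original values; the flags are standard IOzone options.

```shell
# Hypothetical IOzone run, launched on each client against the NFS export.
# -i 0 -i 1 : run the write/rewrite and read/reread tests
# -r 64k    : 64 kB record size (assumed)
# -s 8g     : 8 GB file per client, large enough to defeat client caching (assumed)
# -f ...    : per-client test file on the shared mount (path assumed)
iozone -i 0 -i 1 -r 64k -s 8g -f /mnt/jackrabbit/testfile.$(hostname)
```

Starting the same command on all four clients at (nearly) the same moment is what makes the aggregate load interesting.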
I started out with a motley crew of client hosts, all running whatever version of Linux, figuring this would be fine.
First, the nVidia ck804 gigabit port (you know, the forcedeth one) is terrible. Under heavy load, it appears to corrupt packets.
Fine. Replace this with an Intel e1000 based 1000/MT. I cannot say enough good things about this card. Really, I can’t.
Second, the D-Link or Linksys NIC in my old, crappy 32-bit Athlon unit, a machine I have had for about four years now, does the same thing under load. Since I use this machine as a CD/DVD burner, this concerns me.
Now here is where we leave the realm of hardware and start talking about OS distributions.
The clients have (er, had) SuSE 10.2, SuSE 10.1, RHEL 4, and Ubuntu 7.04. In the Ubuntu case, I found that the kernel we built ourselves is generally better at packet generation (not sure why) than the stock kernels. Maybe I will understand this some day.
But here is what is interesting. All units are connected by Cat 5e/6 into the same switch (not the world's greatest switch, but a switch). All can sustain great data rates with netperf and other benchmarks.
But run something like IOzone, invoked the same way on every client, over the same mount point, and … well …

The Ubuntu machines give very similar results, near theoretical peak performance.
It's the SuSE machines that worry me. These are running 2.6.x kernels with x <= 18.
Note that the read performance and the rewrite performance are terrible.
All benchmarks were started within 2 seconds of each other; the two Ubuntu machines finished within 20 seconds of each other, and the SuSE machine is still crunching.
Looking at the output of dstat (dstat -N bond0,eth2,eth3,eth4,eth5), I see two of the four channel-bond ports running at full capacity during the test, and one limping along well below capacity.
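Besides dstat, the Linux bonding driver exposes per-slave state in /proc, which helps identify which port is the laggard. A quick check (using the interface names from the dstat line above) might be:

```shell
# Overall bond state: bonding mode, plus the MII status of each slave
cat /proc/net/bonding/bond0

# Per-slave cumulative traffic counters, to spot the port limping along
for nic in eth2 eth3 eth4 eth5; do
    echo -n "$nic tx_bytes: "
    cat /sys/class/net/$nic/statistics/tx_bytes
done
```

Sampling the counters twice and differencing gives per-port throughput without any extra tooling.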
You will notice that I haven't spoken about RHEL yet. Well, now may be the time.
So I used RHEL (OK, CentOS 4.2) on the itanic box here as the last test case. Or at least I did, until I gave up on it.
IOzone would, about a third of the way into the test, hang this box hard. Completely unresponsive. Big-red-switch time.
My impression from using Ubuntu over the past several months is that it is technically one of the best-engineered distros (which may be due to its being built upon Debian). The kernel we built in this case has all the latest NFS patches, so we may be seeing conflicts between those patches and the older, unpatched distros. Performance on this distro has generally been very snappy. Stuff worked, and worked well. Building a custom kernel for patch/driver support for customers is easy, as compared to the other distros such as SuSE and RHEL, where it is akin to pulling teeth, without anaesthetic.
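For reference, the Debian/Ubuntu custom-kernel flow I'm alluding to is roughly the kernel-package route. The version suffix and paths below are illustrative, not our exact build:

```shell
# Sketch of a custom kernel build on Ubuntu via kernel-package (assumed flow)
apt-get install kernel-package build-essential
cd /usr/src/linux              # your patched kernel tree (path assumed)
make menuconfig                # enable the drivers and patches you need
# Build installable .deb packages; the -custom suffix is illustrative
make-kpkg --initrd --append-to-version=-custom kernel_image kernel_headers
dpkg -i ../linux-image-*-custom*.deb
```

The resulting .deb installs and uninstalls cleanly through the package manager, which is a large part of why this is painless compared to hand-rolled kernels on other distros.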
I just took one of the SuSE 10.2 machines down and booted it with an Ubuntu CD, so now we are going to see what happens with 3 Ubuntu clients and 1 SuSE client. Then we will try with 4 Ubuntu clients.
Of course, while running these tests, I used atop and other tools to watch the JackRabbit. It wasn't even breaking a sweat, much less panting hard. It had lots of headroom.
And this is the small version, the JR-S: an 8 TB unit with 16 drives.