Caveat: this is a quick check to see if we have a performance regression, which I run routinely before a release, and by no means a comprehensive performance test!
I ran this both on my home cluster and our internal lab.
This test is described in detail in . It forms a cluster of 4 nodes, and every node sends 1 million messages of varying sizes (1K, 5K, 20K). We measure how long it takes for every node to receive all 4 million messages, then compute the message rate and throughput per second, per node.
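In Java, the measurement boils down to something like the sketch below: count incoming messages and, once all expected messages have arrived, derive the per-node message rate and throughput. Class and field names here are my own invention for illustration; this is not the actual test source.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the measurement logic (illustrative names, not the real test code):
// count incoming messages, then compute rate and throughput per node.
public class Measurement {
    final long expected;                 // total messages this node will receive
    final int msgSize;                   // payload size in bytes
    final AtomicLong received = new AtomicLong();
    volatile long start, stop;           // nanoTime of first and last message

    Measurement(long expected, int msgSize) {
        this.expected = expected;
        this.msgSize = msgSize;
    }

    // called once per incoming message, e.g. from the channel's receive callback
    void onMessage() {
        long n = received.incrementAndGet();
        if (n == 1)        start = System.nanoTime();
        if (n == expected) stop  = System.nanoTime();
    }

    double msgsPerSec() { return expected / ((stop - start) / 1e9); }

    double mbPerSec()   { return msgsPerSec() * msgSize / 1e6; }

    public static void main(String[] args) {
        Measurement m = new Measurement(4_000_000L, 1000); // 4 nodes x 1M msgs of 1K
        for (long i = 0; i < m.expected; i++)
            m.onMessage();                                 // simulate message delivery
        System.out.printf("%.0f msgs/sec, %.1f MB/sec per node%n",
                          m.msgsPerSec(), m.mbPerSec());
    }
}
```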
This is my home cluster; it consists of 4 HP ProLiant DL380G5 quad-core servers (ca. 3700 bogomips), connected to a GB switch and running Linux 2.6. The JDK is 1.6 and the heap size is 600M. I ran 1 process on every box. The configuration used was udp.xml (using IP multicasting), as shipped with JGroups.
- 1K message size: 140 MBytes / sec / node
- 5K message size: 153 MBytes / sec / node
- 20K message size: 154 MBytes / sec / node
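A quick back-of-the-envelope conversion (my own arithmetic, not part of the test output) turns these throughput figures into the number of messages each node actually processed per second at each payload size:

```java
// Converts the measured per-node throughput (MB/sec) into a message rate,
// using 1 MB = 10^6 bytes. The throughput figures are the results above.
public class Rates {
    static double msgsPerSec(double mbPerSec, int msgSizeBytes) {
        return mbPerSec * 1_000_000 / msgSizeBytes;
    }

    public static void main(String[] args) {
        double[] mb = {140, 153, 154};          // measured MB/sec/node
        int[] sizes = {1_000, 5_000, 20_000};   // payload sizes in bytes
        for (int i = 0; i < sizes.length; i++)
            System.out.printf("%2dK: %.0f msgs/sec/node%n",
                              sizes[i] / 1000, msgsPerSec(mb[i], sizes[i]));
        // → 140000 msgs/sec at 1K, 30600 at 5K, 7700 at 20K
    }
}
```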
This test mimics the way Infinispan's DIST mode works.
Again, we form a cluster of between 1 and 9 nodes, with every node on a separate machine. The test then has every node invoke 2 unicast RPCs on randomly selected nodes. With a probability of 80% the RPCs are reads, and with a probability of 20% they're writes. The writes carry a payload of 1K, and the reads return a payload of 1K. Every node makes 20'000 RPCs.
The hardware is a bit more powerful than my home cluster's: every machine has 5300 bogomips, and all machines are connected via GB ethernet.
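The request mix described above can be sketched roughly as follows. This is a hypothetical sketch: names, structure, and the exact target-selection rules (e.g. whether a node can pick itself) are my own guesses, not the actual test code.

```java
import java.util.Random;

// Sketch of the request mix: 20'000 requests per node, each issuing 2 unicast
// RPCs to randomly selected nodes, 80% reads / 20% writes, 1K payloads.
public class RpcMix {
    static final int NUM_RPCS = 20_000;
    static final double READ_RATIO = 0.8;
    static final byte[] PAYLOAD = new byte[1_000]; // 1K payload

    // returns {reads, writes} for one node's run against a cluster of n nodes
    static int[] mix(Random rnd, int n) {
        int reads = 0, writes = 0;
        for (int i = 0; i < NUM_RPCS; i++) {
            int t1 = rnd.nextInt(n);   // first unicast RPC target
            int t2 = rnd.nextInt(n);   // second unicast RPC target
            if (rnd.nextDouble() < READ_RATIO)
                reads++;               // read RPC: returns a 1K payload
            else
                writes++;              // write RPC: carries a 1K payload
        }
        return new int[] {reads, writes};
    }

    public static void main(String[] args) {
        int[] r = mix(new Random(), 4);
        System.out.printf("reads=%d, writes=%d%n", r[0], r[1]);
    }
}
```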
- 1 node: 50'000 requests / sec /node
- 2 nodes: 23'000 requests / sec / node
- 3 nodes: 20'000 requests / sec / node
- 4 nodes: 20'000 requests / sec / node
- 5 nodes: 20'000 requests / sec / node
- 6 nodes: 20'000 requests / sec / node
- 7 nodes: 20'000 requests / sec / node
- 8 nodes: 20'000 requests / sec / node
- 9 nodes: 20'000 requests / sec / node
This is actually good news, as it shows that aggregate performance grows linearly: from 3 nodes on, the per-node rate stays constant at 20'000 requests/sec, so the cluster-wide request rate scales with the number of nodes. As a matter of fact, with increasing cluster size, the chance of more than 2 nodes picking the same target decreases, so performance degradation due to (write) access conflicts is likely to decrease as well.
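To put a rough number on that conflict argument (my own back-of-the-envelope model, not part of the test): if each node picks a target uniformly among the other n-1 nodes, the probability that two given nodes hit the same target is (n-2)/(n-1)^2, which shrinks as the cluster grows.

```java
// Probability that two given nodes, each picking a target uniformly among
// the other n-1 nodes, pick the same one: the n-2 candidates common to both
// are each chosen with probability 1/(n-1) by either node.
public class Collision {
    static double pSameTarget(int n) {
        return (n - 2) / (double) ((n - 1) * (n - 1));
    }

    public static void main(String[] args) {
        for (int n = 3; n <= 9; n++)
            System.out.printf("n=%d: %.3f%n", n, pSameTarget(n));
        // drops from 0.250 at n=3 to 0.109 at n=9
    }
}
```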
Caveat: I haven't tested this on a larger cluster yet, but the current performance is already very promising.