I'm happy to announce a new transport based on NIO.2: TCP_NIO2 !
The new transport is completely non-blocking, so, unlike TCP, it never blocks on a socket connect, read or write.
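To illustrate what "never blocks on a connect" means, here's a minimal sketch (not JGroups code; class and method names are made up for the example) using `java.nio`: once a channel is put into non-blocking mode, `connect()` returns immediately and the caller finishes the connect later instead of sitting in a blocking call.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NonBlockingConnect {
    // Hypothetical helper, not part of JGroups: connect without ever blocking
    static boolean connectNonBlocking() throws IOException {
        // A listener on an ephemeral loopback port, just so the connect has a target
        ServerSocketChannel server = ServerSocketChannel.open()
            .bind(new InetSocketAddress("127.0.0.1", 0));
        SocketChannel ch = SocketChannel.open();
        ch.configureBlocking(false);              // all subsequent ops are non-blocking
        ch.connect(server.getLocalAddress());     // returns immediately
        // The caller is free to do other work here; poll (or use a Selector)
        // to complete the connect when the socket is ready
        while (!ch.finishConnect())
            Thread.onSpinWait();
        boolean connected = ch.isConnected();
        ch.close();
        server.close();
        return connected;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("connected=" + connectNonBlocking());
    }
}
```

The same idea applies to reads and writes: they return whatever could be done right now instead of parking the calling thread.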
The big advantage of TCP_NIO2 over TCP is that it doesn't need to create one reader thread per connection (and possibly a writer thread as well, if send queues are enabled).
In a cluster of 1000 nodes, with TCP every node would have 999 reader threads and 999 connections. TCP_NIO2 still has (at most) 999 TCP connections open, but only a single selector thread servicing all of them. When data is available to be read, we read as much as we can without blocking, then pass the read message(s) off to the regular or OOB thread pools for processing.
This makes TCP_NIO2 a more scalable and non-blocking alternative to TCP.
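The selector-plus-handoff pattern described above can be sketched as follows. This is an illustrative toy, not TCP_NIO2's actual implementation: one selector thread accepts connections and reads whatever bytes are available, and a `java.util.concurrent` pool stands in for JGroups' regular/OOB thread pools.

```java
import java.io.ByteArrayOutputStream;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SelectorSketch {
    // One selector thread services all connections; complete messages are handed
    // to a pool so the selector thread stays free. All names are illustrative.
    static String runOnce() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CompletableFuture<String> delivered = new CompletableFuture<>();

        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open()
            .bind(new InetSocketAddress("127.0.0.1", 0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        // A single client stands in for the up-to-999 peers of the example above
        SocketChannel client = SocketChannel.open(server.getLocalAddress());
        client.write(ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8)));
        client.shutdownOutput();              // EOF marks the end of the "message"

        while (!delivered.isDone()) {
            selector.select(100);
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    // attach a per-connection buffer for partially read messages
                    ch.register(selector, SelectionKey.OP_READ, new ByteArrayOutputStream());
                }
                else if (key.isReadable()) {
                    ByteArrayOutputStream acc = (ByteArrayOutputStream) key.attachment();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = ((SocketChannel) key.channel()).read(buf); // never blocks
                    if (n > 0)
                        acc.write(buf.array(), 0, n);
                    else if (n == -1) {                 // message complete: hand off
                        key.cancel();
                        String msg = acc.toString("UTF-8");
                        pool.execute(() -> delivered.complete(msg));
                    }
                }
            }
            selector.selectedKeys().clear();
        }
        client.close(); server.close(); selector.close(); pool.shutdown();
        return delivered.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runOnce());
    }
}
```

With one reader thread per connection, the thread count grows linearly with cluster size; here, the selector thread is the only one that touches the sockets, no matter how many connections are registered.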
Performance
I ran the UPerf and MPerf tests on a 9 node cluster (8-core boxes with ~5300 bogomips and 1 GB networking) and got the following results:
UPerf (500'000 requests/node, 50 invoker threads/node):
TCP: 62'858 reqs/sec/node, TCP_NIO2: 65'387 reqs/sec/node
MPerf (1 million messages/node, 50 sender threads/node):
TCP: 69'799 msgs/sec/node, TCP_NIO2: 77'126 msgs/sec/node
So TCP_NIO2 came out ahead in both cases (roughly 4% for UPerf and 10% for MPerf), which surprised me a bit, as there have been reports claiming that the BIO approach was faster.
I therefore recommend running the tests in your own environment, with your own application, to get numbers that are meaningful in your system.
The documentation is here: .