The need for faster data throughput is not slowing down, and there are a number of high-level business reasons for the continuing data tsunami in modern data centers. Perhaps the most important from a financial perspective is high-performance computing for the financial markets: algorithmic traders can turn a slight decrease in latency into potentially millions or billions of dollars. In fact, I discussed this very point a few weeks back when interviewing Jim Theodoras, Director of Technical Marketing at ADVA Optical Networking.
Data warehousing is another area where the demand for speed hasn’t abated, especially with the challenge of exponentially growing databases. Finally, there are clustered cloud-computing services, where improved SLAs can convert directly into increased revenue.
In addition, the continuing evolution of multicore processors, whether general-purpose CPUs or GPUs pressed into service as CPUs, means we are seeing dozens of cores in a typical server. The analogous software trend is virtualization, which increases the utilization of the machine’s processors.
A combination of business need and software/hardware innovation has brought the information technology industry to the point where inter-server communication is becoming a bottleneck.
To understand how the problem is being addressed, I sat down with Brian Sparks, Marketing Working Group Co-Chair of the InfiniBand Trade Association, at Interop 2010 in Las Vegas.
Simply stated, InfiniBand is a protocol that allows programs running on one computer to send messages directly to programs on other computers without running through the operating system network stack, cutting latency by up to 90% in the process.
The concept is often referred to as remote direct memory access (RDMA) because a program can read and write memory on another machine directly. Truth be told, InfiniBand does use a network stack, but it is designed to send messages rapidly without involving the operating system, which is why the term “direct” applies.
InfiniBand creates a channel directly connecting an application in its virtual address space to an application in another virtual address space. The two applications can be in disjoint physical address spaces – hosted by different servers.
The association just came out with a book on InfiniBand, originally targeted at “dummies” in the style of those famous yellow books we are all used to. The association decided instead to name it Introduction to InfiniBand for End Users, especially after realizing the typical reader of such a document is far from a dummy.
I read through the first ten pages or so of the 52-page volume and found myself highlighting half of what I read; it is a very good read. It won’t make you laugh or cry, but if you come to understand how InfiniBand works, perhaps it can get you a raise.
Items worth sharing: InfiniBand was designed in an application-centric way, allowing programs to communicate with one another without going through the OS. When two applications communicate, they set up Queue Pairs (QPs), which form a channel. The maximum amount of data that can be sent in a single message is 2^31 bytes, or two gigabytes.
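To put that single-message ceiling in perspective, here is a quick back-of-the-envelope sketch; note that `MAX_MSG` and `segments` are hypothetical names for this illustration, not part of the InfiniBand verbs API, and they simply show how a bulk transfer larger than 2 GB would have to be broken into multiple messages:

```python
# Illustration only: MAX_MSG and segments() are made-up names,
# not InfiniBand verbs API calls.

MAX_MSG = 2**31  # 2 GiB: the largest payload one InfiniBand message can carry

def segments(total_bytes: int) -> int:
    """Number of messages needed to move total_bytes, given the 2 GiB cap."""
    return -(-total_bytes // MAX_MSG)  # ceiling division

# A 10 GiB bulk transfer would be split into five 2 GiB messages:
print(segments(10 * 2**30))  # 5
```

In practice the transport layer handles this segmentation for the application, so a program simply posts its work requests and lets the channel do the rest.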
I decided not to read the whole book here, because if I did I would likely share it all and give away the ending. You can read it here.
Earlier today I discussed how Ethernet is taking over everything; another example is Fibre Channel, where we now have FCoE. Well, in the world of high-performance computing we also have RoCE, which is RDMA over Converged Ethernet (the acronym is pronounced “Rocky”). This specification applies to the latest flavors of Ethernet, such as 10GigE and 40GigE, and even to the newer and faster adapters coming in the future.
As you can imagine, building on current Ethernet technology allows these adapters to be less expensive and greener, since the components are deployed in much larger numbers where efficiency has been maximized.
If you are in a rush to implement some of this cutting-edge technology in your data center so you can generate more revenue, you will be thrilled to know that RoCE-based Ethernet adapters will be available by 2010-2011, and software is becoming available from the OpenFabrics Alliance now. We can expect adoption by OS distributions in the second half of this year.