In a recent blog post, John Roese, Nortel's CTO, highlighted the IT infrastructure behind CERN's Large Hadron Collider, 'the single largest machine ever built and the biggest scientific experiment ever conducted on the planet'. This 'grid' will distribute, process, and analyze some 15 petabytes of data each year.
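To put that figure in perspective, a quick back-of-envelope calculation (not from the original post) shows the average sustained rate that 15 petabytes per year implies:

```python
# Back-of-envelope check: what average sustained rate does
# 15 petabytes per year imply for the grid's network?

PETABYTE_BITS = 15 * 10**15 * 8   # 15 PB expressed in bits
SECONDS_PER_YEAR = 365 * 24 * 3600

avg_gbps = PETABYTE_BITS / SECONDS_PER_YEAR / 10**9
print(f"average sustained rate: {avg_gbps:.1f} Gbps")
# prints "average sustained rate: 3.8 Gbps"
```

Roughly 4 Gbps around the clock on average, with bursts far above that, which is why dedicated 10 Gbps circuits rather than shared, best-effort routing make sense here.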
What John didn't mention is that the primary network behind CERN is not based on core routers, as is the case in the Internet at large and in conventional research and education networks. Networking researchers concluded that core routers simply couldn't meet the latency demands of the huge files being shipped around and of the grid computing applications (packet segmentation, for example, was one major culprit).
Instead, the network is based on lightpaths: 10 Gbps lambdas and SONET/SDH pipes between computing facilities.
The network currently spans Europe and North America, and extends to Taipei and Mumbai.
The plan is to allow these lightpaths to be established dynamically by the applications themselves (a case of communications enabled applications).
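To illustrate the idea of an application setting up its own lightpath, here is a toy sketch. None of these names (`LightpathController`, `reserve`, `release`) come from the post or from any real control-plane API; they only illustrate an application requesting dedicated optical capacity on demand and releasing it afterwards:

```python
# Hypothetical sketch of application-driven lightpath setup; the class
# and method names are illustrative inventions, not a real API.

class LightpathController:
    """Toy broker that hands out dedicated point-to-point circuits."""

    def __init__(self, capacity_gbps=10):
        self.capacity_gbps = capacity_gbps  # per-lambda capacity
        self.active = {}                    # path_id -> (src, dst, gbps)
        self._next_id = 0

    def reserve(self, src, dst, gbps):
        """Application asks for a lightpath; fail fast if oversubscribed."""
        if gbps > self.capacity_gbps:
            raise ValueError("request exceeds lambda capacity")
        self._next_id += 1
        self.active[self._next_id] = (src, dst, gbps)
        return self._next_id

    def release(self, path_id):
        """Tear the lightpath down when the transfer completes."""
        del self.active[path_id]


# An application shipping a large dataset sets up and tears down its own path:
ctl = LightpathController()
path = ctl.reserve("CERN", "Fermilab", gbps=10)
# ... bulk transfer runs over the dedicated circuit ...
ctl.release(path)
print(len(ctl.active))  # prints "0"
```

The contrast with conventional routed networks is the point: capacity is reserved end to end for the lifetime of the transfer, rather than contended for packet by packet.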
Nortel has collaborated for many years with leading networking research institutions such as Northwestern University in Chicago (home of StarLight) in the US, SURFnet in the Netherlands (including NetherLight), and CANARIE in Canada.
This type of out-of-the-box thinking is also behind Nortel's leadership in 40 and 100 Gbps optical systems and optical Ethernet networking, including technologies such as Provider Backbone Transport (PBT).