Scientists from the California Institute of Technology (Caltech) have set a new world record for data transfer, successfully reaching a combined rate of 186 Gbps in both directions. Their work was presented at the recent SuperComputing 2011 (SC11) conference in Seattle.
To put things into perspective, 186 Gbps roughly corresponds to 100,000 Blu-ray discs transferred in a single day, or enough to download the current version of the Internet in 1.3 years. This extraordinary advancement will pave the way for the next generation of high-tech optical fiber networks, capable of transferring high volumes of information across oceans and continents.
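A quick back-of-the-envelope check of that Blu-ray figure (my own arithmetic, assuming a 25 GB single-layer disc, which is not specified in the article):

```python
# Sanity-check the "Blu-ray discs per day" claim at the record rate.
RATE_BPS = 186e9          # 186 Gbps, combined two-way rate
SECONDS_PER_DAY = 86_400
BLU_RAY_BYTES = 25e9      # assumed: 25 GB single-layer Blu-ray disc

bits_per_day = RATE_BPS * SECONDS_PER_DAY
bytes_per_day = bits_per_day / 8          # roughly 2 petabytes per day

discs_per_day = bytes_per_day / BLU_RAY_BYTES

print(f"{bytes_per_day / 1e15:.1f} PB per day")
print(f"~{discs_per_day:,.0f} Blu-ray discs per day")
```

Under that assumption the rate works out to about 2 PB, or on the order of 80,000 single-layer discs, per day, which is in the same ballpark as the article's "roughly 100,000".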
The researchers used a 100-Gbps network circuit between the University of Victoria Computing Centre in Victoria, British Columbia, and the Washington State Convention Center in Seattle, set up by Canada's Advanced Research and Innovation Network (CANARIE) and BCNET, a non-profit shared IT services organization. Using this high-tech network, the researchers achieved a staggering efficiency: data was transferred at a constant rate of 98 Gbps.
When the researchers then transferred data simultaneously in both directions, they successfully reached a sustained two-way data rate of 186 Gbps between the two data centers, a new world record.
“Our group and its partners are showing how massive amounts of data will be handled and transported in the future,” says Harvey Newman, professor of physics at the California Institute of Technology (Caltech) and head of the high-energy physics (HEP) team.
“Having these tools in our hands allows us to engage in realizable visions others do not have. We can see a clear path to a future others cannot yet imagine with any confidence.”
But why is this important for me, the average internet-surfing Joe, whose monthly bandwidth amounts to a few iTunes albums and some Netflix streaming?
Well, high transfer rates are of capital importance for researchers today, especially those working on the experiments at CERN. So far, more than 100 petabytes (100,000 terabytes) of data have been processed, distributed, and analyzed using a global grid of 300 computing and storage facilities located at laboratories and universities around the world, and these figures are only set to increase tenfold as new particle-collision data needs to be crunched in the future.
“Enabling scientists anywhere in the world to work on the LHC data is a key objective, bringing the best minds together to work on the mysteries of the universe,” says David Foster, the deputy IT department head at CERN.
“The 100-Gbps demonstration at SC11 is pushing the limits of network technology by showing that it is possible to transfer petascale particle physics data in a matter of hours to anywhere around the world,” adds Randall Sobie, a research scientist at the Institute of Particle Physics in Canada and team member.
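That "matter of hours" claim checks out with simple arithmetic (my own calculation, not from the article), taking one petabyte as a representative petascale dataset:

```python
# How long does one petabyte take to transfer over a 100 Gbps link?
PETABYTE_BITS = 1e15 * 8   # one petabyte, expressed in bits
LINK_BPS = 100e9           # the 100 Gbps circuit demonstrated at SC11

seconds = PETABYTE_BITS / LINK_BPS
hours = seconds / 3600

print(f"~{hours:.0f} hours per petabyte")
```

At full line rate, a petabyte moves in roughly 22 hours, so a sustained 100-Gbps circuit really does put petascale transfers within a single day.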