At the BES requirements workshop that I led last week in Washington, D.C., for scientists and program managers, I saw clear evidence of the impending data explosion that will be produced by the next generation of light sources and instruments at BES facilities.
The sheer quantity of data involved is going to completely change the scientific process for the scientists who use these facilities. The current model for data transport used by most scientists at light sources does not use networks at all – scientists travel to the light source, run their experiments, and travel home with a USB hard drive loaded with a few hundred gigabytes of data (perhaps a terabyte or two, but even that is tractable with portable media). This model has worked well for this community for years.
However, as instruments are upgraded with new detectors and as new data analysis methods are employed, data sets are going to increase in size by up to a factor of 100 over the next few years – scientists who might carry home 700 gigabytes of data today will need to move 70 terabytes in the near future. I don’t know about you, but my briefcase isn’t up to the task.
Data will have to be transferred home over the network, or scientists will have to perform their computational analysis on site at the facilities. Another option is streaming data to supercomputer centers for real-time or near-real-time analysis. Whatever happens, the scientists will need more from their networks and from the systems connected to them.
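To make the scale of the problem concrete, here is a back-of-envelope calculation of how long a 70-terabyte transfer takes at common link speeds. The numbers assume ideal sustained throughput; real transfers lose additional time to protocol and storage overheads.

```python
# Back-of-envelope transfer times for a 70 TB data set.
# Assumes ideal sustained throughput end to end; real-world transfers
# will be slower due to protocol, disk, and host overheads.

def transfer_hours(size_terabytes, link_gbps):
    """Hours needed to move `size_terabytes` at a sustained `link_gbps`."""
    bits = size_terabytes * 1e12 * 8           # decimal terabytes -> bits
    seconds = bits / (link_gbps * 1e9)         # gigabits/s -> bits/s
    return seconds / 3600

for gbps in (1, 10, 100):
    print(f"70 TB over {gbps:>3} Gbps: {transfer_hours(70, gbps):,.1f} hours")
```

Even at a sustained 10 Gbps, the transfer takes the better part of a day – which is why network capacity, and tools to verify achievable throughput, matter so much here.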
The increase in data as instrumentation capacity improves will mean a significant change in the science process for these communities. Transferring data will require network capacity upgrades at the scientific facilities and the laboratories that support them, as well as network test and measurement tools such as perfSONAR.
ESnet is ready to help, with pilot projects already underway.
ESnet’s Evangelos Chaniotakis and Chin Guok received Berkeley Lab’s Outstanding Performance Award for their work in promoting technical standards for international scientific networking. Their work is notable because the implementation of open-source software development and new technical standards for network interoperability sets the stage for scientists around the world to better share research and collaborate.
Guok and Chaniotakis worked extensively within the DICE community on development of the Inter-domain Controller Protocol (IDCP). They are taking the principles and lessons gained from years of development efforts and applying them to the efforts in international standards bodies such as the Open Grid Forum (OGF), as well as consortia such as the Global Lambda Infrastructure Facility (GLIF).
So far, the IDCP has been adopted by more than a dozen Research and Education (R&E) networks around the world, including Internet2 (the leading US higher education network), GEANT (the trans-European R&E network), NORDUnet (Scandinavian R&E network) and USLHCNet (high speed trans-Atlantic network for the LHC community).
Guok and Chaniotakis have also advanced the wide-scale deployment of ESnet’s virtual circuit service, OSCARS (On-Demand Secure Circuits and Advance Reservation System). OSCARS, developed with DOE support, enables networks
to schedule and move the increasingly vast amounts of data generated by large-scale scientific collaborations. Since last year, ESnet has seen a 30% increase in the use of virtual circuits. OSCARS virtual circuits now carry over 50% of ESnet’s monthly production traffic. The increased use of virtual circuits was a major factor enabling ESnet to easily handle a nearly 300% rise in traffic from June 2009 to May 2010.
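The core idea behind a virtual-circuit service like OSCARS is advance reservation: before admitting a request, the system checks that the requested bandwidth fits under link capacity for every interval of the requested window. The sketch below illustrates that admission-control logic only – the class and method names are hypothetical, not the actual OSCARS interfaces.

```python
# Minimal sketch of advance bandwidth reservation, the scheduling idea
# behind virtual-circuit systems like OSCARS. Illustrative only: the
# BandwidthCalendar data model here is hypothetical, not the OSCARS API.

class BandwidthCalendar:
    def __init__(self, capacity_gbps):
        self.capacity = capacity_gbps
        self.reserved = {}  # hour index -> Gbps already committed

    def reserve(self, start_hour, end_hour, gbps):
        """Admit the request only if capacity holds over [start_hour, end_hour)."""
        hours = range(start_hour, end_hour)
        if any(self.reserved.get(h, 0) + gbps > self.capacity for h in hours):
            return False  # would oversubscribe the link somewhere in the window
        for h in hours:
            self.reserved[h] = self.reserved.get(h, 0) + gbps
        return True

link = BandwidthCalendar(capacity_gbps=10)
print(link.reserve(0, 4, 6))   # True: 6 of 10 Gbps committed for hours 0-3
print(link.reserve(2, 6, 6))   # False: hours 2-3 would need 12 Gbps
print(link.reserve(2, 6, 4))   # True: fits alongside the first reservation
```

Because admitted circuits have guaranteed capacity for their window, large transfers can be scheduled deterministically instead of competing with best-effort traffic.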
ESnet is now soliciting research proposals for its ARRA-funded testbed. The testbed currently provides network researchers with a rapidly reconfigurable, high-performance environment where reproducible tests can be run. It will eventually evolve into a nationwide 100 Gbps testbed, available to any researcher whose proposal is accepted.
Researchers can use the testbed to prototype, test, and validate cutting-edge networking concepts, for example:
- Path computation algorithms that incorporate information about hybrid Layer 1, 2, and 3 paths, and support ‘cut-through’ routing
- New transport protocols for high-speed networks
- Protection and recovery algorithms
- Automatic classification of large bulk data flows
- New routing protocols
- New network management techniques
- Novel packet processing algorithms
- High-throughput middleware and applications research
Please look at the description to get a more detailed idea of the current testbed capabilities.
The proposal review panel will discuss and review proposals twice yearly. The first round of proposals is due October 1, 2010, and decisions will be made by December 10, 2010. After that, the committee will meet approximately every six months to accept additional proposals and review the progress of current projects.
Proposals should be sent to: firstname.lastname@example.org
More details on the testbed and the brief proposal process can be found right here.