At 25, ESnet Transfers a Universe of Data

ESnet turns 25 this year. This anniversary marks a major inflection point in our evolution as a network in terms of bandwidth, capability, and control. We are running data through our network a billion—that is, 10⁹—times faster than when ESnet was established. Yet we still face even greater demands for bandwidth as the amount of scientific data explodes and global collaborations expand.

Created in 1986 to meet the Department of Energy’s needs for moving research data and enabling scientists to access remote scientific facilities, ESnet combined two networks serving researchers in fusion energy (MFEnet) and high-energy physics (HEPnet). But ESnet’s roots actually stretch back to the mid-1970s, when staff at the CTR Computer Center at Lawrence Livermore National Laboratory installed four acoustic modems on a borrowed CDC 6600 computer. Our technology morphed over the years from fractional T1s to T1s to DS3, then to ATM, and in the last ten years we have moved to packet over SONET—all driven by the needs of our thousands of scientific users.

Over the last 25 years, ESnet has built an organization of excellence driven by the DOE science mission. We have consistently provided reliable service even as the science areas we support—high energy physics, climate studies, and cosmology, to name a few—have become exponentially more data-intensive. These fields especially rely on ever more powerful supercomputers to do data crunching and modeling. Other fields, such as genomics, are growing at a rapid pace as sequencing technologies become cheaper and more distributed.

Based on the dizzying trajectory of change in scientific computing technology, we must act now to expand the capacity of scientific networks in order to stay ahead of demand in the years to come. ESnet’s high-speed network currently carries between 7 and 10 petabytes of data monthly (a single petabyte is equivalent to 13.3 years of HD video). ESnet data traffic is increasing an average of 10 times every 4 years, steeply propelled by the rise in data produced. More powerful supercomputers can create more accurate models, for instance, of drought and rainfall patterns—but greater resolution requires vastly bigger datasets. Scientific collaborations can include thousands of researchers exchanging time-sensitive data around the world. Specialized facilities like the Large Hadron Collider and digital sky surveys produce torrents of data for global distribution. All these factors are poised to overload networks and slow scientific progress.
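To make that growth rate concrete, here is a back-of-the-envelope sketch (ours, not official ESnet projections beyond the tenfold-every-four-years trend cited above) extrapolating monthly traffic from today's roughly 10 petabytes:

```python
# Illustrative extrapolation only: assumes the cited trend of roughly
# tenfold traffic growth every four years continues unchanged.

def projected_traffic_pb(current_pb, years_ahead, tenfold_period_years=4):
    """Monthly traffic (petabytes) after years_ahead of exponential growth."""
    return current_pb * 10 ** (years_ahead / tenfold_period_years)

# Starting from ~10 PB/month:
for years in (4, 8, 12):
    print(f"+{years} yr: ~{projected_traffic_pb(10, years):,.0f} PB/month")
# +4 yr: ~100 PB/month, +8 yr: ~1,000 PB/month, +12 yr: ~10,000 PB/month
```

Even under this simple model, traffic reaches the exabyte-per-month scale within little more than a decade, which is why capacity planning cannot wait for demand to arrive.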

Tonight we face our biggest milestone yet: We are launching the first phase of our brand-new 100 gigabit per second (Gbps) network, currently the fastest scientific network in the world. Called the Advanced Networking Initiative (ANI), this prototype network forms the foundation of a soon-to-be permanent national network that will vastly expand our future data handling capacity.

The ANI prototype network will initially link researchers to DOE’s supercomputing centers: the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab, Oak Ridge Leadership Computing Facility (OLCF), and Argonne Leadership Computing Facility (ALCF), as well as MANLAN, the Manhattan Landing Exchange Point. ESnet will also deploy 100 Gbps-capable systems throughout the San Francisco Bay Area and Chicago. The ANI prototype network will then transition into a permanent, national 100 Gbps network, paving the way to an eventual terabit-scale DOE network.

To prepare the way, ESnet acquired almost 13,000 miles of dark fiber. This gives the DOE unprecedented control over the network and its assets, enabling upgrades and expansion to capacity and capabilities according to the needs of our scientists. By owning the fiber, we are able to lock in the cost of future upgrades for decades to come.

The third facet of ANI is a nationwide testbed, available to researchers in both the public and private sectors as a first-of-its-kind platform for testing experimental approaches to new network protocols and architectures at speeds greater than 10 Gbps.

Researchers are already working on multiple experiments investigating emerging technologies like Remote Direct Memory Access (RDMA)-enabled data transfers at 40 Gbps and new TCP congestion control algorithms that scale to 100 Gbps and beyond. By creating a research testbed, ESnet will enable researchers to safely experiment with disruptive technologies that will build the next generation Internet—something impossible to do on networks that also carry daily production traffic.

Bringing You the Universe at 100 Gbps

Just this past week our engineers completed the installation of the first phase of the network, nearly six weeks ahead of schedule. Tonight at the SC11 conference we are showcasing the inauguration of our brand-new 100 Gbps network by demonstrating how high-capacity networks can open up the universe – or at least a highly sophisticated computer simulation of the early days of the universe. The demo will include the transfer of over 5 terabytes of data over the new network, from NERSC in Oakland, CA to the Washington State Convention Center in Seattle.

This demonstration is important because astrophysicists are interested in studying high-resolution simulations to better understand the complex structural patterns in the universe. ESnet’s new 100 Gbps network will enable scientists to interactively examine large datasets at full spatial and temporal fidelity without compromising data integrity. This novel capability will also enable remotely located scientists to gain insights from large data volumes located at DOE supercomputing facilities such as NERSC. For comparison purposes, a simulation using a 10 Gbps network connection will also be displayed on a complementary screen to showcase the vast difference in quality that an order-of-magnitude difference in bandwidth can bring to scientific discovery.

To view the 100 Gbps and 10 Gbps simulations, visit:

In addition to this great demo, seven other community collaborations will be leveraging the new ESnet 100 Gbps network to support innovative HPC demonstrations at SC11.

As we toast this great new accomplishment this evening, we recognize that we are building on an amazing 25-year legacy in network innovation. We owe tremendous gratitude to ESnet staff past and present for two and a half decades of hard work and dedication, keenly focused on helping the DOE community solve some of society’s greatest challenges. Given our team’s accomplishments to date, I cannot wait to see what this next chapter in our history brings.

–Steve Cotter

Routers, the Portal, and Transcontinental 100G Links – Oh My!

Things are moving quickly at ESnet, and we’re not talking only about data transfers. Our Advanced Networking Initiative (ANI) 100G roll-out is gaining even more momentum, and we are gearing up for the SC11 conference next month.

This week, it was announced that we are working with LGS Innovations to deploy Alcatel-Lucent 7750 Service Routers (SR) on the new ANI prototype network. The first routers were delivered about a month ago and we are already well into acceptance testing and deployment. So far we have routers up and running in Sunnyvale, NERSC, StarLight and Argonne. After we installed this equipment, our engineering team worked with Internet2 to light the first coast-to-coast network path, from Washington, D.C. all the way to Sunnyvale in our backyard – the first transcontinental 100G link in the world! We were particularly proud of this milestone, which we reached well ahead of schedule. As next steps, we anticipate that six of our ten routers will be deployed in time for SC11.

To help the community keep tabs on our rapid ANI deployment progress, the ESnet Tools team has been expanding the MyESnet portal to provide a real-time view of the rollout. You can see the result of their efforts here.

This summer we introduced MyESnet to provide DOE Office of Science researchers and IT staff with a wide range of customized tools intended to greatly improve their ability to understand network issues in real time. The initial functionality includes a dashboard that provides a bird’s-eye view of the local area networks of the DOE facilities connected by ESnet, including their traffic patterns and system status.

Using this new ANI visualization tool, users can see which 100G network links are active, as well as information on each of our node and interconnect locations across the country. The tool will be expanded to show traffic load in the near future. In the next few weeks, the portal will also provide a real-time view of the eight community-driven projects that will use the new 100G network for large-scale demonstrations at SC11. This interface will let viewers see the details and traffic load of these demonstrations.

With fewer than 25 days to go until SC11, our team is gearing up for some exciting announcements and ANI demos . . . check back for info on these activities soon and check in with us in Seattle. ESnet is part of Berkeley Lab booth 512.

Next round of ESnet 100 Gbps Testbed Proposals are due October 1

ESnet is pleased to announce that the Advanced Networking Initiative (ANI) testbed will be upgraded to include a 2,200-mile 100 Gbps segment by the end of this year. This will make it the world’s first long-distance 100 Gbps testbed, available to any network researcher, whether from government, university, or industry. But take note: the next round of proposals to use the ANI Testbed is due October 1, just three weeks away, so submit your ideas soon to take advantage of this great resource.

There has been a lot of exciting work on the testbed recently, including 40 Gbps RDMA testing, detailed power monitoring experiments, high-performance middleware testing, OpenFlow testing, and more. We also have some projects planning interesting TCP research as soon as the 100 Gbps network is available.

For more information on how to submit a proposal see:

Brian Tierney

100 Gbps ANI Prototype Network is Just the Beginning

After much work and planning, we proudly introduce you to our new scientific network-to-be.  Berkeley Lab just announced the signing of an agreement to begin construction of a 100 gigabits per second (Gbps) prototype network, part of the Lab’s Advanced Networking Initiative funded by the American Recovery and Reinvestment Act (ARRA).  Under the terms of the deal, Internet2 and its industry partners will construct the network under ESnet’s direction.

Coming Soon: ANI 100 Gbps Prototype Network, First Step Towards Nationwide 100 Gbps Network

While our ANI network testbed is already open for business (mark your calendar: the next call for research proposals is October 1), in a matter of months researchers will be able to conduct experiments on the 100 Gbps prototype network. This network will link three supercomputing facilities at national laboratories – the National Energy Research Scientific Computing Center (NERSC), the Oak Ridge Leadership Computing Facility (OLCF), and the Argonne Leadership Computing Facility (ALCF) – to MANLAN, the Manhattan Landing international exchange point.

But more is yet to come; the ANI prototype network will also be a framework for ESnet to obtain concrete network energy-use data for research into “green networking” – a priority here at ESnet. As part of the agreement, Berkeley Lab also negotiated 20-year access to a nationwide pair of dark fiber, which will be immediately available to network researchers and industry to experiment with new protocols and disruptive technologies. Stay tuned to our blog to follow construction of the network and the exciting happenings at ESnet.

The 100 Gbps prototype network is just the beginning. Berkeley Lab and ESnet will leverage the experience gained deploying the prototype network to extend these capabilities to the national labs and facilities ESnet currently serves, and to connect DOE scientists to university collaborators around the world. When moved to production status, the new network will increase the information-carrying capacity of ESnet’s current 10 Gbps network by an order of magnitude. One terabyte of data, which takes approximately 13 minutes to transfer on ESnet’s present 10 Gbps network, will be delivered in roughly 80 seconds on the 100 Gbps network.
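The transfer-time arithmetic is simple enough to sketch directly (this assumes ideal line-rate transfer with no protocol overhead, which real transfers never quite achieve):

```python
# Ideal line-rate transfer time: bytes -> bits, divided by link speed.
# Real-world transfers are slower due to TCP, disk, and protocol overhead.

def transfer_seconds(num_bytes, gbps):
    """Seconds to move num_bytes over a link running at gbps gigabits/sec."""
    return num_bytes * 8 / (gbps * 1e9)

one_tb = 1e12  # one terabyte (decimal)
print(transfer_seconds(one_tb, 10))   # 800.0 seconds, about 13.3 minutes
print(transfer_seconds(one_tb, 100))  # 80.0 seconds
```

The same formula shows why the SC11 demo's 5 TB dataset is a meaningful payload: at 100 Gbps it is still several minutes of sustained line-rate traffic.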

What we hope you will get out of this

While our network currently meets the capacity needs of its users, we see new challenges on the horizon. On behalf of the DOE Office of Science, we regularly survey scientists about their projected needs in order to better target services for our users. Demand on ESnet’s network has grown by a factor of 10 every 47 months since 1990. Over the last year, ESnet network traffic has risen by 70 percent, with most of that traffic coming from the Large Hadron Collider at CERN, and genomics and climate data also picking up steam.

Just as investments in national infrastructure built the Interstate Highway System that sped the delivery of goods and opened up commerce in the United States, high-speed networks will speed the scientific discovery that will drive this nation’s economy in the future.  New, large-scale instruments are in the offing, and we anticipate a growing tide of data in various disciplines as scientists work to understand our climate, develop clean fuels, and investigate the basic nature of matter.  High-speed networks like this one will have a huge impact on an increasing number of disciplines, as more scientists collaborate in global teams and depend on remote supercomputers to model and solve complex problems.

How to be really thrifty with $62 million.

This is a challenging time when budgets are under severe pressure, both for government and the research and education community.  We believe that by pooling resources and expertise we can help curb the costs of scientific research.  By working closely with Internet2 to build this prototype network, we are leveraging our ARRA award and their funding for synergistic effect and getting more return on investment for the taxpayer.

We strongly believe this new 100 Gbps network infrastructure will allow us to better support data-intensive science by providing more capacity at lower cost, lower energy consumption, and lower carbon emissions per bit. And that makes us feel good about what we do.

101 Excellent Reasons to Send in your SCinet Research Sandbox Proposal by June 5

As co-chair of the SC11 SCinet Research Sandbox (SRS), I’d like to remind you that research proposals are due this Sunday, June 5. The SRS is a great way for researchers to play with and test ideas for innovative network architectures, applications, tools and protocols in the high-powered live environment of the SCinet network. This year is notable, as the SRS will provide researchers with access to over 100 gigabits per second of capacity and the flexibility of a software-programmable testbed network on the SCinet infrastructure.

Digging into 100Gbps and OpenFlow too

At SC11, SRS plans a 10 gigabit per second (Gbps), multi-vendor OpenFlow network testbed connecting the Washington State Convention Center in Seattle to several national research networks to provide wide-area OpenFlow capabilities. OpenFlow allows the implementation of software-defined network policy. While originally conceived to allow researchers to implement new protocols without entirely new hardware, OpenFlow has begun to allow innovation and optimization with current protocols. SRS is highlighting OpenFlow because it can potentially revolutionize the networking environment and in turn its ability to support HPC applications. OpenFlow allows an HPC data center to easily reconfigure the network on the fly, separating bulk data flows from other flows, for example. OpenFlow will also provide virtualization of the data center network to support cloud environments, allowing resource allocation to individual virtual machines, or providing multiple clouds from the same infrastructure.
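The "separate bulk data flows from other flows" idea boils down to a match-action table. The toy sketch below illustrates the policy, not an actual OpenFlow controller API; the port numbers and path names are our own placeholders:

```python
# Toy match-action table in the spirit of an OpenFlow flow table.
# Field names, ports, and path labels are illustrative placeholders,
# not a real controller API.

BULK_PORTS = {2811, 50000}  # e.g., assumed GridFTP-style data ports

# Rules are checked in order; first match wins, like flow table priority.
flow_table = [
    (lambda pkt: pkt["dst_port"] in BULK_PORTS, "output:bulk_path"),
    (lambda pkt: True,                          "output:default_path"),
]

def forward(pkt):
    """Return the action for the first rule whose predicate matches."""
    for match, action in flow_table:
        if match(pkt):
            return action

print(forward({"dst_port": 2811}))  # output:bulk_path
print(forward({"dst_port": 443}))   # output:default_path
```

In a real deployment the controller would push equivalent rules down to the switches, so the steering decision happens at line rate in hardware rather than in software per packet.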

This year, for the first time, submissions to the Disruptive Technologies (DT) program can also demonstrate their research as part of the SRS.  Disruptive Technologies, which has taken place as part of the SC technical program since 2006, examines new computing and networking architectures and interfaces that will significantly impact the high-performance computing field throughout the next five to 15 years, but have not yet emerged in current systems. Submissions to Disruptive Technologies should indicate their interest in demonstrating their research as part of SRS where indicated in the DT online submissions form.

How to submit

SRS submissions should describe the nature of the experiments, desired outcomes, the relevance to the HPC community, as well as a description of the network requirements and vendor collaborations (if appropriate). Submissions do not have to utilize OpenFlow to be considered for SRS. Submissions may be up to 3 pages long, and must address the approach and what will be learned or demonstrated by the effort.

Those whose submissions are accepted will be able to present their experiment in a technical panel session, and submissions will be included in the SC11 proceedings. In addition, accepted projects are expected to write up the results obtained during SC11, and all SRS write-ups will be assembled for journal submission. SCinet may provide additional fiber to a booth to support an SRS experiment as well. Submissions are due by June 5, 2011.  To submit to the SRS, visit:

— Brian Tierney, group lead for the Advanced Network Technologies Group, ESnet.

ANI Testbed Departs Left Coast for Long Island

It's hard to part with hardware

In another milestone, ESnet’s testbed was dispatched to its more permanent home at Brookhaven National Laboratory. The testbed, part of LBL’s $62 million ARRA-funded Advanced Networking Initiative, was established so researchers could experiment and push the limits of network technologies. A number of researchers have taken advantage of its capabilities so far, and we are collecting proposals for new projects.

We handled the painstaking shut down and packing procedures. We verified the internal wiring and the structural integrity of the hosts. We dealt with the intricacies of IP addressing. The testbed will be reassembled and open for new research projects in a couple of weeks. Note to Brookhaven: We meticulously counted every last screw and packed them all in plastic baggies.

On our way to Joint Techs 2011 in Clemson, S.C.

We are looking forward to the upcoming Winter 2011 ESCC/Internet2 Joint Techs meeting from January 30 to February 3, 2011 at Clemson University in South Carolina.  The meeting is cosponsored by ESnet and Internet2.

Write us into your schedule. All Joint Techs talks will take place in the auditorium, and we look forward to your questions and comments.

  • January 31, from 9:10 to 9:30 a.m., Evangelos (Vangelis) Chaniotakis, ESnet, will talk about “Automated GOLEs and Fenius: Pragmatic Interoperability,” covering the GNI API task force, the Fenius collaboration, GLIF as a forum for interoperability, demonstrations at Geneva and SC10, and future plans.
  • February 1, from 3:50 to 4:10 p.m., Eli Dart, ESnet, will talk about “The Science DMZ: A well-known location for high-performance network services.” Eli will discuss the critical need for network performance as science becomes increasingly data-intensive. Problems with network performance frequently occur very near the endpoints of a data transfer. Eli will propose a simple network architecture, dubbed the “Science DMZ,” for simplifying network performance tuning.
  • February 2, from 8:30 to 8:50 a.m., Steve Cotter, head of ESnet, will present an update on ESnet’s projects in 2010, including the Advanced Networking Initiative and the related 100 gigabits/second testbed, and give an overview of ESnet’s plans for 2011.

Other Lawrence Berkeley National Laboratory staff will be giving talks at the meeting, including:

  • January 31, from 9:30 to 9:50 a.m., Mike Bennett, LBNL IT Division, will present on “Green Ethernet.” Bennett will provide an overview of the recently approved IEEE standard for Energy-Efficient Ethernet, IEEE Std 802.3az-2010, which specifies energy-efficient modes for copper interfaces. He will discuss the operation of Low Power Idle and illuminate its anticipated benefits, as well as identify opportunities for innovation beyond the scope of the standard.
  • February 1, from 11:20 to 11:40 a.m., Dan Klinedinst, LBNL IT Division, will discuss “3D Modeling and Visualization of Real Time Security Events,” introducing Gibson, a tool for modeling real-time security events and information in 3D. This tool allows users to watch a visual representation of threats, defenses and compromises on their systems as they occur, or record them for later analysis and forensics.
  • February 1, from 4:10 to 4:30 p.m., Dan Gunter, Advanced Computing for Science Department, LBNL, will discuss “End-to-End Data Transfer Performance with Periscope and NetLogger.” Gunter will examine the data currently being collected by the NetLogger toolkit for end-to-end bulk data transfers performed by the “Globus Online” service (between GridFTP servers), and describe the integration of that data with the Periscope service, which provides on-demand caching and analysis of perfSONAR data.

At the ESCC meeting that follows, talks will focus on specific issues pertaining to ESnet’s goals for the coming year, as well as how they implement the DOE Office of Science vision. By the way, if you are an ESCC member and can’t make it to the meeting, ESnet will videostream it using the ESnet ECS ad-hoc bridge.

To access the ESCC meeting remotely:

1) Open a browser to
2) In the Conf ID field, type 372211 (no service prefix)
3) Next to “Streaming rate,” select RealPlayer or QuickTime
4) Press <Stream this conference>

ESCC – February 2, 2011

  • 1:10 p.m., Greg Bell, ESnet, will review the Site Cloud/Remote Services template discussed at the July ESCC meeting, and provide a brief summary of recent conversations with cloud-services vendors.
  • 1:30 p.m., Vince Dattoria, ESnet program manager for DOE/SC/ASCR, will give the view from Washington.
  • 2:00 p.m., Steve Cotter will present an ESnet update.
  • 2:45 p.m., Sheila Cisko, FNAL, will give an update on ESnet Collaborative Services.
  • 3:30 p.m., Brian Tierney, ESnet, will provide the rundown on ANI, ESnet’s ARRA-funded Advanced Networking Initiative. So far, the ANI testbed has accepted the second round of research proposals. Brian will describe some highlights of recent experiments, and discuss further research opportunities.
  • 3:50 p.m., Steve Cotter will discuss site planning for the upcoming 100G production backbone.
  • 4:20 p.m., Eli Dart will talk about Science Drivers for ESnet Planning, and discuss the Science DMZ.
  • 6:30 p.m., Kevin Oberman will participate in the evening focus section on IP Address Management issues, emphasizing IPv6.

ESCC – February 3, 2011

8:30 a.m. onwards, panels and discussions on aspects of IPv6

1:35 p.m., Joe Metzger, ESnet, will participate in the session on R&D Monitoring Efforts, discussing DICE monitoring directions

See you in South Carolina!

Engineering mixed traffic on ANI testbed

The first crop of experiments using ESnet’s Advanced Networking Initiative testbed are now in full swing. In a project funded by the DOE Office of Science, Prof. Malathi Veeraraghavan and post-doc Zhenzhen Yan at the University of Virginia, along with consultant Prof. Admela Jukan, are investigating the role of hybrid networking in ESnet’s next generation 100 Gbps network.
Their goal is to learn how to optimize a hybrid network composed of two components, an IP datagram network and a high-speed optical dynamic circuit network, to best serve users’ data communication needs. ESnet deployed a hybrid network in 2006, based on an IP routed network and the “science data network” (SDN), a dynamic virtual circuit network. “It is a question of efficiency, which essentially boils down to cost,” Veeraraghavan notes. “IP networks have to be operated at low utilization for the performance requirements of all types of flows to be met. With hybrid networks, it is feasible to meet performance requirements while still operating the network at higher utilization.”

Data flows have certain characteristics that make them suited for certain types of networks, and it is a complex problem to match flows with the “right” networks. In the ESnet core network, one can identify flows by looking at multiple fields in packet headers, according to Veeraraghavan, but you can’t know the size of the flow (bytes) or whether a flow is long or short. A challenge of this project is to predict characteristics of data flows based on prior history. To do this, the researchers are using machine learning techniques. Flows are classified based on size and duration. Large-sized (“elephant”) flows are known to consume a higher share of bandwidth and thus adversely affect small-sized (“mice”) flows. Therefore, they are good candidates to redirect to the SDN. If SDN circuits are to be established dynamically, i.e., after a router starts seeing packets in a flow that history indicates is a good candidate for the SDN, then the flow needs to be not only large but also long-lived (“tortoise”), because circuit setup takes minutes. Short-duration (“dragonfly”) flows are not good candidates for dynamic circuits, but if they are large and occur frequently, static circuits could be used.
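The taxonomy above reduces to a two-axis classification. The sketch below illustrates the decision logic; the thresholds are our placeholders, not values from the UVA/ESnet study:

```python
# Illustrative sketch of the elephant/mouse and tortoise/dragonfly taxonomy.
# Threshold values are assumed for illustration, not taken from the study.

SIZE_THRESHOLD_BYTES = 1e9    # elephant vs. mouse cutoff (assumed)
DURATION_THRESHOLD_S = 60.0   # tortoise vs. dragonfly cutoff (assumed)

def classify(size_bytes, duration_s):
    """Classify a flow along both axes: size and duration."""
    size = "elephant" if size_bytes >= SIZE_THRESHOLD_BYTES else "mouse"
    span = "tortoise" if duration_s >= DURATION_THRESHOLD_S else "dragonfly"
    return size, span

def candidate_for_dynamic_circuit(size_bytes, duration_s):
    # Circuit setup takes minutes, so only large AND long-lived flows
    # justify the cost of dynamically provisioning an SDN circuit.
    return classify(size_bytes, duration_s) == ("elephant", "tortoise")

print(candidate_for_dynamic_circuit(5e10, 600))  # True: elephant tortoise
print(candidate_for_dynamic_circuit(5e10, 5))    # False: large but short
```

In the actual system, of course, size and duration are not known up front; that is exactly why the researchers predict them from Netflow history with machine learning before the flow completes.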

Boston is notorious for its mixed traffic

The concept of different lanes handling different types of traffic is seen commonly in other contexts. For example, Veeraraghavan notes that on some urban streets with mixed traffic, “separate lanes are set aside for buses, cars, motorcycles, and bicycles.” Also, grocery store checkouts have separate express lanes for the equivalent of “mice flows”.  To support this concept, the researchers developed several modules of a system called Hybrid Network Traffic Engineering Software (HNTES), and tested these modules on the ANI Testbed.

HNTES, diagrammed

Their experiments use two computers loaded with the HNTES software, and two Juniper routers. The HNTES software configures the routers to mirror packets of certain pre-determined flows (this determination is made with an offline flow analysis tool that analyzes previously collected Netflow data to find flow identifiers of elephant and tortoise flows) to a server that runs a flow-monitoring module (part of HNTES). Upon detecting such a flow, HNTES reconfigures the router to redirect packets from this flow to a circuit (a different path). In future versions, if dynamic circuits are deemed feasible, an HNTES module called the IDC Interface Module will send messages to an OSCARS IDC server to reserve and provision a circuit before reconfiguring the router. So far Veeraraghavan and her colleagues have completed phase I of the software implementation and demonstrated it; the demonstrations were presented at an Oct. 2010 DOE annual review meeting using previously recorded Camtasia video. The next step will be to improve several features in the software to understand what happens to the flow when it is redirected: do packets get lost, or does redirection cause out-of-order arrivals at the destination? They are also doing a theoretical study with flow simulations, to see if taking the trucks off the parkway and putting them on the freeway really benefits the flow of “traffic.”

ESnet publishes design guide for high-performance data movers

As science becomes more and more data-intensive, demand keeps increasing for the capability to move large datasets between sites over high-performance networks. Now ESnet engineers Eric Pouyoul and Roberto Morelli have designed a powerful, yet inexpensive data transfer host that can function as a test server for network troubleshooting or as a data mover for scientific applications.

Assemble your own machine for that D.I.Y. glow of accomplishment

“We have right now a very fast network, but some users have difficulty realizing its full potential,” said Pouyoul. “We are in the process of increasing network bandwidth from 10Gbps to 100Gbps. But with computers, any time you multiply speed by a factor of 10, something is going to break. We also need to provide enough compute power to simulate the large data transfers that occur in normal scientific research, so we are locating boxes at different places on the ESnet network.”

So far, ESnet has built three test hosts based on the design, and deployed them at different points in the network so that users can test their own installations using the ESnet hosts as a reference. “This way, ESnet and the community can experiment with data transfers over different parts of the network,” Pouyoul says. The test hosts are available to users on networks devoted to scientific research.

Pouyoul and Morelli’s box is powerful, yet affordable. Anybody can build one using off-the-shelf components. Pouyoul points out that while the I/O speed of their creation is comparable to a half-million-dollar machine, the footprint is much smaller. And parts, excluding labor, cost only around $10,000. “We wanted to make it cheap and easy for do-it-yourself deployments,” Pouyoul said. “Our next step is to publish documentation to encourage people to build and install one on their own network. This is close to a production-level machine for the R&D community.” Pouyoul has overseen the building and deployment of several machines by students. You can find more about Pouyoul and Morelli’s innovation at: Instructions on how to run your own tests on our three public test hosts are at: