How the World’s Fastest Science Network Was Built

Created in 1986, the U.S. Department of Energy’s (DOE’s) Energy Sciences Network (ESnet) is a high-performance network built to support unclassified science research. ESnet connects more than 40 DOE research sites—including the entire National Laboratory system, supercomputing facilities and major scientific instruments—as well as hundreds of other science networks around the world and the Internet.

Funded by DOE’s Office of Science and managed by the Lawrence Berkeley National Laboratory (Berkeley Lab), ESnet moves about 51 petabytes of scientific data every month. This is a 13-step guide to how ESnet has evolved over 30 years.

Step 1: When fusion energy scientists inherit a cast-off supercomputer, add four dial-up modems so the people at the Princeton lab can log in. (1975)

Step 2: When landlines prove too unreliable, upgrade to satellites! Data screams through space. (1981)

Step 3: Whose network is best? High Energy Physics (HEPnet)? Fusion Physics (MFEnet)? Why argue? Merge them into one, the Energy Sciences Network (ESnet), run by the Department of Energy! Go ESnet! (1986)

Step 4: Make it even faster with DUAL Satellite links! We’re talking 56 kilobits per second! Except for the Princeton fusion scientists – they get 112 Kbps! (1987)

Step 5: Whoa, when an upgrade to 1.5 MEGAbits per second isn’t enough, add ATM (not the money machine, but Asynchronous Transfer Mode) to get more bang for your buck. (1995)

Step 6: Duty now for the future—roll out the very first IPv6 address to ensure there will be enough Internet addresses for decades to come. (2000)

Step 7: Crank up the fastest links in the network to 10 GIGAbits per second—16 times faster than the old gear—a two-generation leap in network upgrades at one time. (2003)

Step 8: Work with other networks to develop really cool tools, like the perfSONAR toolkit for measuring and improving end-to-end network performance and OSCARS (On-Demand Secure Circuits and Advance Reservation System), so you can reserve a high-speed, end-to-end connection to make sure your data is delivered on time. (2006)

Step 9: Why just rent fiber? Pick up your own dark fiber network at a bargain price for future expansion. In the meantime, boost your bandwidth to 100G for everyone. (2012)

Step 10: Here’s a cool idea: come up with a new network design so that scientists moving REALLY BIG DATASETS can safely avoid institutional firewalls, call it the Science DMZ, and get research moving faster at universities around the country. (2012)

Step 11: We’re all in this science thing together, so let’s build faster ties to Europe. ESnet adds three 100G lines (and a backup 40G link) to connect researchers in the U.S. and Europe. (2014)

Step 12: 100G is fast, but it’s time to get ready for 400G. To pave the way, ESnet installs a production 400G network between facilities in Berkeley and Oakland, Calif., and even provides a 400G testbed so network engineers can get up to speed on the technology. (2015)

Step 13: Celebrate 30 years as a research and education network leader, but keep looking forward to the next level. (2016)

Attending SC15? Get a Close-up Look at Virtualized Science DMZs as a Service

ESnet, NERSC and RENCI are pooling their expertise to demonstrate “Virtualized Science DMZs as a Service” at the SC15 conference being held Nov. 15-20 in Austin. They will be giving the demos at 2:30-3:30 p.m. Tuesday and Wednesday and 1:30-2:30 p.m. Thursday in the RENCI booth, #181.

Here’s the background: Many campuses are installing Science DMZs to support efficient large-scale scientific data transfers, and there is a need to create custom configurations of Science DMZs for different groups on campus. Network function virtualization (NFV) combined with compute and storage virtualization enables a multi-tenant approach to deploying virtual Science DMZs. It makes it possible for campus IT or NREN organizations to quickly deploy well-tuned Science DMZ instances targeted at a particular collaboration or project. This demo shows a prototype implementation of Science DMZ-as-a-Service using ExoGENI racks (ExoGENI is part of the NSF GENI federation of testbeds) deployed at the StarLight facility in Chicago and at NERSC.

The virtual Science DMZs deployed on demand in these racks use the SPOT software suite developed at Lawrence Berkeley National Laboratory to connect a data source at Argonne National Laboratory to a compute cluster at NERSC, providing seamless, high-speed end-to-end transfers of data acquired at Argonne’s Advanced Photon Source (APS) for processing at NERSC. The ExoGENI racks dynamically instantiate the necessary virtual compute resources for Science DMZ functions and connect to each other on demand using ESnet’s OSCARS and Internet2’s AL2S system.
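Purely as a sketch of the multi-tenant idea, the snippet below shows the kind of parameters a per-project virtual Science DMZ request might capture. The field names and values are invented for illustration; the actual demo provisions slices through ExoGENI's own interfaces, not through code like this.

```python
from dataclasses import dataclass

@dataclass
class VirtualScienceDMZRequest:
    """Parameters a per-project virtual Science DMZ request might capture (illustrative only)."""
    project: str          # e.g., an APS imaging collaboration (hypothetical name)
    data_source: str      # where the data originates
    compute_site: str     # where the data is processed
    bandwidth_gbps: int   # end-to-end bandwidth to reserve for the transfer path
    duration_hours: int   # how long the virtual Science DMZ should exist

# Example request mirroring the demo scenario described above (values are placeholders).
request = VirtualScienceDMZRequest(
    project="APS imaging",
    data_source="Argonne APS",
    compute_site="NERSC",
    bandwidth_gbps=10,
    duration_hours=48,
)
print(request)
```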

ESnet to Deliver 100G Connectivity to Demos at SC13 Conference

When the Colorado Convention Center becomes the best-connected site on the planet for SC13 from Nov. 17-22, ESnet will be providing a significant portion of the connectivity, which will be used to support live demos by Caltech (booth 3118), Ciena (booth 1924), NASA (booth 822) and the Laboratory for Advanced Computing / Open Cloud Consortium (booth 828).

For Caltech, ESnet will provide four 100G paths to bring data to Denver from CERN in Switzerland, Fermilab in Illinois, DOE’s NERSC (National Energy Research Scientific Computing Center) in California and the Karlsruhe Institute of Technology in Germany.

For demonstrations in the NASA booth, ESnet will provide two paths – one a northern route and the other a southern route – from NASA’s Goddard Space Flight Center in Maryland.

Ciena, in collaboration with ESnet and Internet2, will be delivering unprecedented support for the SCinet network at SC13 through a high-capacity 400G transport network that will carry three of the 100G links that ESnet brings to the Denver show floor.

In the Laboratory for Advanced Computing / Open Cloud Consortium booth, ESnet will provide two paths for a demonstration by the Naval Research Laboratory and the University of Chicago/International Center for Advanced Internet Research, or iCAIR.

Additionally, Brocade (booth 109) will use OSCARS – ESnet’s on-demand bandwidth reservation system – in demonstrations of multi-layer software defined networking. If you miss seeing this at SC, the demonstration will be showcased live on SDNCentral’s DemoFriday on November 22. Learn more at http://www.sdncentral.com/events/brocade-infinera-esnet-sdn-demo/.

Key to all of this are the efforts of the SCinet team that works to provide bandwidth into and throughout the convention center. ESnet staff supporting SCinet are Patrick Dorn, Andy Lake, Lauren Rotman and Jason Zurawski.

NERSC Now Connected to ESnet and Beyond at 100G

All network traffic flowing in and out of the Department of Energy’s National Energy Research Scientific Computing Center (NERSC) is now moving at 100 gigabits per second (Gbps)—this includes everything from email to massive scientific datasets. Jason Lee, who leads NERSC’s Network and Security Team, worked with engineers at the DOE’s Energy Sciences Network (ESnet) to set up a 100 Gbps Science DMZ, which gives NERSC network engineers the ability to set up multiple private circuits using software-defined networking (SDN). With these tools, NERSC staff can help remote scientists whose data transfers slow down because of firewalls at their local campuses achieve a true 100 Gbps end-to-end connection. ESnet engineers also helped NERSC set up a system to announce its own address space. This allows the center to separately route traffic to any research and education (R&E) site or to separate R&E traffic from the commodity Internet.
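For illustration only, here is a minimal Python sketch of the routing decision described above: classify a destination as R&E or commodity and pick the corresponding egress. The prefixes and next-hop names are placeholders; in practice this separation is done with BGP policy on the routers, not in application code.

```python
import ipaddress

# Hypothetical R&E prefixes learned from R&E peerings (documentation prefixes, examples only).
RE_PREFIXES = [ipaddress.ip_network(p) for p in ("198.51.100.0/24", "203.0.113.0/24")]

# Hypothetical egress points: one toward R&E peers, one toward the commodity Internet.
NEXT_HOPS = {"r&e": "esnet-border", "commodity": "commodity-transit"}

def classify(dst: str) -> str:
    """Return which egress a destination would use under this toy policy."""
    addr = ipaddress.ip_address(dst)
    if any(addr in net for net in RE_PREFIXES):
        return NEXT_HOPS["r&e"]
    return NEXT_HOPS["commodity"]

print(classify("198.51.100.7"))   # -> esnet-border
print(classify("192.0.2.10"))     # -> commodity-transit
```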

Read more.

ESnet Partners with North American, European Research Networks in Pilot to Create First 100 Gbps Research Link Across Atlantic

Six of the world’s leading research and education networks – ESnet, Internet2, NORDUnet, SURFnet, CANARIE and GÉANT – have announced their intent to build the world’s first 100 gigabits-per-second (Gbps) intercontinental transmission links for research and education.

 The project, called the “Advanced North Atlantic 100G Pilot” or ANA-100G, is aimed at stimulating the market for 100 Gbps intercontinental networking and advancing global networks and applications to benefit research and education. In addition to ESnet (the U.S. Department of Energy’s Energy Sciences Network), the other participating networks are Internet2, NORDUnet (the Nordic Infrastructure for Research & Education), SURFnet (the Dutch National Research and Education Network), CANARIE (Canada’s Advanced Research and Innovation Network), and GÉANT (the high speed European communication network dedicated to research and education, operated by DANTE).

 The partners are inviting other national research and education networks (NRENs) and their constituencies from around the world to participate in the project. The announcement was made April 24 at the 2013 Internet2 Annual Meeting before 800 technology, education and research leaders.

Read more.

Oak Ridge First National Lab with 100G Production Connection to ESnet

More labs scheduled to follow soon

On Tuesday, April 9, Oak Ridge National Laboratory in Tennessee became the first DOE lab with a 100 gigabit-per-second production link to ESnet’s 100G backbone. Since the high-speed backbone went into production in late 2012, each of the labs served by ESnet has been developing plans to upgrade its connection to 100G. Within the next six months, DOE’s National Energy Research Scientific Computing Center (NERSC), Fermilab, and the Lawrence Berkeley, Lawrence Livermore, Argonne and Brookhaven national labs are expected to put 100G links into production as well.

Since the faster connections replace the old links when they go live, it’s important to thoroughly test the new links first. We do this by building the new link in parallel with the existing one so we can test it and troubleshoot as needed. When everything appears to be ready, we conduct a 24-hour link acceptance test, sending network traffic nonstop for 24 hours. It’s basically a bit-blast followed by a thorough check for problems, then a handover to the end site.
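As a rough sketch of what such a bit-blast might look like with commodity tools (the hostname, duration, stream count and pass/fail criterion below are placeholders; ESnet's actual acceptance tests use dedicated test equipment and their own thresholds):

```python
import json
import subprocess

def acceptance_test(server: str, seconds: int = 86400, streams: int = 8) -> bool:
    """Run a long iperf3 TCP test and flag the link if any retransmissions appear.

    Assumes iperf3 is installed on both ends; thresholds are illustrative only.
    """
    result = subprocess.run(
        ["iperf3", "-c", server, "-t", str(seconds), "-P", str(streams), "-J"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    sent = report["end"]["sum_sent"]
    gbps = sent["bits_per_second"] / 1e9
    retransmits = sent.get("retransmits", 0)
    print(f"average throughput: {gbps:.1f} Gbps, retransmits: {retransmits}")
    return retransmits == 0

if __name__ == "__main__":
    # "test-host.example.net" is a placeholder for the far-end test host.
    acceptance_test("test-host.example.net", seconds=86400)
```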

Once the test is completed, we schedule a maintenance activity to move traffic to the new link. When that time comes, we get everyone involved in the switchover on the phone, each person does their part, and the changeover is completed.

ESnet’s Patrick Dorn did a great job heading up the project, and Vangelis Chaniotakis and Chris Tracy helped to architect the ORNL hub to make this possible.

Read about how we tested the 100G backbone before switching into production mode at: http://lightbytes.es.net/2012/10/31/esnet5-is-well-on-its-way-to-entering-production/

ESnet5 is well on its way to entering production

In mid-July we began activating the 100 gigabit-per-second optical waves needed to deploy the new nationwide ESnet5 backbone into production by the end of November.  We are thrilled to announce that all of the waves have been brought up successfully as of October 23.  In parallel with this work, ESnet engineers have successfully tested the new network using a clever “snake test” wherein a path is provisioned through the entire network and traffic is generated and received by test equipment, looking for errors and packet loss. The details of this test will be described in a separate blog, so stay tuned.
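The details of that test will be covered in the separate post, but the basic bookkeeping is simple: every frame the test set sends into one end of the snake should come out the other end without errors. A toy version of that check, with placeholder counter values:

```python
def snake_test_result(frames_sent: int, frames_received: int, frames_errored: int) -> str:
    """Summarize a snake test from the tester's frame counters (illustrative only)."""
    lost = frames_sent - frames_received
    loss_pct = 100.0 * lost / frames_sent if frames_sent else 0.0
    if lost == 0 and frames_errored == 0:
        return "PASS: no loss, no errors"
    return f"FAIL: {lost} frames lost ({loss_pct:.6f}%), {frames_errored} errored"

# Example with made-up numbers: one frame lost out of a billion sent.
print(snake_test_result(1_000_000_000, 999_999_999, 0))
```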

Our engineers’ attention has turned to provisioning the routing infrastructure in preparation for moving traffic from the old ESnet4 network to the new 100G-enabled network. With the new infrastructure tested and ready to go, we are beginning the final phase of the transition, which will continue through the third week in November. During that time, our engineers are changing routing metrics to prefer the new network in a phased manner, taking care to make sure no loops or bottlenecks are created. Greg Bell, ESnet’s Director, has likened the transition to changing engines between cars while they’re speeding down the highway. Of course we aim to do so without interruption while continuing to provide excellent service. Once that phase has been completed and traffic is running on the new 100G production core network, our focus will shift to the remaining work: removing the ESnet4 links and decommissioning the old network – ending an era that served science well and beginning the era of the network as a scientific instrument.
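To illustrate why lowering the routing metrics shifts traffic onto the new links, here is a toy shortest-path example using the third-party networkx library; the topology, node names and metric values are made up and are not ESnet's actual configuration.

```python
import networkx as nx

g = nx.Graph()
# Old ESnet4 path between two sites, currently preferred (lower total metric).
g.add_edge("site-A", "esnet4-hub", metric=10)
g.add_edge("esnet4-hub", "site-B", metric=10)
# New ESnet5 100G path, initially given a high metric so it carries no traffic.
g.add_edge("site-A", "esnet5-hub", metric=1000)
g.add_edge("esnet5-hub", "site-B", metric=1000)

def preferred_path():
    """Return the path traffic follows under lowest-metric (shortest-path) routing."""
    return nx.shortest_path(g, "site-A", "site-B", weight="metric")

print(preferred_path())  # ['site-A', 'esnet4-hub', 'site-B']

# Phase the new network in by lowering its metrics below the old path's total.
g["site-A"]["esnet5-hub"]["metric"] = 5
g["esnet5-hub"]["site-B"]["metric"] = 5

print(preferred_path())  # ['site-A', 'esnet5-hub', 'site-B']
```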

– Mike Bennett, ESnet Network Engineering Services Group Lead

100G: it may be voodoo, but it certainly works

SC10, Thursday morning.

During the SC10 conference, NASA, NOAA, ESnet, the Dutch Research Consortium, US LHCNet and CANARIE announced that they would transmit 100 Gbps of scientific data between Chicago and New Orleans. Through the use of 14 10GigE interconnects, researchers attempted to completely utilize the full 100 Gbps of bandwidth by producing up to twelve individual data flows of 8.5 to 10 Gbps each.

Brian Tierney reports: “We are very excited that a team from NASA Goddard completely filled the 100G connection from the show floor to Chicago. It is certainly the first time for the supercomputing conference that a single wavelength over the WAN achieved 100Gbps. The other thing that is so exciting about it is that they used a single sending host to do it.”

“Was this just voodoo?” asked NERSC’s Brent Draney.

Tierney assures us that indeed it must have been… but whatever they did, it certainly works.

Catch ESnet roundtable discussions today at SC10, 1 and 2 p.m.

Wednesday Nov. 17 at SC10:

At 1 p.m. at Berkeley Lab booth 2448, catch ESnet’s Inder Monga’s round-table discussion on OSCARS virtual circuits. OSCARS, the acronym for On-Demand Secure Circuits and Advance Reservation System, allows users to reserve guaranteed bandwidth. Many of the demos at SC10 are being carried by OSCARS virtual circuits, which were developed by ESnet with DOE support. Good things to come: ESnet anticipates the rollout of OSCARS 0.6 in early 2011. Version 0.6 will offer greatly expanded capabilities and versatility, such as a modular architecture enabling easy plug-and-play of the various functional modules and a flexible path computation engine (PCE) workflow architecture.

Then stick around, because at 2 p.m. Brian Tierney from ESnet will lead a roundtable on the research being produced on the ARRA-funded Advanced Networking Initiative (ANI) testbed.

In 2009, the DOE Office of Science awarded ESnet $62 million in recovery funds to establish ANI, a next generation 100Gbps network connecting DOE’s largest unclassified supercomputers, as well as a reconfigurable network testbed for researchers to test new networking concepts and protocols.

Brian will discuss progress on the 100Gbps network, update you on the several research projects already underway on the testbed, and describe the testbed’s capabilities and how to get access to it. He will also answer your questions on how to submit proposals for the next round of testbed network research.

In the meantime, some celeb-spotting at the LBNL booth at SC10.

Inder Monga
Brian Tierney

ESnet Call for Proposals: Advanced Networking Initiative Testbed

ESnet is now soliciting research proposals for its ARRA-funded testbed. It currently provides network researchers with a rapidly reconfigurable high-performance network research environment where reproducible tests can be run. This will eventually evolve into a nationwide 100Gbps testbed, available for use to any researcher whose proposal is accepted.

Sample Research

Researchers can use the testbed to prototype, test and validate cutting-edge networking concepts. Example projects include:

  • Path computation algorithms that incorporate information about hybrid layer 1, 2 and 3 paths, and support ‘cut-through’ routing
  • New transport protocols for high speed networks
  • Protection and recovery algorithms
  • Automatic classification of large bulk data flows
  • New routing protocols
  • New network management techniques
  • Novel packet processing algorithms
  • High-throughput middleware and applications research

Please look at the description to get a more detailed idea of the current testbed capabilities.

Important Dates

The proposal review panel will discuss and review proposals twice yearly. The first round of proposals is due October 1, 2010, and decisions will be made by December 10, 2010. After that, the committee will meet approximately every six months to accept additional proposals and review the progress of current projects.

Proposals should be sent to: ani-testbed-proposal@es.net
More details on the testbed and the brief proposal process can be found right here.