ESnet Scientists awarded best paper at SC21 INDIS!

A combined team from ESnet and Lehigh University was awarded the best paper for “Exploring the BBRv2 Congestion Control Algorithm for use on Data Transfer Nodes” at the 8th IEEE/ACM International Workshop on Innovating the Network for Data-Intensive Science (INDIS 2021), which was held in conjunction with the 2021 IEEE/ACM International Conference for High Performance Computing, Networking, Storage and Analysis (SC21) on Monday, November 15, 2021.

The team consisted of:

  • Brian Tierney, Energy Sciences Network (ESnet)
  • Eli Dart, Energy Sciences Network (ESnet)
  • Ezra Kissel, Energy Sciences Network (ESnet)
  • Eashan Adhikarla, Lehigh University

The paper can be found here. Slides from the presentation are here. In this Q+A, ESnet spoke with the award-winning team about their research — answers are from the team as a whole.

INDIS 21 Best Paper Certificate

The paper is based on extensive testing and controlled experiments with the BBR (Bottleneck Bandwidth and Round-trip propagation time), BBRv2, and Cubic Function Binary Increase Congestion Control (CUBIC) Transmission Control Protocol (TCP) congestion control algorithms. What was the biggest lesson from this testing?

BBRv2 represents a fundamentally different approach to TCP congestion control. CUBIC (like Hamilton, Reno, and many others) is loss-based, meaning that it interprets packet loss as congestion and therefore requires significant network engineering effort to achieve high performance. BBRv2 is different in that it measures the network path and builds a model of the path – it then paces itself to avoid loss and queueing. In practical terms, this means that BBRv2 is resilient to packet loss in a way that CUBIC is not. This comes through loud and clear in our data.
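For readers who want to experiment themselves: on Linux the congestion control algorithm can be selected per socket, so CUBIC and BBRv2 flows can even be compared side by side on the same host. Here is a minimal sketch, assuming a BBRv2-enabled kernel (where the module is typically exposed as "bbr2") and a hypothetical data transfer node as the peer:

```python
import socket

# Minimal sketch: pick the TCP congestion control algorithm for a single socket on Linux.
# The named algorithm must be available in the running kernel; check
# /proc/sys/net/ipv4/tcp_available_congestion_control first. The "bbr2" name and the
# peer host below are assumptions for illustration.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr2")
sock.connect(("dtn.example.org", 5201))  # hypothetical data transfer node
```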

What part of the testing was the most difficult and/or interesting?

We ran a large number of tests in a wide range of scenarios. It can be difficult to keep track of so many test configurations, so we wrote a “test harness” in Python to manage the testing parameters and the resulting data sets.

The harness also allowed us to better compare results collected over real-world paths with those from our testbed environments. Managing the deployment of the testing environment through containers also allowed for rapid setup and improved reproducibility.
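As a rough illustration of the idea (not the team's actual harness), a Python sweep over test parameters might look something like the sketch below; it assumes iperf3 endpoints and a hypothetical receiver host, and path impairments such as RTT and loss are assumed to be applied separately in the testbed or container environment.

```python
import itertools
import json
import subprocess

# Illustrative sketch only: enumerate parameter combinations, run a transfer test for
# each, and store the parameters alongside the results so nothing gets lost.
algorithms = ["cubic", "bbr", "bbr2"]   # "bbr2" assumes a BBRv2-enabled kernel
rtts_ms = [10, 50, 100]                 # emulated path RTTs (applied via the testbed)
losses_pct = [0.0, 0.001, 0.1]          # emulated loss rates (applied via the testbed)

results = []
for cc, rtt, loss in itertools.product(algorithms, rtts_ms, losses_pct):
    out = subprocess.run(
        ["iperf3", "-c", "receiver.example.org", "-C", cc, "-t", "30", "--json"],
        capture_output=True, text=True, check=True)
    results.append({"cc": cc, "rtt_ms": rtt, "loss_pct": loss,
                    "iperf3": json.loads(out.stdout)})

with open("results.json", "w") as f:
    json.dump(results, f, indent=2)
```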

You provide readers with links to great resources so they can do their own testing and learn more about BBRv2. What do you hope readers will learn?

We hope others will test BBRv2 in high-performance research and education environments. There are still some things that we don’t fully understand; for example, there are some cases where CUBIC outperforms BBRv2 on paths with very large buffers. It would be great for this to be better characterized, especially in R&E network environments.
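One quick way to see whether your own Linux hosts are ready for this kind of testing is to check which congestion control algorithms the running kernel exposes; BBRv2 only shows up if a BBRv2-enabled kernel is installed (the module name is assumed to be "bbr2" in this sketch).

```python
# Minimal sketch: list the congestion control algorithms available to the running kernel.
with open("/proc/sys/net/ipv4/tcp_available_congestion_control") as f:
    available = f.read().split()

print("Available congestion control algorithms:", available)
if "bbr2" not in available:
    print("No BBRv2 support in this kernel; a BBRv2-enabled kernel build is required.")
```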

What’s the next step for ESnet research into BBRv2? How will you top things next year?

We want to further explore how well BBRv2 performs at 100G and 400G. We would also like to spend additional time performing a deeper analysis of the current (and newly generated) results to gain insights into how BBRv2 performs compared to other algorithms across varied networking infrastructure. Ideally we would like to provide strongly substantiated recommendations on where it makes sense to deploy BBRv2 in the context of research and educational network applications.

Attending SC15? Get a Close-up Look at Virtualized Science DMZs as a Service

ESnet, NERSC and RENCI are pooling their expertise to demonstrate “Virtualized Science DMZs as a Service” at the SC15 conference being held Nov. 15-20 in Austin. They will be giving the demos at 2:30-3:30 p.m. Tuesday and Wednesday and 1:30-2:30 p.m. Thursday in the RENCI booth #181.

Here’s the background: Many campuses are installing ScienceDMZs to support efficient large-scale scientific data transfers, and there’s a need to create custom configurations of ScienceDMZs for different groups on campus. Network function virtualization (NFV), combined with compute and storage virtualization, enables a multi-tenant approach to deploying virtual ScienceDMZs. It makes it possible for campus IT or NREN organizations to quickly deploy well-tuned ScienceDMZ instances targeted at a particular collaboration or project. This demo shows a prototype implementation of ScienceDMZ-as-a-Service using ExoGENI racks (ExoGENI is part of the NSF GENI federation of testbeds) deployed at the StarLight facility in Chicago and at NERSC.

The virtual ScienceDMZs deployed on-demand in these racks use the SPOT software suite developed at Lawrence Berkeley National Laboratory to connect a data source at Argonne National Lab with a compute cluster at NERSC, providing seamless end-to-end high-speed transfer of data acquired from Argonne’s Advanced Photon Source (APS) for processing at NERSC. The ExoGENI racks dynamically instantiate the necessary virtual compute resources for ScienceDMZ functions and connect to each other on-demand using ESnet’s OSCARS and Internet2’s AL2S system.
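To make the multi-tenant idea concrete, the sketch below shows the kind of per-tenant request a ScienceDMZ-as-a-Service controller might accept before instantiating virtual resources and stitching circuits between sites. It is purely illustrative; the field names and defaults are our assumptions, not ExoGENI’s or SPOT’s actual interfaces.

```python
from dataclasses import dataclass, field

# Hypothetical per-tenant request for an on-demand virtual ScienceDMZ (illustration only).
@dataclass
class ScienceDMZRequest:
    tenant: str                         # e.g. a beamline collaboration or campus project
    source_site: str                    # where the data is produced, e.g. "APS@ANL"
    compute_site: str                   # where the data is processed, e.g. "NERSC"
    bandwidth_gbps: int = 10            # guaranteed circuit bandwidth to reserve
    duration_hours: int = 24            # lifetime of the on-demand slice
    services: list = field(default_factory=lambda: ["dtn", "perfsonar"])

request = ScienceDMZRequest(tenant="aps-imaging", source_site="APS@ANL",
                            compute_site="NERSC", bandwidth_gbps=10)
print(request)
```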

Why the spike in network traffic?

ESnet November 2010 Traffic

Last month was the first in which the ESnet network crossed a major threshold – over 10 petabytes of traffic! Traffic volume was 40% higher than the prior month and 10 times higher than just a little over 4 years ago. But what’s behind this dramatic increase in network utilization? Could it be, we wondered, the extreme loads ESnet circuits carried for SC10?

Breaking down the ESnet traffic highlighted a few things.  Turns out it wasn’t all that demonstration traffic sent across thousands of miles to the Supercomputing Conference in New Orleans (151.99 TB delivered), since that accounted for only slightly more than 1% of November’s ESnet-borne traffic.  We observed for the first time significant volumes of genomics data traversing the network as the Joint Genome Institute sent over 1 petabyte of data to NERSC. JGI alone accounted for about 10% of last month’s traffic volume. And as we’ve seen since it went live in March, the Large Hadron Collider continues to churn out massive datasets as it increases its luminosity, which ESnet delivers to researchers across the US.

Summary of Total ESnet Traffic, Nov. 2010

Total Bytes Delivered: 10.748 PB
Total Bytes Delivered via OSCARS: 5.870 PB
Percentage Delivered via OSCARS: 54.72%
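For the curious, the shares quoted above fall out of a quick back-of-the-envelope calculation from the rounded figures (which is why the OSCARS share comes out a touch lower than the 54.72% computed from exact byte counts):

```python
# Back-of-the-envelope shares of November 2010 traffic, using the rounded figures above.
total_pb = 10.748
shares = {
    "SC10 demos": 151.99 / 1000,   # 151.99 TB expressed in PB
    "JGI to NERSC": 1.0,           # "over 1 petabyte"
    "OSCARS circuits": 5.870,
}
for label, pb in shares.items():
    print(f"{label}: {100 * pb / total_pb:.1f}% of total")
```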

What is really going on is quite prosaic, but to us, exciting. We can follow the progress of distributed scientific projects such as the LHC by tracking the growth of our network traffic, as the month-to-month traffic volume on ESnet correlates to the day-to-day conduct of science. Currently, Fermi and Brookhaven LHC data continue to dominate the volume of network traffic, but as we see, production and sharing of large data sets by the genomics community is picking up steam. What the stats are predicting: as science continues to become more data-intensive, the role of the network will become ever more important.


Cheers for Magellan

We were glad to see DOE’s Magellan project getting some well-deserved recognition with the HPCwire Readers’ and Editors’ Choice Award at SC10 in New Orleans. Magellan investigates how cloud computing can help DOE researchers manage the massive (and increasing) amount of data they generate in scientific collaborations. Magellan is a joint research project of NERSC at Berkeley Lab in California and the Argonne Leadership Computing Facility in Illinois.

This award represents teamwork on several fronts. For example, earlier this year, ESnet’s engineering chops were tested when the Joint Genome Institute, one of Magellan’s first users, urgently needed increased computing resources at short notice.

Within a nail-biting span of several hours, technical staff at both centers collaborated with ESnet engineers to establish a dedicated 9 Gbps virtual circuit between JGI and NERSC’s Magellan system over ESnet’s Science Data Network (SDN). Using the ESnet-developed On-Demand Secure Circuits and Advance Reservation System (OSCARS), the virtual circuit was set up within an hour after the last details were finalized.

NERSC raided its closet spares for enough networking components to construct a JGI@NERSC local area network and migrated a block of Magellan cores over to JGI control.  This allowed NERSC and JGI staff to spend the next 24 hours configuring hundreds of processor cores on the Magellan system to mimic the computing environment of JGI’s local compute clusters.

With computing resources becoming more distributed, complex networking challenges will occur more frequently. We are constantly solving high-stakes networking problems in our job connecting DOE scientists with their data. But thanks to OSCARS, we now have the ability to expand virtual networks on demand. And OSCARS is just getting better as more people in the community refine its capabilities.

The folks at JGI claim they didn’t feel a thing. They were able to continue their workflows, and no data was lost in the transition.

Which makes us very encouraged about the prospects for Magellan, and cloud computing in general. Everybody is hoping that putting data out there in the cloud will expand capacity.  At ESnet, we just want to make the ride as seamless and secure as possible.

Kudos to Magellan. We’re glad to back you up, whatever the weather.

A few grace notes to SC10

As SC10 wound down, ESnet started disassembling the network of connections that brought experimental data from the rest of the country (and at least a bit of the universe as well) to New Orleans. We detected harbingers of 100Gbps in all sorts of places. We will be sharing our observations on promising and significant networking technologies with you in blogs to come.

We were impressed by the brilliant young people we saw at the SC Student Cluster Competition, organized as part of SC Communities, which brings together programs designed to support emerging leaders and groups that have traditionally been under-represented in computing. Teams came from U.S. universities, including Purdue, Florida A&M, SUNY Stony Brook, and the University of Texas at Austin, as well as universities from China and Russia.

Florida A&M team

Nizhni Novgorod State University team

At ESnet, we are always looking for bright, committed students interested in networking internships (paid!). We are also still hiring.

As SC10 concluded, the computer scientists and network engineers on the streets of the city dissipated, replaced by a conference of anthropologists. SC11 is scheduled for Seattle. But before we go, a note of appreciation to New Orleans.

Katrina memorial

Across from the convention center is a memorial to the people lost to Katrina; a sculpture of a wrecked house pinioned in a tree. But if you walk down the street to the corner of Bourbon and Canal, each night you will hear the trumpets of the ToBeContinued Brass Band. The band is a group of friends who met in their high school marching bands and played together for years until scattered by Katrina. Like the city, they are regrouping, and are profiled in a new documentary.

Our mission at ESnet is to help scientists to collaborate and share research. But a number of ESnet people are also musicians and music lovers, and we draw personal inspiration from the energy, technical virtuosity and creativity of artists as well as other engineers and scientists. We are not alone in this.

New Orleans is a great American city, and we wish it well.

100G: it may be voodoo, but it certainly works

SC10, Thursday morning.

During the SC10 conference, NASA, NOAA, ESnet, the Dutch Research Consortium, US LHCNet and CANARIE announced that they would transmit scientific data at 100Gbps between Chicago and New Orleans. Through the use of 14 10GigE interconnects, researchers attempted to completely utilize the full 100 Gbps of bandwidth by producing up to twelve 8.5-to-10Gbps individual data flows.
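For a sense of how such an aggregate is built up, here is a minimal sketch of launching a dozen parallel memory-to-memory flows toward one receiver; it uses iperf3 and a hypothetical far-end host purely for illustration, not the tooling the demo team actually used.

```python
import subprocess

# Illustration only: start 12 parallel flows toward one receiver, one per port.
# Assumes iperf3 servers are already listening on ports 5201-5212 at the far end.
receiver = "receiver.example.org"  # hypothetical far-end host
flows = [
    subprocess.Popen(["iperf3", "-c", receiver, "-p", str(5201 + i), "-t", "60", "--json"],
                     stdout=subprocess.PIPE)
    for i in range(12)
]
for flow in flows:
    flow.wait()  # each flow reports its own throughput in JSON on stdout
```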

Brian Tierney reports: “We are very excited that a team from NASA Goddard completely filled the 100G connection from the show floor to Chicago. It is certainly the first time for the supercomputing conference that a single wavelength over the WAN achieved 100Gbps. The other thing that is so exciting about it is that they used a single sending host to do it.”

“Was this just voodoo?” asked NERSC’s Brent Draney.

Tierney assures us that indeed it must have been… but whatever they did, it certainly works.

Visiting the DICE-Avetech Sandbox

Thursday, SC10.

Tracey Wilson, at the DICE-Avetech booth

We caught up with Tracey Wilson, the DICE program manager, who had this to say about the Data Intensive Computing Environment/Obsidian Strategics SCinet Sandbox project, or DOS3 for short: “In our Sandbox project, we wanted to look at the effects of using E-100 InfiniBand extenders with encryption,” said Wilson. “We wanted to test out the performance, with regular TCP/IP transfers and also RDMA transfers. We worked with Lawrence Livermore, and they allowed us to have access to almost 100 nodes of Hyperion, a big test cluster there. Our intent there was to do a lot of wide-area Lustre testing over the wide-area InfiniBand network. We also worked with BlueArc to attach a Mercury NAS system to an IB gateway and access that remotely from both the Hyperion site and the other sites we were using across the fabric using the Obsidian extenders. ESnet is the primary carrier for Hyperion, but we have four other circuits coming from NLR to our other sites. So we’ve been able to get to NASA Goddard and do a span test across the Obsidian on 10G links using NLR. We had a testing slot scheduled from both NLR and ESnet, but we had 10G available through most of the show, thanks to ESnet.”

“Real thanks go to the ESnet guys: Evangelos Chaniotakis, Jon Dugan, and Eli Dart, who were supporting SCinet, as well as Linda Winkler from Argonne, Caroline Weilhammer from the Global NOC, Andrew Lee from NLR, and the rest of the routing team,” said Wilson. “It was so much work getting all the VLANs configured across the show floor, because there were a huge number of demos this year and it was a considerable effort to get all these things configured.”

Visit Jon Dugan’s BoF on network measurement

ESnet’s Jon Dugan will lead a BoF on network measurement at 12:15 p.m. Thursday in rooms 278-279 at SC10. Functional networks are critical to high performance computing, but to achieve optimal performance, it is necessary to accurately measure them. Jon will open up the session to discuss measurement tools such as perfSONAR, emerging standards, and the latest research directions.

The circuits behind all those SC10 demos

It is mid-afternoon Wednesday at SC10 and the demos are going strong. Jon Dugan supplied an automatically updating graph, in psychedelic colors, of the traffic ESnet is carrying over all the circuits we set up: http://bit.ly/9HUrqL. Getting this far required many hours of work from a lot of ESnet folks to accommodate the virtual circuit needs of both ESnet sites and SCinet customers using the OSCARS IDC software. As always, the SCinet team has put in long hours in a volatile environment to deliver a high-performance network that meets the needs of the exhibitors.

Catch ESnet roundtable discussions today at SC10, 1 and 2 p.m.

Wednesday Nov. 17 at SC10:

At 1 p.m. at Berkeley Lab booth 2448, catch ESnet’s Inder Monga’s roundtable discussion on OSCARS virtual circuits. OSCARS, the On-Demand Secure Circuits and Advance Reservation System, allows users to reserve guaranteed bandwidth. Many of the demos at SC10 are being carried over OSCARS virtual circuits, and OSCARS itself was developed by ESnet with DOE support. Good things to come: ESnet anticipates the rollout of OSCARS 0.6 in early 2011. Version 0.6 will offer greatly expanded capabilities and versatility, such as a modular architecture enabling easy plug-and-play of the various functional modules and a flexible path computation engine (PCE) workflow architecture.
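As a purely hypothetical sketch of what “plug and play” path computation might look like (these interfaces are ours for illustration, not the actual OSCARS 0.6 code), each PCE module could prune the set of candidate paths in turn before a reservation is committed:

```python
# Hypothetical, illustrative PCE chain: each module narrows the candidate paths.
class PCEModule:
    def compute(self, topology, request, candidate_paths):
        """Return the subset of candidate_paths this module accepts."""
        raise NotImplementedError

class BandwidthPCE(PCEModule):
    def compute(self, topology, request, candidate_paths):
        # Keep only paths whose tightest link still has the requested bandwidth free.
        return [path for path in candidate_paths
                if min(topology[link]["available_gbps"] for link in path) >= request["gbps"]]

def run_pce_chain(modules, topology, request, candidate_paths):
    for module in modules:  # modules can be swapped in and out without touching the core
        candidate_paths = module.compute(topology, request, candidate_paths)
    return candidate_paths
```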

Then stick around, because at 2 p.m. Brian Tierney from ESnet will lead a roundtable on the research coming out of the ARRA-funded Advanced Networking Initiative (ANI) testbed.

In 2009, the DOE Office of Science awarded ESnet $62 million in recovery funds to establish ANI, a next generation 100Gbps network connecting DOE’s largest unclassified supercomputers, as well as a reconfigurable network testbed for researchers to test new networking concepts and protocols.

Brian will discuss progress on the 100Gbps network, update you on the several research projects already underway on the testbed, and describe the testbed’s capabilities and how to get access to it. He will also answer your questions on how to submit proposals for the next round of testbed network research.

In the meantime, some celeb-spotting at the LBNL booth at SC10.

Inder Monga

Brian Tierney