Network as Instrument: Please Adjust Your Mental Models

This week, ESnet announces the most significant upgrade in its 26-year history.

We’re finishing up construction of the world’s fastest science network, in collaboration with our close partners at Internet2.  In fact, both organizations now share an optical platform that can transmit staggering amounts of data.  To visualize just how much, it may be helpful to think about a piano keyboard, which has 88 keys.  Imagine that ESnet’s new 100 Gigabit-per-second network fits on middle C, and that Internet2’s new 100G network fits on C#.   That leaves 86 keys’ worth of network capacity for us to share in the future.  Remember, each piano key represents one instance of the fastest national-scale network currently in existence.
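
For a rough sense of scale, read each key as one 100 Gbps channel: the full 88-key keyboard then corresponds to roughly 88 × 100 Gbps, or about 8.8 terabits per second, of potential capacity on the shared optical platform.  That’s a back-of-the-envelope reading of the analogy, not a precise engineering figure, but it conveys how much headroom those 86 spare keys represent.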

That’s impressive, but it’s not the whole story.  Networks like ESnet are passing through three inflection points right now:

  1. We’re entering an era of cheap, hyper-abundant capacity.
  2. We’re developing a new global architecture that will transform research networks into programmable systems capable of virtualization, reservation, customization, and inspection.  Many of these new capabilities simply won’t be available from commercial carriers in a worldwide, multi-domain context.  (A sketch of what reservation could look like appears just after this list.)
  3. We’re finally making progress on the end-to-end problem, as more campus networks implement a Science DMZ for data mobility.
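
The second point above deserves a concrete illustration.  Here’s a minimal sketch, in Python, of what it might mean for a science workflow to reserve guaranteed bandwidth on a programmable network.  The endpoint names, fields, and interface are hypothetical, chosen only to show the shape of the idea, not ESnet’s actual API (in practice, ESnet’s OSCARS service provides this kind of reservation).

    # Hypothetical sketch: a workflow describes the guaranteed path it needs.
    # Names and fields are illustrative only, not a real reservation API.
    import json
    from datetime import datetime, timedelta, timezone

    start = datetime.now(timezone.utc)
    reservation_request = {
        "source": "dtn1.example-lab.gov",           # hypothetical data transfer node
        "destination": "dtn3.example-facility.eu",  # hypothetical remote endpoint
        "bandwidth_gbps": 40,                       # guaranteed rate for the transfer
        "start": start.isoformat(),
        "end": (start + timedelta(hours=6)).isoformat(),
    }

    # In practice this request would be submitted to the network's reservation
    # service; here we simply print it to show what the workflow is asking for.
    print(json.dumps(reservation_request, indent=2))

The point isn’t the syntax; it’s that the network becomes something a workflow can program against, rather than a static pipe it merely sends packets into.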

Taken together, these changes are revolutionary.  They motivate a new way of thinking about research networks: as instruments for discovery, not just infrastructure for service delivery.

The ATLAS Detector (courtesy CERN).

This idea that research networks serve as vital extensions of a discovery instrument isn’t new.  In fact, it’s implicit in the architecture of the Large Hadron Collider (LHC) at CERN.  The LHC may have been the first large instrument designed around the premise of highly capable research networks, but it won’t be the last.  Dozens of experiments currently on the drawing board (accelerators, supercomputers, photon and neutron sources, and the extraordinary Square Kilometer Array) share exactly the same premise.

Here’s another respect in which ESnet departs from the model of simple infrastructure.  The amount of traffic we carry is growing roughly twice as fast as the commercial Internet, and increasingly that traffic is very different from traffic in the Internet core: it’s dominated by a relatively small number of truly massive flows.   You’ve heard this before, but modern science has entered an age of extreme-scale data.  In more and more fields, scientific discovery depends on data mobility, as huge data sets from experiments or simulations move to computational and analysis facilities around the globe.
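
A quick back-of-the-envelope example of why data mobility is the crux: a single petabyte is 8 × 10^15 bits, so even at a sustained 100 Gbps it takes roughly 80,000 seconds, about 22 hours, to move, and more than a week at 10 Gbps.  And that assumes the full line rate can actually be achieved end to end, which is precisely why the capacity, architecture, and Science DMZ work above all matter together.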

Large-scale scientific collaborations require advanced networks, pure and simple.

One last thing about the historic upgrade we’re finishing up.  It would not have been possible without the heroic efforts of four people who have moved on from the DOE family: Steve Cotter (now CEO of REANNZ in New Zealand), and Dan Hitchcock, Joe Burrescia, and Jim Gagliardi (all happily retired).  Steve, Dan, Joe and Jim: we’ll drink a toast in your honor at Supercomputing this week!

– Greg
