Science Data Weathers the Storm

Hurricane Sandy took a terrible toll this week, and our thoughts remain with the millions of people whose lives were impacted.

From the nightly news, we learned about critical systems that temporarily failed during the extreme weather, and others that continued to function.  I’d like to share an example in the second category.

Here’s a screenshot from our public web portal, http://my.es.net, showing traffic to and from Brookhaven National Laboratory on Long Island during the worst of the storm:

Network traffic to and from Long Island’s Brookhaven National Laboratory was not impacted by the devastating storm that struck the New York area this week.

Why does it matter that Brookhaven’s connection to ESnet, and to the broader Internet, remained functional during the disaster? It matters because modern science has entered an age of extreme-scale data. In more and more fields, scientific discovery depends on data mobility, as huge data sets from experiments or simulations move to computational and analysis facilities around the globe. Large-scale science instruments are now designed around the premise that high-speed research networks exist. In this kind of architecture, loss of network connectivity impairs scientific productivity. In the worst case, unique and vital data can be lost.

Much of the network traffic in these graphs can be attributed to ATLAS, one of the extraordinary experiments at CERN’s Large Hadron Collider, outside Geneva. It’s been a great year at CERN, with the announcement of a Higgs-like particle this summer. At ESnet, we were very happy that the productivity of the ATLAS experiment was not affected by the massive storm.

Although the research networking community may have benefited from good fortune during the storm, it’s important to recognize that a lot of careful planning and sound engineering – conducted over the course of many years – contributed to this outcome. Research networks like ESnet and Internet2, regional networks like NYSERNet, campus networking groups at Brookhaven and elsewhere, and exchange points like MAN LAN all work very hard to harden themselves against single points of failure. As with all engineering, the devil is in the details: fiber diversity, elimination of shared fate in single conduits and data centers, location and specification of generators, availability of fuel, optical protection schemes, failover for dynamic circuits… the list goes on and on.
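
To make one of those details a bit more concrete: “shared fate” is typically managed by tagging each fiber span with the conduits or shared-risk groups it traverses, and rejecting backup paths that overlap the primary anywhere along the way. The sketch below is purely illustrative, with made-up span and conduit names rather than ESnet’s actual inventory or tooling.

```python
# Illustrative sketch of a shared-fate check between a primary path and a
# candidate backup path. Span and conduit identifiers are hypothetical.

def shared_conduits(path_a, path_b):
    """Return the conduit IDs that appear in both paths (i.e., shared fate)."""
    conduits_a = {c for span in path_a for c in span["conduits"]}
    conduits_b = {c for span in path_b for c in span["conduits"]}
    return conduits_a & conduits_b

primary = [
    {"span": "BNL-to-NYC", "conduits": {"conduit-17"}},
    {"span": "NYC-to-AOA", "conduits": {"conduit-42"}},
]
backup = [
    {"span": "BNL-to-NEWY", "conduits": {"conduit-63"}},
    {"span": "NEWY-to-AOA", "conduits": {"conduit-42"}},  # overlaps the primary
]

overlap = shared_conduits(primary, backup)
if overlap:
    print("Backup path shares fate with primary in:", sorted(overlap))
else:
    print("Paths are conduit-diverse.")
```

A check like this is the easy part; the hard part is keeping the conduit and shared-risk data accurate over years of fiber builds, relocations, and leases, which is exactly the kind of unglamorous work that paid off this week.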

We won’t always be so fortunate, but the screenshot above is something our research networking community can take pride in. My thanks to the hundreds of people whose sound decisions made this good outcome possible.