Listening to the drumbeats of 100G

Since we received news of our ARRA funding for the Advanced Networking Initiative (ANI), we have been working steadily toward the ambitious goal of deploying 100G technology to stimulate advances in high-speed networking interfaces. In the course of pushing the ANI agenda over the last year, we have met with many carriers and vendors. Although I cannot share my private conversations with them (the thought of flocks of lawyers descending upon me ensures reticence), I have been avidly tracking their public announcements.

Just today, Cisco announced its acquisition of CoreOptics, a coherent optics vendor. I had the good fortune to see their 40G system working at OFC this year, and I am sure they are working hard on getting their 100G system up and running. Google has typically been quiet about the network innovations needed to keep up with its data center growth, but it has been uncharacteristically beating the 100G drum in public, which meshes very well with our needs. At ESnet, the traffic transiting our network is growing at an alarming 80% year over year.
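
To put that growth rate in perspective, here is a quick back-of-the-envelope calculation (the compounding below is illustrative arithmetic, not ESnet traffic figures):

```python
# Back-of-the-envelope: how fast does 80% year-over-year growth compound?
import math

growth = 1.80  # 80% year-over-year growth factor

# Doubling time under constant exponential growth
doubling_years = math.log(2) / math.log(growth)
print(f"Traffic doubles roughly every {doubling_years:.2f} years")  # ~1.18

# Compounded growth over the next few years, relative to today
level = 1.0
for year in range(1, 5):
    level *= growth
    print(f"After year {year}: {level:.1f}x today's traffic")
# After 4 years: ~10.5x today's traffic -- one reason 100G interfaces matter.
```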

At the Packet-Optical Transport Evolution conference (http://www.lightreading.com/live/event_information.asp?event_id=29209), Bikash Koley, Senior Network Architect at Google, pointed to machine-to-machine traffic (such as video sensors) as a motivator for this kind of bandwidth, and cited hybrid networking, or packet-optical integration, as the way to solve the problems of the day.

To quote the Light Reading article (http://www.lightreading.com/document.asp?doc_id=192230&):

“Regardless of how the network looks, Google is dead set on one thing: it wants label-switched routing and DWDM capabilities to be combined into one box. It doesn’t matter if that’s a label-switched router (LSR) with DWDM added, or a DWDM box with Layer 3 knowledge added,” Koley said. (He also stressed that the LSR doesn’t have to be a full-blown router.)

That is one statement on which we agree with Bikash Koley and Google. Our own experience developing OSCARS (On-demand Secure Circuits and Advance Reservation System, www.es.net/oscars) and a hybrid networking architecture to handle large science flows since 2004 has led us down the path of on-demand, traffic-engineered paths. With MPLS being the only choice at the time (discussing the merits of the new IEEE MAC-in-MAC protocols will require a separate blog post), we created the OSCARS open-source software to dynamically configure LSPs through our hybrid packet-optical network. That has worked very well for us, though it became clear that we did not really need the service/edge capabilities built into the router. So if there is a way to make the core simpler, cheaper, and more energy-efficient, sign me up; we will run OSCARS over it to steer those circuits to wherever our customers want them.
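
To make the idea concrete, here is a minimal sketch of what an on-demand circuit reservation looks like from a user's point of view. The class and function names are hypothetical illustrations, not the actual OSCARS interface (the real software exposes a web-services API with its own schema):

```python
# Hypothetical sketch of an on-demand, traffic-engineered circuit request.
# None of these names come from the real OSCARS API; they only illustrate
# the shape of a bandwidth-and-schedule reservation.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CircuitRequest:
    src: str             # ingress edge point (illustrative identifier)
    dst: str             # egress edge point
    bandwidth_mbps: int  # guaranteed bandwidth for the flow
    start: datetime
    end: datetime

def reserve(req: CircuitRequest) -> str:
    """Stand-in for submitting a reservation. A real client would call
    the OSCARS service, which computes a constrained path and signals
    an MPLS LSP across the hybrid packet-optical core."""
    print(f"Reserving {req.bandwidth_mbps} Mb/s from {req.src} to {req.dst} "
          f"between {req.start:%Y-%m-%d %H:%M} and {req.end:%Y-%m-%d %H:%M}")
    return "demo-reservation-0001"

if __name__ == "__main__":
    now = datetime.utcnow()
    reservation_id = reserve(CircuitRequest(
        src="site-a-edge", dst="site-b-edge",
        bandwidth_mbps=10_000,  # e.g., a 10 Gb/s science data transfer
        start=now, end=now + timedelta(hours=6)))
    print("Reservation accepted:", reservation_id)
```

The point of the abstraction is that the user asks only for bandwidth between two endpoints over a time window; the path computation and LSP signaling stay inside the network.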

We at ESnet continue to march ahead toward the 100G prototype network. I look forward to your comments on 40G, 100G, the new Ethernet standards, and the way to higher rates (400GE, 1TbE).

Inder Monga, Network Architect

Email me at: Imonga@es.net

Down another pit, looking for the secrets of the universe

The Large Hadron Collider is the world’s largest particle collider. In a ring-shaped tunnel deep underground, reached by huge access shafts, two beams of subatomic particles, dubbed “hadrons,” shoot around an enclosed racetrack, accelerating with every lap, until they collide. The idea is to recreate on a small scale the conditions in the universe immediately after the Big Bang.

I actually saw the LHC under construction, while visiting another accelerator, one used to generate antimatter, escorted by a physicist with spiky hair who looked like he played in a band. That accelerator was wrapped in tinfoil and duct tape as a sort of low-tech insulation. The question of why the universe is composed of matter rather than antimatter points to a fundamental asymmetry in the universe, and answering it could give humans a glimpse of “God’s big toe,” according to a recent New York Times interview (http://www.nytimes.com/2010/05/18/science/space/18cosmos.html).

It is not such a bad way to spend one’s career, figuring out how the universe got started. Here at ESnet, we are helping: ESnet is part of the network that carries data from the LHC in Switzerland to groups of physicists in the U.S.

[Figure: ESnet traffic at the inception of LHC data flow, April 1, 2010.]

The Large Hadron Collider is projected to generate 15 petabytes of data yearly from six detector experiments. The data, too massive for CERN to handle internally, is sent to 12 Tier 1 sites around the world. Data from the ATLAS and CMS detectors crosses the Atlantic on USLHCNet (http://lhcnet.caltech.edu/); ESnet then carries it to Brookhaven National Laboratory and Fermilab, the U.S. Tier 1 sites, for processing and archiving. From there the data is distributed to Tier 2 facilities, mainly at universities and research institutions around the U.S., including Berkeley Lab’s National Energy Research Scientific Computing Division (NERSC). –Wendy Tsabba
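
As a rough sense of scale, 15 petabytes per year averages out to only a few gigabits per second; real transfers are bursty, so peak demand is what sizes the links. A quick sketch (the decimal-petabyte convention is an assumption):

```python
# Quick sanity check: what sustained rate does 15 PB/year imply?
PETABYTE = 1e15                        # bytes, decimal convention (assumed)
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.156e7 seconds

bits_per_year = 15 * PETABYTE * 8
avg_gbps = bits_per_year / SECONDS_PER_YEAR / 1e9
print(f"Average rate: {avg_gbps:.1f} Gb/s")  # ~3.8 Gb/s sustained
# Peak demand during bursts is far higher, which is why links such as
# those to the Tier 1 sites are provisioned well above the yearly average.
```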

[Figure: ESnet traffic over the last 96 hours, showing an upward spike.]

Looking for cosmic particles deep underground

A mile or so down, activity is afoot in the long-disused Homestake Mine in South Dakota’s Black Hills. Instead of digging for gold, researchers will hunt for cosmic particles at the Sanford Underground Laboratory. Going deep underground, shielded from interference from cosmic radiation, is a chance to revisit physics experiments (current neutrino detectors are orders of magnitude more sensitive than their predecessors) and to try some new ones. And when scientific data is generated, ESnet will connect to the networks that bring it to researchers all over the country.

The Deep Underground Science and Engineering Laboratory (DUSEL) is sponsored by the National Science Foundation. Berkeley physicist Kevin Lesko joined the spring Internet2 member meeting by video link from the Ross Shaft at the 4,850-foot level. Clad in a hard hat and a black protective suit laced with reflective tape, Lesko stood in a drafty rock tunnel festooned with wires and pipes and described the search for the next big discoveries in physics. According to Lesko, the plan is to build laboratory modules five to six stories high and a football field in length. Physics experiments are planned for megacavities like the Davis Cavity, filled with detectors to catch neutrinos beamed from Fermilab, near Chicago, about 1,000 miles away. A large underground xenon detector is also planned, to search for dark matter. Geobiologists are sampling microbes to look for specialized bacteria that could give clues to the evolution of life. See the National Academies Press report, The Limits of Organic Life in Planetary Systems.

For this Big Science initiative, NSF is working with DOE and building networking capacity into the project. There are 370 miles of tunnel in the mine. Engineering challenges include ensuring structural integrity, and pumping in air and pumping out water to create a comfortable and safe working environment. DUSEL’s network engineering team has already deployed five miles of single-mode fiber 5,000 feet down the mineshafts to provide network connectivity. The mine, filled with dust and humidity, is a harsh environment, so the engineers are building redundant fiber links down the mine shaft for high-speed network access. The labs are in the design stage now, and a report will be finished by year’s end for submission to the NSF. Construction on the laboratory starts in 2014. –Wendy Tsabba