Purchase of dark fiber launches ESnet into new era

What sets us apart? ESnet has always focused, and always will, on anticipating the needs of the extended DOE science community. This shapes our network strategy, from services and architecture to topology and reach. It also distinguishes ESnet from university research and education networks, which are driven by the broader needs of the general university population. Vis-à-vis commercial networks, ESnet has specialized in handling the relatively small number of very large flows of large-scale science data rather than the enormous number of relatively small data flows traversing commercial carrier networks today. Our desire to stay a step ahead of the constantly evolving network needs of the scientific community has driven ESnet to take the bold step of purchasing and lighting our first segment of dark fiber.

Owning the road

By owning a tiny but powerful pair of optical fibers, ESnet will no longer have to rely on the vagaries of the commercial market – we will be able to deliver services when we choose and where they are needed.  For example, the DOE envisions using ESnet to link its supercomputing centers with a terabit of capacity by 2015. Our network will be key to enabling the scientific community to accomplish exascale computing by 2020.

Ramping up is no slam-dunk

But providing terabit capacity using ten 100G waves through commercial services is no slam-dunk, and it could be cost-prohibitive. Without owning the fiber and transport infrastructure, the same is likely to be true when near-terabit waves become available around 2020. Not only does one lose spectral efficiency – because a terabit wave won't fit within the ITU standard 50 GHz spacing – it is also necessary to plan for non-standard channel spacing, with current research pointing toward 200 GHz to accommodate the signal.
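To make the spacing trade-off concrete, here is a back-of-the-envelope sketch of how many waves fit on each grid. The ~4.8 THz of usable C-band spectrum is an illustrative assumption, not a figure from our deployment:

```python
# Back-of-envelope channel counts for the grid-spacing discussion above.
# Assumes ~4.8 THz of usable C-band spectrum (illustrative, systems vary).
C_BAND_HZ = 4.8e12

def channels(grid_spacing_hz):
    """Number of wavelength channels that fit on a fixed-spacing grid."""
    return int(C_BAND_HZ // grid_spacing_hz)

# 100G waves on the standard ITU 50 GHz grid:
n_100g = channels(50e9)        # 96 channels
cap_100g = n_100g * 100e9      # 9.6 Tb/s aggregate

# Hypothetical 1 Tb/s waves needing 200 GHz of spectrum each:
n_1t = channels(200e9)         # 24 channels
cap_1t = n_1t * 1e12           # 24 Tb/s aggregate

print(n_100g, cap_100g / 1e12)  # 96 channels, 9.6 Tb/s
print(n_1t, cap_1t / 1e12)      # 24 channels, 24.0 Tb/s
```

The point is that a terabit wave forces the whole spectral plan onto a non-standard grid, so the channel counts and capacity above change wholesale rather than incrementally.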

But just solving this problem is not enough, as ESnet’s massive bandwidth requirements don’t end with the supercomputers.  ESnet must deliver steadily increasing amounts of data generated by the Large Hadron Collider as well as similar data sets shared within the climate, fusion, and genomics communities to scientists around the world.

Lighting the way forward

It is clear to us that the only way to scale the network to meet the rapidly growing needs of large-scale science is by lighting our own dark fiber. Although this relatively small 200-mile loop linking New York City to Brookhaven National Lab barely registers with most in the networking community, it represents an exciting sea change in ESnet's approach to serving our customers.

–Steve Cotter

New 100GE Ethernet Standard IEEE 802.3ba (and 40GE as well)

From Charles Spurgeon's Ethernet Website


History is being written: from a simple diagram published in 1976 by Dr. Robert Metcalfe, with a data rate of 3 Mbps, Ethernet surely has come a long way in the last 30 years. Coincidentally, the parent of ESnet, MFEnet, was launched around the same time as a result of the new Fusion Energy supercomputer center at Lawrence Livermore National Laboratory (LLNL): http://www.es.net/hypertext/esnet-history.html. It is remarkable to note that right now, with the 100GE standard newly ratified, ESnet engineers are very much on the ball, busy putting 100GE-enabled routers through their paces in our labs.

For ESnet and the Department of Energy – it is all about the science. To enable large-scale scientific discovery, very large scientific instruments are being built. You have read on the blog about DUSEL, and are familiar with the LHC. These instruments – particle accelerators, synchrotron light sources, large supercomputers, and radio telescope farms – are generating massive amounts of data and involve large collaborations of scientists working to extract useful research results from it. The Office of Science is looking to ESnet to build and operate a network infrastructure that can scale up to meet the highly demanding performance needs of scientific applications. The Advanced Networking Initiative (ANI) to build the nationwide 100G prototype network and a research testbed is a great start. If you are interested in being part of this exciting initiative, do bid on the 100G Transport RFP.

As a community, we need to keep advancing the state of networking to meet the oncoming age of the digital data deluge.

To wit, the recent IEEE 802.3ba press release: http://standards.ieee.org/announcements/2010/ratification8023ba.html. Note the quote from our own Steve Cotter:

Steve Cotter, Department Head, ESnet at Lawrence Berkeley National Laboratory
“As the science community looks at collaboratively solving hard research problems to positively impact the lives of billions of people, for example research on global climate change, alternative energy and energy efficiency, as well as projects including the Large Hadron Collider that probe the fundamental nature of our universe – leveraging petascale data and information exchange is essential. To accomplish this, high-bandwidth networking is necessary for distributed exascale computing. Lawrence Berkeley National Laboratory is excited to leverage this standard to build a 100G nationwide prototype network as part of ESnet’s participation in the DOE Office of Science Advanced Networking Initiative.”

Got a networking idea you want to test? ANI testbed opening for business

Want to try out some new ideas in network research? ESnet invites you to submit a proposal to run experiments on its reconfigurable testbed. ESnet's ARRA-funded Advanced Networking Initiative testbed is a high-performance environment where researchers will have the opportunity to prototype, test, and validate cutting-edge networking concepts.

Instructions for submitting proposals can be found at https://sites.google.com/a/lbl.gov/ani-testbed/. Proposals are due October 1, 2010. Decisions will be made January 10, 2011, when the Phase 1 version of the testbed is up and running. The Phase 1 version is a set of 10 Gbps-connected layer 1, 2, and 3 equipment that will be deployed on a dark fiber ring we acquired in Long Island (LIMAN: Long Island Metropolitan Area Network). This will mainly be of interest to researchers doing experiments at layers 1-3, or middleware/application research at 10 Gbps.

The testbed will support research including multi-layer multi-domain hybrid networks, network protocols, component testing for future capabilities, protection and recovery, automatic classification of large bulk data flows, high-throughput middleware and applications, and any other innovative ideas you may want to try out in a realistic network environment, but with no risk of breaking anything.

Try us. We’re open to suggestions.

100Gbps Prototype Network RFP is out!

All those cheers and whoops from the top of the Berkeley hill would be us. ESnet just nailed another ANI milestone: we issued our RFP to vendors for the next stage of the nationwide 100Gbps prototype network.

The network will deliver data at scorching speeds and link three of the Department of Energy's major supercomputing centers – NERSC at Berkeley Lab, the Argonne Leadership Computing Facility at Argonne National Laboratory in Illinois, and the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory in Tennessee – and MANLAN, the international exchange point in New York.

Your ARRA stimulus money is busy, this time building the infrastructure to help scientists communicate and deal with all that data proliferation from places like the Large Hadron Collider. We handle petabytes of data a month with our usual aplomb, but terabit networking is not far off in our future. Good to get prepared.
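For a rough feel of why petabyte-scale science pushes toward terabit links, here is the transfer-time arithmetic at a few line rates (idealized: no protocol overhead or loss):

```python
# How long does one petabyte (10^15 bytes) take to move at a given line rate?
PB_BITS = 8e15  # one petabyte in bits

for label, rate_bps in [("10G", 10e9), ("100G", 100e9), ("1T", 1e12)]:
    hours = PB_BITS / rate_bps / 3600
    print(f"{label}: {hours:.1f} hours per petabyte")
# 10G:  222.2 hours (over nine days)
# 100G:  22.2 hours
# 1T:     2.2 hours
```

At 10G a single petabyte monopolizes a link for more than a week; at terabit rates it becomes an afternoon's work.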

100GE around the bend?

Ever feel the exhilaration of sitting in a race car and going around the track at super high speeds? We came close to that experience when we recently received early editions of a vendor’s 100GE cards for their routers. The experience so far has been phenomenal – no issues in getting the card up and running, the optics work great and packets are getting forwarded at line-rate. We are putting those cards through our rigorous testing process, though our lips are sealed for now.

For the industry, this is significant progress – just last year we started the Advanced Networking Initiative (ANI) project and the prospects of actually seeing a 100GE interface in a router this soon seemed far off. So kudos to the vendor (you know who you are) and to the IEEE 802.3ba 40/100Gbps task force – if you are listening, this stuff is ready to go!

Listening to the drumbeats of 100G

Since we received the news of our ARRA funds for our Advanced Network Initiative (ANI), we have been working steadily towards the ambitious goal of deploying 100G technology to stimulate networking advancement in high-speed interfaces. In the course of pushing the ANI agenda over the last year we have met with many carriers and vendors. Although I cannot share my personal conversations with these vendors–the thought of flocks of lawyers descending upon me ensures reticence–I have avidly been tracking their public announcements.

Just today Cisco announced the acquisition of CoreOptics, a coherent optics board vendor. I had the good fortune to see their 40G system working at OFC this year, and I am sure they are working hard on getting their 100G system up and running. Google typically has been quiet about the innovations in its network to keep up with data center innovations. But they have been uncharacteristically beating the 100G drum in public – which meshes very well with our needs. If you look at ESnet, the traffic transiting our network is growing at an alarming rate of 80% year over year.
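It is worth pausing on what 80% year-over-year growth compounds to. This is illustrative arithmetic on that stated rate, not an ESnet capacity forecast:

```python
# Compound growth at 80% per year: traffic multiplies by 1.8 annually.
import math

growth = 1.8
years_to_double = math.log(2) / math.log(growth)  # ~1.18 years
factor_5yr = growth ** 5                          # ~18.9x in five years

print(f"doubling time: {years_to_double:.2f} years")
print(f"5-year growth: {factor_5yr:.1f}x")
```

In other words, traffic doubles in well under fifteen months, and a network sized for today must be nearly twenty times larger within five years.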



At the Packet-Optical Transport Evolution conference (http://www.lightreading.com/live/event_information.asp?event_id=29209), Bikash Koley, Senior Network Architect at Google, points to machine-to-machine traffic (like video sensors) as a motivator for needing such bandwidth and cites hybrid networking, or packet-optical integration, as solving the problems of the day.

To quote the Light Reading article (http://www.lightreading.com/document.asp?doc_id=192230&):

“Regardless of how the network looks, Google is dead set on one thing: it wants label-switched routing and DWDM capabilities to be combined into one box. It doesn’t matter if that’s a label-switched router (LSR) with DWDM added, or a DWDM box with Layer 3 knowledge added,” Koley said. (He also stressed that the LSR doesn’t have to be a full-blown router.)

Now that is one statement on which we are in agreement with Bikash Koley and Google. Our own experience since 2004 developing OSCARS (On-demand Secure Circuits and Advanced Reservation System – www.es.net/oscars) and the hybrid networking architecture to deal with large science flows has led us down the path of on-demand traffic-engineered paths. With MPLS being the only choice at that time (discussing the merits of the new IEEE mac-in-mac protocols would require a separate blog post), we created the OSCARS open-source software to dynamically configure LSPs through our hybrid packet-optical network. That worked very well for us, though it was clear that we did not really need the service/edge capabilities built into the router. So if there is a way to make the core simpler, cheaper, and more energy-efficient – sign me up, and we will run OSCARS over it to steer those circuits to where our customers want them.
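The core idea behind on-demand circuits can be sketched very simply: a reservation names two edge points, a guaranteed bandwidth, and a time window, and the service provisions a traffic-engineered LSP for that window. The class and field names below are hypothetical illustrations, not the real OSCARS API:

```python
# Minimal sketch of an on-demand circuit reservation (illustrative only;
# names and fields are hypothetical, not the actual OSCARS interface).
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CircuitReservation:
    src: str             # ingress edge router (hypothetical name)
    dst: str             # egress edge router (hypothetical name)
    bandwidth_mbps: int  # guaranteed rate for the provisioned LSP
    start: datetime
    end: datetime

    def overlaps(self, other: "CircuitReservation") -> bool:
        """Two reservations contend for capacity only if their windows overlap."""
        return self.start < other.end and other.start < self.end

# Example: a 10 Gb/s circuit reserved for a six-hour bulk transfer.
r = CircuitReservation("edge-a", "edge-b", 10_000,
                       datetime(2010, 9, 1, 0, 0), datetime(2010, 9, 1, 6, 0))
print(r.bandwidth_mbps)  # 10000
```

The scheduling piece (checking overlapping reservations against link capacity) is what lets the software steer large flows onto dedicated paths without over-provisioning the whole network.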

We at ESnet continue to march ahead toward the 100G prototype network. I look forward to your comments on 40G, 100G, the new Ethernet standards, and the way to higher rates (400GE, 1TbE).

Inder Monga, Network Architect

Email me at: Imonga@es.net