New MyESnet Interface Launched for SC11

The ESnet tools team is pleased to announce the launch of a brand-new interface to http://my.es.net that will showcase real-time statistics on the 100 Gbps demos running at SC11.

This interface is designed to give the community access to multiple live visualizations of 100 Gbps network utilization and topology. The tools team continues to build out the MyESnet portal to meet the evolving needs of the community, and we will keep you posted on new developments.

ESnet is supporting eight 100 Gbps community projects at SC11:
·      Brookhaven National Laboratory with Stony Brook University: Booth 2443
·      Indiana University with collaborators Brocade, Ciena, Data Direct Networks, IBM, Whamcloud, and ZIH: Booth 2239
·      Lawrence Berkeley National Laboratory: Booth 512
·      NASA with collaborators from MAX, the International Center for Advanced Internet Research (iCAIR) at Northwestern University, the Laboratory for Advanced Computing at the University of Chicago, the Open Cloud Consortium, Ciena, Alcatel, StarLight, MREN, Fujitsu, Brocade, Force 10, Juniper, and Arista: Booths 615, 2615, 635
·      Orange Silicon Valley with the InfiniBand Trade Association and the OpenFabrics Alliance: Booth 6010
·      The California Institute of Technology: Booth 1223
·      Fermi National Accelerator Laboratory and the University of California, San Diego: Booths 1203 and 1213

For a schedule of booth demos, click here.

–Jon Dugan

At 25, ESnet Transfers a Universe of Data


ESnet turns 25 this year. This anniversary marks a major inflection point in our evolution as a network in terms of bandwidth, capability, and control. We are running data through our network a million—that is, 10⁶—times faster than when ESnet was established. Yet we are still facing even greater demands for bandwidth as the amount of scientific data explodes and global collaborations expand.

Created in 1986 to meet the Department of Energy’s needs for moving research data and enabling scientists to access remote scientific facilities, ESnet combined two networks serving researchers in fusion energy (MFEnet) and high-energy physics (HEPnet). But ESnet’s roots actually stretch back to the mid-1970s, when staff at the CTR Computer Center at Lawrence Livermore National Laboratory installed four acoustic modems on a borrowed CDC 6600 computer. Our technology morphed over the years from fractional T1s to T1s to DS3, then to ATM, and in the last ten years we have moved to packet over SONET—all driven by the needs of our thousands of scientific users.

Over the last 25 years, ESnet has built an organization of excellence driven by the DOE science mission. We have consistently provided reliable service even as the science areas we support—high energy physics, climate studies, and cosmology, to name a few—have become exponentially more data-intensive. These fields especially rely on ever more powerful supercomputers to do data crunching and modeling. Other fields, such as genomics, are growing at a rapid pace as sequencing technologies become cheaper and more distributed.

Based on the dizzying trajectory of change in scientific computing technology, we need to act now to expand the capacity of scientific networks in order to stay ahead of demand in the years to come. The ESnet high-speed network currently carries between 7 and 10 petabytes of data monthly (a single petabyte is equivalent to 13.3 years of HD video), and ESnet traffic is increasing tenfold, on average, every four years, steeply propelled by the rise in data produced. More powerful supercomputers can create more accurate models, for instance, of drought and rainfall patterns—but greater resolution requires vastly bigger datasets. Scientific collaborations can include thousands of researchers exchanging time-sensitive data around the world. Specialized facilities like the Large Hadron Collider and digital sky surveys produce torrents of data for global distribution. All these factors are poised to overload networks and slow scientific progress.
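To make that growth rate concrete, here is a minimal back-of-the-envelope projection in Python. The ~10 petabytes/month starting point and the tenfold-every-four-years rate are simply the figures quoted above, not new measurements.

    # Back-of-the-envelope projection using the post's figures:
    # ~10 PB/month today, growing 10x every 4 years.
    def projected_monthly_traffic_pb(years_from_now, current_pb=10.0, growth_per_4yr=10.0):
        """Projected monthly traffic in petabytes after `years_from_now` years."""
        return current_pb * growth_per_4yr ** (years_from_now / 4.0)

    for years in (0, 4, 8, 12):
        print(f"{years:2d} years out: ~{projected_monthly_traffic_pb(years):,.0f} PB/month")
    # Prints roughly 10, 100, 1,000 and 10,000 PB/month.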

Tonight we face our biggest milestone yet: We are launching the first phase of our brand new 100 gigabit per second (Gbps) network, currently the fastest scientific network in the world. Called the Advanced Networking Initiative (ANI), this prototype network forms the foundation of a soon-to-be-permanent national network that will vastly expand our future data-handling capacity.

The ANI prototype network will initially link researchers to DOE’s supercomputing centers: the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab, the Oak Ridge Leadership Computing Facility (OLCF), and the Argonne Leadership Computing Facility (ALCF), as well as MANLAN, the Manhattan Landing Exchange Point. ESnet will also deploy 100 Gbps-capable systems throughout the San Francisco Bay Area and Chicago. The ANI prototype network will then transition into a permanent, national 100 Gbps network, paving the way to an eventual terabit-scale DOE network.

To prepare the way, ESnet acquired almost 13,000 miles of dark fiber. This gives the DOE unprecedented control over the network and its assets, enabling upgrades and expansion to capacity and capabilities according to the needs of our scientists. By owning the fiber, we are able to lock in the cost of future upgrades for decades to come.

The third facet of ANI is a nationwide testbed, available to researchers in both the public and private sectors as a first-of-its-kind platform for testing experimental approaches to new network protocols and architectures in a greater-than-10 Gbps network environment.

Researchers are already working on multiple experiments investigating emerging technologies, such as Remote Direct Memory Access (RDMA)-enabled data transfers at 40 Gbps and new TCP congestion control algorithms that scale to 100 Gbps and beyond. By creating a research testbed, ESnet will enable researchers to safely experiment with disruptive technologies that will build the next-generation Internet—something impossible to do on networks that also carry daily production traffic.
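As a rough illustration of the kind of experimentation the testbed enables (not ESnet's actual test code), the Linux kernel lets a program select its TCP congestion control algorithm per socket. The sketch below assumes a Linux host with the chosen algorithm module (here "htcp", purely as an example) already loaded.

    import socket

    # Minimal sketch (Linux-only): select a TCP congestion control algorithm
    # for one socket. Available algorithms are listed in
    # /proc/sys/net/ipv4/tcp_available_congestion_control.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"htcp")

    # Read back the algorithm actually in use for this socket.
    algo = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
    print("congestion control in use:", algo.rstrip(b"\x00").decode())
    sock.close()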

Bringing You the Universe at 100 Gbps

Just this past week our engineers completed the installation of the first phase of the network – nearly six weeks ahead of schedule. Tonight at the SC11 conference we are showcasing the inauguration of our brand new 100 Gbps network by demonstrating how high-capacity networks can open up the universe – or at least a highly sophisticated computer simulation of the early days of the universe. The demo will include the transfer of over 5 terabytes of data over the new network from NERSC in Oakland, CA to the Washington State Convention Center in Seattle.

This demonstration is important because astrophysicists want to study high-resolution simulations to better understand the complex structural patterns in the universe. ESnet’s new 100 Gbps network will enable scientists to interactively examine large datasets at full spatial and temporal fidelity without compromising data integrity. This novel capability will also enable remotely located scientists to gain insights from large data volumes located at DOE supercomputing facilities such as NERSC. For comparison, a simulation using a 10 Gbps network connection will be displayed on a complementary screen to showcase the vast difference in quality that an order-of-magnitude difference in bandwidth can bring to scientific discovery.
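A quick calculation shows why the side-by-side comparison is so stark. Assuming a decimal terabyte and a link that is the only bottleneck (no protocol overhead), moving the demo's 5 terabytes takes roughly ten times longer at 10 Gbps:

    # Idealized transfer time for ~5 TB, ignoring protocol overhead.
    DATASET_BITS = 5 * 10**12 * 8   # 5 terabytes (decimal) in bits

    for gbps in (100, 10):
        seconds = DATASET_BITS / (gbps * 10**9)
        print(f"{gbps:3d} Gbps link: ~{seconds / 60:.1f} minutes")
    # ~6.7 minutes at 100 Gbps versus ~66.7 minutes at 10 Gbps.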

To view the 100 Gbps and 10 Gbps simulations, visit: http://www.es.net/RandD/advanced-networking-initiative/

In addition to this great demo, seven other community collaborations will be leveraging the new ESnet 100 Gbps network to support innovative HPC demonstrations at SC11.

As we toast this great new accomplishment this evening, we recognize that we are building on an amazing 25-year legacy of network innovation. We owe tremendous gratitude to ESnet staff, past and present, for more than two decades of hard work and dedication keenly focused on helping the DOE community solve some of society’s greatest challenges. Given our team’s accomplishments to date, I cannot wait to see what this next chapter in our history brings.

–Steve Cotter

ECSEL leverages OpenFlow to demonstrate new network directions

ESnet and its collaborators successfully completed three days of demonstrations of the End-to-End Circuit Service at Layer 2 (ECSEL) software at the Open Networking Summit held at Stanford a couple of weeks ago. Our goal is to build “zero-configuration circuits” to help science applications seamlessly use networks for optimized end-to-end data transport. ECSEL, developed in collaboration with NEC, Indiana University, and the University of Delaware, builds on some exciting new conceptual thinking in networking.

Wrangling Big Data 

To put ECSEL in context: the proliferating tide of scientific data flows – anticipated to reach 2 petabytes per second as planned large-scale experiments come online – is already challenging networks to become dramatically more efficient. Wide area networks have vastly increased bandwidth and enable flexible, distributed scientific workflows that involve connecting multiple scientific labs to a supercomputing site, a university campus, or even a cloud data center.

Heavy network traffic to come

The increasing adoption of distributed, service-oriented computing means that resource and vendor independence for service delivery is a key priority for users. Users expect seamless end-to-end performance and want the ability to send data flows on demand, no matter how many domains and service providers are involved.  The hitch is that even though the Wide Area Network (WAN) can have turbocharged bandwidth, at these exponentially increasing rates of network traffic even a small blockage in the network can seriously impair the flow of data, trapping users in a situation resembling commute conditions on sluggish California freeways. These scientific data transport challenges that we and other R&E networks face are just a taste of what the commercial world will encounter with the increasing popularity of cloud computing and service-driven cloud storage.

Abstracting a solution

A key piece of feedback from application developers, scientists, and end users is that they do not want to deal with complexity at the infrastructure level while accomplishing their mission. At ESnet, we are exploring various ways to make networks work better for users. A couple of concepts could be game-changers, according to Open Networking Summit presenter and Berkeley professor Scott Shenker: 1) using abstraction to manage network complexity, and 2) extracting and exposing simplicity out of the network. Shenker himself cites Barbara Liskov’s Turing Lecture as inspiration.

ECSEL leverages OSCARS and OpenFlow within the Software Defined Networking (SDN) paradigm to elegantly prevent end-to-end network traffic jams. OpenFlow is an open standard that allows application-driven manipulation of network flows. ECSEL uses OSCARS-controlled MPLS virtual circuits together with OpenFlow to dynamically stitch together a seamless data plane that delivers services across multiple domains. ECSEL also provides an additional level of simplicity to the application: it can discover host-network interconnection points as necessary, removing the requirement that applications be “statically configured” with their network end-point connections. It also enables stitching of paths end to end, while allowing each administrative entity to set and enforce its own policies. ECSEL can easily be enhanced to let users verify end-to-end performance and dynamically select application-specific protocol forwarding rules in each domain.
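For readers who want a mental model of that workflow, here is a deliberately simplified Python sketch. Every function in it is a hypothetical placeholder rather than the real OSCARS or ProgrammableFlow API, but the sequence mirrors the steps described above: discover the host-network interconnection points, reserve the OSCARS virtual circuit across the WAN, and program the OpenFlow paths at each end.

    # Illustrative sketch only: these helpers are placeholders, not real APIs.
    def discover_attachment_point(host):
        # Placeholder for host/topology discovery (e.g., via LLDP or a topology service).
        return {"host": host, "switch": f"of-switch-for-{host}", "port": 1}

    def reserve_oscars_circuit(src_edge, dst_edge, bandwidth_mbps):
        # Placeholder for an OSCARS virtual-circuit reservation across the WAN.
        return {"circuit_id": "vc-demo-1", "vlan": 3001, "bw_mbps": bandwidth_mbps,
                "src": src_edge, "dst": dst_edge}

    def program_openflow_path(attachment, circuit):
        # Placeholder for pushing ingress/egress flow rules in each OpenFlow domain.
        print(f"steer {attachment['host']} ({attachment['switch']} port {attachment['port']}) "
              f"into VLAN {circuit['vlan']} of circuit {circuit['circuit_id']}")

    def stitch_end_to_end(src_host, dst_host, bandwidth_mbps=10000):
        src = discover_attachment_point(src_host)
        dst = discover_attachment_point(dst_host)
        circuit = reserve_oscars_circuit(src["switch"], dst["switch"], bandwidth_mbps)
        for endpoint in (src, dst):
            program_openflow_path(endpoint, circuit)
        return circuit

    stitch_end_to_end("dtn-a.site-one.example", "dtn-b.site-two.example")

In the actual demo, the reservation and flow programming steps go through OSCARS and NEC's ProgrammableFlow Controller, as described below.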

The OpenFlow capabilities, whether in an enterprise/campus setting or within the data center, were demonstrated with the help of NEC’s ProgrammableFlow Switch (PFS) and ProgrammableFlow Controller (PFC). We leveraged a special interface NEC developed to program a virtual path from ingress to egress of the OpenFlow domain. ECSEL accessed this interface programmatically when executing the end-to-end path-stitching workflow.

Our anticipated next step is to develop ECSEL into an end-to-end service by making it an integral part of a scientific workflow. The ECSEL software will essentially act as an abstraction layer, where the host (or virtual machine) doesn’t need to know how it is connected to the network—the software layer does all the work for it, mapping out the optimum topologies to direct data flow and make the magic happen. To implement this, ECSEL leverages the modular architecture and code of the new release of OSCARS 0.6. Developing this demonstration yielded solid proof that well-architected, modular software with simple APIs, like OSCARS 0.6, can speed up the development of new network services, which in turn validates the value proposition of SDN. But we are not the only ones who think that ECSEL virtual circuits show promise as a platform for spurring further innovation: vendors such as Brocade and Juniper, as well as other network providers attending the demo, were enthusiastic about the potential of ECSEL.

But we are just getting started. We will reprise the ECSEL demo at SC11 in Seattle, this time with a GridFTP application that uses Remote Direct Memory Access (RDMA) and has been modified to include XSP (the eXtensible Session Protocol), which acts as a signaling mechanism enabling the application to become “network aware.” XSP, conceived and developed by Martin Swany and Ezra Kissel of Indiana University and the University of Delaware, can interact directly with advanced network services like OSCARS – making the creation of virtual circuits transparent to the end user. In addition, once the application is network aware, it can make more efficient use of scalable transport mechanisms like RDMA for very large data transfers over high-capacity connections.

We look forward to seeing you there and exchanging ideas. Until Seattle, if you have any questions or proposals about working together on this or other solutions to the “Big Data Problem,” don’t hesitate to contact me.

–Inder Monga

imonga@es.net

ECSEL Collaborators:

Eric Pouyoul, Vertika Singh (summer intern), Brian Tierney: ESnet

Samrat Ganguly, Munehiro Ikeda: NEC

Martin Swany, Ahmed Hassany: Indiana University

Ezra Kissel: University of Delaware