It has been almost a year since we turned 25 and transferred a “whole universe of data” at Supercomputing 2011 – and that was over a single 100G link between NERSC and Seattle. Now we are close to completing the build-out of the fifth generation of our network, ESnet5.
To minimize downtime for the sites, we are building ESnet5 in parallel with ESnet4, with just a configuration-driven switch of traffic from one network to the other. Since the scientific community we serve depends on the network being up, it’s important to have assurance that the transition is not disruptive in any way. The question we have heard over and over from our users: when you switch production traffic from ESnet4 to ESnet5, how confident are you that the whole network will work and not collapse?
In this blog post, I’d like to introduce an innovative testing concept the ESnet network engineering team (with special kudos to Chris Tracy) developed and implemented to address this very problem.
The goal of our testing was to ensure that the entire set of backbone network ports would perform solidly at full 100 Gbps saturation, with no packet loss, over a 24-hour period. However, we had some limitations. With only one Ixia test set with 100 GE cards at hand to generate and receive packets, and not enough time to ship that equipment to every PoP and test each link individually, we had to create a test scenario that would generate confidence that all the deployed routers, optical hardware, optics, fiber connections, and underlying fiber would perform flawlessly in production.
This implied creating a scenario in which the 100 Gbps traffic stream generated by the Ixia would be carried bi-directionally over every router interface deployed in ESnet5, traversing each interface only once and covering the entire ESnet5 topology before being directed back to the test hardware. A creative traffic loop was created that traversed the entire footprint, and we called it the ‘Snake Test’. Even though we used the first workable loop we found, I wonder whether, for more complex topologies, this could be framed as a classic problem from theoretical computer science and optimization – a closed route that crosses every link exactly once is an Eulerian circuit, and adding cost constraints pushes it toward NP-hard territory like the traveling salesman problem.
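For the curious, here is a minimal sketch of how such a snake route could be computed – this illustrates the Eulerian-circuit idea, not the tool we actually used, and the five-PoP full mesh below is hypothetical, not the real ESnet5 footprint:

```python
from collections import defaultdict
from itertools import combinations

def eulerian_circuit(edges, start):
    """Hierholzer's algorithm: return a closed walk that crosses every
    link exactly once, assuming the graph is connected and every node
    has an even number of links (the condition for such a loop to exist)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    circuit, stack = [], [start]
    while stack:
        v = stack[-1]
        if adj[v]:                 # an unused link out of v remains
            w = adj[v].pop()
            adj[w].remove(v)       # consume the link from both ends
            stack.append(w)
        else:                      # dead end: back out, recording the walk
            circuit.append(stack.pop())
    return circuit[::-1]

# Hypothetical five-PoP full mesh (every node has even degree 4)
pops = ["SUNN", "DENV", "CHIC", "WASH", "ATLA"]
print(" -> ".join(eulerian_circuit(list(combinations(pops, 2)), "SUNN")))
```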
The diagram below illustrates the test topology:
So after sending around 1.2 petabytes of data in 24 hours – and after accounting for surprise fiber maintenance events that caused links to flap – the engineering team was happy to see zero packet loss.
Here’s a sample portion of the data collected:
Automation is key – we built utility scripts to load/unload the test configuration on the routers, poll the firewall counters (to check for ingress/egress loss at every interface), clear statistics, and parse the resulting log files into CSV (the snapshot you see in the picture) for analysis.
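For flavor, here is a hedged sketch of what the loss-check step might look like – the counter collection itself (e.g. via SNMP or the router CLI) is omitted, and the link name and numbers are invented:

```python
import csv

def loss_report(tx_counters, rx_counters, outfile="loss_report.csv"):
    """For each link, compare packets transmitted at one end against
    packets received at the other; any nonzero difference flags loss."""
    with open(outfile, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["link", "tx_pkts", "rx_pkts", "lost"])
        for link in sorted(tx_counters):
            tx, rx = tx_counters[link], rx_counters[link]
            w.writerow([link, tx, rx, tx - rx])

# Hypothetical 24-hour sample for one link: zero loss
loss_report({"SUNN-DENV": 5_000_000_000}, {"SUNN-DENV": 5_000_000_000})
```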
Phew! The transition from ESnet4 to ESnet5 continues without a hitch. Watch for the completion news – it may come quicker than you think…
The ESnet tools team is pleased to announce that they are launching a brand-new interface to http://my.es.net that will showcase real-time statistics on the 100 Gbps demos running at SC11.
This interface is designed to provide the community with access to multiple live visualization tools of 100 Gbps network utilization and topology. The tools team continues to build out the MyESnet portal to meet the evolving needs of the community—we will keep you posted on new developments.
ESnet is supporting eight 100 Gbps community projects at SC11:
· Brookhaven National Laboratory with Stony Brook University: Booth 2443
· Indiana University with collaborators Brocade, Ciena, Data Direct Networks, IBM, Whamcloud, and ZIH: Booth 2239
· Lawrence Berkeley National Laboratory, Booth 512
· NASA with collaborators from MAX, International Center for Advanced Research (iCAIR) at Northwestern University, Laboratory for Advanced Computing at University of Chicago, Open Cloud Consortium, Ciena, Alcatel, StarLight, MREN, Fujitsu, Brocade, Force 10, Juniper, Arista: Booths 615, 2615, 635
· Orange Silicon Valley with the InfiniBand Trade Association and OpenFabrics Alliance: Booth 6010
· The California Institute of Technology: Booth 1223
· Fermi National Accelerator Laboratory and University of California, San Diego: Booths 1203 and 1213
ESnet turns 25 this year. This anniversary marks a major inflection point in our evolution as a network in terms of bandwidth, capability, and control. We are running data through our network a million—that is, 10^6—times faster than when ESnet was established. Yet we still face even greater demands for bandwidth as the amount of scientific data explodes and global collaborations expand.
Created in 1986 to meet the Department of Energy’s needs for moving research data and enabling scientists to access remote scientific facilities, ESnet combined two networks serving researchers in magnetic fusion energy (MFEnet) and high-energy physics (HEPnet). But ESnet’s roots actually stretch back to the mid-1970s, when staff at the CTR Computer Center at Lawrence Livermore National Laboratory installed four acoustic modems on a borrowed CDC 6600 computer. Our technology morphed over the years from fractional T1s to T1s to DS3, then to ATM, and in the last ten years we have moved to packet over SONET—all driven by the needs of our thousands of scientific users.
Over the last 25 years, ESnet has built an organization of excellence driven by the DOE science mission. We have consistently provided reliable service even as the science areas we support—high energy physics, climate studies, and cosmology, to name a few—have become exponentially more data-intensive. These fields especially rely on ever more powerful supercomputers to do data crunching and modeling. Other fields, such as genomics, are growing at a rapid pace as sequencing technologies become cheaper and more distributed.
Based on the dizzying trajectory of change in scientific computing technology, we urgently need to act now to expand the capacity of scientific networks in order to stay ahead of demand in the years to come. At this point the ESnet high-speed network carries between 7 and 10 petabytes of data monthly (a single petabyte is equivalent to 13.3 years of HD video). ESnet data traffic is increasing by a factor of 10, on average, every four years, propelled by the steep rise in data produced. More powerful supercomputers can create more accurate models, for instance, of drought and rainfall patterns—but greater resolution requires vastly bigger datasets. Scientific collaborations can include thousands of researchers exchanging time-sensitive data around the world. Specialized facilities like the Large Hadron Collider and digital sky surveys produce torrents of data for global distribution. All these factors are poised to overload networks and slow scientific progress.
Tonight we face our biggest milestone yet: We are launching the first phase of our brand new 100 gigabit per second (Gbps) network, currently the fastest scientific network in the world. Called the Advanced Networking Initiative (ANI), this prototype network forms the foundation of a soon-to-be permanent national network that will vastly expand our future data handling capacity.
To prepare the way, ESnet acquired almost 13,000 miles of dark fiber. This gives the DOE unprecedented control over the network and its assets, enabling upgrades and expansion to capacity and capabilities according to the needs of our scientists. By owning the fiber, we are able to lock in the cost of future upgrades for decades to come.
The third facet of ANI is a nationwide testbed that is being made available to researchers in both the public and private sectors as a first-of-its-kind platform to test experimental approaches to new network protocols and architectures in a greater-than-10 Gbps network environment.
Researchers are already working on multiple experiments investigating emerging technologies like Remote Direct Memory Access (RDMA)-enabled data transfers at 40 Gbps, and new TCP congestion control algorithms that scale to 100 Gbps and beyond. By creating a research testbed, ESnet will enable researchers to safely experiment with disruptive technologies that will build the next generation Internet—something impossible to do on networks that also carry daily production traffic.
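To see why TCP at these speeds is a research problem, consider the bandwidth-delay product: a single flow must keep that many bytes in flight to fill the pipe. A quick back-of-the-envelope calculation with illustrative numbers (the 60 ms coast-to-coast round-trip time is an assumption, not a measurement):

```python
def bdp_megabytes(gbps, rtt_ms):
    """Bandwidth-delay product: bytes in flight needed to keep a path full."""
    return gbps * 1e9 / 8 * (rtt_ms / 1e3) / 1e6

for rate in (10, 40, 100):
    print(f"{rate:>3} Gbps x 60 ms RTT -> {bdp_megabytes(rate, 60):,.0f} MB in flight")
# 100 Gbps needs ~750 MB in flight; a single loss event can collapse the
# congestion window across all of it, hence the interest in new algorithms.
```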
Bringing You the Universe at 100 Gbps
Just this past week our engineers completed the installation of the first phase of the network – nearly six weeks ahead of schedule. Tonight at the SC11 conference we are marking the inauguration of our brand new 100 Gbps network by demonstrating how high-capacity networks can open up the universe – or at least a highly sophisticated computer simulation of the early days of the universe. The demo will include the transfer of over 5 terabytes of data over the new network from NERSC in Oakland, CA to the Washington State Convention Center in Seattle.
This demonstration is important because astrophysicists are interested in studying high-resolution simulations to better understand the complex structural patterns of the universe. ESnet’s new 100 Gbps network will enable scientists to interactively examine large datasets at full spatial and temporal fidelity without compromising data integrity. This novel capability will also enable remotely located scientists to gain insights from large data volumes located at DOE supercomputing facilities such as NERSC. For comparison purposes, a simulation utilizing a 10 Gbps network connection will also be displayed on a complementary screen to showcase the vast difference in quality that an order-of-magnitude difference in bandwidth can bring to scientific discovery.
In addition to this great demo, seven other community collaborations will be leveraging the new ESnet 100 Gbps network to support innovative HPC demonstrations at SC11.
As we toast this great new accomplishment this evening, we recognize that we are building on an amazing 25-year legacy of network innovation. We owe tremendous gratitude to ESnet staff past and present for more than two decades of hard work and dedication, keenly focused on helping the DOE community solve some of society’s greatest challenges. Given our team’s accomplishments to date, I cannot wait to see what this next chapter in our history brings.
Things are moving quickly at ESnet, and we’re not talking only about data transfers. Our Advanced Networking Initiative (ANI) 100G roll-out is gaining even more momentum and we are really gearing up for the SC11 conference next month.
This week, it was announced that we are working with LGS Innovations to deploy Alcatel-Lucent 7750 Service Routers (SR) on the new ANI prototype network. The first routers were delivered about a month ago and we are already well into acceptance testing and deployment. So far we have routers up and running in Sunnyvale, NERSC, StarLight, and Argonne. After we installed this equipment, our engineering team worked with Internet2 to light the first coast-to-coast network path, from Washington, D.C. all the way to Sunnyvale in our backyard – the first transcontinental 100G link in the world! We were particularly proud of this milestone, which we reached well ahead of schedule. As next steps, we anticipate that six of our ten routers will be deployed in time for SC11.
To help the community keep tabs on our rapid ANI deployment progress, the ESnet Tools team has been expanding the MyESnet portal to provide a real-time view of the rollout. You can see the result of their efforts here.
This summer we introduced MyESnet to provide DOE Office of Science researchers and IT staff with a wide range of customized tools intended to greatly improve their ability to understand network issues in real time. The initial functionality includes a dashboard that provides a bird’s-eye view of the local area networks of the DOE facilities connected by ESnet, including their traffic patterns and system status.
Using this new ANI visualization tool, users can see which 100G network links are active, as well as information on each of our node and interconnect locations across the country. The tool will be expanded to show traffic load in the near future. In the next few weeks, the portal will also provide a real-time view of the eight community-driven projects that will use the new 100G network for large-scale demonstrations at SC11. This interface will let viewers see the details and traffic load of these demonstrations.
With fewer than 25 days to go until SC11, our team is gearing up for some exciting announcements and ANI demos… check back for info on these activities soon, and check in with us in Seattle. ESnet is part of Berkeley Lab booth 512.
ESnet is pleased to announce that the Advanced Networking Initiative (ANI) testbed will be upgraded to include a 2,200-mile 100 Gbps segment by the end of this year. This will make it the world’s first long-distance 100 Gbps testbed, available to any network researcher, whether from government, university, or industry. But take note: the next round of proposals to use the ANI testbed is due on October 1 – just three weeks away – so submit your ideas soon to take advantage of this great resource.
There has been a lot of exciting work on the testbed recently, including 40 Gbps RDMA testing, detailed power-monitoring experiments, high-performance middleware testing, OpenFlow testing, and more. Several projects are also planning interesting TCP research as soon as the 100 Gbps network is available.
After much work and planning, we proudly introduce you to our new scientific network-to-be. Berkeley Lab just announced the signing of an agreement to begin construction of a 100 gigabits per second (Gbps) prototype network, part of the Lab’s Advanced Networking Initiative funded by the American Recovery and Reinvestment Act (ARRA). Under the terms of the deal, Internet2 and its industry partners will construct the network under ESnet’s direction.
Coming Soon: ANI 100 Gbps Prototype Network, First Step Towards Nationwide 100 Gbps Network
But more is yet to come: the ANI prototype network will also serve as a framework for ESnet to obtain concrete network energy-use data for research into “green networking” – a priority here at ESnet. As part of the agreement, Berkeley Lab also negotiated 20-year access to a nationwide pair of dark fiber, which will be immediately available to network researchers and industry for experimenting with new protocols and disruptive technologies. Stay tuned to our blog to follow construction of the network and the exciting happenings at ESnet.
The 100 Gbps prototype network is just the beginning. Berkeley Lab and ESnet will leverage the experience gained deploying the prototype network to extend these capabilities to the national labs and facilities ESnet currently serves, and to connect DOE scientists to university collaborators around the world. When moved to production status, the new network will increase the information-carrying capacity of ESnet’s current 10 Gbps network by an order of magnitude. One terabyte of data, which takes approximately 13 minutes to transfer on ESnet’s present 10 Gbps network, will be delivered in just over a minute on the 100 Gbps network.
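The arithmetic behind those transfer times, for the curious (idealized line rate, ignoring protocol overhead):

```python
def transfer_minutes(terabytes, gbps):
    """Ideal transfer time at line rate, ignoring protocol overhead."""
    return terabytes * 8e12 / (gbps * 1e9) / 60

print(f"1 TB at  10 Gbps: {transfer_minutes(1, 10):.1f} minutes")   # ~13.3
print(f"1 TB at 100 Gbps: {transfer_minutes(1, 100):.1f} minutes")  # ~1.3
```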
What we hope you will get out of this
While our network currently meets the capacity needs of its users, we see new challenges on the horizon. On behalf of the DOE Office of Science, we regularly survey scientists about their projected needs in order to better target services for our users. Demand on ESnet’s network has grown by a factor of 10 every 47 months since 1990. Over the last year, we have seen ESnet network traffic rise by 70 percent – most of that traffic coming from the Large Hadron Collider at CERN, but with genomics and climate data also picking up steam.
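That growth rate compounds quickly. A simple projection, assuming the historical trend holds (the ~10 petabytes-per-month starting point is an illustrative assumption):

```python
def projected_pb_per_month(current_pb, years, tenfold_every_months=47):
    """Extrapolate monthly traffic assuming 10x growth every 47 months."""
    return current_pb * 10 ** (years * 12 / tenfold_every_months)

for yrs in (4, 8):
    print(f"in {yrs} years: ~{projected_pb_per_month(10, yrs):,.0f} PB/month")
```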
Just as investments in national infrastructure built the Interstate Highway System that sped the delivery of goods and opened up commerce in the United States, high-speed networks will speed the scientific discovery that will drive this nation’s economy in the future. New, large-scale instruments are in the offing, and we anticipate a growing tide of data in various disciplines as scientists work to understand our climate, develop clean fuels, and investigate the basic nature of matter. High-speed networks like this one will have a huge impact on an increasing number of disciplines, as more scientists collaborate in global teams and depend on remote supercomputers to model and solve complex problems.
How to be really thrifty with $62 million.
This is a challenging time when budgets are under severe pressure, both for government and the research and education community. We believe that by pooling resources and expertise we can help curb the costs of scientific research. By working closely with Internet2 to build this prototype network, we are leveraging our ARRA award and their funding for synergistic effect and getting more return on investment for the taxpayer.
We strongly believe this new 100 Gbps network infrastructure will allow us to better support data-intensive science by providing more capacity at lower cost, lower energy consumption, and lower carbon emissions per bit. And that makes us feel good about what we do.
In another milestone, ESnet’s testbed was dispatched to its more permanent home at Brookhaven National Laboratory. The testbed, part of LBL’s $62 million ARRA-funded Advanced Networking Initiative, was established so researchers could experiment and push the limits of network technologies. A number of researchers have taken advantage of its capabilities so far, and we are collecting proposals for new projects.
We handled the painstaking shut down and packing procedures. We verified the internal wiring and the structural integrity of the hosts. We dealt with the intricacies of IP addressing. The testbed will be reassembled and open for new research projects in a couple of weeks. Note to Brookhaven: We meticulously counted every last screw and packed them all in plastic baggies.
We are looking forward to the upcoming Winter 2011 ESCC/Internet2 Joint Techs meeting from January 30 to February 3, 2011 at Clemson University in South Carolina. The meeting is cosponsored by ESnet and Internet2.
Write us into your schedule. All Joint Techs talks will take place in the auditorium, and we look forward to your questions and comments.
January 31, from 9:10 to 9:30 a.m., Evangelos (Vangelis) Chaniotakis, ESnet, will talk about “Automated GOLEs and Fenius: Pragmatic Interoperability,” covering the GNI API task force, the Fenius collaboration, GLIF as a forum for interoperability, demonstrations at Geneva and SC10, and future plans.
February 1, 2011, from 3:50 to 4:10 p.m., Eli Dart, ESnet, will talk about “The Science DMZ – A well-known location for high-performance network services.” Eli will discuss the critical role of network performance in productivity as science becomes increasingly data-intensive. Problems with network performance frequently occur very near the endpoints of a data transfer. Eli will propose a simple network architecture, dubbed the “Science DMZ,” for simplifying network performance tuning.
February 2, from 8:30 to 8:50 a.m., Steve Cotter, head of ESnet, will present an update on ESnet’s projects in 2010, including the Advanced Networking Initiative and the related 100 gigabits/second testbed, and give an overview of ESnet’s plans for 2011.
Other Lawrence Berkeley National Laboratory staff will be giving talks at the meeting, including:
January 31, 2011, 9:30 – 9:50 a.m., Mike Bennett, LBNL IT Division, will present on “Green Ethernet.” Bennett will provide an overview of the recently approved IEEE standard for Energy-Efficient Ethernet, IEEE Std 802.3az-2010, which specifies energy-efficient modes for copper interfaces. He will discuss the operation of Low Power Idle and illuminate its anticipated benefits, as well as identify opportunities for innovation beyond the scope of the standard.
February 1, from 11:20 to 11:40 a.m., Dan Klinedinst, LBNL IT Division, will discuss “3D Modeling and Visualization of Real Time Security Events,” where he will introduce Gibson, a tool for modeling real-time security events and information in 3D. This tool allows users to watch a visual representation of threats, defenses, and compromises on their systems as they occur, or record them for later analysis and forensics.
February 1, 4:10 – 4:30 p.m., Dan Gunter, Advanced Computing for Science Department, LBNL, will discuss “End-to-End Data Transfer Performance with Periscope and NetLogger.” Gunter will examine the data currently being collected by the NetLogger toolkit for end-to-end bulk data transfers performed by the “Globus Online” service (between GridFTP servers), and describe the integration of that data with the Periscope service, which provides on-demand caching and analysis of perfSONAR data.
At the ESCC meeting that follows, talks will focus on specific issues pertaining to ESnet’s goals for the coming year, as well as how those goals implement the DOE Office of Science vision. By the way, if you are an ESCC member and can’t make it to the meeting, ESnet will videostream it using the ESnet ECS Ad-hoc bridge.
To access the ESCC meeting remotely:
1) Open a browser to mcu1.es.net
2) In the Conf ID field, type 372211 (no service prefix)
3) Next to “Streaming rate,” select RealPlayer or QuickTime, then press <Stream this conference>
ESCC – February 2, 2011
1:10 p.m., Greg Bell, ESnet, will review the Site Cloud/Remote Services template discussed at the July ESCC meeting, and provide a brief summary of recent conversations with cloud-services vendors.
1:30 p.m., Vince Dattoria, ESnet program manager for DOE/SC/ASCR will give the view from Washington.
2:00 p.m., Steve Cotter will present an ESnet update.
2:45 p.m., Sheila Cisko, FNAL will give an update on ESnet Collaborative Services.
3:30 p.m., Brian Tierney, ESnet, will provide the rundown on ANI, ESnet’s ARRA-funded Advanced Networking Initiative. The ANI testbed has now accepted its second round of research proposals. Brian will describe some highlights of recent experiments, and discuss further research opportunities.
3:50 p.m., Steve Cotter will discuss site planning for the upcoming 100G production backbone.
4:20 p.m. Eli Dart will talk about Science Drivers for ESnet Planning, and discuss the Science DMZ.
6:30 p.m., Kevin Oberman will participate in the evening focus section on IP address management issues, emphasizing IPv6.
ESCC- February 3, 2011
8:30 a.m. onwards: panels and discussions on aspects of IPv6.
1:35 p.m., Joe Metzger, ESnet, will participate in the session on R&D monitoring efforts, discussing DICE monitoring directions.
The first crop of experiments using ESnet’s Advanced Networking Initiative testbed is now in full swing. In a project funded by the DOE Office of Science, Prof. Malathi Veeraraghavan and post-doc Zhenzhen Yan at the University of Virginia, along with consultant Prof. Admela Jukan, are investigating the role of hybrid networking in ESnet’s next generation 100 Gbps network.
Their goal is to learn how to optimize a hybrid network composed of two components—an IP datagram network and a high-speed optical dynamic circuit network—to best serve users’ data communication needs. ESnet deployed a hybrid network in 2006, based on an IP routed network and the “science data network” (SDN), a dynamic virtual circuit network. “It is a question of efficiency, which essentially boils down to cost,” Veeraraghavan notes. “IP networks have to be operated at low utilization for the performance requirements of all types of flows to be met. With hybrid networks, it is feasible to meet performance requirements while still operating the network at higher utilization.”
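Her point about utilization can be made concrete with a textbook queueing model: in an M/M/1 queue, delay grows as 1/(1 − ρ) with utilization ρ, which is why latency-sensitive IP traffic forces operators to leave headroom. A toy illustration (an idealized model, not ESnet measurements):

```python
def mm1_mean_delay(rho):
    """Mean time in an M/M/1 system, in units of the mean service time."""
    assert 0 <= rho < 1, "utilization must be below 100%"
    return 1.0 / (1.0 - rho)

for rho in (0.3, 0.5, 0.8, 0.95):
    print(f"utilization {rho:.0%}: mean delay {mm1_mean_delay(rho):.1f}x service time")
# Delay is 20x the service time at 95% load -- moving elephant flows onto
# dedicated circuits lets the IP network run hotter without this penalty.
```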
Data flows have certain characteristics that make them suited to certain types of networks, and it is a complex problem to match flows with the “right” network. In the ESnet core network, one can identify flows by looking at multiple fields in packet headers, according to Veeraraghavan, but you can’t know the size of the flow (bytes) or whether a flow is long or short. A challenge of this project is to predict the characteristics of data flows based on prior history; to do this, the researchers are using machine learning techniques. Flows are classified based on size and duration. Large-sized (“elephant”) flows are known to consume a higher share of bandwidth and thus adversely affect small-sized (“mice”) flows, so they are good candidates to redirect to the SDN. If SDN circuits are to be established dynamically—i.e., after a router starts seeing packets in a flow that history indicates is a good candidate for the SDN—then the flow needs to be not only large-sized but also long-duration (“tortoise”), because circuit setup takes minutes. Short-duration (“dragonfly”) flows are not good candidates for dynamic circuits, but if they are large and occur frequently, static circuits could be used.
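A minimal sketch of that size/duration taxonomy, with invented thresholds (the project’s actual classifiers are learned from flow history, not hard-coded):

```python
def classify_flow(size_bytes, duration_s, big=1e9, long_lived=60.0):
    """Toy threshold classifier for the elephant/mouse x tortoise/dragonfly
    taxonomy; the thresholds here are illustrative only."""
    size = "elephant" if size_bytes >= big else "mouse"
    span = "tortoise" if duration_s >= long_lived else "dragonfly"
    return size, span

print(classify_flow(5e10, 600.0))  # ('elephant', 'tortoise'): circuit candidate
print(classify_flow(2e4, 0.2))     # ('mouse', 'dragonfly'): stays on IP network
```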
Boston is notorious for its mixed traffic
The concept of different lanes handling different types of traffic is seen commonly in other contexts. For example, Veeraraghavan notes that on some urban streets with mixed traffic, “separate lanes are set aside for buses, cars, motorcycles, and bicycles.” Also, grocery store checkouts have separate express lanes for the equivalent of “mice flows”. To support this concept, the researchers developed several modules of a system called Hybrid Network Traffic Engineering Software (HNTES), and tested these modules on the ANI Testbed.
HNTES, diagrammed
Their experiments use two computers loaded with the HNTES software and two Juniper routers. The HNTES software configures the routers to mirror packets of certain pre-determined flows (this determination is made with an offline flow-analysis tool that analyzes previously collected Netflow data to find the flow identifiers of elephant and tortoise flows) to a server running a flow-monitoring module that is part of HNTES. Upon detecting such a flow, HNTES reconfigures the router to redirect the flow’s packets to a circuit (a different path). In future versions, if dynamic circuits are deemed feasible, an HNTES module called the IDC Interface Module will send messages to an OSCARS IDC server to reserve and provision a circuit before reconfiguring the router. So far Veeraraghavan and her colleagues have completed phase I of the software implementation and demonstrated it; the demonstrations were presented at an October 2010 DOE annual review meeting using previously recorded Camtasia video. The next step will be to improve several features of the software to understand what happens to a flow when it is redirected: do packets get lost, or does redirection cause out-of-order arrivals at the destination? They are also doing a theoretical study with flow simulations, to see if taking the trucks off the parkway and putting them on the freeway really benefits the flow of “traffic.”
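To make the offline analysis step concrete, here is a hedged sketch of how elephant-flow identifiers might be mined from historical flow records – the record format and thresholds are invented for illustration, not taken from HNTES:

```python
from collections import defaultdict

ELEPHANT_BYTES = 1e9   # illustrative size threshold

def elephant_flow_ids(records, min_hits=2):
    """Return (src, dst) identifiers that repeatedly produced elephant
    flows; these become candidates for redirection onto SDN circuits."""
    hits = defaultdict(int)
    for src, dst, nbytes in records:
        if nbytes >= ELEPHANT_BYTES:
            hits[(src, dst)] += 1
    return [fid for fid, n in hits.items() if n >= min_hits]

# Invented, simplified stand-ins for real Netflow export records
records = [
    ("198.51.100.0/24", "203.0.113.0/24", 8e10),
    ("198.51.100.0/24", "203.0.113.0/24", 5e10),
    ("192.0.2.0/24",    "203.0.113.0/24", 4e4),
]
print(elephant_flow_ids(records))   # [('198.51.100.0/24', '203.0.113.0/24')]
```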