ESnet and its regional, national and international partners have built high-speed, high-performance 100 Gbps networks. See how the research community is using these networks to accelerate science and research.
Created in 1986, the U.S. Department of Energy’s (DOE’s) Energy Sciences Network (ESnet) is a high-performance network built to support unclassified science research. ESnet connects more than 40 DOE research sites—including the entire National Laboratory system, supercomputing facilities and major scientific instruments—as well as hundreds of other science networks around the world and the Internet.
Step 4: Make it even faster with DUAL Satellite links! We’re talking 56 kilobits per second! Except for the Princeton fusion scientists – they get 112 Kbps! (1987)
Step 5: Whoa, when an upgrade to 1.5 MEGAbits per second isn’t enough, add ATM (not the money machine, but Asynchronous Transfer Mode) to get more bang for your buck. (1995)
Step 6: Duty now for the future—roll out the very first IPv6 address to ensure there will be enough Internet addresses for decades to come. (2000)
Step 7: Crank up the fastest links in the network to 10 GIGAbits per second—16 times faster than the old gear—a two-generation leap in network upgrades at one time. (2003)
Step 8: Work with other networks to develop really cool tools, like the perfSONAR toolkit for measuring and improving end-to-end network performance and OSCARS (On-Demand Secure Circuits and Advance Reservation System), so you can reserve a high-speed, end-to-end connection to make sure your data is delivered on time. (2006)
Step 9: Why just rent fiber? Pick up your own dark fiber network at a bargain price for future expansion. In the meantime, boost your bandwidth to 100G for everyone. (2012)
Step 10: Here’s a cool idea, come up with a new network design so that scientists moving REALLY BIG DATASETS can safely avoid institutional firewalls, call it the Science DMZ, and get research moving faster at universities around the country. (2012)
Step 11: We’re all in this science thing together, so let’s build faster ties to Europe. ESnet adds three 100G lines (and a backup 40G link) to connect researchers in the U.S. and Europe. (2014)
Step 12: 100G is fast, but it’s time to get ready for 400G. To pave the way, ESnet installs a production 400G network between facilities in Berkeley and Oakland, Calif., and even provides a 400G testbed so network engineers can get up to speed on the technology. (2015)
Step 13: Celebrate 30 years as a research and education network leader, but keep looking forward to the next level. (2016)
As part of its look at things to expect in 2015, Popular Science magazine highlights ESnet’s new trans-Atlantic links, which will have a combined capacity of 340 gigabits per second. The three 100 Gbps connections and one 40 Gbps connection are being tested and are expected to go live at the end of January.
The booths have been dismantled, the routers and switches shipped back home and the SC14 conference in New Orleans officially ended Nov. 21, but many attendees are still reflecting on important connections made during the annual gathering of the high performance computing and networking community.
Among those helping make the right connections were ESnet staff, who used ESnet’s infrastructure to bring a combined network capacity of 400 gigabits per second (Gbps) into the Ernest N. Morial Convention Center. Those links accounted for one third of SC14’s total 1.22 Tbps connectivity, provided by SCinet, the conference’s network infrastructure designed and built by volunteers. The network links were used for a number of demonstrations between booths on the exhibition floor and sites around the world.
A quick review of the ESnet traffic patterns at https://my.es.net/demos/sc14#/summary shows that traffic apparently peaked at 12:15 p.m. Thursday, Nov. 20, with 79.2 Gbps of inbound data and 190 Gbps flowing out.
The Naval Research Laboratory (NRL), in collaboration with the DOE’s Energy Sciences Network (ESnet), the International Center for Advanced Internet Research (iCAIR) at Northwestern University, the Center for Data Intensive Science (CDIS) at the University of Chicago, the Open Cloud Consortium (OCC) and significant industry support, has conducted a 100 gigabit-per-second (100G) remote I/O demonstration at the SC14 supercomputing conference in New Orleans, LA.
The remote I/O demonstration illustrates a pipelined, distributed processing framework and software-defined networking (SDN) between distant operating locations. The demonstration shows the capability to dynamically deploy a production-quality 4K Ultra-High-Definition Television (UHDTV) video workflow across a nationally distributed set of storage and computing resources, a capability relevant to emerging Department of Defense data processing challenges.
Berkeley Lab staff from ESnet, NERSC and the IT Division will be among the presenters at the 2014 Technology Exchange, a leading technical event in the global research and education networking community. The annual meeting is co-organized by ESnet and Internet2. The conference will be held Oct. 27-30 in Indianapolis.
Among the topics to be addressed by Berkeley Lab staff are ESnet’s recently announced 100 gigabits-per-second connections to Europe, the newest release of the perfSONAR network measurement software library, Science DMZs and cybersecurity.
The annual meeting brings together a wide range of technical experts to address the challenges facing the research and education networking community as it supports data-intensive research. The conference will be hosted this year by Indiana University.
ESnet, the Department of Energy’s (DOE’s) Energy Sciences Network, is deploying four new high-speed transatlantic links, giving researchers at America’s national laboratories and universities ultra-fast access to scientific data from the Large Hadron Collider (LHC) and other research sites in Europe.
ESnet’s transatlantic extension will deliver a total capacity of 340 gigabits-per-second (Gbps), and serve dozens of scientific collaborations. To maximize the resiliency of the new infrastructure, ESnet equipment in Europe will be interconnected by dedicated 100 Gbps links from the pan-European networking organization GÉANT.
Funded by the DOE’s Office of Science and managed by Lawrence Berkeley National Laboratory, ESnet provides advanced networking capabilities and tools to support U.S. national laboratories, experimental facilities and supercomputing centers.
Among the first to benefit from the network extension will be U.S. high energy physicists conducting research at the Large Hadron Collider (LHC), the world’s most powerful particle collider, located near Geneva, Switzerland. DOE’s Brookhaven National Laboratory and Fermi National Accelerator Laboratory—major U.S. computing centers for the LHC’s ATLAS and CMS experiments, respectively—will make use of the links as soon as they are tested and commissioned.
One of ESnet’s accomplishments as a research and education network came to light at the end of 2013 during an SC13 demo in Denver, CO. Using ESnet’s 100 Gbps backbone network, NASA Goddard’s High End Computer Networking (HECN) Team achieved a record single-host-pair network data transfer rate of over 91 Gbps for a disk-to-disk file transfer. Through close collaboration with ESnet, Mid-Atlantic Crossroads (MAX), Brocade, Northwestern University’s International Center for Advanced Internet Research (iCAIR), and the University of Chicago’s Laboratory for Advanced Computing (LAC), the HECN Team showcased the ability to support next-generation data-intensive petascale science, focusing on achieving end-to-end 100 Gbps data flows (both disk-to-disk and memory-to-memory) across real-world cross-country 100 Gbps wide-area networks (WANs).
Achieving a 91+ Gbps disk-to-disk transfer rate between a single pair of high-performance RAID servers required a number of techniques working in concert to avoid bottlenecks anywhere in the end-to-end transfer process: parallelization across multiple CPU cores, RAID controllers, 40G NICs, and network data streams; a buffered, pipelined approach to each data stream, with sufficient buffering at every point in the pipeline (application, disk I/O, network socket, NIC, and network switch) to prevent data stalls; a completely clean end-to-end 100G network path (provided by ESnet and MAX) to prevent TCP retransmissions; synchronization of CPU affinities for the application process and the disk and NIC interrupts; and a suitable Linux kernel.
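To make two of those techniques concrete, here is a minimal Python sketch of pinning each parallel sender stream to its own CPU core and enlarging its socket send buffer. The host name, port, buffer sizes, and stream count are illustrative assumptions rather than the HECN Team’s actual configuration, and the RAID read path is replaced by an in-memory payload.

```python
# Minimal sketch (not the HECN Team's code): one sender process per CPU core,
# each pinned to its core and using a large socket send buffer so the
# application never stalls waiting on the kernel.
import os
import socket
from multiprocessing import Process

DEST = ("receiver.example.org", 5001)   # hypothetical receiving server
SOCK_BUF = 64 * 1024 * 1024             # 64 MiB send buffer (assumed tuning)
CHUNK = 4 * 1024 * 1024                 # 4 MiB application write size

def send_stream(core: int, nbytes: int) -> None:
    os.sched_setaffinity(0, {core})      # pin this stream to one core (Linux-specific)
    with socket.create_connection(DEST) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, SOCK_BUF)
        payload = bytearray(CHUNK)       # stand-in for data read from RAID
        sent = 0
        while sent < nbytes:
            s.sendall(payload)
            sent += CHUNK

if __name__ == "__main__":
    # One parallel stream per core, mirroring the parallelization across
    # CPU cores and network data streams described above.
    procs = [Process(target=send_stream, args=(core, 10 * 1024**3))
             for core in range(4)]
    for p in procs: p.start()
    for p in procs: p.join()
```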
The success of the HECN Team’s SC13 demo shows that real-world 100G networks can be fully and effectively utilized to transfer and share large-scale datasets in support of petascale science, using commercial off-the-shelf system, RAID, and network components together with open-source software.
View of the MyESnet Portal during the SC13 demo. The top visual shows a network topology graph with colors denoting the scale of traffic going over each part of the network; the bottom graph shows the total rate of network traffic vs. time across the backbone.
Diagram of the SC13 demo connections.
ESnet, the Department of Energy’s high-bandwidth network, is working with Caltech and DOE’s National Energy Research Scientific Computing Center (NERSC) to move large data files at an average 90 gigabits per second (Gbps) to the SC13 conference in Denver.
ESnet’s Eric Pouyoul, who set up the data transfers, said that the transfer rate is both more impressive and more realistic because the data is being moved from disk to memory. “That’s reality – moving files from memory to memory is faster, but it’s not real life,” Pouyoul said.
To set up the demo, Pouyoul created 24 files, each containing 130 gigabytes of data, and stored them at NERSC in Oakland, Calif. He configured them to transfer in loops over ESnet’s link to Denver and into the convention center – and over 15 hours achieved an average of 90 Gbps. The demo used the FDT (Fast Data Transfer) tool for transferring the files, with support from the FDT team. ESnet is supporting four 100 Gbps links into SC13 at the Colorado Convention Center in Denver.
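As a rough back-of-the-envelope check (not a figure from the original post): 90 Gbps sustained for 15 hours is about 90 × 10⁹ bits/s × 54,000 s ≈ 4.9 × 10¹⁵ bits, or roughly 600 terabytes moved, so the 24-file, ~3.1 TB data set would have looped over the link on the order of 200 times.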
Working in the Caltech booth in the SC13 Exhibition, network engineer Azher Mughal said the demonstration was created to help people understand how large datasets are becoming and how to transfer that data efficiently. Caltech manages the trans-Atlantic link that brings data from the Large Hadron Collider in Switzerland to the United States.
“We want to showcase how you can efficiently manage large data sets, the challenges you’re likely to see, and how to optimize the transfer,” Mughal said. By sending the data across a single 100 Gbps line rather than multiple 10 Gbps links, there are fewer transfer problems and it requires fewer CPUs on the computer systems, making them more efficient in the process.
It has been almost a year since we turned 25 and transferred a “whole universe of data” at Supercomputing 2011 – and that was over a single 100G link between NERSC and Seattle. Now we are close to the end of building out the fifth generation of our network, ESnet5.
In order to minimize downtime for the sites, we are building ESnet5 parallel to ESnet4, with just a configuration-driven switch of traffic from one network to the other. Since the scientific community we serve depends on the network being up, it’s important to have assurance that the transition is not disruptive in any way. The question we have heard over and over again from some of our users is: when you switch the ESnet4 production traffic to ESnet5, how confident are you that the whole network will work, and not collapse?
In this blog post, I’d like to introduce an innovative testing concept the ESnet network engineering team (with special kudos to Chris Tracy) developed and implemented to address this very problem.
The goal of our testing was to ensure that the entire set of backbone network ports would perform solidly at full 100 Gbps saturation, with no packet loss, over a 24-hour period. However, we had some limitations. With only one Ixia test set with 100 GE cards at hand to generate and receive packets, and not enough time to ship that equipment to every PoP and test each link, we had to create a test scenario that would generate confidence that all the deployed routers, optical hardware, optics, fiber connections, and the underlying fiber would perform flawlessly in production.
This implied creating a scenario where the 100 Gbps traffic stream generated by the Ixia would be carried bi-directionally over every router interface deployed in ESnet5, traversing each interface only once and covering the entire ESnet5 topology before being directed back to the test hardware. A creative traffic loop was created that traversed the entire footprint, and we called it the ‘Snake Test’. Even though we simply used the first workable loop we found to create the snake, I wonder whether, for more complex topologies, this could be framed as an instance of the NP-hard traveling salesman problem from theoretical computer science and optimization.
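For illustration, one simple way to generate such a loop for an arbitrary topology is to treat each link as a pair of directed edges (one per direction) and compute an Eulerian circuit, a walk that uses every directed edge exactly once. The sketch below shows that idea in Python with networkx; the node names are hypothetical placeholders, and this is not the method or topology the engineering team actually used.

```python
# Minimal sketch of computing a "snake" path over a toy topology: model each
# 100G link as two directed edges (one per direction) and ask for an Eulerian
# circuit, i.e. a single loop that crosses every directed edge exactly once.
# The node names below are hypothetical placeholders, not the real ESnet5 PoPs.
import networkx as nx

links = [("SUNN", "SACR"), ("SACR", "DENV"), ("DENV", "KANS"),
         ("KANS", "CHIC"), ("CHIC", "WASH"), ("WASH", "ATLA"),
         ("ATLA", "HOUS"), ("HOUS", "DENV")]          # toy backbone topology

g = nx.MultiDiGraph()
for a, b in links:
    g.add_edge(a, b)   # one direction of the link
    g.add_edge(b, a)   # the reverse direction

# Every link contributes one edge in each direction, so in-degree equals
# out-degree at every node; if the topology is connected, a circuit exists.
assert nx.is_eulerian(g)

snake = list(nx.eulerian_circuit(g, source="SUNN"))
print(" -> ".join(u for u, _ in snake) + " -> SUNN")
```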
The diagram below illustrates the test topology:
So, after sending around 1.2 petabytes of data in 24 hours, and accounting for surprise fiber maintenance events that caused the link to flap, the engineering team was happy to see a zero-loss result.
Here’s a sample portion of the data collected:
Automation is key – utility scripts had been built to do things like load and unload the configuration on the routers, poll the firewall counters (to check for ingress/egress loss at every interface), clear statistics, and parse the resulting log files into CSV (a snapshot of which you see in the picture above) for analysis.
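As a flavor of what such a utility script might look like, here is a minimal Python sketch that turns per-interface counter logs into CSV and flags any interface showing loss. The log format, field names, and file paths are assumptions for illustration; the team’s actual scripts and router output are not reproduced here.

```python
# Minimal sketch, assuming a hypothetical log format of
# "timestamp interface in_drops out_drops" per line.
import csv
import sys

def logs_to_csv(log_path: str, csv_path: str) -> None:
    with open(log_path) as log, open(csv_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["timestamp", "interface", "in_drops", "out_drops", "lossless"])
        for line in log:
            ts, iface, in_drops, out_drops = line.split()
            # Flag any interface whose counters show ingress or egress loss.
            lossless = int(in_drops) == 0 and int(out_drops) == 0
            writer.writerow([ts, iface, in_drops, out_drops, lossless])

if __name__ == "__main__":
    logs_to_csv(sys.argv[1], sys.argv[2])
```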
Phew! The transition from ESnet4 to ESnet5 continues without a hitch. Watch for the completion news; it may come quicker than you think…
The ESnet tools team is pleased to announce that they are launching a brand-new interface at http://my.es.net that will showcase real-time statistics on the 100 Gbps demos running at SC11.
This interface is designed to provide the community with access to multiple live visualization tools of 100 Gbps network utilization and topology. The tools team continues to build out the MyESnet portal to meet the evolving needs of the community—we will keep you posted on new developments.
ESnet is supporting eight 100 Gbps community projects at SC11:
· Brookhaven National Laboratory with Stony Brook University: Booth 2443
· Indiana University with collaborators Brocade, Ciena, Data Direct Networks, IBM, Whamcloud, and ZIH: Booth 2239
· Lawrence Berkeley National Laboratory: Booth 512
· NASA with collaborators from MAX, the International Center for Advanced Internet Research (iCAIR) at Northwestern University, the Laboratory for Advanced Computing at the University of Chicago, Open Cloud Consortium, Ciena, Alcatel, StarLight, MREN, Fujitsu, Brocade, Force10, Juniper, Arista: Booths 615, 2615, 635
· Orange Silicon Valley with the InfiniBand Trade Association and OpenFabrics Alliance: Booth 6010
· The California Institute of Technology: Booth 1223
· Fermi National Accelerator Laboratory and University of California, San Diego: Booths 1203 and 1213