International Climate Community Kicks Off a Year of Networking!

This January, the Earth System Grid Federation (ESGF) launched a new working group, the International Climate Network Working Group, to help set up and optimize network infrastructure for its climate data sites around the world. These sites need network connections that can handle petabytes of modeling and observational data traversing more than 13,000 miles of networks (more than half the circumference of the Earth!) and spanning two oceans.

By the end of 2014, the working group aims to achieve at least 4 Gbps of data transfer throughput at five ESGF climate data centers: PCMDI/LLNL (US), NCI/ANU (AU), CEDA/STFC (UK), DKRZ (DE), and KNMI (NL). This goal runs in parallel with the Enlighten Your Research Global international networking program award that ESGF received in November 2013. The initiative is led by Dean Williams of Lawrence Livermore National Laboratory and ESnet's Science Engagement Team, along with collaborating international network organizations in Australia (AARNet), Germany (DFN), the Netherlands (SURFnet), and the UK (Janet). We are helping to shepherd ESGF's project and working group to make sure all of the climate sites get up and running at the network speeds needed for the petascale climate data expected within the next five years.
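
For a rough sense of what that throughput target means at petabyte scale, the short calculation below (an illustrative back-of-the-envelope sketch, not an ESGF figure) shows that at a sustained 4 Gbps, a single petabyte takes roughly three weeks to move end to end.

```python
# Illustrative back-of-the-envelope estimate: how long a dataset takes to move
# at a given sustained rate, assuming ideal, uninterrupted throughput.

def transfer_days(size_bytes: float, rate_gbps: float) -> float:
    """Days needed to move size_bytes at rate_gbps (decimal units)."""
    bits = size_bytes * 8
    seconds = bits / (rate_gbps * 1e9)
    return seconds / 86400

petabyte = 1e15  # bytes
print(f"1 PB at 4 Gbps: {transfer_days(petabyte, 4):.1f} days")  # ~23 days
print(f"1 PB at 1 Gbps: {transfer_days(petabyte, 1):.1f} days")  # ~93 days
```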

As we work closely with ESGF to pave the way for climate science, we look forward to developing a new set of networking best practices to help future climate science collaborations. In all, we are excited to get this started and see their science move forward!

The Enlighten Your Research Global program will set up, optimize and/or troubleshoot 5 ESGF locations in different countries throughout 2014.

NASA HECN Team Achieves Record Disk-to-Disk 91+ Gbps via ESnet

One of ESnet's accomplishments as a research and education network came to light at the end of 2013 during an SC13 demo in Denver, CO. Using ESnet's 100 Gbps backbone network, NASA Goddard's High End Computer Networking (HECN) Team achieved a record single-host-pair network data transfer rate of over 91 Gbps for a disk-to-disk file transfer. Through close collaboration with ESnet, Mid-Atlantic Crossroads (MAX), Brocade, Northwestern University's International Center for Advanced Internet Research (iCAIR), and the University of Chicago's Laboratory for Advanced Computing (LAC), the HECN Team showcased the ability to support next-generation data-intensive petascale science, focusing on achieving end-to-end 100 Gbps data flows (both disk-to-disk and memory-to-memory) across real-world cross-country 100 Gbps wide-area networks (WANs).

Achieving a 91+ Gbps disk-to-disk transfer rate between a single pair of high-performance RAID servers required a number of techniques working in concert to avoid any bottleneck in the end-to-end transfer process: parallelization across multiple CPU cores, RAID controllers, 40G NICs, and network data streams; a buffered, pipelined approach to each data stream, with sufficient buffering at each point in the pipeline (application, disk I/O, network socket, NIC, and network switch) to prevent data stalls; a completely clean end-to-end 100G network path (provided by ESnet and MAX) to prevent TCP retransmissions; alignment of CPU affinities between the application processes and the disk and NIC interrupts; and a suitably tuned Linux kernel.
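
As a minimal illustration of two of those techniques, pinning the transfer process to specific CPU cores and requesting large socket buffers, the sketch below uses standard Linux and Python facilities. It is a stand-in under assumed settings, not the HECN Team's actual tooling; the core IDs, buffer size, endpoint, and file path are placeholders.

```python
# Minimal sketch: pin the sending process to chosen CPU cores and request
# large socket buffers, two of the techniques described above.
# Core IDs, buffer size, endpoint, and file path are illustrative placeholders.
import os
import socket

CORES = {2, 3}                 # cores assumed to sit near the NIC's NUMA node
BUF_BYTES = 64 * 1024 * 1024   # request 64 MB socket buffers

# Pin this process so it stays on the chosen cores (Linux only).
os.sched_setaffinity(0, CORES)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Ask the kernel for large send/receive buffers; the effective size is still
# capped by the host's net.core.wmem_max / net.core.rmem_max settings.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_BYTES)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_BYTES)

sock.connect(("receiver.example.org", 5201))   # placeholder endpoint
with open("/data/chunk0", "rb") as f:          # placeholder data file
    sock.sendfile(f)                           # stream one file over the socket
sock.close()
```

In the demo itself, many such streams ran in parallel, spread across multiple CPU cores, RAID controllers, and 40G NICs so that no single component became the bottleneck.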

The success of the HECN Team's SC13 demo shows that real-world 100G networks can be effectively utilized to transfer and share large-scale datasets in support of petascale science, using commercial off-the-shelf system, RAID, and network components together with open source software.

View of the MyESnet Portal during the SC13 demo. Top visual shows a network topology graph with colors denoting the amount of traffic going over each part of the network. Bottom graph shows the total rate of network traffic vs. time across the backbone.
Diagram of the SC13 demo connections.

perfSONAR-PS Project Reaches Deployment Milestone

The perfSONAR-PS project reached a milestone in December 2013 by surpassing 1,000 deployed software instances. The perfSONAR software is designed to help network operators and end users monitor end-to-end performance and debug problems when they arise. As the number of deployments grows, the software becomes more effective, offering better coverage across more network paths, including ESnet and the connections to Department of Energy and NSF-funded resources. perfSONAR-PS is a joint effort among ESnet, Fermilab, Georgia Tech, Indiana University, Internet2, SLAC, and the University of Delaware.
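
To give a flavor of the kind of end-to-end measurement that perfSONAR schedules and archives automatically, here is a toy memory-to-memory throughput probe over the loopback interface. It is purely an illustrative stand-in written for this note, not part of the perfSONAR-PS toolkit; the port number and transfer size are arbitrary.

```python
# Toy memory-to-memory throughput probe over loopback, illustrating the kind
# of measurement perfSONAR automates across real network paths.
# Not perfSONAR code; port and transfer size are arbitrary.
import socket
import threading
import time

PORT = 5201
PAYLOAD = b"\0" * (1 << 20)      # 1 MiB chunk
TOTAL_BYTES = 512 * (1 << 20)    # send 512 MiB in total

def sink():
    """Accept one connection and discard everything received."""
    srv = socket.create_server(("127.0.0.1", PORT))
    conn, _ = srv.accept()
    while conn.recv(1 << 20):
        pass
    conn.close()
    srv.close()

threading.Thread(target=sink, daemon=True).start()
time.sleep(0.2)                  # give the listener a moment to start

sock = socket.create_connection(("127.0.0.1", PORT))
start = time.monotonic()
sent = 0
while sent < TOTAL_BYTES:
    sock.sendall(PAYLOAD)
    sent += len(PAYLOAD)
sock.close()
elapsed = time.monotonic() - start
print(f"{sent * 8 / elapsed / 1e9:.2f} Gbps over loopback")
```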

Chart of perfSONAR-PS software deployments (January 2014).