ESnet, Globus Experts Design a Better Portal for Scientific Discovery

Globus, Science DMZ provide new architecture to meet demand for accessing shared data

These days, it’s easy to overlook the fact that the World Wide Web was created nearly 30 years ago primarily to help researchers access and share scientific data. Over the years, the web has evolved into a tool that helps us eat, shop, travel, watch movies and even monitor our homes.

Meanwhile, scientific instruments have become much more powerful, generating massive datasets, and international collaborations have proliferated. In this new era, the web has become an essential part of the scientific process, but the most common method of sharing research data remains firmly rooted in the earliest days of the web. This can be a huge impediment to scientific discovery.

That’s why a team of networking experts from the Department of Energy’s Energy Sciences Network (ESnet), working with the Globus team from the University of Chicago and Argonne National Laboratory, has designed a new approach that makes data sharing faster, more reliable and more secure. In an article published Jan. 15 in PeerJ Computer Science, the team describes “The Modern Research Data Portal: a design pattern for networked, data-intensive science.”

“Both the size of datasets and the quantity of data objects have exploded, but the typical design of a data portal hasn’t really changed,” said co-author Eli Dart, a network engineer with ESnet. “Our new design preserves that ease of use, but easily scales up to handle the huge amounts of data associated with today’s science.”

Read the full story.

The Modern Research Data Portal design pattern from a network architecture perspective: The Science DMZ includes multiple DTNs that provide for high-speed transfer between network and storage. Portal functions run on a portal server, located on the institution’s enterprise network. The DTNs need only speak the API of the data management service (Globus in this case).
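The key idea in the pattern is that the portal server handles only control traffic, while the data itself moves directly between DTNs in the Science DMZ. The sketch below illustrates that separation; the classes and endpoint names are hypothetical stand-ins, not the real Globus SDK (an actual portal would call the Globus Transfer API):

```python
# Hypothetical sketch of the MRDP control/data split. TransferService is a
# stand-in for a managed transfer service such as Globus Transfer; the
# portal issues API calls but is never on the data path.

class TransferService:
    """Stand-in for a managed transfer service (e.g., the Globus Transfer API)."""

    def submit_transfer(self, source_dtn, dest_dtn, path):
        # The service orchestrates the transfer; data flows DTN-to-DTN
        # through the Science DMZ, never through the portal server.
        return {"source": source_dtn, "dest": dest_dtn,
                "path": path, "status": "ACCEPTED"}


class PortalServer:
    """Portal logic on the enterprise network: search, auth, and transfer
    requests travel over the control channel only."""

    def __init__(self, service, site_dtn):
        self.service = service
        self.site_dtn = site_dtn  # hypothetical DTN endpoint name

    def request_download(self, user_dtn, dataset_path):
        # One control-channel API call; the DTNs then do the heavy lifting.
        return self.service.submit_transfer(self.site_dtn, user_dtn, dataset_path)


portal = PortalServer(TransferService(), site_dtn="science-dmz-dtn01")
task = portal.request_download("user-endpoint", "/datasets/climate/cmip5")
print(task["status"])  # ACCEPTED
```

Because the portal server never touches the data, it can live on the ordinary enterprise network while the DTNs sit behind the Science DMZ's high-speed, friction-free path.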


Berkeley Lab and ESnet Document Flow, Performance of 56 Terabytes Climate Data Transfer

The simulated storms seen in this visualization are generated from the finite volume version of NCAR’s Community Atmosphere Model. Visualization by Prabhat (Berkeley Lab).

In a recent paper entitled “An Assessment of Data Transfer Performance for Large‐Scale Climate Data Analysis and Recommendations for the Data Infrastructure for CMIP6,” experts from Lawrence Berkeley National Laboratory (Berkeley Lab) and ESnet (the Energy Sciences Network) document the data transfer workflow, transfer performance, and other aspects of moving approximately 56 terabytes of climate model output data for further analysis.

The data, required for tracking and characterizing extratropical storms, needed to be moved from the distributed Coupled Model Intercomparison Project (CMIP5) archive to the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab.

The authors found that there is significant room for improvement in the data transfer capabilities currently in place for CMIP5, both in terms of workflow mechanics and in data transfer performance. In particular, the paper notes that performance improvements of at least an order of magnitude are within technical reach using current best practices.
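To put an order-of-magnitude improvement in perspective, a back-of-the-envelope calculation shows what it means for a dataset of this size. The rates below are illustrative round numbers, not the paper's measured figures:

```python
def transfer_hours(size_bytes, rate_bps):
    """Idealized wall-clock hours to move size_bytes at a sustained rate_bps,
    ignoring protocol overhead, checksumming, and restarts."""
    return size_bytes * 8 / rate_bps / 3600


SIZE = 56e12  # ~56 terabytes of climate model output

# At a sustained 1 Gbps (plausible for an untuned transfer path):
print(round(transfer_hours(SIZE, 1e9), 1))   # 124.4 hours, roughly 5 days

# One order of magnitude faster, a sustained 10 Gbps:
print(round(transfer_hours(SIZE, 10e9), 1))  # 12.4 hours
```

The difference between a multi-day and an overnight transfer is exactly the kind of gain that current best practices, such as Science DMZ architectures and managed transfer tools, aim to deliver.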

To illustrate this, the authors used Globus to transfer the same raw data set between NERSC and the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory.

Read the Globus story:
Read the paper:

30 Years Ago this Month ESnet Rolled Out its Rollout Plans

1988 ESnet map

Although officially established in 1986, ESnet did not formally begin network operations until 1988, as the Department of Energy’s Magnetic Fusion Energy Network (MFEnet, affectionately known as MuffyNet) and High Energy Physics Network (HEPnet) were gradually melded into a single entity.

In January 1988, then-ESnet head Jim Leighton laid out the plans for the new network in the Buffer, the monthly user newsletter for the National Magnetic Fusion Energy Computing Center (known today as NERSC). At the time, ESnet was managed by the Networking and Engineering Group at the center.

After giving some background on the organization of ESnet, Leighton wrote, “Now you are probably saying to yourself that this really is very exciting stuff, but it would be even more exciting if we knew when we could expect to see something running. Well, I just happen to be ready to outline our schedule for the next two years:

“January 1988: We believe that the new approach ESnet is taking will require much closer coordination with people responsible for the local area networking at each site. Accordingly, we are planning to convene a new committee in January, with sites involved in Phase I (see below) of ESnet deployment (“Boy, a new committee, that is exciting!” you are probably saying to yourself.). Additional site members will be added to the committee as the implementation continues.

“Phase 0 (January-March 1988): We expect to bring up all the sites on the X.25 backbone, including Brookhaven National Laboratory (BNL), CERN, Fermi National Accelerator Laboratory (FNAL), Florida State University (FSU), Lawrence Berkeley Laboratory (LBL), Lawrence Livermore National Laboratory (LLNL), and the Massachusetts Institute of Technology (MIT). Additional foreign sites will be added during the year.

“Demonstration (March 1988): During the MFESIG meeting to be held at LBL, we expect to demonstrate some ‘beta release’ capabilities of ESnet.

“Phase I (June-September 1988): We will begin deploying and installing a terrestrial 56-kilobits-per-second backbone for ESnet. Sites affected include Argonne National Laboratory (ANL), FSU, GA Technologies, Los Alamos National Laboratory, LBL, MFECC, Princeton Plasma Physics Laboratory, and the University of Texas at Austin. No sites will be disconnected from MFEnet during this phase.

“Phase II (October-December 1988): We will complete the ESnet backbone and connect additional sites to the backbone. This phase will require some sites to be disconnected from MFEnet. The MFEnet to ESnet transition gateway must be installed during this phase. Additional sites affected include CEBAF, FNAL, MIT, Oak Ridge National Laboratory, and UCLA.

“Phase III (Calendar Year 1989): We will continue to switch major hub sites from MFEnet to MFEnet II, along with all secondary sites connected through those hub sites.”

Read more from the Buffer about the 1988 ESnet launch.