ESnet Highlights from ZeekWeek’21

Fatema Bannat Wala presenting at ZeekWeek21

Slides and videos from ZeekWeek have just been made available — here are links to ESnet highlights.


ZeekWeek, an annual fall conference organized by the Zeek Project, took place online from October 13-15 this year. The conference drew over 2,000 registered participants from the open source user community, who got together to share the latest and greatest about this cybersecurity and network monitoring software tool.

Berkeley Lab staff member Vern Paxson developed the precursor to the Zeek intrusion detection software, then called Bro, in 1994. As an early adopter, ESnet’s cybersecurity team has strong relationships with the Zeek community, and this ZeekWeek was an opportunity to showcase advances in and uses of the software by ESnet and the wider research and education networking community.


The talk “DNS and Spoofed traffic investigation with Zeek,” presented by Fatema Bannat Wala, discussed how ESnet uses Zeek for network traffic analysis and investigation, triaging abnormal activity when it occurs on our network.

The talks “A Better Way to Capture Packets with DPDK” and “Details for DPDK plugin development and performance measurement,” presented by Vlad Grigorescu and Scott Campbell, detailed the development process of the plugin and the performance enhancements it brings to network packet capture.

Fatema Bannat Wala also did a training session on “Introduction to Zeek,” which provided hands-on experience with Zeek tools and information about how to get involved with the collaboration.

ESnet’s cybersecurity team looks forward to continued collaboration with the Zeek community, attending next year’s ZeekWeek, and to contributing future code enhancements to this great software ecosystem.

ESnet Machine Learning Researchers Win Best Paper at MLN ‘2021!

MLN '2021 Best Paper Award Notification

Sheng Shen, Mariam Kiran, and Bashir Mohammed have just been awarded the Best Paper award at the International Conference on Machine Learning for Networking (MLN). Sponsored by the Conservatoire National des Arts et Métiers (CNAM), the École Supérieure d’Ingénieurs en Électrotechnique et Électronique (ESIEE), and Laboratoire d’Informatique Gaspard-Monge (LIGM), MLN is being held virtually 1-3 December 2021.

The paper, “DynamicDeepFlow: An Approach for Identifying Changes in Network Traffic Flow Using Unsupervised Clustering,” uses a hybrid of a deep learning variational autoencoder and a shallow-learning k-means model to help identify unique traffic patterns across ESnet. These unique patterns can help identify whether a new experiment has started or whether current network bandwidth usage is changing.
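For readers curious about the general approach, here is a minimal sketch of a VAE-plus-k-means pipeline of this kind. The layer sizes, feature count, and training details are illustrative assumptions, not the architecture from the paper.

```python
# Minimal sketch: encode traffic-flow features with a small VAE, then cluster
# the latent representations with k-means. Illustrative only.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class FlowVAE(nn.Module):
    def __init__(self, n_features=10, latent_dim=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        self.mu = nn.Linear(32, latent_dim)
        self.logvar = nn.Linear(32, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                 nn.Linear(32, n_features))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def train(model, flows, epochs=50, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        recon, mu, logvar = model(flows)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        loss = nn.functional.mse_loss(recon, flows) + kl
        opt.zero_grad(); loss.backward(); opt.step()

# flows: one row of aggregated traffic statistics per time window (placeholder data here)
flows = torch.rand(500, 10)
model = FlowVAE()
train(model, flows)
with torch.no_grad():
    latent = model(flows)[1].numpy()          # latent means as features
labels = KMeans(n_clusters=4, n_init=10).fit_predict(latent)  # shallow clustering step
```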

DynamicDeepFlow (DDF) model structure

“We’re very excited to receive this recognition and the conference was a wonderful opportunity to exchange thoughts and ideas with peers in France. MLN is a conference dedicated to discussing machine learning applications in networks. Our next task is to integrate DynamicDeepflow with Netpredict to show real-time information in ESnet data” — Mariam Kiran

Papers from MLN will be published as post-proceedings in Springer’s Lecture Notes in Computer Science (LNCS).

ESnet Highlights from the National Science Foundation’s Cybersecurity Summit ’21

Trusted CI, the National Science Foundation (NSF) Cybersecurity Center of Excellence, hosts a yearly cybersecurity summit, inviting people from various NSF-funded research organizations to share innovations and ideas. Here are some videos of ESnet presentations.

Scott Campbell presented “ESnet Security Group Impact on Network Architecture,” where he discussed some of the social, technical, and architectural outcomes of the ESnet6 network upgrade that were beneficial to the organization. Because the security group was involved early, security design elements were incorporated into workflows at early stages and were both tightly integrated and vetted during the core design process. This early involvement also heightened the security group’s visibility, which led to a better understanding of how the various groups interact and their different methods of problem-solving and time management.

Eli Dart and Fatema Bannat Wala presented “Best practices for securing Science DMZ,” focusing on disentangling security policy and enforcement for science flows from traditional security approaches for business systems, and on using the Science DMZ model to protect high-performance science flows. They discussed thinking of the Science DMZ as a security architecture that provides useful, implementable security controls without impacting performance.

ESnet Scientists awarded best paper at SC21 INDIS!

A combined team from ESnet and Lehigh University was awarded Best Paper for “Exploring the BBRv2 Congestion Control Algorithm for use on Data Transfer Nodes” at the 8th IEEE/ACM International Workshop on Innovating the Network for Data-Intensive Science (INDIS 2021), held in conjunction with the 2021 IEEE/ACM International Conference for High Performance Computing, Networking, Storage and Analysis (SC21) on Monday, November 15, 2021.

The team was composed of:

  • Brian Tierney, Energy Sciences Network (ESnet)
  • Eli Dart, Energy Sciences Network (ESnet)
  • Ezra Kissel, Energy Sciences Network (ESnet)
  • Eashan Adhikarla, Lehigh University

The paper can be found here. Slides from the presentation are here. In this Q+A, ESnet spoke with the award-winning team about their research — answers are from the team as a whole.

INDIS 21 Best Paper Certificate

The paper is based on extensive testing and controlled experiments with the BBR (Bottleneck Bandwidth and Round-trip propagation time), BBRv2, and CUBIC (Cubic Function Binary Increase Congestion Control) TCP congestion control algorithms. What was the biggest lesson from this testing?

BBRv2 represents a fundamentally different approach to TCP congestion control. CUBIC (as well as Hamilton, Reno, and many others) are loss-based, meaning that they interpret packet loss as congestion and therefore require significant network engineering effort to achieve high performance. BBRv2 is different in that it measures the network path and builds a model of the path – it then paces itself to avoid loss and queueing. In practical terms, this means that BBRv2 is resilient to packet loss in a way that CUBIC is not. This comes through loud and clear in our data.
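As a side note for readers who want to experiment themselves: on Linux, the congestion control algorithm can be selected per connection. The sketch below is a minimal illustration, not the team’s test code; the host name and port are placeholders, and BBRv2 availability (and its module name, e.g. “bbr2”) depends on the kernel build.

```python
# Minimal sketch: pick a TCP congestion control algorithm for one connection.
# Linux-only; requires the chosen module (e.g. tcp_cubic, tcp_bbr) to be loaded.
import socket

def connect_with_cc(host, port, cc="cubic"):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # TCP_CONGESTION is Linux-specific and exposed in Python 3.6+
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, cc.encode())
    s.connect((host, port))
    return s

sock = connect_with_cc("dtn.example.org", 5201, cc="cubic")   # placeholder endpoint
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))  # confirm the algorithm in use
```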

What part of the testing was the most difficult and/or interesting?

We ran a large number of tests in a wide range of scenarios. It can be difficult to keep track of all the test configurations, so we wrote a “test harness” in Python that allowed us to keep track of all the testing parameters and resulting data sets.

The harness also allowed us to better compare results collected over real-world paths with those from our testbed environments. Managing the deployment of the testing environment through containers also allowed for rapid setup and improved reproducibility.
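As a rough illustration of that idea (not the team’s actual harness), a parameter sweep that drives iperf3 and saves its JSON reports might look like the sketch below; the endpoints, durations, and output paths are placeholder assumptions.

```python
# Minimal sketch of a congestion-control test sweep using iperf3.
# Endpoints, durations, and paths are placeholders.
import itertools, json, pathlib, subprocess

HOSTS = ["dtn1.example.net", "dtn2.example.net"]   # placeholder test endpoints
CCS = ["cubic", "bbr"]                             # available module names vary by kernel
STREAMS = [1, 4, 8]
OUT = pathlib.Path("results"); OUT.mkdir(exist_ok=True)

for host, cc, streams in itertools.product(HOSTS, CCS, STREAMS):
    cmd = ["iperf3", "-c", host, "-t", "30", "-P", str(streams),
           "-C", cc, "-J"]                         # -C selects congestion control (Linux), -J emits JSON
    run = subprocess.run(cmd, capture_output=True, text=True)
    record = {"host": host, "cc": cc, "streams": streams,
              "report": json.loads(run.stdout) if run.returncode == 0 else None,
              "error": run.stderr if run.returncode != 0 else None}
    with open(OUT / f"{host}_{cc}_{streams}.json", "w") as f:
        json.dump(record, f, indent=2)             # one result file per parameter combination
```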

You provide readers with links to great resources so they can do their own testing and learn more about BBRv2. What do you hope readers will learn?

We hope others will test BBRv2 in high-performance research and education environments. There are still some things that we don’t fully understand; for example, there are some cases where CUBIC outperforms BBRv2 on paths with very large buffers. It would be great for this to be better characterized, especially in R&E network environments.

What’s the next step for ESnet research into BBRv2? How will you top things next year?

We want to further explore how well BBRv2 performs at 100G and 400G. We would also like to spend additional time performing a deeper analysis of the current (and newly generated) results to gain insights into how BBRv2 performs compared to other algorithms across varied networking infrastructure. Ideally we would like to provide strongly substantiated recommendations on where it makes sense to deploy BBRv2 in the context of research and educational network applications.

Graduate students publish on network telemetry with ESnet

Two graduate students working with ESnet have recently published papers in IEEE and ACM workshops.

Bibek Shrestha, a graduate student at the University of Nevada, Reno, and his advisor Engin Arslan worked with Richard Cziva from ESnet to publish “INT Based Network-Aware Task Scheduling for Edge Computing”. In the paper, Bibek investigated the use of in-band network telemetry (INT) for real-time in-network task scheduling. Bibek’s experimental analysis using various workload types and network congestion scenarios revealed that enhancing edge-computing task scheduling with high-precision network telemetry can lead to up to a 40% reduction in data transfer times and up to a 30% reduction in total task execution times by favoring edge servers in uncongested (or mildly congested) sections of the network when scheduling tasks. The paper will appear in the 3rd Workshop on Parallel AI and Systems for the Edge (PAISE), held in conjunction with the IEEE IPDPS 2021 conference on May 21st, 2021, in Portland, Oregon.
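To make the scheduling idea concrete, here is a hypothetical sketch of choosing an edge server based on per-path INT measurements. The telemetry fields, weights, and server names are illustrative assumptions, not the paper’s actual policy.

```python
# Hypothetical sketch: prefer edge servers whose network path reports the
# least congestion in INT telemetry, lightly penalized by task backlog.
from dataclasses import dataclass

@dataclass
class PathTelemetry:
    server: str
    queue_depth: int      # worst per-hop queue occupancy reported by INT (illustrative units)
    link_util: float      # worst-case link utilization along the path, 0-1

def pick_server(paths, tasks_pending):
    # Lower score = less congested path and smaller backlog (weights are illustrative).
    def score(p):
        return 0.7 * p.link_util + 0.3 * (p.queue_depth / 1000) + 0.1 * tasks_pending.get(p.server, 0)
    return min(paths, key=score).server

paths = [PathTelemetry("edge-a", 40, 0.85), PathTelemetry("edge-b", 5, 0.20)]
print(pick_server(paths, {"edge-a": 2, "edge-b": 3}))   # -> edge-b (less congested path)
```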

Zhang Liu, a former ESnet intern and a current graduate student at the University of Colorado at Boulder, worked with the ESnet High Touch Team – Chin Guok, Bruce Mah, Yatish Kumar, and Richard Cziva – on fastcapa-ng, ESnet’s telemetry processing software. In the paper “Programmable Per-Packet Network Telemetry: From Wire to Kafka at Scale,” Zhang showed the scaling and performance characteristics of fastcapa-ng and highlighted the most critical performance considerations that allow pushing 10.4 million telemetry packets per second to Kafka with only 5 CPU cores, which is more than enough to handle 170 Gbit/s of original traffic with a 1512B MTU. This paper will appear in the 4th International Workshop on Systems and Network Telemetry and Analytics (SNTA 2021), held at the ACM HPDC 2021 conference in Stockholm, Sweden, 21-25 June 2021.

Congratulations Bibek and Zhang!


If you are a networked systems research student looking to collaborate with us on network measurements, please reach out to Richard Cziva. If you are interested in a summer internship with ESnet, please visit this page.

How the World’s Fastest Science Network Was Built

Created in 1986, the U.S. Department of Energy’s (DOE’s) Energy Sciences Network (ESnet) is a high-performance network built to support unclassified science research. ESnet connects more than 40 DOE research sites—including the entire National Laboratory system, supercomputing facilities and major scientific instruments—as well as hundreds of other science networks around the world and the Internet.

Funded by DOE’s Office of Science and managed by the Lawrence Berkeley National Laboratory (Berkeley Lab), ESnet moves about 51 petabytes of scientific data every month. Here is a 13-step guide to how ESnet has evolved over 30 years.

Step 1: When fusion energy scientists inherit a cast-off supercomputer, add 4 dialup modems so the people at the Princeton lab can log in. (1975)

Step 2: When landlines prove too unreliable, upgrade to satellites! Data screams through space. (1981)

Step 3: Whose network is best? High Energy Physics (HEPnet)? Fusion Physics (MFEnet)? Why argue? Merge them into one, the Energy Sciences Network (ESnet), run by the Department of Energy! Go ESnet! (1986)

Step 4: Make it even faster with DUAL Satellite links! We’re talking 56 kilobits per second! Except for the Princeton fusion scientists – they get 112 Kbps! (1987)

Step 5:  Whoa, when an upgrade to 1.5 MEGAbits per second isn’t enough, add ATM (not the money machine, but Asynchronous Transfer Mode) to get more bang for your buck. (1995)

Step 6: Duty now for the future—roll out the very first IPv6 address to ensure there will be enough Internet addresses for decades to come. (2000)

Step 7: Crank up the fastest links in the network to 10 GIGAbits per second—16 times faster than the old gear—a two-generation leap in network upgrades at one time. (2003)

Step 8: Work with other networks to develop really cool tools, like the perfSONAR toolkit for measuring and improving end-to-end network performance and OSCARS (On-Demand Secure Circuits and Advance Reservation System), so you can reserve a high-speed, end-to-end connection to make sure your data is delivered on time. (2006)

Step 9: Why just rent fiber? Pick up your own dark fiber network at a bargain price for future expansion. In the meantime, boost your bandwidth to 100G for everyone. (2012)

Step 10: Here’s a cool idea, come up with a new network design so that scientists moving REALLY BIG DATASETS can safely avoid institutional firewalls, call it the Science DMZ, and get research moving faster at universities around the country. (2012)

Step 11: We’re all in this science thing together, so let’s build faster ties to Europe. ESnet adds three 100G lines (and a backup 40G link) to connect researchers in the U.S. and Europe. (2014)

Step 12: 100G is fast, but it’s time to get ready for 400G. To pave the way, ESnet installs a production 400G network between facilities in Berkeley and Oakland, Calif., and even provides a 400G testbed so network engineers can get up to speed on the technology. (2015)

Step 13: Celebrate 30 years as a research and education network leader, but keep looking forward to the next level. (2016)

ESnet Co-Leads Washington Workshop on Developing Prototype SDN Network

About 100 networking experts from academia, industry, national labs, and federal agencies met for a two-day workshop at the National Science Foundation to plan a path forward to develop, deploy, and operate a prototype SDN network. SDN, or Software Defined Networking, is an emerging technology paradigm aimed at making it easier for software applications to automatically configure and control the various layers of the network to improve flexibility, predictability, and reliability.

ESnet Chief Technologist Inder Monga was the lead organizer of the workshop, and ESnet network engineer Eric Pouyoul gave a talk on science drivers for SDN. Monga also led a breakout session on “Technology and Operational Gap Analysis.” ESnet has been a pioneer in developing and deploying SDN technology in support of data-intensive science for almost a decade, starting with research on virtual network circuits that eventually culminated in the facility’s OSCARS project, recipient of a 2013 R&D 100 Award.

The invitation-only workshop was held at the National Science Foundation in Arlington, Va., and included speakers from the White House Office of Science and Technology Policy (OSTP), Google, DARPA, Internet2, SRI, and Brocade, as well as ESnet. Among the areas covered were transparency and interoperation among SDN domains, security and identity management, and the participation of equipment vendors to advance technology transfer.

The workshop was organized after OSTP directed federal agencies participating in the Networking and Information Technology Research and Development (NITRD) Subcommittee’s Large Scale Networking (LSN) Coordinating Group to plan and hold an LSN workshop. The goal was to bring together representatives from federal agencies, the commercial sector, and the networking and distributed systems research community to explore and report on the need for a prototype SDN network.

The workshop participants will draft a report documenting recommendations for needed R&D, resources and collaboration to deploy and operate the prototype SDN network and to identify future SDN research needs.

On the eve of the workshop, Federal Computer Week magazine published an article about federal agencies looking into SDN. Monga was among the sources interviewed for the article, which describes SDN as the next major architectural change looming for the IT community.

  ESnet Chief Technologist Inder Monga

ESnet’s OSCARS Bandwidth Reservation System Wins R&D 100 Award

Widely recognized as a mark of excellence, the R&D 100 Awards are the only industry-wide competition rewarding the practical applications of science. Among the 2013 winners of this prestigious award is the latest version of OSCARS, the On-Demand Secure Circuits and Advance Reservation System, whose development was led by ESnet. OSCARS is a software service that creates dedicated bandwidth channels for scientists who need to move massive, time-critical data sets around the world.

What makes OSCARS so useful is that it can automatically create end-to-end circuits crossing multiple network domains. Before OSCARS, this was a time-consuming process: in 2010, for example, ESnet engineers needed 10 hours of phone calls and about 100 emails over three months to do what one person can now do in five minutes using OSCARS. The automation of this complex process—through a technique known as Software Defined Networking—is accelerating scientific discovery in high-energy physics and many other disciplines.

“It’s wonderful to see the innovation that’s gone into the latest version of OSCARS recognized with an R&D100 Award,” said ESnet Director Greg Bell. “This early example of Software Defined Networking is yet another example of research networks taking the lead. But OSCARS is not just an ESnet achievement. We’re grateful for collaborations with many partners over the last decade, as the project matured from a bold idea into a production software suite.”


ECSEL leverages OpenFlow to demonstrate new network directions

ESnet and its collaborators successfully completed three days of demonstrating its End-to-End Circuit Service at Layer 2 (ECSEL) software at the Open Networking Summit held at Stanford a couple of weeks ago. Our goal is to build “zero-configuration circuits” to help science applications seamlessly use networks for optimized end-to-end data transport. ECSEL, developed in collaboration with NEC, Indiana University, and the University of Delaware, builds on some exciting new conceptual thinking in networking.

Wrangling Big Data 

To put ECSEL in context, the proliferating tide of scientific data flows – anticipated at 2 petabytes per second as planned large-scale experiments get in motion – is already challenging networks to be exponentially more efficient. Wide area networks have vastly increased bandwidth and enable flexible, distributed, scientific workflows that involve connecting multiple scientific labs to a supercomputing site, a university campus, or even a cloud data center.

Heavy network traffic to come

The increasing adoption of distributed, service-oriented computing means that resource and vendor independence for service delivery is a key priority for users. Users expect seamless end-to-end performance and want the ability to send data flows on demand, no matter how many domains and service providers are involved.  The hitch is that even though the Wide Area Network (WAN) can have turbocharged bandwidth, at these exponentially increasing rates of network traffic even a small blockage in the network can seriously impair the flow of data, trapping users in a situation resembling commute conditions on sluggish California freeways. These scientific data transport challenges that we and other R&E networks face are just a taste of what the commercial world will encounter with the increasing popularity of cloud computing and service-driven cloud storage.

Abstracting a solution

A key piece of feedback from application developers, scientists, and end users is that they do not want to deal with complexity at the infrastructure level while still accomplishing their mission. At ESnet, we are exploring various ways to make networks work better for users. A couple of concepts could be game-changers, according to Open Networking Summit presenter and Berkeley professor Scott Shenker: 1) using abstraction to manage network complexity, and 2) extracting and exposing simplicity out of the network. Shenker himself cites Barbara Liskov’s Turing Lecture as inspiration.

ECSEL leverages OSCARS and OpenFlow within the Software Defined Networking (SDN) paradigm to elegantly prevent end-to-end network traffic jams. OpenFlow is an open standard that allows application-driven manipulation of network flows. ECSEL uses OSCARS-controlled MPLS virtual circuits with OpenFlow to dynamically stitch together a seamless data plane that delivers services across multiple domains. ECSEL also provides an additional level of simplicity to the application: it can discover host-network interconnection points as necessary, removing the requirement that applications be “statically configured” with their network end-point connections. It also enables stitching of paths end-to-end while allowing each administrative entity to set and enforce its own policies. ECSEL can easily be enhanced to let users verify end-to-end performance and dynamically select application-specific protocol forwarding rules in each domain.

The OpenFlow capabilities, whether in an enterprise/campus setting or within the data center, were demonstrated with the help of NEC’s ProgrammableFlow Switch (PFS) and ProgrammableFlow Controller (PFC). We leveraged a special interface NEC developed to program a virtual path from ingress to egress of the OpenFlow domain. ECSEL accessed this interface programmatically when executing the end-to-end path-stitching workflow.
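As a generic illustration of the kind of match-action state a controller installs when stitching a path across an OpenFlow domain, here is a minimal sketch using Open vSwitch’s ovs-ofctl tool. The bridge name, ports, and addresses are placeholders, and this is not ECSEL’s actual NEC ProgrammableFlow interface.

```python
# Illustrative only: install simple ingress<->egress forwarding rules on an
# Open vSwitch bridge, roughly the flow state programmed for one stitched path.
import subprocess

BRIDGE = "br0"            # placeholder OpenFlow switch
INGRESS, EGRESS = 1, 2    # placeholder port numbers

def add_flow(rule):
    # ovs-ofctl talks OpenFlow to the local switch; 'rule' is a match,actions string.
    subprocess.run(["ovs-ofctl", "add-flow", BRIDGE, rule], check=True)

# Forward IPv4 traffic for the circuit's destination prefix from ingress to egress,
# and the reverse direction back out the ingress port (placeholder prefixes).
add_flow(f"in_port={INGRESS},dl_type=0x0800,nw_dst=10.10.2.0/24,actions=output:{EGRESS}")
add_flow(f"in_port={EGRESS},dl_type=0x0800,nw_dst=10.10.1.0/24,actions=output:{INGRESS}")
```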

Our anticipated next step is to develop ECSEL as an end-to-end service by making it an integral part of a scientific workflow. The ECSEL software will essentially act as an abstraction layer, where the host (or virtual machine) doesn’t need to know how it is connected to the network–the software layer does all the work for it, mapping out the optimum topologies to direct data flow and make the magic happen. To implement this, ECSEL is leveraging the modular architecture and code of the new release of OSCARS 0.6.  Developing this demonstration yielded sufficient proof that well-architected and modular software with simple APIs, like OSCARS 0.6, can speed up the development of new network services, which in turn validates the value-proposition of SDN. But we are not the only ones who think that ECSEL virtual circuits show promise as a platform for spurring further innovation. Vendors such as Brocade and Juniper, as well as other network providers attending the demo were enthusiastic about the potential of ECSEL.

But we are just getting started. We will reprise the ECSEL demo at SC11 in Seattle, this time with a GridFTP application using Remote Direct Memory Access (RDMA) that has been modified to include XSP (eXtensible Session Protocol), which acts as a signaling mechanism enabling the application to become “network aware.” XSP, conceived and developed by Martin Swany and Ezra Kissel of Indiana University and the University of Delaware, can directly interact with advanced network services like OSCARS, making the creation of virtual circuits transparent to the end user. In addition, once the application is network aware, it can make more efficient use of scalable transport mechanisms like RDMA for very large data transfers over high-capacity connections.

We look forward to seeing you there and exchanging ideas. Until Seattle, if you have any questions or proposals about working together on this or other solutions to the “Big Data Problem,” don’t hesitate to contact me.

–Inder Monga

imonga@es.net

ECSEL Collaborators:

Eric Pouyoul, Vertika Singh (summer intern), Brian Tierney: ESnet

Samrat Ganguly, Munehiro Ikeda: NEC

Martin Swany, Ahmed Hassany: Indiana University

Ezra Kissel: University of Delaware

Why this spiking network traffic?

ESnet November 2010 Traffic

Last month was the first in which the ESnet network crossed a major threshold – over 10 petabytes of traffic! Traffic volume was 40% higher than the prior month and 10 times higher than just a little over 4 years ago. But what’s behind this dramatic increase in network utilization?  Could it be the extreme loads ESnet circuits carried for SC10, we wondered?

Breaking down the ESnet traffic highlighted a few things.  Turns out it wasn’t all that demonstration traffic sent across thousands of miles to the Supercomputing Conference in New Orleans (151.99 TB delivered), since that accounted for only slightly more than 1% of November’s ESnet-borne traffic.  We observed for the first time significant volumes of genomics data traversing the network as the Joint Genome Institute sent over 1 petabyte of data to NERSC. JGI alone accounted for about 10% of last month’s traffic volume. And as we’ve seen since it went live in March, the Large Hadron Collider continues to churn out massive datasets as it increases its luminosity, which ESnet delivers to researchers across the US.

Summary of Total ESnet Traffic, Nov. 2010

Total Bytes Delivered: 10.748 PB
Total Bytes OSCARS Delivered: 5.870 PB
Percentage of OSCARS Delivered: 54.72%

What is really going on is quite prosaic but, to us, exciting. We can follow the progress of distributed scientific projects such as the LHC by tracking the growth of our network traffic, as the month-to-month traffic volume on ESnet correlates with the day-to-day conduct of science. Currently, Fermilab and Brookhaven LHC data continue to dominate the volume of network traffic, but as we see, production and sharing of large data sets by the genomics community is picking up steam. What the stats are predicting: as science continues to become more data-intensive, the role of the network will become ever more important.