At 25, ESnet Transfers a Universe of Data


ESnet turns 25 this year. This anniversary marks a major inflection point in our evolution as a network in terms of bandwidth, capability, and control. We are running data through our network a million (that is, 10⁶) times faster than when ESnet was established. Yet we are still facing even greater demands for bandwidth as the amount of scientific data explodes and global collaborations expand.

Created in 1986 to meet the Department of Energy’s needs for moving research data and enabling scientists to access remote scientific facilities, ESnet combined two networks serving researchers in fusion energy (MFEnet) and high-energy physics (HEPnet). But ESnet’s roots actually stretch back to the mid-1970s, when staff at the CTR Computer Center at Lawrence Livermore National Laboratory installed four acoustic modems on a borrowed CDC 6600 computer. Our technology morphed over the years from fractional T1s to T1s to DS3, then to ATM, and in the last ten years we have moved to packet over SONET, all driven by the needs of our thousands of scientific users.

Over the last 25 years, ESnet has built an organization of excellence driven by the DOE science mission. We have consistently provided reliable service even as the science areas we support—high energy physics, climate studies, and cosmology, to name a few—have become exponentially more data-intensive. These fields especially rely on ever more powerful supercomputers to do data crunching and modeling. Other fields, such as genomics, are growing at a rapid pace as sequencing technologies become cheaper and more distributed.

Based on the dizzying trajectory of change in scientific computing technology, we need to act now to expand the capacity of scientific networks in order to stay ahead of demand in the years to come. At this point the ESnet high-speed network carries between 7 and 10 petabytes of data monthly (a single petabyte is equivalent to about 13.3 years of HD video). ESnet data traffic is increasing an average of 10 times every 4 years, steeply propelled by the rise in data produced. More powerful supercomputers can create more accurate models, for instance, of drought and rainfall patterns, but greater resolution requires vastly bigger datasets. Scientific collaborations can include thousands of researchers exchanging time-sensitive data around the world. Specialized facilities like the Large Hadron Collider and digital sky surveys produce torrents of data for global distribution. All these factors are poised to overload networks and slow scientific progress.
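
For readers who like to check the math, here is a quick back-of-the-envelope sketch of the petabyte comparison. It assumes an HD video bitrate of roughly 19 Mbps (the standard over-the-air broadcast rate; the exact figure behind the original comparison is not stated):

```python
# Back-of-the-envelope check: how many years of HD video fit in one petabyte?
PETABYTE_BITS = 1e15 * 8            # 1 PB expressed in bits
HD_BITRATE = 19e6                   # assumed HD bitrate, ~19 Mbps (broadcast HDTV)
SECONDS_PER_YEAR = 365.25 * 86400

years = PETABYTE_BITS / HD_BITRATE / SECONDS_PER_YEAR
print(f"1 PB ~= {years:.1f} years of HD video")   # -> ~13.3 years
```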

Tonight we reach our biggest milestone yet: we are launching the first phase of our brand new 100 gigabit per second (Gbps) network, currently the fastest scientific network in the world. Called the Advanced Networking Initiative (ANI) prototype network, it forms the foundation of a soon-to-be-permanent national network that will vastly expand our future data handling capacity.

The ANI prototype network will initially link researchers to DOE’s supercomputing centers: the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab, the Oak Ridge Leadership Computing Facility (OLCF), and the Argonne Leadership Computing Facility (ALCF), as well as MANLAN, the Manhattan Landing exchange point. ESnet will also deploy 100 Gbps-capable systems throughout the San Francisco Bay Area and Chicago. The ANI prototype network will then transition into a permanent, national 100 Gbps network, paving the way to an eventual terabit-scale DOE network.

To prepare the way, ESnet acquired almost 13,000 miles of dark fiber. This gives the DOE unprecedented control over the network and its assets, enabling upgrades and expansion of capacity and capabilities according to the needs of our scientists. By owning the fiber, we are able to lock in the cost of future upgrades for decades to come.

The third facet of ANI is a nationwide testbed, available to researchers in both the public and private sectors as a first-of-its-kind platform for testing experimental approaches to new network protocols and architectures in a greater-than-10 Gbps network environment.

Researchers are already working on multiple experiments investigating emerging technologies like Remote Direct Memory Access (RDMA)-enabled data transfers at 40 Gbps and new TCP congestion control algorithms that scale to 100 Gbps and beyond. By creating a research testbed, ESnet will enable researchers to safely experiment with disruptive technologies that will build the next-generation Internet, something impossible to do on networks that also carry daily production traffic.
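
To see why congestion control at these speeds is a genuine research problem rather than a tuning exercise, consider the well-known Mathis et al. approximation for standard TCP throughput, rate ≈ (MSS/RTT) × (1.22/√p), where p is the packet loss probability. A quick sketch, assuming a 50 ms coast-to-coast round-trip time and a standard 1460-byte segment, shows the nearly lossless conditions a single conventional TCP flow would need in order to sustain 100 Gbps:

```python
# Mathis et al. approximation: rate ~ (MSS / RTT) * (1.22 / sqrt(p)).
# Solve for the packet loss rate p that a single standard TCP flow
# could tolerate while still sustaining a target rate.
MSS = 1460 * 8          # segment size in bits (standard Ethernet MSS)
RTT = 0.050             # assumed 50 ms coast-to-coast round-trip time
TARGET = 100e9          # target rate: 100 Gbps

p = (1.22 * MSS / (RTT * TARGET)) ** 2
print(f"max tolerable loss rate: {p:.1e}")   # -> ~8e-12, i.e. roughly one
                                             # loss per hundred billion packets
```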

Bringing You the Universe at 100 Gbps

Just this past week our engineers completed the installation of the first phase of the network, nearly six weeks ahead of schedule. Tonight at the SC11 conference we are inaugurating our brand new 100 Gbps network by demonstrating how high-capacity networks can open up the universe, or at least a highly sophisticated computer simulation of the early days of the universe. The demo will include the transfer of over 5 terabytes of data over the new network from NERSC in Oakland, CA to the Washington State Convention Center in Seattle.

This demonstration matters because astrophysicists want to study high-resolution simulations to better understand the complex structural patterns in the universe. ESnet’s new 100 Gbps network will enable scientists to interactively examine large datasets at full spatial and temporal fidelity without compromising data integrity. This novel capability will also enable remotely located scientists to gain insights from large data volumes located at DOE supercomputing facilities such as NERSC. For comparison, a simulation using a 10 Gbps network connection will be displayed on a complementary screen to showcase the vast difference in quality that an order-of-magnitude difference in bandwidth can bring to scientific discovery.

To view the 100 Gbps and 10 Gbps simulations, visit: http://www.es.net/RandD/advanced-networking-initiative/

In addition to this great demo, seven other community collaborations will be leveraging the new ESnet 100 Gbps network to support innovative HPC demonstrations at SC11.

As we toast this great new accomplishment this evening, we recognize that we are building on an amazing 25-year legacy of network innovation. We owe tremendous gratitude to ESnet staff past and present, whose hard work and dedication have been keenly focused on helping the DOE community solve some of society’s greatest challenges. Given our team’s accomplishments to date, I cannot wait to see what this next chapter in our history brings.

–Steve Cotter

Transfer lots of data, effortlessly. Here’s how. Tune in to webcast Sept 8th.

Got a lot of data to move around? Say, more than 100 gigabytes? Then attend this webcast on how to use Globus Online, one of ESnet’s recommended file transfer services. The webcast (https://www.globusonline.org/esnet-webcast-09-08-11/) will show you how to move data as you need it without having to become an IT expert, learn a new command vocabulary, or install software.

ESnet provides the high-bandwidth, reliable connections that link scientists at national laboratories to universities and other research institutions so they can more effectively collaborate. We provide the infrastructure, but after listening to our customers, we are taking the next step and recommending the tools to make our network easier to use. As part of this effort, we are introducing our users to services like Globus Online that will help you to move data faster and more reliably.

Globus Online (https://www.globusonline.org/) is a fast, reliable file transfer service that simplifies the process of secure data movement. This free service automates the activity of managing file transfers, whether between computing facilities or from a facility to your local machine. Users can fire and forget their request, and Globus Online will manage the entire operation: monitoring performance, retrying failed transfers, recovering from faults automatically whenever possible, and reporting status. Globus Online makes it trivial to move big data around, to whatever location is needed, without spending lots of time figuring out the right commands or dealing with complicated systems. Simply sign up, specify your endpoints (the source, where the file is now, and the destination, where you are moving it), authenticate as necessary on the servers, and then click to transfer. Once the endpoint you need is configured into Globus Online, it really is that simple; try it out for yourself! (https://www.globusonline.org/SignUp)
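
To make the fire-and-forget idea concrete, here is a minimal conceptual sketch of what a managed transfer service does on your behalf. This is purely illustrative: the function names are hypothetical, and it is not Globus Online’s actual code or API.

```python
import time

def managed_transfer(copy_chunk, chunks, max_retries=3):
    """Retry failed pieces of a transfer and report status, the way a
    managed service like Globus Online does on the user's behalf.
    `copy_chunk` is a hypothetical callable that moves one piece and
    raises IOError on a transient fault."""
    failed = []
    for chunk in chunks:
        for attempt in range(1, max_retries + 1):
            try:
                copy_chunk(chunk)            # may raise on a transient fault
                break                        # this piece made it; move on
            except IOError:
                time.sleep(2 ** attempt)     # back off, then retry
        else:
            failed.append(chunk)             # retries exhausted; report it
    print(f"{len(chunks) - len(failed)}/{len(chunks)} pieces delivered")
    return failed
```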

In this session, the Globus Online project lead, Steve Tuecke, will cover the basics of usage, including the command line interface and using Globus Connect to make your own server or laptop an endpoint. I will be there to provide the ESnet perspective. We will answer your questions and welcome constructive feedback. To get the most out of the session, sign up and try the service beforehand so you know what you are looking for and are ready to ask the right questions.

For more information, see https://www.globusonline.org/esnet-webcast-09-08-11/ or contact info@globusonline.org.

Brian Tierney, ESnet

100 Gbps ANI Prototype Network is Just the Beginning

After much work and planning, we proudly introduce you to our new scientific network-to-be.  Berkeley Lab just announced the signing of an agreement to begin construction of a 100 gigabits per second (Gbps) prototype network, part of the Lab’s Advanced Networking Initiative funded by the American Recovery and Reinvestment Act (ARRA).  Under the terms of the deal, Internet2 and its industry partners will construct the network under ESnet’s direction.

Coming Soon: ANI 100 Gbps Prototype Network, First Step Towards Nationwide 100 Gbps Network

While our ANI network testbed is already open for business (mark your calendar: the next call for research proposals is October 1st), in a matter of months researchers will be able to conduct experiments on the 100 Gbps prototype network. This network will link three supercomputing facilities at national laboratories, the National Energy Research Scientific Computing Center (NERSC), the Oak Ridge Leadership Computing Facility (OLCF), and the Argonne Leadership Computing Facility (ALCF), to MANLAN, the Manhattan Landing international exchange point.

But more is yet to come: the ANI prototype network will also serve as a framework for ESnet to obtain concrete network energy use data for research into “green networking,” a priority here at ESnet. Also as part of the agreement, Berkeley Lab negotiated 20-year access to a nationwide dark fiber pair, which will be immediately available to network researchers and industry for experimenting with new protocols and disruptive technologies. Stay tuned to our blog to follow construction of the network and the exciting happenings at ESnet.

The 100 Gbps prototype network is just the beginning. Berkeley Lab and ESnet will leverage the experience gained deploying the prototype network to extend these capabilities to the national labs and facilities ESnet currently serves, and connect DOE scientists to university collaborators around the world. When moved to production status, the new network will increase the information carrying capacity of ESnet’s current 10 Gbps network by an order of magnitude. One terabyte of data, which takes approximately 13 minutes to transfer on ESnet’s present 10 Gbps network, will be delivered in about 80 seconds on the 100 Gbps network.
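
The transfer times quoted above follow directly from the line rates; this sketch ignores protocol overhead, so real-world times would be somewhat longer:

```python
# Idealized transfer time for 1 terabyte at each line rate (no protocol overhead)
DATA_BITS = 1e12 * 8                          # 1 TB in bits

for rate_gbps in (10, 100):
    seconds = DATA_BITS / (rate_gbps * 1e9)
    print(f"{rate_gbps:>3} Gbps: {seconds:6.0f} s ({seconds / 60:.1f} min)")
# -> 10 Gbps: ~800 s (13.3 min); 100 Gbps: ~80 s (1.3 min)
```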

What we hope you will get out of this

While our network currently meets the capacity needs of its users, we see new challenges on the horizon. On behalf of the DOE Office of Science, we regularly survey scientists about their projected needs in order to better target services for our users. Demand on ESnet’s network has grown by a factor of 10 every 47 months since 1990. Over the last year, ESnet network traffic rose by 70 percent, most of it coming from the Large Hadron Collider at CERN, but with genomics and climate data also picking up steam.
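
Those two figures are consistent with each other, as a quick sketch shows: a factor of 10 every 47 months works out to roughly 80 percent annual growth, so a 70 percent year sits right on the long-term trend:

```python
# Annual growth implied by "a factor of 10 every 47 months"
annual_growth = 10 ** (12 / 47) - 1
print(f"implied annual growth: {annual_growth:.0%}")   # -> ~80%
```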

Just as investments in national infrastructure built the Interstate Highway System that sped the delivery of goods and opened up commerce in the United States, high-speed networks will speed the scientific discovery that will drive this nation’s economy in the future.  New, large-scale instruments are in the offing, and we anticipate a growing tide of data in various disciplines as scientists work to understand our climate, develop clean fuels, and investigate the basic nature of matter.  High-speed networks like this one will have a huge impact on an increasing number of disciplines, as more scientists collaborate in global teams and depend on remote supercomputers to model and solve complex problems.

How to be really thrifty with $62 million.

This is a challenging time when budgets are under severe pressure, both for government and the research and education community.  We believe that by pooling resources and expertise we can help curb the costs of scientific research.  By working closely with Internet2 to build this prototype network, we are leveraging our ARRA award and their funding for synergistic effect and getting more return on investment for the taxpayer.

We strongly believe this new 100 Gbps network infrastructure will allow us to better support data-intensive science by providing more capacity at lower cost, lower energy consumption, and lower carbon emissions per bit. And that makes us feel good about what we do.

Harnessing bugs for new drugs and potential jet fuel

D. radiodurans is impervious to ionizing radiation; can clean up mercury and toluene contamination

The high-speed networking that ESnet provides supports an incredibly varied range of scientific projects. The express purpose of the DOE national labs is to conduct research in the national interest. Much of it is basic research, exploring fundamental issues in physics, energy, cosmology, and climate science. However, the study of microbial evolution is one intriguing area that is already changing our lives with everything from new medicines to potential fuels.

Microbes are single-cell organisms that live in colonies and can be found in nearly every corner of our planet, in places ranging from insects’ intestines to some of the most toxic chemical environments. The site with the most detailed information on the genetic makeup of these organisms lives in only one place, the DOE’s Joint Genome Institute, and is accessed via ESnet. The Integrated Microbial Genomes (IMG) Data Management System provides the genetic makeup of thousands of microbes along with tools for analyzing the functional capability of microbial communities based on their metagenome DNA sequence. Understanding these tiny organisms can provide new insights into a wide range of important problems, but in order to study the microbes, scientists need reliable access to the genomic data.

Microbes such as bacteria are responsible for a number of diseases, such as plague, tuberculosis and anthrax, while microbes known as protozoa cause diseases such as malaria, sleeping sickness and toxoplasmosis. On the plus side, microbes also live helpfully in human digestive systems, helping to digest carbohydrates and synthesize certain vitamins.

Where the wild things are

Famously, Taq polymerase, an enzyme isolated from the bacterium Thermus aquaticus found in a hot spring at Yellowstone National Park in 1965, became the basis for PCR, the technique for amplifying short strands of DNA that has revolutionized biological and genetic research. Some scientists examine exotic microbes like Deinococcus radiodurans, the most radiation-resistant organism known (and the darling of exobiologists, found as a contaminant of irradiated canned meat in 1956), for clues to how life began on Earth and could evolve on other planets. Meanwhile, your laundry detergent probably contains enzymes developed from bacteria that evolved in hot, alkaline conditions.

In the private sector, microbes are genetically modified to develop innovative drugs as well as industrial products. In an example close to home, a few years back LBNL’s Joint BioEnergy Institute director Jay Keasling used microbial engineering techniques to synthesize artemisinic acid, a precursor to artemisinin, an anti-malarial compound. Artemisinin is derived from Artemisia annua, or wormwood, a plant known to Chinese medicine for centuries. Keasling created a steady supply that could be manufactured extremely cheaply and at large volumes, making it accessible to people in developing countries.

Keasling’s next project is to use microbes as potential energy sources by turning them into factories that produce sugars. Certain microbes living in termite guts are essential for digesting the wood fibers eaten by the insect. While this can be bad news for homeowners, the chemical capabilities of the “bugs” in these bugs are being studied for their potential in converting wood and other plant matter into new energy sources. Successfully producing fuel from plant waste instead of food crops like corn could improve our country’s future energy options. To be sure, there are still prosaic barriers to overcome, such as chemical separation and processing in volumes high enough to meet our insatiable demand for transport fuels. Other scientists at DOE national labs such as NREL, along with a host of private companies, some right across San Francisco Bay from our headquarters at ESnet, are investigating how to make gasoline and jet fuel from microorganisms. Genetic analysis is the essential first step in growing hardworking bacteria with the desired qualities.

Another promising microbial application is environmental cleanup. One form of bacteria found almost 2 miles underground in a South African gold mine lives in total darkness at 140 degrees Fahrenheit, with no oxygen. The organism gets its energy not from the sun but from hydrogen and sulfate produced by the radioactive decay of uranium. By understanding how life can thrive in such an apparently toxic setting, scientists may gain new insight into using microbes to clean up environmental contamination.

Currently, the IMG database contains complete genomes for 4,879 microbes, with another 1,569 in draft form. Of the total, 1,107 are bacteria and 2,536 are viruses. The information, containing data for more than 20 million genes, is provided freely to interested researchers.

Why the spiking network traffic?

ESnet November 2010 Traffic

Last month the ESnet network crossed a major threshold for the first time: over 10 petabytes of traffic! Traffic volume was 40% higher than the prior month and 10 times higher than just a little over 4 years ago. But what’s behind this dramatic increase in network utilization? Could it be the extreme loads ESnet circuits carried for SC10, we wondered?

Breaking down the ESnet traffic highlighted a few things. It turns out it wasn’t all that demonstration traffic sent across thousands of miles to the Supercomputing Conference in New Orleans (151.99 TB delivered), since that accounted for only slightly more than 1% of November’s ESnet-borne traffic. We observed for the first time significant volumes of genomics data traversing the network, as the Joint Genome Institute sent over 1 petabyte of data to NERSC. JGI alone accounted for about 10% of last month’s traffic volume. And as we’ve seen since it went live in March, the Large Hadron Collider continues to churn out massive datasets as it increases its luminosity, which ESnet delivers to researchers across the US.

Summary of Total ESnet Traffic, Nov. 2010

Total Bytes Delivered: 10.748 PB
Total OSCARS Bytes Delivered: 5.870 PB
Percentage Delivered via OSCARS: 54.72%
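
Working from the rounded totals above, a quick sketch confirms the shares quoted earlier, with the SC10 demos at just over 1% and JGI at roughly 10%:

```python
# Each source's share of November 2010 traffic, from the rounded totals above
total_pb = 10.748                   # total ESnet traffic (PB)
sc10_pb = 151.99 / 1000             # SC10 demo traffic: 151.99 TB
jgi_pb = 1.0                        # JGI-to-NERSC genomics data: ~1 PB

print(f"SC10 demos: {sc10_pb / total_pb:.1%}")   # -> ~1.4%
print(f"JGI:        {jgi_pb / total_pb:.1%}")    # -> ~9.3%
```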

What is really going on is quite prosaic, but to us, exciting. We can follow the progress of distributed scientific projects such as the LHC by tracking our network traffic, as the month-to-month traffic volume on ESnet correlates to the day-to-day conduct of science. Currently, Fermi and Brookhaven LHC data continue to dominate the volume of network traffic, but as we can see, production and sharing of large datasets by the genomics community is picking up steam. What the stats are predicting: as science continues to become more data-intensive, the role of the network will become ever more important.


Cheers for Magellan

We were glad to see DOE’s Magellan project getting some well-deserved recognition with an HPCwire Readers’ and Editors’ Choice Award at SC10 in New Orleans. Magellan investigates how cloud computing can help DOE researchers manage the massive (and increasing) amount of data they generate in scientific collaborations. Magellan is a joint research project of NERSC at Berkeley Lab in California and the Argonne Leadership Computing Facility in Illinois.

This award represents teamwork on several fronts. For example, earlier this year, ESnet’s engineering chops were tested when the Joint Genome Institute, one of Magellan’s first users, urgently needed increased computing resources at short notice.

Within a nail-biting span of several hours, technical staff at both centers collaborated with ESnet engineers to establish a dedicated 9 Gbps virtual circuit between JGI and NERSC’s Magellan system over ESnet’s Science Data Network (SDN). Using the ESnet-developed On-Demand Secure Circuits and Advance Reservation System (OSCARS), the virtual circuit was set up within an hour after the last details were finalized.
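
To give a feel for what such a reservation involves, here is an illustrative sketch of the kind of information a guaranteed-bandwidth circuit request carries: the two endpoints, the bandwidth, and the time window. The field and endpoint names are hypothetical; this is not the actual OSCARS interface.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CircuitRequest:
    """Illustrative shape of a guaranteed-bandwidth reservation.
    Field names are hypothetical, not the real OSCARS API."""
    src_endpoint: str
    dst_endpoint: str
    bandwidth_mbps: int
    start: datetime
    end: datetime

# The JGI-to-NERSC circuit described above, expressed as such a request
# (endpoint names and the 24-hour window are made up for illustration):
request = CircuitRequest(
    src_endpoint="jgi-edge",
    dst_endpoint="nersc-magellan",
    bandwidth_mbps=9000,                      # the dedicated 9 Gbps circuit
    start=datetime.now(),
    end=datetime.now() + timedelta(hours=24),
)
print(request)
```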

NERSC raided its closet spares for enough networking components to construct a JGI@NERSC local area network and migrated a block of Magellan cores over to JGI control.  This allowed NERSC and JGI staff to spend the next 24 hours configuring hundreds of processor cores on the Magellan system to mimic the computing environment of JGI’s local compute clusters.

With computing resources becoming more distributed, complex networking challenges will occur more frequently. We are constantly solving high-stakes networking problems in our job connecting DOE scientists with their data. But thanks to OSCARS, we now have the ability to expand virtual networks on demand. And OSCARS is just getting better as more people in the community refine its capabilities.

The folks at JGI claim they didn’t feel a thing. They were able to continue their workflow, and no data was lost in the transition.

Which makes us very encouraged about the prospects for Magellan, and cloud computing in general. Everybody is hoping that putting data out there in the cloud will expand capacity.  At ESnet, we just want to make the ride as seamless and secure as possible.

Kudos to Magellan. We’re glad to back you up, whatever the weather.

Visit Jon Dugan’s BoF on network measurement

ESnet’s Jon Dugan will lead a BoF on network measurement at 12:15 p.m. Thursday in rooms 278-279 at SC10. Functional networks are critical to high performance computing, but achieving optimal performance requires accurately measuring networks. Jon will open up the session to discuss measurement tools such as perfSONAR, emerging standards, and the latest research directions.

The circuits behind all those SC10 demos

It is midafternoon Wednesday at SC10 and the demos are going strong. Jon Dugan supplied an automatically updating graph, in psychedelic colors (http://bit.ly/9HUrqL), of the traffic ESnet is able to carry over all the circuits we set up. Getting this far required many hours of work from a lot of ESnet folks to accommodate the virtual circuit needs of both ESnet sites and SCinet customers using the OSCARS IDC software. As always, the SCinet team has put in long hours in a volatile environment to deliver a high performance network that meets the needs of the exhibitors.

Catch ESnet roundtable discussions today at SC10, 1 and 2 p.m.

Wednesday Nov. 17 at SC10:

At 1 p.m. at Berkeley Lab booth 2448, catch ESnet’s Inder Monga’s round-table discussion on OSCARS virtual circuits. OSCARS, short for On-Demand Secure Circuits and Advance Reservation System, allows users to reserve guaranteed bandwidth. Many of the demos at SC10 are being carried over OSCARS virtual circuits; OSCARS was developed by ESnet with DOE support. Good things to come: ESnet anticipates the rollout of OSCARS 0.6 in early 2011. Version 0.6 will offer greatly expanded capabilities and versatility, such as a modular architecture enabling easy plug-and-play of the various functional modules and a flexible path computation engine (PCE) workflow architecture.

Then stick around, because at 2 p.m. Brian Tierney from ESnet will lead a roundtable on the research coming out of the ARRA-funded Advanced Networking Initiative (ANI) testbed.

In 2009, the DOE Office of Science awarded ESnet $62 million in recovery funds to establish ANI, a next-generation 100 Gbps network connecting DOE’s largest unclassified supercomputers, as well as a reconfigurable network testbed for researchers to test new networking concepts and protocols.

Brian will discuss progress on the 100 Gbps network, update you on the several research projects already underway on the testbed, describe testbed capabilities, and explain how to get access to the testbed. He will also answer your questions on how to submit proposals for the next round of testbed network research.

In the meantime, some celeb-spotting at the LBNL booth at SC10.

Inder Monga
Brian Tierney

Autumn means SC10

Jon Dugan blogs about ESnet at SC10

By autumn most folks are thinking about the holidays, but for me, fall is filled with thoughts of something different.  Since 1998, I’ve had the privilege of working with the SCinet committee of Supercomputing, a.k.a. the International Conference for High Performance Computing, Networking, Storage, and Analysis, if you are not into the whole brevity thing. SCinet is the team of people that builds the local area and wide area network for the Supercomputing conference.

This year’s conference, SC10, is in New Orleans, LA, from November 13 until November 19. Planning for each year’s show starts a few years ahead of time; not long after one year’s show ends, the serious planning for the next year’s SCinet begins. It’s a cycle that I’ve been through many times now, and it’s a bit like an old friend at this point. Most of the time we enjoy each other’s company immensely, but when things get stressful, we can really irritate each other.

SCinet is a pretty amazing network.  After a year of planning, there are three weeks of concentrated effort to set it up. It’s operational for about one week and it takes about two days to tear down.  This year we will have 270 Gbps of wide area network connectivity with dedicated circuits to ESnet, Internet2, NLR, and NOAA.  We will deliver over 200 network connections to the various booths on the show floor.

As amazing as the network is, the people who build it are even more amazing. They are drawn from universities, national laboratories, network equipment vendors and nationwide research and education networks.  It’s not just Americans; there are people from several different countries with strong showings from the Netherlands and Germany most years.  Many of these folks are leaders in their areas of expertise and all of them are bright, capable people.  Each of them has given up a fair bit of their own time to participate (while most have some sponsorship from their employers, it’s not unheard of for people to take vacation time to participate).

Why would people give up many evenings and weekends every fall to be a part of SCinet? Because it’s an amazing opportunity to learn about the state of the art in computer networking and to expand your professional network as well. I consider myself extremely fortunate to work with each of the people that make up SCinet.

So what is ESnet doing with SCinet this year? Glad you asked. First off, we are bringing three 10G circuits to the show floor. As of Friday, October 29th, all three were up and operational. One of these circuits will be used for general IP traffic, but the other two will be used to carry dynamic circuits managed by the OSCARS IDC software.

These circuits will provide substantial support for various demonstrations by exhibitors, connecting scientific and computational resources at labs and universities to the booths on the show floor.

Finally, ESnet has four people volunteering in various capacities within SCinet. Evangelos Chaniotakis and I will be working with the routing team, which provides IP, IPv6, and IP multicast service, manages the wide area peerings and wide area layer 2 circuits, configures the interfaces that face the booths on the show floor, and works closely with several other teams to provide a highly scalable and reliable network. John Christman is working with the fiber team, building the optical fiber infrastructure to support SCinet (all booth connections are delivered over optical fiber, which allows booths to be connected to the network using the highest-speed interfaces available). Brian Tierney will be working with the SCinet measurement team, collecting network telemetry and using it to provide useful and meaningful visualizations of what’s happening inside SCinet, as well as providing tools and hosts for active network measurements with tools such as iperf, nuttcp, and OWAMP. The measurement data is also made accessible using the perfSONAR suite of tools. They’re also using the SNMP polling software I wrote for ESnet, called ESxSNMP.


Important spots to visit:

If you are coming to SC10 this year, be sure to come by the SCinet NOC in booth 3351. I’d be happy to meet anyone who’s read this; feel free to ask for me at the SCinet help desk at the same booth. LBNL (ESnet’s parent organization) is located in booth 2448. Finally, I am hosting a Birds of a Feather (BoF) session on network measurement during the show; the details are here.

And check out the other ESnet demos: You can download a map of ESnet at SC10: SC 2010_floormapFL

LBNL Booth 2448, ESnet roundtable discussions

Inder Monga, Advanced Network Technologies Group, ESnet, will lead a roundtable discussion on: On-demand Secure Circuits and Advance Reservation System (OSCARS), 1-2 p.m., Wednesday, Nov. 17

Many of the demos at SC10 are being carried over OSCARS virtual circuits; OSCARS was developed by ESnet with DOE support. OSCARS enables networks to reserve and schedule virtual circuits that provide bandwidth and service guarantees to support large-scale science collaborations. In the first quarter of 2011, ESnet expects to unveil OSCARS 0.6, which will offer vastly expanded capabilities, such as a modular architecture allowing easy plug-and-play of the various functional modules and a flexible path computation engine (PCE) workflow architecture. Adoption of OSCARS has been accelerating, with 2010 seeing deployments at Internet2 and other domestic and international research and education networks. Since last year, ESnet has seen a 30% increase in the use of virtual circuits, which now carry over 50% of ESnet’s monthly production traffic. Increased use of virtual circuits was a major factor enabling ESnet to easily handle a nearly 300% rise in traffic from June 2009 to May 2010.

Brian Tierney, Advanced Network Technologies Group, ESnet, will lead a roundtable discussion on: ARRA-funded Advanced Networking Initiative (ANI) Testbed, 2-3 p.m. Wednesday, Nov. 17

The research and education community’s needs for managing and transferring data are exploding in scope and complexity. In 2009 the DOE Office of Science awarded ESnet $62 million in Recovery Act funds to create the Advanced Networking Initiative (ANI). This next-generation, 100 Gbps network will connect DOE’s largest unclassified supercomputers. ANI is also establishing a high performance, reconfigurable network testbed for researchers to experiment with advanced networking concepts and protocols. ESnet has now opened the testbed to researchers, and a variety of experiments pushing the boundaries of current network technology are underway. Another round of proposals is in the offing. The testbed will move from Lawrence Berkeley National Laboratory to ESnet’s dark fiber ring on Long Island (LIMAN: Long Island Metropolitan Area Network) in January 2011, and eventually to the 100 Gbps national prototype network ESnet is building to accelerate deployment of 100 Gbps technologies and provide a platform for the DOE experimental facilities at Oak Ridge National Laboratory and the Magellan resources at the National Energy Research Scientific Computing Center (NERSC) and Argonne National Laboratory.

–Jon Dugan