This award represents teamwork on several fronts. Earlier this year, for example, ESnet’s engineering chops were tested when the Joint Genome Institute (JGI), one of Magellan’s first users, urgently needed additional computing resources on short notice.
Within a nail-biting span of several hours, technical staff at both centers collaborated with ESnet engineers to establish a dedicated 9 Gbps virtual circuit between JGI and NERSC’s Magellan system over ESnet’s Science Data Network (SDN). Using the ESnet-developed On-Demand Secure Circuits and Advance Reservation System (OSCARS), they set up the virtual circuit within an hour of finalizing the last details.
NERSC raided its closet spares for enough networking components to construct a JGI@NERSC local area network and migrated a block of Magellan cores over to JGI control. This allowed NERSC and JGI staff to spend the next 24 hours configuring hundreds of processor cores on the Magellan system to mimic the computing environment of JGI’s local compute clusters.
With computing resources becoming more distributed, complex networking challenges will occur more frequently. We are constantly solving high-stakes networking problems in our job connecting DOE scientists with their data. But thanks to OSCARS, we now have the ability to expand virtual networks on demand. And OSCARS is just getting better as more people in the community refine its capabilities.
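For the curious, here is a rough sketch of what an advance bandwidth reservation looks like from a user’s point of view. To be clear, the field names and the `submit_reservation` helper below are hypothetical illustrations of the concept, not the actual OSCARS API; a real client talks to the OSCARS service through its web-service interface.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of an advance bandwidth reservation request.
# Field names are invented for clarity; this is NOT the real OSCARS API.
reservation = {
    "source": "jgi-rtr.example.net",         # circuit endpoint (placeholder)
    "destination": "nersc-rtr.example.net",  # circuit endpoint (placeholder)
    "bandwidth_mbps": 9000,                  # the 9 Gbps guaranteed circuit
    "start_time": datetime.utcnow(),         # start immediately ("on demand")
    "end_time": datetime.utcnow() + timedelta(days=30),
    "description": "JGI@NERSC emergency compute circuit",
}

def submit_reservation(resv):
    """Placeholder for the call a real client would make to the
    reservation service over its web-service interface."""
    print("Requesting {} Mbps {} -> {} until {:%Y-%m-%d %H:%M}".format(
        resv["bandwidth_mbps"], resv["source"],
        resv["destination"], resv["end_time"]))

submit_reservation(reservation)
```

The point of the abstraction is that the requester specifies endpoints, bandwidth, and a time window, and the reservation system handles path setup across the network.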
The folks at JGI claim they didn’t feel a thing. They were able to continue their workflow, and no data was lost in the transition.
All of which makes us very encouraged about the prospects for Magellan, and for cloud computing in general. Everybody hopes that putting data out there in the cloud will expand capacity. At ESnet, we just want to make the ride as seamless and secure as possible.
Kudos to Magellan. We’re glad to back you up, whatever the weather.
As SC10 wound down, ESnet started disassembling the network of connections that brought experimental data from the rest of the country (and at least a bit of the universe as well) to New Orleans. We detected harbingers of 100 Gbps in all sorts of places. We will be sharing our observations on promising and significant networking technologies with you in blogs to come.
We were impressed by the brilliant young people we saw at the SC Student Cluster Competition, organized as part of SC Communities, which brings together programs designed to support emerging leaders and groups that have traditionally been under-represented in computing. Teams came from U.S. universities, including Purdue, Florida A&M, SUNY Stony Brook, and the University of Texas at Austin, as well as from universities in China and Russia.
Florida A&M team
Nizhni Novgorod State University team
At ESnet, we are always looking for bright, committed students interested in networking internships (paid!). We are also still hiring.
As SC10 concluded, the computer scientists and network engineers dispersed from the streets of the city, replaced by a conference of anthropologists. SC11 is scheduled for Seattle. But before we go, a note of appreciation for New Orleans.
Katrina memorial
Across from the convention center is a memorial to the people lost to Katrina: a sculpture of a wrecked house pinioned in a tree. But if you walk down the street to the corner of Bourbon and Canal, each night you will hear the trumpets of the ToBeContinued Brass Band. The band is a group of friends who met in their high school marching bands and played together for years until they were scattered by Katrina. Like the city, they are regrouping, and they are profiled in a new documentary.
Our mission at ESnet is to help scientists collaborate and share research. But a number of ESnet people are also musicians and music lovers, and we draw personal inspiration from the energy, technical virtuosity, and creativity of artists as well as of other engineers and scientists. We are not alone in this.
New Orleans is a great American city, and we wish it well.
During the SC10 conference, NASA, NOAA, ESnet, the Dutch Research Consortium, US LHCNet, and CANARIE announced that they would transmit scientific data at 100 Gbps between Chicago and New Orleans. Using 14 10GigE interconnects, researchers attempted to fill the entire 100 Gbps of bandwidth by producing up to twelve individual data flows of 8.5 to 10 Gbps each.
Brian Tierney reports: “We are very excited that a team from NASA Goddard completely filled the 100G connection from the show floor to Chicago. It is certainly the first time at the supercomputing conference that a single wavelength over the WAN achieved 100 Gbps. The other thing that is so exciting is that they used a single sending host to do it.”
“Was this just voodoo?” asked NERSC’s Brent Draney.
Tierney assures us that indeed it must have been… but whatever they did, it certainly works.
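For readers wondering how one even generates that kind of load, a common approach is to run several memory-to-memory test flows in parallel, one per 10GigE path, using a standard tool like iperf. The sketch below is a minimal illustration with placeholder addresses; it is not the NASA Goddard team’s actual setup.

```python
import subprocess

# Placeholder receiver addresses, one per 10GigE path on the far end
# (illustrative only -- not the actual demo configuration).
receivers = ["10.0.{}.2".format(i) for i in range(12)]

# Launch one 60-second TCP iperf client per path; each flow should
# sustain 8.5-10 Gbps if the NICs and memory bandwidth can keep up.
procs = [
    subprocess.Popen(
        ["iperf", "-c", addr, "-t", "60", "-f", "g"],  # report in Gbits/sec
        stdout=subprocess.PIPE,
    )
    for addr in receivers
]

# Wait for every flow to finish and print its throughput report.
for p in procs:
    out, _ = p.communicate()
    print(out.decode().strip())
```

What made the Goddard demo remarkable is that all of the flows came from a single sending host, which puts enormous pressure on the host’s bus, memory, and interrupt handling, not just on the network.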
We caught up with Tracey Wilson, the DICE program manager, who had this to say about the Data Intensive Computing Environment/Obsidian Strategics SCinet Sandbox project, or DOS3 for short: “In our Sandbox project, we wanted to look at the effects of putting in E-100 InfiniBand extenders with encryption,” said Wilson. “We wanted to test the performance with regular TCP/IP transfers and also RDMA transfers. We worked with Lawrence Livermore, and they allowed us access to almost 100 nodes of Hyperion, a big test cluster there. Our intent was to do a lot of wide-area Lustre testing over the wide-area InfiniBand network. We also worked with BlueArc to attach a Mercury NAS system to an IB gateway and access it remotely, from both the Hyperion site and the other sites we were using across the fabric, using the Obsidian extenders. ESnet is the primary carrier for Hyperion, but we have four other circuits coming from NLR to our other sites. So we’ve been able to get to NASA Goddard and do a span test across the Obsidian on 10G links using NLR. We had a testing slot scheduled from both NLR and ESnet, but we had 10G available through most of the show, thanks to ESnet.”
“Real thanks go to the ESnet guys: Evangelos Chaniotakis, Jon Dugan, and Eli Dart, who were supporting SCinet, as well as Linda Winkler from Argonne, Caroline Weilhammer from the Global NOC, Andrew Lee from NLR, and the rest of the routing team,” said Wilson. “It was so much work getting all the VLANs configured across the show floor, because there were a huge number of demos this year and it was a considerable effort to get all these things configured.”
ESnet’s Jon Dugan will lead a BoF on network measurement at 12:15 p.m. Thursday in rooms 278-279 at SC10. Functional networks are critical to high performance computing, but achieving optimal performance requires measuring networks accurately. Jon will open up the session to discuss ideas in measurement tools such as perfSONAR, emerging standards, and the latest research directions.
It is midafternoon Wednesday at SC10 and the demos are going strong. Jon Dugan supplied an automatically updating graph in psychedelic colors (http://bit.ly/9HUrqL) of the traffic ESnet is carrying over all the circuits we set up. Getting this far required many hours of work from a lot of ESnet folks to accommodate the virtual circuit needs of both ESnet sites and SCinet customers using the OSCARS IDC software. As always, the SCinet team has put in long hours in a volatile environment to deliver a high performance network that meets the needs of the exhibitors.
At 1 p.m. at Berkeley Lab booth 2448, catch ESnet’s Inder Monga’s roundtable discussion on OSCARS virtual circuits. OSCARS, the On-Demand Secure Circuits and Advance Reservation System, allows users to reserve guaranteed bandwidth. Many of the demos at SC10 are being carried by OSCARS virtual circuits, which were developed by ESnet with DOE support. Good things to come: ESnet anticipates the rollout of OSCARS 0.6 in early 2011. Version 0.6 will offer greatly expanded capabilities and versatility, such as a modular architecture enabling easy plug and play of the various functional modules and a flexible path computation engine (PCE) workflow architecture.
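To give a flavor of what a plug-and-play PCE workflow means in practice, here is a toy sketch of the concept: each module takes the network topology and prunes away links that cannot satisfy the request, and the chain of modules can be reordered or extended. This is our illustration of the idea, not actual OSCARS 0.6 code.

```python
# Toy model of a modular path computation engine (PCE) workflow.
# NOT the actual OSCARS 0.6 implementation -- just the concept.

# Topology: link -> available bandwidth in Mbps (toy numbers)
topology = {
    ("chi", "nyc"): 10000,
    ("chi", "atl"): 4000,
    ("atl", "nola"): 9000,
    ("nyc", "nola"): 10000,
}

def bandwidth_pce(topo, request):
    """Prune links without enough spare bandwidth."""
    return {l: bw for l, bw in topo.items()
            if bw >= request["bandwidth_mbps"]}

def policy_pce(topo, request):
    """Prune links the requester is not allowed to use (toy policy)."""
    denied = request.get("denied_links", set())
    return {l: bw for l, bw in topo.items() if l not in denied}

# The "workflow" is an ordered, swappable chain of modules.
pce_chain = [policy_pce, bandwidth_pce]

request = {"bandwidth_mbps": 9000, "denied_links": {("chi", "atl")}}

topo = topology
for module in pce_chain:
    topo = module(topo, request)

print("links still eligible for the circuit:", sorted(topo))
# A final path-finding step would then choose a route over what remains.
```

The appeal of this structure is that a new constraint, say latency or a maintenance window, becomes just another module dropped into the chain.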
Then stick around, because at 2 p.m. Brian Tierney from ESnet will lead a roundtable on the research coming out of the ARRA-funded Advanced Networking Initiative (ANI) testbed.
In 2009, the DOE Office of Science awarded ESnet $62 million in recovery funds to establish ANI, a next-generation 100 Gbps network connecting DOE’s largest unclassified supercomputers, as well as a reconfigurable network testbed for researchers to test new networking concepts and protocols.
Brian will discuss progress on the 100 Gbps network, update you on the several research projects already underway on the testbed, describe the testbed’s capabilities, and explain how to get access. He will also answer your questions on how to submit proposals for the next round of testbed network research.
In the meantime, some celeb-spotting at the LBNL booth at SC10.
It’s Wednesday at 10 a.m. in the SCSD booth, and Rick Wagner is testing simulations of cosmic matter and gases streamed in from Argonne National Lab. Wagner about to run a real time volume-rendering application at Argonne. The application renders data in real time, which will stream the results across a wide area (from Argonne to New Orleans) and display it on the tiled screen in the SDSC booth. To do so, SDSC is using OSCARS, ESnet’s on-demand reservation software to schedule data movement on demand.
Aside from the sheer technical feat of rendering data in real time and streaming massive amounts of it across long distances, on-demand data scheduling enables scientists to be more versatile, easily working with the data as needed. For Wagner and his collaborators, improvements in data streaming are all about new capabilities. “We’ve never had this functionality,” said Wagner. “We want to be able to compare the data sets side by side.”
Wagner will next add variables such as radiation to the images depicting gases and matter from the early moments of the universe. This kind of demo illustrates what ESnet is all about. It is our mission to link scientists to collaborators and their data. But we are always striving for improvements in functionality, so that our end users can be more effective in their research.
By autumn most folks are thinking about the holidays, but for me, fall is filled with thoughts of something different. Since 1998, I’ve had the privilege of working with the SCinet committee of Supercomputing, a.k.a. the International Conference for High Performance Computing, Networking, Storage, and Analysis, if you are not into the whole brevity thing. SCinet is the team of people that builds the local area and wide area network for the Supercomputing conference.
This year’s conference, SC10, is in New Orleans, LA, from November 13 until November 19. Planning for each year’s show starts a few years ahead of time, and not long after one year’s show ends, the serious planning for the next year’s SCinet begins. It’s a cycle that I’ve been through many times now, and it’s a bit like an old friend at this point. Most of the time we enjoy each other’s company immensely, but when things get stressful, we can really irritate each other.
SCinet is a pretty amazing network. After a year of planning, there are three weeks of concentrated effort to set it up. It’s operational for about one week and it takes about two days to tear down. This year we will have 270 Gbps of wide area network connectivity with dedicated circuits to ESnet, Internet2, NLR, and NOAA. We will deliver over 200 network connections to the various booths on the show floor.
As amazing as the network is, the people who build it are even more amazing. They are drawn from universities, national laboratories, network equipment vendors, and nationwide research and education networks. It’s not just Americans, either; people come from several different countries, with strong showings from the Netherlands and Germany most years. Many of these folks are leaders in their areas of expertise, and all of them are bright, capable people. Each of them has given up a fair bit of their own time to participate (while most have some sponsorship from their employers, it’s not unheard of for people to take vacation time to do it).
Why would people give up many evenings and weekends every fall to be a part of SCinet? Because it’s an amazing opportunity to learn about the state of the art in computer networking and to expand your professional network as well. I consider myself extremely fortunate to work with each of the people that make up SCinet.
So what is ESnet doing with SCinet this year? Glad you asked. First off, we are bringing three 10G circuits to the show floor. As of Friday, October 29, all three were up and operational. One of these circuits will be used for general IP traffic, while the other two will carry dynamic circuits managed by the OSCARS IDC software.
These circuits will provide substantial support for various demonstrations by exhibitors, connecting scientific and computational resources at labs and universities to the booths on the show floor.
Finally, ESnet has four people volunteering in various capacities within SCinet. Evangelos Chaniotakis and I will be working with the routing team, which provides IP, IPv6, and IP multicast service, manages the wide area peerings and wide area layer 2 circuits, configures the interfaces that face the booths on the show floor, and works closely with several other teams to provide a highly scalable and reliable network. John Christman is working with the fiber team, building the optical fiber infrastructure to support SCinet (all booth connections are delivered over optical fiber, which allows booths to be connected to the network using the highest-speed interfaces available). Brian Tierney will be working with the SCinet measurement team, collecting network telemetry and using it to provide useful and meaningful visualizations of what’s happening inside SCinet, as well as providing tools and hosts for active network measurements with tools such as iperf, nuttcp, and OWAMP. The measurement data is also made accessible using the perfSONAR suite of tools. The team is also using the SNMP polling software I wrote for ESnet, called ESxSNMP.
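As a rough illustration of what SNMP-based telemetry collection involves (a sketch of the general technique, not of how ESxSNMP itself is implemented), the snippet below polls an interface’s 64-bit input octet counter twice using net-snmp’s standard snmpget tool and derives an average rate. The router hostname and community string are placeholders.

```python
import subprocess
import time

ROUTER = "router.example.net"   # placeholder hostname
COMMUNITY = "public"            # placeholder SNMPv2c community string
OID = "IF-MIB::ifHCInOctets.1"  # 64-bit input octet counter, ifIndex 1

def poll_octets():
    """Read the interface octet counter once via net-snmp's snmpget."""
    out = subprocess.check_output(
        ["snmpget", "-v", "2c", "-c", COMMUNITY, "-Ovq", ROUTER, OID]
    )
    return int(out.decode().split()[0])

# Two polls 30 seconds apart give an average rate over the interval;
# a production poller would loop forever, store each sample in a
# time-series database, and handle counter wraps and timeouts.
first = poll_octets()
time.sleep(30)
second = poll_octets()

bits_per_second = (second - first) * 8 / 30.0
print("average input rate: {:.2f} Gbps".format(bits_per_second / 1e9))
```

Traffic graphs like the ones SCinet publishes are essentially many of these samples strung together over time.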
Important spots to visit:
If you are coming to SC10 this year, be sure to come by the SCinet NOC in booth 3351. I’d be happy to meet anyone who’s read this; feel free to ask for me at the SCinet help desk at the same booth. LBNL (ESnet’s parent organization) is located in booth 2448. Finally, I am hosting a Birds of a Feather (BoF) session on network measurement during the show; the details are here.
And check out the other ESnet demos. You can download a map of ESnet at SC10: SC 2010_floormapFL
LBNL Booth 2448, ESnet roundtable discussions
Inder Monga, Advanced Network Technologies Group, ESnet, will lead a roundtable discussion on: On-demand Secure Circuits and Advance Reservation System (OSCARS), 1-2 p.m., Wednesday, Nov. 17
Many of the demos at SC10 are being carried by OSCARS virtual circuits, developed by ESnet with DOE support. OSCARS enables networks to reserve and schedule virtual circuits that provide bandwidth and service guarantees to support large-scale science collaborations. In the first quarter of 2011, ESnet expects to unveil OSCARS 0.6, which will offer vastly expanded capabilities, such as a modular architecture allowing for easy plug and play of the various functional modules and a flexible path computation engine (PCE) workflow architecture. Adoption of OSCARS has been accelerating, with 2010 seeing deployments at Internet2 and other domestic and international research and education networks. Since last year, ESnet has seen a 30% increase in the use of virtual circuits, and OSCARS virtual circuits now carry over 50% of ESnet’s monthly production traffic. Increased use of virtual circuits was a major factor enabling ESnet to easily handle a nearly 300% rise in traffic from June 2009 to May 2010.
Brian Tierney, Advanced Network Technologies Group, ESnet, will lead a roundtable discussion on: ARRA-funded Advanced Networking Initiative (ANI) Testbed, 2-3 p.m., Wednesday, Nov. 17
The research and education community’s needs for managing and transferring data are exploding in scope and complexity. In 2009, the DOE Office of Science awarded ESnet $62 million in Recovery Act funds to create the Advanced Networking Initiative (ANI). This next-generation, 100 Gbps network will connect DOE’s largest unclassified supercomputers. ANI is also establishing a high performance, reconfigurable network testbed for researchers to experiment with advanced networking concepts and protocols. ESnet has now opened the testbed to researchers, a variety of experiments pushing the boundaries of current network technology are underway, and another round of proposals is in the offing. In January 2011 the testbed will move from Lawrence Berkeley National Laboratory to ESnet’s dark fiber ring on Long Island (LIMAN: Long Island Metropolitan Area Network), and eventually to the 100 Gbps national prototype network ESnet is building to accelerate deployment of 100 Gbps technologies and to provide a platform for the DOE experimental facilities at Oak Ridge National Laboratory and the Magellan resources at the National Energy Research Scientific Computing Center (NERSC) and Argonne National Laboratory.
Are you interested in working on a cutting-edge software system with the team that is rolling out the first 100 Gigabit/second nationwide network?
ESnet is looking for an experienced software developer to work on our dynamic virtual circuit setup and reservation system, OSCARS.
OSCARS is an open source project, and is hosted here. More information on OSCARS can be found at http://www.es.net/oscars/ or https://oscars.es.net/OSCARS/docs/index.html. The ideal candidate has at least three years of experience building Java web services and knows J2EE, CXF, Jetty, and Hibernate. However, anyone with extensive software development skills will be considered.
Interested applicants should submit their resume here.
About ESnet
ESnet is located at Lawrence Berkeley National Laboratory (LBL) and funded by the US Department of Energy. ESnet’s mission is to run the network connecting the US Department of Energy laboratories. All work performed at LBL is unclassified, and there are no citizenship or security clearance requirements. ESnet is a team of 35 enthusiastic, hard-working folks who are passionate about being leaders in high-end networking and enabling scientific discovery. The ability to work on-site in Berkeley is desirable but not a requirement.