As one of the largest facilities in the world devoted to providing computational resources and expertise for basic scientific research, NERSC supports more than 5,000 scientists working on up to 700 projects that encompass a wide range of disciplines, including climate modeling, solar energy, fusion science, astrophysics and bioinformatics.
With scientific data growing rapidly, the organization recognizes the importance of having a centralized “scratch” storage architecture. By selecting a unified scratch file system, NERSC was able to build to the precise levels of performance and capacity required while optimizing configuration costs at every step. This continues NERSC’s strategy of moving away from distributed storage “islands” in favor of “global” storage.
Furthermore, by embedding the file system in the storage controller, DDN’s converged infrastructure approach enabled additional optimization, reducing latency and eliminating a significant number of network connections and servers. As a result, NERSC was able to meet its performance and capacity requirements at a 30 percent lower storage cost than implementing local storage and file systems for each compute platform. Moreover, the facility saved hundreds of thousands of dollars in infrastructure costs by eliminating the need for additional servers, cabling, network switches and adapters.
NERSC, along with other leading research facilities in the U.S., including the Texas Advanced Computing Center (TACC) and Oak Ridge National Laboratory, is a pioneer in adopting site-wide file systems to enable cost savings, faster application burst performance, workflow efficiencies and a much simpler approach to deploying HPC resources.
With DDN storage, NERSC can now ensure that both high-bandwidth and highly transactional applications perform at optimal levels, because the bandwidth available to any one compute platform is the aggregate performance of all the site-wide storage deployed.
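To make the aggregate-bandwidth point concrete, the short Python sketch below contrasts the peak bandwidth a single compute platform can reach under a per-platform “island” scratch model with what the same building blocks deliver when pooled site-wide. The cluster names and GB/s figures are purely illustrative assumptions, not NERSC’s actual systems.

```python
# Illustrative sketch only: hypothetical scratch building blocks and their
# peak bandwidth in GB/s, one per compute platform in the "island" model.
island_scratch_gbps = {"cluster_a": 20, "cluster_b": 20, "cluster_c": 40}

# Island model: a job on cluster_a is capped at its own local scratch system.
island_peak = island_scratch_gbps["cluster_a"]

# Site-wide ("global") model: the same storage is pooled behind one file
# system, so any platform can burst to the aggregate of all deployed storage.
global_peak = sum(island_scratch_gbps.values())

print(f"Island model, peak for one platform:   {island_peak} GB/s")
print(f"Site-wide model, peak for any platform: {global_peak} GB/s")
```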
DDN’s in-storage processing capabilities, which embed the file system inside the controller, enable NERSC to extract maximum performance from its compute investments while minimizing storage costs.
NERSC has also been able to achieve industry-leading performance of 80 GB/s with a minimal number of systems, which in turn has reduced administrative overhead and data center costs.
Additionally, the ability to write temporary data to a central repository for further analysis has enabled NERSC to reduce its local “scratch” storage costs by more than 50 percent.