With the rapid adoption of AI and deep learning (DL) technologies, a new storage workload profile has emerged, characterized by heavy read demand and relatively few re-writes. Growing data lakes, deeper DL models with more hidden layers, larger input data, and the need to share training data sets across multiple users and models have created demand for shared storage that performs far better than HDD-based NAS.
The StorMax NFS series is designed specifically to address this requirement. Compared to HDD NAS storage arrays, AMAX's QLC SSD-based centralized NAS storage repositories provide an 11x increase in both read bandwidth and read IOPS, at a cost point between TLC flash and spinning media. Combined with optional 100GbE or Mellanox EDR InfiniBand connectivity, the arrays deliver fast, remote-attached shared storage for deep learning clusters.
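For readers who want to sanity-check read throughput on a remote-attached share, the sketch below times a simple sequential read over an NFS mount in Python. The mount point and file name are hypothetical placeholders, not part of the StorMax NFS product, and a dedicated benchmarking tool such as fio would give more rigorous bandwidth and IOPS figures.

```python
# Minimal sketch: measure sequential read throughput from an NFS-mounted share.
# The mount point and test file below are hypothetical placeholders.
import os
import time

MOUNT_POINT = "/mnt/stormax"                          # hypothetical NFS mount point
TEST_FILE = os.path.join(MOUNT_POINT, "dataset.bin")  # hypothetical large file on the share
BLOCK_SIZE = 1024 * 1024                              # read in 1 MiB chunks


def measure_read_bandwidth(path: str, block_size: int = BLOCK_SIZE) -> float:
    """Read the file sequentially and return throughput in MB/s."""
    total_bytes = 0
    start = time.perf_counter()
    # buffering=0 avoids Python-level buffering; OS page cache may still apply.
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total_bytes += len(chunk)
    elapsed = time.perf_counter() - start
    return total_bytes / elapsed / 1e6


if __name__ == "__main__":
    mbps = measure_read_bandwidth(TEST_FILE)
    print(f"Sequential read throughput: {mbps:.1f} MB/s")
```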
The StorMax NFS line is available in storage array capacities ranging from 15TB to 184TB per node. With 33% more bits per cell than TLC, the Micron 5210 ION SSD-based StorMax NFS storage solution offers a compelling way to reduce storage costs for read-intensive workloads while maintaining the read performance of traditional TLC-based SSD storage arrays. The solution helps shrink the gap between high-performance all-flash and traditional HDD-based storage solutions.
"Micron has a strong focus on innovation and delivering leading-edge memory products for emerging areas such as artificial intelligence and deep learning," said Roger Peene, vice president of product planning and strategy for the Storage Business Unit at Micron Technology. "Through our collaboration with AMAX on the StorMax NFS solution, Micron aims to help companies develop AI applications faster and more effectively by reducing data transfer times, which breaks down traditional deep learning bottlenecks."
"As AI continues to drive business transformation, the demand for high-performance, low-cost, all-flash storage solutions is constantly increasing," says Rene Meyer, vice president of technology at AMAX. "We are excited to introduce StorMax NFS Solution to address this market need and enable organizations to streamline and accelerate deep learning developments through cost-efficient, fast and remote attached shared storage."