
PNY boosts AI development with 3S-2400 storage server


PNY enhances NVIDIA DGX-1 with the storage capacity needed for innovative deep learning projects

Mérignac (France), April 2nd 2020 – PNY Technologies, the leading European solutions provider for the AI market, has launched the 3S-2400 AI storage server: an affordable flash storage server dedicated to Artificial Intelligence (AI) development, designed to maximise the performance of NVIDIA’s DGX-1 supercomputer.

AI has gone mainstream as organisations race to develop machine learning solutions for automation and innovation. However, the quantities of data needed to properly train these artificial neural networks are immense. Organisations need substantial processing power and storage capacity to bring their AI projects to fruition.

Using eight powerful GPUs, DGX-1 delivers four times faster training than other GPU-based systems, but it is not designed to store data. As organisations scale up their AI training projects, they will need supplementary storage solutions to avoid disruption and delays.

To address the needs of this emerging AI market, PNY worked with Mark Klarzynski, a longstanding storage expert and a pioneer of the Software Defined Storage movement and the All Flash Array concept. Mark’s technology is still relied upon by many of the industry’s leading storage solutions, and he fully embraced PNY’s vision of a more AI-focused solution.

“Storage goes through cycles: we tend to incrementally improve existing technologies, increasing their speed and performance, but core limitations remain. There comes a point when the entire solution needs a reboot, a complete ground-up redesign. Working with PNY and the newer GPU servers, it was clear that today’s flash is simply faster than the traditional storage controller can handle, so the controller becomes an expensive bottleneck. Removing it was the first step in a new generation of storage,” commented Mark Klarzynski.

“AI does not need snapshots, deduplication or many of the other features standard office-focused storage requires. It needs ultra-low latency and tremendous bandwidth, at a price that does not impact the investment in GPUs,” added Mark.

“As the complexity of AI solutions increases, so will storage requirements,” said Jérôme Belan, CEO at PNY. “The PNY 3S-2400 server delivers this capacity, and makes it fast and easy to scale up as your dataset grows. It avoids costly delays and ensures AI projects don’t become bottlenecked.”


NVIDIA’s DGX-1 connects easily to the PNY 3S-2400 server. The 3S-2400’s remote direct memory access (RDMA) technology allows the DGX-1’s GPUs to access the flash drives directly, ensuring ultra-low-latency performance and keeping the GPUs constantly supplied with data. The PNY 3S-2400 server is enhanced by a layer of software-defined storage provided by Excelero’s NVMesh, which improves performance and offers users an intuitive interface where they can easily monitor drives and manage their volumes.
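NVMesh ships with its own client tooling, and the press release does not document the exact connection procedure. As a generic illustration only, this is roughly how a Linux host (such as a DGX-1) attaches RDMA-connected NVMe flash using the standard open-source nvme-cli utility and NVMe over Fabrics; the target address and NQN shown are placeholders, not values from PNY or Excelero.

```shell
# Illustrative sketch: surfacing remote NVMe flash over an RDMA fabric
# with standard NVMe-oF tooling. Addresses and NQN are hypothetical.

# Discover NVMe subsystems exported by the storage server over RDMA
nvme discover -t rdma -a 10.0.0.5 -s 4420

# Connect to a discovered subsystem; the remote flash then appears
# locally as an ordinary /dev/nvmeXnY block device
nvme connect -t rdma -n nqn.2020-04.com.example:storage-pool -a 10.0.0.5 -s 4420

# Verify the remote namespaces are now visible to the host
nvme list
```

Because the data path is RDMA, reads and writes bypass the remote CPU and traditional storage controller, which is the bottleneck-removal idea described above.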

“By building upon Excelero’s NVMesh, we have created a solution that delivers the highest performance per rack unit in the market. Working with PNY’s customers clearly highlighted the benefit of Elastic NVMe, providing impressive improvements in training times,” said Lior Gal, CEO at Excelero. “By leveraging the latest generation of Intel hardware, we’re able to provide our customers with the performance and agility they need to succeed.”

The PNY 3S-2400 server provides more reliable storage and enhances performance for AI projects. It enables developers, data scientists and researchers to easily scale their AI projects without an expensive enterprise storage investment. In addition, they can train their AI and deep learning (DL) models using the same dataset stored on a single storage server. It is ideal for professionals working at innovative startups as well as established organisations beginning their own AI projects.


For more information, visit:


Follow us:

Follow @PNYEMEA on Twitter and join our corporate community on LinkedIn.