
Inspur Launches GPU Deep Learning Appliance

  Salt Lake City, November 16, 2016 – Inspur today announced the Inspur D1000 deep learning appliance at SC16. The D1000 is a total HPC solution built on NVIDIA® Tesla™ GPU high-performance computing cluster technology and runs the Caffe-MPI parallel deep learning framework. The D1000 greatly enhances deep learning capability for artificial intelligence applications such as facial recognition, image classification, and object recognition.


  Inspur also presented its 6-node design for the D1000 deep learning appliance, developed specifically for deep learning GPU servers. Each node is configured with two CPUs and four Tesla M40 GPUs.

  “The D1000 solution yields much stronger performance than comparable offerings; it is able to meet most customers' needs for scalability and for implementing deep learning solutions,” said Jay Zhang, Inspur Vice GM of ORH and Vice GM of American Region. “Inspur is committed to building a strong ecosystem for practical applications by partnering with industry leaders like NVIDIA to apply the latest technology in practical solutions.”


  Jay Zhang, Inspur Vice GM of ORH and Vice GM of American Region

  Caffe-MPI is an open-source, clustered version of Caffe developed by Inspur, which enables Caffe, the industry’s leading deep learning framework, to perform efficient multi-node parallel training. Caffe-MPI not only achieves better computational efficiency than standalone multi-GPU solutions, but also supports distributed cluster expansion. With 6 nodes, 24 Tesla M40 GPUs, and Caffe-MPI, the D1000 can process 2,000 images per second when training GoogLeNet, bringing the network to 78% accuracy in as little as 18 hours; accuracy improves further with additional training time. Caffe-MPI also scales well, achieving a node-expansion efficiency of 72%. Caffe-MPI fully retains the user-friendly characteristics of the original Caffe architecture: a pure C++/CUDA core with support for the command line, Python, MATLAB, and other interfaces.
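  The multi-node parallel training described above is, at its core, synchronous data-parallel SGD: each node computes a gradient on its own shard of the data, the gradients are averaged across nodes with an MPI allreduce, and every node applies the same update. The press release does not publish Caffe-MPI's internals, so the sketch below is a minimal single-process simulation of that pattern, with `allreduce_mean` standing in for an `MPI_Allreduce` averaging operation (all function and variable names here are illustrative, not from Caffe-MPI):

```python
import numpy as np

rng = np.random.default_rng(0)

def local_gradient(w, shard):
    # Gradient of the local objective f(w) = mean ||w - x||^2 over one data shard.
    return 2.0 * (w - shard.mean(axis=0))

def allreduce_mean(local_values):
    # Stand-in for MPI_Allreduce with an averaging op: after this call,
    # every worker holds the mean of all workers' local gradients.
    return np.mean(local_values, axis=0)

# Four simulated "workers", each holding its own equally sized data shard.
shards = [rng.normal(loc=3.0, size=(256, 2)) for _ in range(4)]
w = np.zeros(2)

for step in range(200):
    grads = [local_gradient(w, s) for s in shards]  # computed independently per node
    g = allreduce_mean(grads)                       # one synchronization per step
    w -= 0.05 * g                                   # identical update on every node

# Because shards are equal-sized, the averaged gradient equals the global
# gradient, so w converges to the mean of all data across all shards.
```

In a real MPI deployment each shard lives on a different node and `allreduce_mean` is a collective call over the interconnect; the 72% node-expansion efficiency cited above reflects how much of that per-step synchronization cost overlaps with computation.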

  “Inspur provides customers with out-of-the-box deep learning solutions and consistent service from beginning to end,” said Mr. Zhang. The Inspur D1000 simplifies product deployment by integrating Inspur’s optimized high-performance computing cluster hardware, the Caffe-MPI parallel computing framework and its dependency libraries, a fully tested OS and CUDA environment, and Inspur ClusterEngine, a cluster management and scheduling platform. Hardware and software are integrated, installed, and configured on the production line before the D1000 ships.