We'd like to cordially invite you to join Inspur at NeurIPS 2019, from December 8 to 14, for a discussion on AI innovations!
The annual Neural Information Processing Systems (NeurIPS) meeting fosters the exchange of research on neural information processing systems in their biological, technological, mathematical, and theoretical aspects. Its core focus is peer-reviewed novel research, presented and discussed in the general session alongside invited talks by leaders in the field.
Inspur is honored to be one of the Platinum Sponsors this year. We will showcase our latest AI technologies and share our insights into AI innovation.
Showcase highlights at a glance:
● Designed for your training applications: Extreme High-Density AI Server
AGX-2 – 8 NVLink-empowered V100 GPUs in a 2U compact form factor
● Designed for your inference applications: flexible configurations to match your exact demands
NF5468M5 – A popular inference server with 16 T4 GPUs in 4U
FPGA F10A – A half-height, half-length FPGA card with the highest functional density and best performance in the industry, supporting development in OpenCL
FPGA F37X – The world's first FPGA AI accelerator card with integrated on-chip HBM2, delivering 28.1 INT8 TOPS, 460 GB/s bandwidth, and power consumption under 75 W for typical AI applications
● AI PaaS platform & AutoML Suite
AI Station – A complete deep learning cluster management software suite that enables simple, flexible deep learning and makes it easy to train and combine popular model types across multiple GPUs and servers (see the training sketch after this list)
AutoML Suite – The world's first highly parallel AutoML extension, supporting both on-premise and cloud deployment and enabling non-professionals to build network models with minimal operations
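As a rough illustration of the kind of workload a cluster manager like AI Station schedules, the sketch below shows a generic multi-GPU data-parallel training job in PyTorch. It is a minimal, hypothetical example (toy model, random data, standard torch.distributed APIs) and does not represent AI Station's own interfaces.

```python
# Minimal multi-GPU data-parallel training sketch (generic PyTorch).
# Assumption: this only illustrates the kind of distributed job a platform
# such as AI Station launches; it is not AI Station's API. Requires CUDA GPUs.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def train(rank, world_size):
    # One process per GPU; a scheduler would normally set these variables.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = torch.nn.Linear(1024, 10).cuda(rank)   # toy model
    model = DDP(model, device_ids=[rank])          # synchronizes gradients across GPUs
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(10):                            # toy training loop on random data
        x = torch.randn(32, 1024, device=f"cuda:{rank}")
        y = torch.randint(0, 10, (32,), device=f"cuda:{rank}")
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()                            # all-reduce of gradients happens here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(train, args=(world_size,), nprocs=world_size)
```

Scaling the same job across multiple servers only changes how the processes are launched and addressed; the training loop itself stays the same, which is the part a cluster manager automates.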
Stop by Inspur booth #710 to learn more about how Inspur can empower your AI business and to win some exciting raffle prizes.
Please let us know which topics you are interested in, or whether you would like to talk with our experts; we will make the arrangements and send over a calendar invitation right away.
We look forward to meeting you at NeurIPS 2019!