
This year, meet Inspur @ISC2020 Digital!

ISC High Performance 2020 will bring together a community devoted to the relentless improvement of the technologies and products that will drive our future. We cordially invite you to join Inspur at ISC2020, June 22–25, for a discussion of HPC and AI innovations.

Join us at the digital event!


Key Sessions

  • Monday, June 22nd, 6:00pm–9:10pm, Vendor Showdown: "Inspur: Leading AI & HPC Solution Overview"
  • Wednesday, June 24th, 12:00am, Exhibitor Forum: "Inspur: Full-Stack AI & HPC Product Ecosystem"

Product Showcase

Powerful AI&HPC Computing Platform

As a world-leading AI computing provider, Inspur offers a broad range of cutting-edge computing platforms to power some of the most challenging AI supercomputing tasks the world faces today.


Unprecedented System for the Next-Generation Datacenter

  • 8 NVSwitch-empowered NVIDIA A100 GPUs in 4U: a true generational leap, integrating datacenter-class computing performance and feature sets
  • Features leading AI computing technology and a mature ecosystem, offering unprecedented computing performance, flexibility and scalability
  • The ultimate choice for the most demanding AI and HPC workloads, including AI training and inference, scientific computing and more


Industry's First 4U 8-GPU NVSwitch-Empowered AI Server

  • 8 NVSwitch-empowered NVIDIA V100 GPUs in 4U, delivering 1 petaFLOPS of superb performance
  • Ideal for NLP model training, enabling faster AI training at lower cost than multiple servers with fewer GPUs each
  • Designed for a wide range of deep learning and HPC applications
Learn More


Extreme High-Density AI Server

  • 8 NVLink-empowered NVIDIA V100 GPUs in a compact 2U design, delivering uncompromising performance at maximum density
  • Optional liquid cooling makes it easy to deploy in green datacenters with lower PUE
  • Powers a variety of AI & HPC applications with flexible configurations
Learn More


Elastic AI Cloud Server

  • Up to 20 AI accelerators, including GPUs, FPGAs and ASICs, in one server, turbocharging AI inference for real-time insight
  • Up to 8 NVLink V100 or dual-slot PCIe GPUs, supercharging large-scale AI training
  • Combines high performance with up to 384 TB of internal storage
  • Drives a broad range of applications, from AI cloud and telecom to healthcare, with flexible topologies
Learn More


2U-4Node High Density Compute Server

  • Accommodates four 1U 2-socket nodes in a compact 2U chassis, delivering high performance and workload capability
  • Easy to maintain and manage via CMC and BMC; supports both IPMI 2.0 and Redfish
  • Ideal for hyperscale infrastructure, cloud computing and HPC
Learn More


4U-8Node High Density Compute Server

  • Accommodates eight high-density compute nodes in a 4U chassis, delivering high performance and workload capability
  • Shared power, cooling and in-chassis management provide outstanding efficiency, flexibility and agility
  • Flexible configuration and deployment optimized for AI, HPC and NFV
Learn More


Groundbreaking OAI UBB System

  • The world's first OAM AI system supporting different types of ASIC mezzanine cards from multiple manufacturers
  • 21-inch rack-scale OAM solution providing scale-up via simplified inter-module communication and scale-out via high-speed I/O bandwidth
  • Supports disparate network architectures through OAM direct connect, delivering increased efficiency, flexibility and manageability
  • Features two OCP interconnect topologies: Hybrid Cube Mesh (HCM) and Fully Connected (FC)


Distributed Storage defined by capacity on demand, performance on demand and service on demand

  • Convergent architecture: provides five storage services (File, Object, Block, Big Data and Database) and supports multiple sharing protocols such as NAS, S3, Swift, iSCSI and a private protocol
  • Excellent performance: software features such as object aggregation, global read-write cache, storage-pool separation, multi-channel acceleration and AIOps improve performance by up to 300%
  • Superior reliability: supports multiple data-redundancy mechanisms such as replication and erasure coding, with rebuild speeds up to 4 TB/h and overall system reliability of 99.9999%
Learn More
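To put the redundancy and rebuild figures above in context, here is a back-of-the-envelope sketch of what replication versus erasure coding costs in raw capacity, and what the quoted 4 TB/h rebuild speed means for a failed drive. The 3-way replication and 8+2 erasure-code parameters, and the 16 TB drive size, are illustrative assumptions, not Inspur's published defaults.

```python
# Illustrative capacity/rebuild math for the redundancy schemes named above.
# Replication factor, erasure-code layout and drive size are assumptions;
# only the 4 TB/h rebuild speed comes from the product description.

def storage_overhead_replication(copies: int) -> float:
    """Raw capacity consumed per byte of usable data with N-way replication."""
    return float(copies)

def storage_overhead_erasure(data_blocks: int, parity_blocks: int) -> float:
    """Raw capacity consumed per byte of usable data with k+m erasure coding."""
    return (data_blocks + parity_blocks) / data_blocks

def rebuild_hours(drive_tb: float, rebuild_tb_per_hour: float = 4.0) -> float:
    """Time to rebuild a failed drive at the quoted rebuild speed."""
    return drive_tb / rebuild_tb_per_hour

print(storage_overhead_replication(3))   # 3.0x raw capacity for 3 replicas
print(storage_overhead_erasure(8, 2))    # 1.25x raw capacity for 8+2 EC
print(rebuild_hours(16))                 # 4.0 hours for a 16 TB drive
```

The same usable capacity therefore costs 3x in raw flash under 3-way replication but only 1.25x under 8+2 erasure coding, which is why both mechanisms are typically offered side by side.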


Brand-new all-flash arrays with excellent performance and stable latency for enterprise applications

  • Excellent performance: flash-optimized design unleashes the full potential of SSDs, with linear scalability to 16 controllers, more than 8 million random-write IOPS and latency under 0.5 ms
  • Excellent efficiency: up to 15 TB per SSD for improved storage density, intelligent distributed RAID, and real-time inline compression delivering up to 80% (5:1) data reduction
  • Superior reliability: gateway-free InMetro storage with system reliability of 99.9999%
Learn More
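The 5:1 (80%) data-reduction figure above translates directly into effective capacity. A minimal sketch, assuming a hypothetical 24-drive configuration (the 15 TB-per-SSD maximum is from the product description; the drive count is not a specific Inspur SKU):

```python
# Effective-capacity math for the quoted 5:1 inline compression ratio.
# The 24-drive configuration is a hypothetical example; RAID and spare
# overheads are ignored for simplicity.

def effective_capacity_tb(num_ssds: int, tb_per_ssd: float = 15.0,
                          reduction_ratio: float = 5.0) -> float:
    """Usable logical capacity after data reduction."""
    return num_ssds * tb_per_ssd * reduction_ratio

raw = 24 * 15.0
print(raw)                        # 360.0 TB of raw flash
print(effective_capacity_tb(24))  # 1800.0 TB of logical capacity at 5:1
```

Note that a 5:1 ratio and an 80% reduction are the same claim stated two ways: 1 - 1/5 = 0.8.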


Industry's First FPGA Accelerator Card with On-Chip HBM2

  • Delivers 28.1 TOPS@INT8 of superior performance with low latency in a full-height, half-length form factor
  • Features 8 GB of integrated on-chip HBM2, offering 460 GB/s of ultra-high bandwidth
  • Supports C/C++, OpenCL and RTL, enabling flexible development and migration of AI algorithms and applications
  • Accelerates AI applications including inference, video transcoding, image recognition, natural language processing, genome sequencing analysis and more
Learn More
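The two headline figures above can be related with a generic roofline-style calculation: how many INT8 operations the card must perform per byte fetched from HBM2 to remain compute-bound rather than bandwidth-bound. This is simple arithmetic on the quoted numbers, not a vendor benchmark.

```python
# Roofline-style arithmetic-intensity check on the quoted specs:
# 28.1 TOPS of INT8 compute fed by 460 GB/s of on-chip HBM2 bandwidth.

peak_tops = 28.1   # INT8 operations per second, in trillions (quoted)
hbm2_gbps = 460.0  # HBM2 bandwidth in GB/s (quoted)

# Operations needed per byte fetched for compute, not memory, to be the limit.
ops_per_byte = (peak_tops * 1e12) / (hbm2_gbps * 1e9)
print(round(ops_per_byte, 1))  # ~61.1 INT8 ops per byte
```

Workloads below roughly 61 operations per byte would be limited by memory bandwidth rather than peak TOPS, which is why the on-chip HBM2 matters for inference.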


Extreme Density FPGA Accelerator Card

  • Powered by the Intel Arria 10 chip, delivering 1.366 TFLOPS of superb performance with low latency
  • Supports the OpenCL framework, dramatically improving AI development efficiency with a mature ecosystem
  • Ideal for compute-intensive applications such as AI inference, data compression, image encoding, video transcoding and more
Learn More

AI&HPC Management Platform

Drawing on its extensive experience in AI and HPC, Inspur offers a series of agile management tools, powering AI and HPC applications from development to production.


Agile AI Development Platform

    Inspur AIStation is designed to provide a complete AI development software stack, unified management of AI computing resources, and a simplified path for moving AI models from development to production. It has been adopted by users across a wide range of industries, accelerating their AI transformation.

    Key Features:

  • Unified Management of Computing Resources and Flexible Scheduling
  • Shared Data Under Unified Management and Speedup for System Cache
  • Efficient AI Model Development and Production Process
Learn More


Efficient HPC Cluster Management Tool

    Inspur ClusterEngine offers integrated management of HPC clusters, including hardware monitoring, job scheduling, and HPC and Big Data application management. It has been widely adopted to improve resource utilization across the entire HPC system from a single overview dashboard.

    Key Features:

  • Integrated Management of Four Modules
  • Holistic Visualization of Cluster
  • Deep Integration of Big Data and HPC Applications
Learn More


AI Application Profiling and Performance Tuning Tool

    Inspur T-Eye is a management tool that analyzes the hardware and system-resource performance characteristics of AI applications running on GPU clusters, revealing their runtime behavior, hotspots and bottlenecks.

    Key Features:

  • Runtime Performance Monitoring
  • Identify Critical Index with Radar Chart
  • Comparison Analysis to Facilitate Optimization
Learn More