Join Inspur at SC 2022

Inspur is back for the SC22 in-person event at the Kay Bailey Hutchison Convention Center in Dallas, TX!

SC is an annual conference established in 1988 by the Association for Computing Machinery and the IEEE Computer Society, and has grown into the premier international conference for High Performance Computing, Networking, Storage, and Analysis.

Don’t miss out! Save your spot now and join us with a FREE PASS for the event. We look forward to connecting with you, our developer community, and kicking off the event in Dallas.

Join us at booth #2233 from November 14-18, 2022 to learn about Inspur’s latest innovations:

Empower challenging AI supercomputing tasks. As the world’s leading AI computing provider, Inspur offers a broad range of high-performance AI training and inferencing platforms and end-to-end cooling solutions.

Take AI and HPC applications from development to production. Inspur’s agile resource platforms provide full support for AI business processes and offer HPC resource scheduling and distribution in supercomputing business scenarios.

Hear from Inspur AI experts. Don’t miss the chance to talk with Inspur AI developers and computer scientists face-to-face to explore the latest technologies and business breakthroughs!

Discover the next-generation computing platforms. Inspur’s latest platforms are built to handle the most demanding AI computing tasks like trillion-parameter Transformer model training, massive recommender systems, and AI+Graphics workloads. They can also support both air and liquid cooling with optimized PUE. Come check them out along with the latest products from Inspur.


Product Showcase

Powerful AI&HPC Computing Platform

As the world’s leading AI computing provider, Inspur offers a broad range of cutting-edge computing platforms to empower some of the most challenging AI supercomputing tasks the world is facing today.

  • High Performance AI Training & Inference Platforms

  • Next-Generation Computing Platforms

  • End-to-End Liquid Cooling Solutions


Highly Flexible AI Server

  • Supports up to 20x PCIe external plug-in cards in 4U space
  • Compatible with mainstream AI accelerator cards
  • One-key balance/common/cascade topologies switching for flexible AI applications
  • Multi-host function for efficient interconnection between multiple computing nodes and storage nodes
  • For scenarios such as Internet AI public cloud, enterprise-level AI cloud platform, video codec, etc.
Learn More


Versatile Advanced AI Server

  • 8x GPUs fully interconnected in 4U space
  • Full-link optimization for PCIe 4.0, doubling the bandwidth among CPUs, GPUs and NICs
  • Ranked top in single-server performance in the MLPerf v0.7 benchmark
  • Ideal for AI scenarios consisting of intelligent image, video and voice processing, financial analysis and virtual assistant
Learn More


2U 2-Socket General Purpose Open Compute Server

  • Supports 4x GPUs for high quality and performance in a 2U space
  • Dual-socket rackmount server optimized for AI applications
  • Superior scalability, with an optimized cooling design and modular system architecture
  • Suitable for a wide spectrum of demanding AI applications
Learn More


High-Density Compute Server with AMD EPYC™ 9004 Series Processors

Supports scenarios such as virtualization, high-performance computing and big data

  • 2x AMD EPYC™ 9004 Series Processors in 1U
  • Up to 96 cores, TDP up to 400W, 384MB L3 cache
  • Memory frequency up to 4800 MHz greatly improves operation efficiency
  • Front storage supports 2x E1.S, delivering greater storage capacity and easier maintenance
  • Modular design with tool-free disassembly and maintenance


Compute and Storage Server with AMD EPYC™ 9004 Series Processors

Supports scenarios such as virtualization, high-performance computing and big data

  • 2x AMD EPYC™ 9004 Series Processors in 2U
  • Flexible PCIe expansions with OCP 3.0 enabling both on-premise and cloud deployment
  • Up to 192 cores and 384MB L3 cache
  • Up to 400TB storage and a maximum of 4 dual-width GPUs
  • 24 x E3.S delivers high flash storage density and power efficiency
  • Monitoring system via BMC ensures stable operation


Rackmount Server Optimized for Data Centers

  • Non-onboard PCIe slot design satisfies more non-PCIe I/O customizations; supports up to 2x hot-pluggable OCP cards, freeing up more I/O resources and ensuring uninterrupted network services
  • Unique front I/O design isolates the cold and hot aisles in the data center, which extends the lifecycle of heat-sensitive components, facilitates maintenance, and effectively solves data center wiring problems
  • Centralized power supply modules and CRPS power supplies can be switched freely
  • Supports both stand-alone node and rack-scale delivery, ensuring hassle-free deployment
  • The reserved liquid cooling solution minimizes data center PUE; an air cooling scheme is available as needed
Learn More


The First Liquid-cooling OAM AI Server

  • Adopts the advanced system architecture of a global MLPerf AI benchmark champion
  • Supports 8x 500W liquid-cooling OCP Accelerator Modules (OAM) with full interconnection between any two OAMs and bidirectional P2P bandwidth of up to 896 GB/s
  • Can be deployed as a 40kW rack system with high-density AI computing capacity and a PUE of less than 1.1
Learn More

AI&HPC Management Platform

Based on its diverse experiences in AI and HPC, Inspur offers a series of agile management tools, powering AI and HPC applications from development to production.
