Inspur Releases MX1, the First OAM System Supporting Multiple AI Chips, at SC19
1. Inspur released MX1, the first OAM AI system that supports different types of AI chips from multiple manufacturers
2. MX1 supports a variety of OAM (OCP Accelerator Module)-compliant AI chips on a single AI server
Denver, Colorado, Nov. 21 - At SC19 in Denver, Inspur launched the MX1 AI System which supports a variety of OAM (OCP Accelerator Module)-compliant AI chips on a single AI server, and is also the first OAM AI system that supports different types of AI chips from multiple manufacturers.
As data center users demand ever more AI computing performance, hundreds of companies around the world have invested in the R&D and production of new AI chips, and AI chip diversification has become an increasingly prevalent trend. However, because manufacturers adopt different ASIC solutions in AI development, AI accelerators are mutually incompatible in their interfaces, interconnects, and protocols. As a result, data center users face major obstacles managing heterogeneous hardware and toolkits in their AI infrastructure.
Inspur is committed to promoting the establishment of specifications in the AI industry, and hopes to advance AI chips and technologies through an open, common specification for AI infrastructure. This vision is highly consistent with that of OCP, the global open computing community. As the cornerstone of next-generation hyperscale accelerated computing platforms, the OAM standard established by the OCP community defines a unified interface for AI accelerators that supports multiple architectures such as ASIC, GPU, and FPGA, and provides innovative designs in physical form factor, power supply, connectors, pin definitions, and system architecture.
Inspur actively participates in the development of the OAM specification and took the lead in designing and developing the MX1, the world's first OAM-compliant open AI acceleration system. MX1 incorporates high-bandwidth interconnects and dual power supply schemes, and is compatible with a wide variety of OAM-compliant AI accelerators. MX1 features a total interconnect bandwidth of up to 224Gbps and provides two interconnect topologies, fully-connected and Hybrid Cube Mesh (HCM), so that users can flexibly design inter-chip interconnection schemes according to the communication needs of different neural network models. MX1 offers two independent power supply schemes, 12V and 54V, with maximum power of 300W and 450W-500W respectively, supporting a range of high-power AI accelerators. The single-node design of MX1 supports eight AI accelerators and scales up to 32 accelerators via high-speed interconnect extensions to accommodate the computing needs of ultra-large-scale deep neural network models.
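The trade-off between the two topologies above can be illustrated by counting direct links. This is a minimal sketch, not MX1's actual wiring: it assumes a fully-connected mesh of 8 modules versus a common DGX-1-style 8-node hybrid cube mesh (two fully connected quads plus cross links between corresponding nodes).

```python
from itertools import combinations

def fully_connected_links(n):
    # Every pair of accelerators gets a direct link: n*(n-1)/2 links total.
    return list(combinations(range(n), 2))

def hybrid_cube_mesh_links():
    # One common 8-node HCM layout (assumed here for illustration):
    # two fully connected quads, plus one link joining corresponding nodes.
    links = list(combinations(range(4), 2))        # quad A: nodes 0-3
    links += list(combinations(range(4, 8), 2))    # quad B: nodes 4-7
    links += [(i, i + 4) for i in range(4)]        # cross links between quads
    return links

print(len(fully_connected_links(8)))   # 28 links, 7 per accelerator
print(len(hybrid_cube_mesh_links()))   # 16 links, 4 per accelerator
```

A fully-connected mesh minimizes hop count at the cost of more links per module, while HCM uses fewer links per module, which is why the choice depends on a model's communication pattern.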
Inspur is a leading AI computing solutions vendor and the world's largest GPU AI server supplier, with a more than 50% share of the AI server market in China. Working closely with leading AI companies on systems and applications, Inspur helps them achieve significant performance gains in NLP, image recognition, video analysis, search recommendation algorithms, intelligent networking, and more. Inspur shares AI computing resources and algorithms with industrial partners to accelerate their adoption of AI.
Inspur is a leading provider of data center infrastructure, cloud computing, and AI solutions, ranking among the world's top 3 server manufacturers. Through engineering and innovation, Inspur delivers cutting-edge computing hardware design and extensive product offerings to address key technology areas such as open computing, cloud data centers, AI, and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges.