BEIJING — Chindata Group announced an upgrade to its integrated data center architecture for AI workloads at the 2025 China Data Center Code Summit. The latest iteration, AI Data Center Total Solution NEXT, builds on the company’s original AI Data Center Total Solution and the 2.0 upgrade released in 2024, reflecting Chindata’s continued evolution in power systems, cooling technologies, modular construction and intelligent operations as AI deployments scale.

The architecture was unveiled at the Power-Compute Convergence and Product Innovation Forum, co-hosted by Chindata and the China Data Center Committee (CDCC). The forum brought together cloud service providers, energy storage companies and infrastructure vendors to explore how data center design, power systems and operational models must adapt to the rapid growth of AI training and inference workloads.
As AI models continue to grow in scale and complexity, data center operators are facing unprecedented challenges, including sharply rising rack power densities, accelerating demand for energy and cooling resources, and increasingly volatile workload patterns. To address these pressures, Chindata has worked closely with ecosystem partners including MORONG Electric, Envicool, Shenling Environmental and Delta Group to further integrate facility construction, power supply, cooling systems and operational management within a single, coordinated architecture. The objective is to make hyperscale GPU environments easier to deploy, operate and expand with predictable performance and efficiency.
As part of the upgrade, Chindata has strengthened the integration of energy, infrastructure and operational resources across the data center lifecycle. The architecture incorporates renewable power, IT-grade energy storage solutions and wastewater recovery systems to enhance on-site resource resilience and reduce reliance on external utilities. By coordinating energy supply and facility planning from the outset, the solution helps mitigate constraints related to energy availability and site selection, providing a more sustainable foundation for gigawatt-scale AI infrastructure developments.
The upgraded architecture is organised around a set of tightly coordinated system modules spanning power delivery, cooling, building design and operations. The "X-Power" power system is designed to support a wide range of AI deployment scenarios, from edge environments primarily used for inference at approximately 12 kW per rack, to hyperscale GPU clusters for large-scale AI training workloads requiring up to 150 kW per rack. Built on an 800-volt high-voltage DC electrical architecture, X-Power combines layered energy storage with high-efficiency power distribution to improve overall electrical efficiency and ease power supply bottlenecks in dense AI clusters.
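As a rough illustration of why a higher distribution voltage eases power bottlenecks in dense racks (the figures below are generic physics, not Chindata-specific data): for a fixed rack power, current scales as I = P / V, and resistive conduction loss scales with I², so doubling the voltage quarters the loss in the same conductors.

```python
def rack_current_amps(power_kw: float, voltage_v: float) -> float:
    """Current drawn by a rack at a given distribution voltage (I = P / V)."""
    return power_kw * 1000 / voltage_v

# A hypothetical 150 kW AI training rack, as in the upper bound quoted above:
print(rack_current_amps(150, 800))  # 800 V DC distribution -> 187.5 A
print(rack_current_amps(150, 400))  # lower-voltage comparison -> 375.0 A
```

Halving the current for the same delivered power reduces I²R conduction loss by a factor of four, which is the basic motivation for high-voltage DC architectures at these rack densities.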
Cooling capabilities have been expanded through the X-Cooling system, which includes air-cooled, liquid-cooled and hybrid configurations capable of supporting rack power densities of 200 kW and above. Highly prefabricated and modular designs shorten on-site delivery timelines, while self-developed control systems dynamically optimise cooling performance based on real-time operating conditions, reducing overall energy consumption.
Facility construction is based on a highly flexible modular building model with standardised load ratings, structural spans and floor heights. Using a building-block assembly approach, data centers can be rapidly adapted to different regions and site conditions while maintaining consistent build quality. This approach improves on-site installation efficiency by approximately 50%, supporting faster and more repeatable delivery at scale.
Operational capabilities are supported by the "Kunpeng" intelligent operations and management platform, which integrates locally deployed AI models with site-wide monitoring systems to enable automated energy optimisation, precise fault localisation and streamlined alarm handling. The platform is designed to evolve toward a closed operational loop linking prediction, decision-making, execution and feedback, supporting more proactive, autonomous and energy-aware operations over time.
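The closed operational loop described above, linking prediction, decision-making, execution and feedback, can be sketched in miniature. This is purely illustrative: every class name, method and threshold below is an assumption for exposition, not part of the Kunpeng platform's actual API.

```python
class CoolingLoopSketch:
    """Illustrative prediction -> decision -> execution -> feedback cycle.

    All names and constants here are hypothetical; they do not reflect
    Chindata's Kunpeng platform internals.
    """

    def __init__(self, setpoint_c: float = 27.0):
        self.setpoint_c = setpoint_c
        self.fan_speed = 0.5  # normalized actuator command, 0..1

    def predict(self, history: list) -> float:
        # Naive forecast: next inlet temperature = mean of recent readings.
        return sum(history) / len(history)

    def decide(self, predicted_c: float) -> float:
        # Proportional adjustment of fan speed toward the setpoint.
        error = predicted_c - self.setpoint_c
        return max(0.0, min(1.0, self.fan_speed + 0.1 * error))

    def execute(self, new_speed: float) -> None:
        self.fan_speed = new_speed  # stand-in for a real actuator call

    def feedback(self, measured_c: float, history: list) -> None:
        history.append(measured_c)  # measurement re-enters the model

loop = CoolingLoopSketch()
history = [26.0, 27.5, 28.0]
predicted = loop.predict(history)        # forecast next inlet temperature
loop.execute(loop.decide(predicted))     # act on the forecast
loop.feedback(27.2, history)             # close the loop with a measurement
```

In a production platform each stage would be far richer (learned models, site-wide telemetry, alarm correlation), but the control-loop shape — forecast, plan, actuate, observe — is the same.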
The upgraded architecture is delivered as a configurable suite of modular technologies that can be flexibly combined based on rack power density, power supply conditions, business requirements and local resource constraints. This approach allows standardised system designs to be efficiently adapted to diverse deployment scenarios while maintaining consistent delivery quality, operational performance and scalability as AI workloads continue to grow.

Commenting on the release, Xue Guoqing, CTO of Chindata Group, said:
“AI and energy are two technological revolutions that are fundamentally reshaping the development paradigm of data centers. Gigawatt-scale AI campuses, megawatt-level rack modules, and the mismatch between green power generation and highly volatile AI workloads present system-level challenges that the industry has never faced before. Chindata will continue to focus on integrated innovation across energy systems, infrastructure products and operations, working with ecosystem partners to advance new power systems, high-density liquid cooling and intelligent operations, and to build a strong foundation for the efficient conversion of electrical power into computing power.”
About Chindata Group
Chindata Group is a leading carrier-neutral hyperscale data center solution provider and a pioneer in AI-ready facilities in China. Guided by its mission to "efficiently convert electrical power into computing power" and driven by innovation, the company focuses on the planning, investment, design, construction, and operation of high-performance, reliable, and low-carbon computing infrastructure.
For media and PR inquiries, contact media@chindatagroup.com
For investor relations, contact ir@chindatagroup.com
For recruiting and HR, contact hrenquiry@chindatagroup.com
For business and partnerships, contact bd@chindatagroup.com
China headquarters address:
Building 8, Wangjing Chengying Center, Courtyard 5, Laiguangying West Road, Chaoyang District, Beijing 100012