Up to Speed
Scaling network infrastructure for AI workloads

The artificial intelligence revolution is reshaping the physical infrastructure that powers the digital world. As AI workloads become increasingly prevalent across industries, facility managers and data center operators are grappling with unprecedented challenges that require them to rethink traditional network design approaches.
Although traditional data center applications also utilize parallel processing, AI workloads require massive amounts of it, creating architectural demands that legacy infrastructure was not designed to handle. This transformation means FMs must confront complex questions about scalability and agility, seeking the future-proofed assurance that will define the next decade of data center evolution.
The scale of AI transformation
To understand the magnitude of AI workloads, consider the physical connection requirements alone. Traditional data centers typically require a couple dozen fiber connections per rack, a manageable number that existing infrastructure can accommodate with relative ease. AI data centers, however, can demand several thousand connections per rack, and may eventually require thousands of connections per rack unit, an increase that traditional cabling approaches cannot efficiently support.
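The scale of that jump is easiest to see with a quick back-of-envelope calculation. The figures below are illustrative assumptions only, not vendor specifications; actual counts depend on the accelerator platform, optics and fabric topology deployed.

```python
# Back-of-envelope comparison of per-rack fiber counts.
# All figures are illustrative assumptions, not vendor specifications.

def fibers_per_rack(endpoints, ports_per_endpoint, fibers_per_port):
    """Total fiber strands terminating in one rack."""
    return endpoints * ports_per_endpoint * fibers_per_port

# Traditional rack (assumed): 12 servers, one duplex uplink each.
traditional = fibers_per_rack(endpoints=12, ports_per_endpoint=1, fibers_per_port=2)

# AI rack (assumed): 64 accelerators, 4 fabric ports each, 8-fiber optics.
ai = fibers_per_rack(endpoints=64, ports_per_endpoint=4, fibers_per_port=8)

print(traditional)              # 24 fibers
print(ai)                       # 2048 fibers
print(round(ai / traditional))  # roughly 85x more
```

Even with conservative assumptions, the AI rack lands in the thousands of fibers, which is why connector density and pathway planning dominate AI cabling conversations.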
This dramatic scaling challenge extends beyond mere quantity. The parallel processing nature of AI workloads means that data must flow simultaneously between multiple components with speed and minimal latency. Every microsecond of delay can impact performance. The traditional approach is to use point-to-point, also known as direct-attach connections. But this method becomes increasingly problematic as systems scale.
Innovation outpacing infrastructure
Part of the challenge stems from the rapid pace of AI hardware development outstripping industry standardization efforts. While traditional data center equipment has benefited from decades of established cabling standards and best practices, AI infrastructure is still in its relative infancy. Equipment manufacturers are establishing their own implementation guidelines, often without considering the broader implications for data center operations.
This lack of standardization creates a challenging environment for FMs accustomed to predictable deployment processes. Where traditional projects might follow established protocols for cabling installation and equipment connection, AI deployments often require custom approaches that can vary significantly between vendors and equipment generations.
The result is an environment where end users find themselves caught between competing approaches. Equipment vendors may specify direct-attach methodologies that work well for their particular or proprietary hardware but do not translate effectively to large-scale, multi-vendor environments. Meanwhile, FMs are left to navigate these requirements while maintaining operational agility and planning for future growth.
The structured cabling solution
The solution to these challenges lies not in completely new approaches, but in adapting proven methodologies to meet AI's unique requirements. Structured cabling systems, which have served as the backbone of traditional data centers for decades, offer a framework that can address many of AI infrastructure's most pressing challenges.
The principle of structured cabling is to separate the permanent infrastructure from the temporary connections. By installing pre-terminated trunk cabling pathways between major equipment areas independently of specific hardware requirements, facilities can accommodate multiple generations of technology without requiring complete infrastructure overhauls.
In the context of AI data centers, this approach offers several critical advantages. The trunk infrastructure can be installed before any AI equipment arrives on site, allowing facilities to complete a majority of cabling work without waiting for hardware delivery or installation. This pre-installation capability can dramatically reduce deployment timelines and minimize the labor-intensive work required during equipment installation phases.
The business case for proactive infrastructure
The business implications of these deployment efficiencies from structured cabling cannot be overstated. When organizations invest billions of dollars in AI hardware, every day of delayed deployment represents lost revenue opportunity. Traditional direct-attach approaches require all equipment to be in place before cabling work can begin, creating sequential dependencies that extend project timelines.
Structured cabling breaks these dependencies by enabling parallel work streams, thereby speeding deployment times. While equipment is manufactured, shipped and staged, cabling installation can proceed independently. When hardware finally arrives, the connection process becomes a matter of plugging into pre-installed pathways rather than running individual cables between specific devices.
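The schedule advantage of those parallel work streams can be sketched with a simple critical-path calculation. The durations below are hypothetical, chosen only to show the shape of the dependency, not to represent any real project.

```python
# Why parallel work streams shorten deployment. With direct-attach,
# cabling cannot start until hardware is on site; with structured cabling,
# trunk installation overlaps the hardware lead time.
# Durations (in weeks) are assumed for illustration only.

hardware_lead_time = 16  # manufacture, ship, stage and install equipment
cabling_install = 6      # run and terminate trunk pathways
final_connections = 1    # plug hardware into pre-installed pathways

# Direct-attach: a strictly sequential dependency.
sequential = hardware_lead_time + cabling_install

# Structured: cabling proceeds during the hardware lead time, so only
# the final connection step remains on the critical path after delivery.
parallel = max(hardware_lead_time, cabling_install) + final_connections

print(sequential)  # 22 weeks
print(parallel)    # 17 weeks
```

The savings grow with the size of the cabling scope: the longer the cabling work, the more of it disappears from the critical path when it can run in parallel with hardware lead time.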
This approach also addresses skilled labor availability. The specialized skills required for data center cabling work are in high demand, and projects that minimize the time these skilled technicians spend on site have significant advantages. By front-loading cabling work and simplifying the final connection process, structured approaches can reduce labor requirements by thousands of person-hours on large deployments.
Future-proofing in an era of rapid change
AI hardware is evolving at an unprecedented pace, with new chip architectures and performance capabilities emerging regularly. Organizations that invest in AI infrastructure today must assume that they will need to upgrade or modify their systems multiple times over the infrastructure's operational lifetime.
Traditional direct-attach approaches treat each hardware generation as a completely new deployment, requiring full cable replacement with every major upgrade. Structured cabling unlocks agility and allows organizations to retain most of their infrastructure investment while adapting to new hardware requirements. The trunk pathways remain constant while only the endpoint connections change, dramatically reducing both the cost and complexity of technology refreshes.
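The economics of retaining the trunk across refreshes can be illustrated with hypothetical numbers. The dollar figures below are assumptions for the sake of the comparison, not pricing from any vendor.

```python
# Illustrative cost comparison across hardware refresh cycles.
# With direct-attach, each generation replaces the full cable plant;
# with structured cabling, trunks persist and only endpoint connections
# change. All dollar figures are hypothetical assumptions.

trunk_cost = 500_000     # permanent trunk pathways, installed once
endpoint_cost = 100_000  # endpoint cords/connections, replaced per generation
generations = 3          # refresh cycles over the infrastructure's lifetime

# Direct-attach: the entire cable plant is rebuilt every generation.
direct_attach = generations * (trunk_cost + endpoint_cost)

# Structured: the trunk is bought once; only endpoints turn over.
structured = trunk_cost + generations * endpoint_cost

print(direct_attach)  # 1800000
print(structured)     # 800000
```

Under these assumptions the structured approach preserves the trunk investment across all three generations, and the gap widens with every additional refresh cycle.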
This sustainability aspect extends beyond mere cost considerations. The environmental impact of constantly replacing cabling infrastructure is significant, particularly as data centers face increasing pressure to minimize their ecological footprint. Structured approaches that enable cable reuse across multiple technology generations align with broader sustainability goals while reducing operational complexity.
Optimizing space for the demands of AI
The physical constraints of AI infrastructure are driving innovation in connector technology and space utilization while simultaneously demanding that FMs plan for requirements they cannot fully predict. As fiber counts increase dramatically, traditional connector approaches would quickly overwhelm available cabinet space. New high-density connector technologies are emerging to address these challenges, allowing organizations to accommodate the massive connection requirements of AI systems within existing space constraints.
These space optimization efforts extend beyond individual components to encompass entire system architectures. FMs are developing new approaches to cable management that minimize space requirements while maintaining accessibility for maintenance and upgrades. The goal is to create infrastructure that can accommodate AI's demanding requirements while remaining manageable from an operational standpoint.
Structured cabling provides a framework for managing this uncertainty by providing assurance that the infrastructure can be agile and accommodate a range of potential future requirements. Rather than optimizing for today's specific hardware, these approaches focus on creating flexible pathways and connection points that can support various configurations as technology evolves. This planning challenge extends to all aspects of facility design: power distribution systems must accommodate potential increases in density and efficiency improvements in AI hardware; cooling systems must be designed with expansion capability to handle future heat loads; and network infrastructure must provide the bandwidth and low-latency characteristics that future AI applications will demand.
Bridging traditional & AI requirements
For FMs overseeing the transition to AI infrastructure, the key lies in developing implementation strategies that bridge traditional data center practices with AI-specific requirements. This means adapting proven methodologies to remain agile rather than abandoning them entirely.
The structured cabling approach provides a foundation that can accommodate both traditional and AI workloads, allowing organizations to support mixed environments during transition periods. This capability is particularly valuable for organizations that cannot migrate all their applications to AI systems simultaneously.
Training and skill development also become critical considerations. Technical teams familiar with traditional data center operations must develop expertise in AI-specific requirements and installation techniques, including the unique performance characteristics of AI workloads, the installation requirements of liquid cooling systems and the space constraints of high-density deployments.
The road ahead
As the AI infrastructure market matures, industry standardization efforts are beginning to address many of the current challenges. Standards organizations are working to establish recommended practices for AI data center design, including guidelines for cabling, cooling, and power distribution. These efforts will help create more predictable deployment processes and reduce the custom engineering required for each project.
However, the pace of AI technology development means that standardization will likely lag behind innovation. FMs must develop approaches that are agile and can adapt to emerging requirements with speed while simultaneously leveraging established best practices wherever possible.
The next few years will be critical in establishing the infrastructure paradigms that will support the next generation of AI applications. Organizations that invest in agile, scalable infrastructure approaches today will be best positioned to capitalize on future AI innovations while minimizing the costs and complexities of ongoing technology transitions.
Embracing infrastructure evolution
The AI revolution is driving a fundamental transformation in how organizations design, build and operate data center infrastructure. The challenges are significant, from the massive scaling requirements to the integration of new cooling technologies, but the solutions lie in adapting proven principles to meet new demands.
Structured cabling approaches that emphasize agility, scalability and future-proofing assurance offer a pathway through this transformation. By focusing on creating infrastructure that can adapt to changing requirements with speed rather than optimizing for today's specific hardware, FMs can build data centers that will serve their organizations effectively through multiple generations of AI technology evolution.
The organizations that successfully navigate this transformation will be those that embrace both innovation and proven practices, creating infrastructure that can support the demanding requirements of AI workloads while maintaining the operational agility and reliability that modern businesses demand. The future of data center infrastructure is being written today, and the decisions made now will determine which organizations are best positioned to capitalize on the AI revolution's enormous potential.

Mike Connaughton, RCDD, CDCD is senior product manager at Leviton Network Solutions. He has more than 30 years of experience with fiber optic cabling and is responsible for strategic data center account support and alliances at Leviton. He has received the Aegis Excellence Award from the U.S. Navy for his work on the Fiber Optic Cable Steering Committee and was a key member of the committee that developed the SMPTE 311M standard for a hybrid fiber optic HD camera cable. Connaughton has participated in standardization activities for TIA, ICEA, ANSI and IEEE.