By Marc Cram
Global cloud providers are collaborating with mobile carriers to create a more immersive customer experience. AT&T and Google Cloud have an ongoing relationship developing joint solutions for enterprise networks. Now, according to AT&T, the two partners are kicking it up a notch and evaluating “how network APIs could optimize applications, using near-real-time network information at the Google Cloud edge.” This new venture welds AT&T Multi-access Edge Computing (MEC) to Google Cloud to supercharge AT&T’s 5G with artificial intelligence (AI), machine learning (ML), data analytics, and an edge ISV ecosystem.
Not to be outdone, AT&T’s rival Verizon has partnered with Amazon Web Services (AWS) to provide private multi-access edge computing (Private MEC) for enterprises. In this collaboration, the 5G Edge MEC platform is packaged with AWS infrastructure, services, APIs, and AWS Outposts to help data centers decrease costs and improve large data transfers. Both collaborations also pursue better customer experiences and business services.
These partnerships are pushing cloud services toward specialized, location-based functions that require Amazon, Google, and Microsoft to architect their own silicon beyond what an Intel or AMD may provide.
Google is now on its fourth-generation AI processing chip, the Tensor Processing Unit (TPU) v4; Amazon has its Arm-based Graviton2 processors; and, according to Bloomberg, Microsoft is working on its own designs to “produce a processor that will be used in its data centers.” Other sources report that the Microsoft chip will be specifically designed for its Azure servers. All this newly customized silicon is designed to optimize how customers run their workloads in the cloud.
In preparation for 5G, new virtualized infrastructures that let networks handle compute-intensive wireless data have been in development over the past several years. These new cloud chips will enable data center operators to pack edge networks with more powerful, denser devices. Operators, in turn, will have to adapt to changing power requirements at the endpoints and stay flexible enough to accommodate rapid change as they digest a 5G-infused workload evolution.
IMPACT ON THE DATA CENTER
To accommodate the changes all these new chips are ushering in, data centers are implementing more cooling and rolling out new rack form factors. For example, there are now more Open Compute-size racks, slightly wider and deeper than the standard 19-inch form factor.
In addition, data center operators are adopting different form factors at the compute level. Instead of a full-width 1U box, they are moving to half-width 1U boxes so two can fit side by side in a single rack position. And instead of a single server motherboard inside a 1U chassis, a motherboard may now carry 4 to 8 processors. Each of those processors, rather than having a single- or dual-core configuration, may now have 8, 16, or 24 cores, enabling far more processing to take place in a confined area.
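To make the density point concrete, here is a minimal arithmetic sketch comparing cores per rack for a traditional layout versus the denser configuration described above. All figures (42 usable rack units, per-board processor counts) are illustrative assumptions, not vendor specifications:

```python
# Illustrative rack-density arithmetic; all figures are assumptions.

RACK_UNITS = 42  # assumed usable 1U positions in a standard rack

# Traditional: one full-width 1U server per position,
# one dual-core processor per board.
traditional_cores = RACK_UNITS * 1 * 1 * 2

# Dense: two half-width 1U servers per position, 8 processors per
# board, 24 cores per processor (high end of the cited range).
dense_cores = RACK_UNITS * 2 * 8 * 24

print(traditional_cores)  # 84 cores per rack
print(dense_cores)        # 16128 cores per rack
```

Even with conservative assumptions, the dense layout yields orders of magnitude more cores in the same footprint, which is what drives the power and cooling pressures discussed below.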
With an increase in bandwidth comes the need for adaptable power solutions to support all this dense-chip connectivity on the motherboard. Flexible rack power distribution units (PDUs) with C13 and C19 outlets are now sought after to support the AI workloads driven by these chip densities and broader bandwidth. There is also a greater need for intelligent horizontal smart or switched PDUs with the flexibility to conform to new rack space requirements, and these changes are accelerating daily.
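A simple budgeting sketch shows why PDU selection matters for these denser racks. The capacity, derating factor, and device loads below are illustrative assumptions (a 0.8 continuous-load derating is common practice, but local electrical code governs), not a specification of any particular PDU:

```python
# Hedged sketch: checking whether a rack PDU with mixed C13/C19
# outlets can carry a proposed equipment load. All ratings assumed.

def pdu_headroom(pdu_capacity_kw, device_loads_kw, derate=0.8):
    """Return remaining capacity (kW) after applying a continuous-load
    derating factor to the PDU's nameplate capacity."""
    usable = pdu_capacity_kw * derate
    return usable - sum(device_loads_kw)

# Example: an assumed 17.3 kW PDU feeding twelve dense half-width
# servers (C13-level draw) plus two chassis at C19-level draw.
loads = [0.5] * 12 + [2.5, 2.5]  # kW per device (assumed)
remaining = pdu_headroom(17.3, loads)
print(f"{remaining:.2f} kW headroom")  # 2.84 kW headroom
```

A negative result from such a check would signal that the rack needs a higher-capacity PDU, a second feed, or a lighter equipment mix.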
Knowing how to properly optimize the rack space with these power solutions is a matter of understanding trade-offs and being able to balance the actual requirements between the cabinet and its configuration. When deciding upon the best-fit rack PDU for a particular rack or location, there are five major considerations to contend with:
As implied above with the mention of “airflow,” the intense workloads are also driving up heat production around the server racks. Where optimizing airflow is not enough to dissipate the rising temperatures, some novel cooling approaches are being taken: adiabatic cooling, spraying a mist inside a cabinet, or, in Microsoft’s case, packing computer servers into a steel holding tank of liquid engineered to boil and carry heat away.
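Since virtually every watt a rack draws ends up as heat, a quick conversion shows the cooling load these approaches must handle. The rack load below is an illustrative assumption; the conversion factor (1 kW ≈ 3,412 BTU/hr) is standard:

```python
# Rule of thumb: IT power draw ~= heat output. Converting kW to
# BTU/hr is a common way to size cooling. Rack load is assumed.

KW_TO_BTU_HR = 3412       # approximate conversion factor
rack_load_kw = 14.0       # assumed draw for a dense rack

heat_btu_hr = rack_load_kw * KW_TO_BTU_HR
print(f"{heat_btu_hr:.0f} BTU/hr")  # 47768 BTU/hr
```

At roughly 48,000 BTU/hr for a single dense rack, it becomes clear why airflow optimization alone can fall short and operators are exploring the methods above.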
Other data center operators are turning to simpler but very effective methods, such as configure-to-order cabinet platforms. These platforms are designed to be flexible, sturdy, and secure for housing data center devices while also providing the scalability and future-proof architecture needed to support the rise in digital transitions, Internet of Things (IoT) connectivity, 5G services, edge computing, and AI applications.
Mega-industry collaborations are spawning a new breed of silicon that will drive intense workloads to deliver on the 5G promises made by AT&T, Sprint, and Verizon. Many of these new chipsets will be located in edge networks that will perform a considerable amount of the heavy lifting for these bandwidth-intensive applications. Denser motherboards and server racks will be deployed as the “beasts of burden” to process the load. However, with the 5G promise comes increased power consumption.
Traditional methods of powering these new devices may not apply in every circumstance, so turn to space-conforming, intelligent PDUs that match the innovation now packed into the motherboards. And along with more power-hungry devices comes the byproduct of heat. Unless you have Microsoft’s budget to experiment with liquid-boiling cooling methods, more practical means of heat dissipation, such as configure-to-order cabinet platforms, must be applied.
Remember, it’s not all about the glitz and glamor of the 5G application; it’s also about the preparation to ensure data center and edge networks are architected correctly to process it all.
Marc Cram is Director of New Market Development for Legrand’s Data, Power, and Control division. He can be reached at firstname.lastname@example.org.