For the final piece of the Global Summit wrap-up, I focus on networking, both inside the server and between racks, plus a crystal-ball look at the state of AI.
Networking
In the area of networking, the conference shed light on some fantastic initiatives that aim to redefine how we connect and scale AI systems. These give users viable alternatives and real choice, providing much-needed competition in the space and reducing single-vendor reliance and supply-chain risk.
SCALE UP with UALink (Ultra Accelerator Link)
As an alternative to NVLink, UALink focuses on scaling up: connecting GPUs together to form a more powerful, unified GPU. The UALink Consortium includes the who's who of hyperscalers (Meta, Amazon Web Services (AWS), Microsoft), silicon providers (Intel, AMD), and networking vendors (Cisco, Nokia), plus many others, collaborating on a solution for interconnecting non-NVIDIA chips (like AMD's MI300X or Intel's Gaudi 3) in a single node, at high speed and low latency, to create one large logical processor that shares resources (critically, memory) to host GenAI workloads like LLMs.
Essentially, make the biggest GPU possible.

SCALE OUT with UEC (Ultra Ethernet Consortium)
One of the key highlights was the progress made on products from the Ultra Ethernet Consortium (UEC). The UEC is spearheading efforts to develop next-generation Ethernet technologies tailored for AI workloads. Not surprisingly, many of the same folks interested in connecting accelerators (GPUs) together within a node are the same ones that want to connect as many of those large logical GPUs together as possible.