We investigate the advantages of using co-packaged optics for building low-diameter, large-scale high-performance computing (HPC) and data center networks. The increased escape bandwidth offered by co-packaged optics can enable high-radix switch implementations of more than 150 switch ports, which can be combined with data rates of up to 400 Gb/s per port. From the network architecture perspective, the key benefits of using co-packaged optics in future fat-tree networks include (a) the ability to implement large-scale topologies of >11,000 end points by eliminating the need for a third switching layer and (b) the ability to provide up to 4× higher bisection bandwidth compared to existing solutions, while reducing the number of required switch application-specific integrated circuits by >80%. From the network operation perspective, both reduced energy consumption and lower packet delays can be achieved since fewer hops are required; i.e., packets need to traverse fewer serializer/deserializer lanes and fewer switch buffers, which reduces the probability of contending with other packets and improves tolerance to network congestion. The performance of the proposed architecture is evaluated via discrete-event simulations for a wide range of representative HPC synthetic-traffic cases that include both hotspot and non-hotspot scenarios. The simulation results suggest that co-packaged optics are a promising solution for keeping up with bandwidth scaling in future networks, while the reduced number of switching layers can lead to significant mean packet delay improvements that start from 30% and reach up to 74% under high-load conditions.
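The scale figures quoted above follow from standard folded-Clos sizing formulas. As a minimal sketch (using textbook fat-tree arithmetic, not parameters taken from the paper's specific design), a two-level fat tree built from radix-k switches supports k²/2 end points with 3k/2 switch ASICs, so a radix-150 co-packaged-optics switch clears the >11,000 end-point mark without a third layer:

```python
# Back-of-the-envelope fat-tree sizing. These are the generic folded-Clos
# formulas; the radix values are illustrative assumptions, not the paper's
# exact configuration.

def two_level_fat_tree(radix: int):
    """Max end points and switch ASICs for a 2-level folded Clos."""
    leaves = radix                     # each leaf: radix/2 down, radix/2 up
    spines = radix // 2                # each spine: all ports face down
    endpoints = leaves * (radix // 2)  # = radix**2 // 2
    return endpoints, leaves + spines

def three_level_fat_tree(radix: int):
    """Max end points and switch ASICs for a 3-level fat tree."""
    endpoints = radix ** 3 // 4
    switches = 5 * radix ** 2 // 4
    return endpoints, switches

# Radix-150 switch enabled by co-packaged optics, two levels only:
print(two_level_fat_tree(150))   # (11250, 225) -> >11,000 end points

# For comparison, a conventional radix-64 network needs a third level
# (and many more ASICs) to go beyond the two-level limit of 64**2/2 = 2048:
print(three_level_fat_tree(64))  # (65536, 5120)
```

The ASICs-per-end-point ratio of the two cases (225/11,250 vs. 5,120/65,536) illustrates why eliminating the third switching layer sharply cuts the switch-chip count, though the exact >80% reduction depends on the radices and network sizes compared in the paper.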