How Are Datacenters Physically Wired?

I sent this question to my team at Intel some time ago.

I gave a rundown on SDN, NFV, and all things Open to the OpenStack new hires today. One of the questions that came out of this concerned the physical wiring of a server room or datacenter using SDN. Does anyone have any info on how n servers in a datacenter would be physically connected (where n >= 100, for example)? In case it matters, I’m picturing either a mesh network (high efficiency, high complexity) or a hierarchical network of increasingly large-bandwidth switches and routers (low efficiency, low complexity), but I’m only guessing here.

Robin Giller started with an excellent introduction:

I believe that “leaf and spine” is the current topology of choice, moving away from the “fat tree” architecture of the past, where one inbound request was routed down to one server, which would compute and send data back up to the core and out. Leaf and spine is more efficient when you’ve got lots of east-west traffic. There’s an explanation of both in the link below, and loads more available - just search for leaf and spine.

http://searchdatacenter.techtarget.com/feature/Data-center-network-design-moves-from-tree-to-leaf
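To put some rough numbers on my own question, here’s a back-of-the-envelope sketch in Python. All the figures (48 server ports and 6 uplinks per leaf, 32 ports per spine, a 48:6 oversubscription ratio) are assumptions I’ve picked for illustration, not anything from the article:

```python
import math

def size_leaf_spine(servers: int, leaf_ports: int = 48, uplinks: int = 6,
                    spine_ports: int = 32) -> dict:
    """Rough sizing of a two-tier leaf-spine fabric (illustrative only).

    Assumes each leaf (top-of-rack) switch has `leaf_ports` downlinks for
    servers and `uplinks` uplinks, one to each spine, so every leaf
    connects to every spine.
    """
    leaves = math.ceil(servers / leaf_ports)
    spines = uplinks                      # one uplink per spine from every leaf
    if leaves > spine_ports:
        # each spine needs one port per leaf, so we'd need bigger spines,
        # a third tier, or pods (see below)
        raise ValueError("too many leaves for these spine switches")
    cables = servers + leaves * uplinks   # server links + leaf-spine links
    return {"leaves": leaves, "spines": spines, "cables": cables,
            "oversubscription": f"{leaf_ports}:{uplinks}"}

print(size_leaf_spine(100))
# {'leaves': 3, 'spines': 6, 'cables': 118, 'oversubscription': '48:6'}
```

So for the n >= 100 case in my question, a modest two-tier fabric is already enough; every server is two switch hops from every other.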

The always helpful Sean Mooney then added a bit of extra detail:

To expand on that, I believe it is leaf-spine at the pod level (~5-10 racks of servers), with the spine switches interconnected in a mesh.

So each spine switch will be connected to the leaf top-of-rack switches in its pod and then interconnected with the other spine switches to form a core mesh network.
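As a toy illustration of what Sean describes (the pod count, switch names, and full-mesh assumption here are mine, not his), you could enumerate the cabling like this:

```python
from itertools import combinations

def wire_pods(pods: int = 3, leaves_per_pod: int = 5, spines_per_pod: int = 2):
    """Enumerate links for pod-level leaf-spine with a spine mesh.

    Illustrative assumptions: within a pod, every leaf (ToR) connects to
    every pod spine; across the fabric, every spine connects to every
    other spine.
    """
    links = []
    spines = []
    for p in range(pods):
        pod_spines = [f"pod{p}-spine{s}" for s in range(spines_per_pod)]
        spines.extend(pod_spines)
        for l in range(leaves_per_pod):
            leaf = f"pod{p}-leaf{l}"
            links += [(leaf, spine) for spine in pod_spines]
    # full mesh between all spine switches
    links += list(combinations(spines, 2))
    return links

links = wire_pods()
print(len(links))  # 3 pods * 5 leaves * 2 spines = 30 leaf links + C(6,2) = 15 mesh links -> 45
```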

There is also work in OpenStack around Hierarchical Port Binding to allow different overlay technologies to be used at the spine and leaf layers. https://blueprints.launchpad.net/neutron/+spec/ml2-hierarchical-port-binding

With Hierarchical Port Binding you can use VLANs between the server and leaf level, and VXLAN or other more scalable (but more computationally expensive) overlays at the leaf/spine level.
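To make that split concrete, here’s a minimal sketch of the idea, assuming a two-level binding with VXLAN on top and a VLAN at the bottom. The data model, function, and segment IDs are illustrative only, not Neutron’s actual ML2 code:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    network_type: str   # "vlan" or "vxlan"
    segmentation_id: int

def bind_port_hierarchically(vni: int, tor_vlan: int) -> list[Segment]:
    """Sketch of hierarchical port binding: the fabric carries the tenant
    network as VXLAN between leaf and spine, while the ToR switch
    translates it to a plain VLAN on the link down to the server.
    Hypothetical helper, not a Neutron API."""
    return [
        Segment("vxlan", vni),      # top segment: leaf <-> spine overlay
        Segment("vlan", tor_vlan),  # bottom segment: server <-> leaf (ToR)
    ]

for seg in bind_port_hierarchically(vni=5001, tor_vlan=42):
    print(seg)
```

The servers then only ever need cheap VLAN tagging, and the heavier encapsulation work stays on the switches.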

Interesting stuff.
