
Overcoming challenges in data centre cabling


Enterprise data centres have traditionally focused on data storage and preparation for disaster recovery, but they are rarely equipped to meet spikes or excess demand for real-time, multi-user data retrieval.

In today’s evolving digital market there are more users and more data, and data centres are increasingly expected to deliver faster data transmission to a growing number of internet users worldwide. In the face of big data, data centre operations are shifting from storage to the real-time analysis and processing of data on demand. The Australian infrastructure-as-a-service market is expected to reach $1 billion by 2020, according to findings from Telsyte, indicating that a significant portion of IT budgets in future enterprise networks will go to cloud services.

Large data centres are evolving their digital infrastructures, driven by the rapid growth of cloud computing. Many companies, including internet giants, are increasing investments in data centres located both domestically and abroad, to ensure they are ready for the next generation of cloud services. They need the right infrastructure in place to ensure a rapid and seamless transmission of data, voice and video to an increasing number of users.

3-Level Network Structure vs. 2-Level Spine-and-Leaf Network Structure
In contrast to the traditional enterprise data centre, where traffic is dominated by local client-to-server interactions (north-south), the network traffic of a large internet data centre is dominated by the server-to-server traffic (east-west) required by cloud computing applications. Not only is the number of users accessing data through applications huge, but their demands are diversified and fragmented, and they expect an uninterrupted experience. Internet data centres therefore require higher bandwidth and a much more efficient network architecture to support spikes in heavy traffic from their large user base. These spikes can be driven by anything from video calling, online music and video, gaming and shopping to major news events.

The current mainstream three-level tree network architecture is built around this traditional north-south transmission model. When a server needs to communicate with a server in a different network segment, its traffic must pass along the path access layer -> aggregation layer -> core layer, and then back down again to reach the destination server.
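To make the cost of that north-south detour concrete, the short Python sketch below counts the switch hops for the simplest case. It is purely illustrative; the topology and hop counts are an assumption of a basic three-level tree, not figures taken from the article.

```python
# Minimal sketch (illustrative only): switch hops a packet crosses in a
# traditional three-level tree when two servers sit in different segments.

THREE_TIER_PATH = ["access", "aggregation", "core", "aggregation", "access"]

def three_tier_hops(same_access_switch: bool) -> int:
    """Switch hops between two servers in a three-level tree."""
    if same_access_switch:
        return 1                     # both servers on the same access switch
    return len(THREE_TIER_PATH)      # up to the core and back down again

print(three_tier_hops(same_access_switch=False))   # 5 switch hops
```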

To address these challenges, the world’s large internet data centres are increasingly adopting a spine-and-leaf network architecture, which is more effective at transferring data between servers (east-west).

This architecture consists of two layers: a spine switching layer and a leaf switching layer. Its defining feature is that every leaf switch connects to every spine switch within a pod, which greatly improves communication efficiency and reduces the delay between servers. In addition, the spine-and-leaf 2-level architecture avoids expensive core-layer switching devices and makes it easier to gradually add switches and network devices as business needs grow, saving on initial investment costs.
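The cabling implication of that "every leaf to every spine" rule can be expressed in a few lines. The sketch below is a minimal illustration using a hypothetical pod of 32 leaf and four spine switches (numbers chosen only for the example, not taken from the article): a full mesh needs leaves x spines uplinks, and every east-west path crosses exactly one spine.

```python
# Minimal sketch (illustrative only): in a two-level spine-and-leaf pod every
# leaf switch is cabled to every spine switch, so the fabric needs
# leaves x spines uplinks and every east-west path is leaf -> spine -> leaf.

def fabric_links(leaf_count: int, spine_count: int) -> int:
    """Leaf-to-spine links required for a full mesh within one pod."""
    return leaf_count * spine_count

# Hypothetical pod size, chosen only for illustration.
print(fabric_links(leaf_count=32, spine_count=4))   # 128 uplinks
print(["leaf", "spine", "leaf"])                    # every server-to-server path
```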

Dealing with the Cabling Challenges of a Spine-and-Leaf 2-Level Architecture
Data centre managers encounter new issues when deploying a data centre with a spine-and-leaf 2-level architecture. Since each leaf switch must connect to every spine switch, managing the sheer quantity of cabling becomes a major challenge. Corning’s mesh interconnection module (Table 1) solves this problem.

Table 1: 4×4 Mesh Module Description

·      4 x 8-fibre MTP® input ports, 4 x 8-fibre MTP output ports

·      Fibre type: OS2 and OM4

·      SR4 or PSM4 meshed interconnection without the need for LC port conversion

Many users have started using high-density 40G switch line cards, broken out into 10G channels, to serve 10G applications. For example, a high-density 10G SFP+ line card has 48 x 10G ports, while a high-density 40G QSFP+ line card may have 36 x 40G ports. Broken out, a single 40G line card therefore provides 4 x 36 = 144 x 10G ports in the same cabling space and under the same power consumption conditions, lowering the cost and power consumption per 10G port.
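The port arithmetic behind that comparison can be checked directly; the sketch below uses only the figures quoted above and assumes each 40G QSFP+ port is broken out into four 10G channels.

```python
# Minimal sketch: 10G port counts for the two line cards quoted in the text,
# assuming each 40G QSFP+ port is broken out into 4 x 10G channels.

SFP_PLUS_PORTS = 48        # high-density 10G SFP+ line card
QSFP_PLUS_PORTS = 36       # high-density 40G QSFP+ line card
CHANNELS_PER_QSFP = 4      # one 40G port = four 10G channels

ten_gig_from_breakout = QSFP_PLUS_PORTS * CHANNELS_PER_QSFP
print(ten_gig_from_breakout)   # 144 x 10G ports from one 40G card
print(SFP_PLUS_PORTS)          # versus 48 x 10G ports from one 10G card
```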

In a typical application of the mesh module in the cabling system, four QSFP 40G channels (A, B, C, D) are broken out into 4 x 4 10G channels at the MTP input of the mesh module. The 10G channels are then shuffled inside the mesh module so that the four 10G channels associated with QSFP transceiver A are split across the four MTP outputs. The result is that the four SFP transceivers connected to one MTP output receive one 10G channel from each of QSFP transceivers A, B, C and D. This achieves a fully meshed 10G fabric connection between the QSFP spine switch ports and the leaf switch ports without ever having to break out to LC connections at the main distribution area (MDA).
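The shuffle itself is just a fixed re-mapping of channels. The sketch below is a simplified model, not Corning’s internal specification; the channel labels (such as "A0") are invented for illustration. It shows how channel n of every QSFP lands on MTP output n, so each output carries one channel from A, B, C and D.

```python
# Simplified model of the 4x4 shuffle inside the mesh module: channel n of
# every QSFP input is routed to MTP output n, so each output carries one
# 10G channel from each of QSFP A, B, C and D. Labels are illustrative only.

QSFPS = ["A", "B", "C", "D"]    # four 40G spine-side ports on the MTP inputs
CHANNELS = range(4)             # each 40G port = four 10G channels

def mesh_shuffle() -> dict:
    outputs = {out: [] for out in CHANNELS}
    for qsfp in QSFPS:
        for ch in CHANNELS:
            outputs[ch].append(f"{qsfp}{ch}")   # channel ch goes to output ch
    return outputs

for out, channels in mesh_shuffle().items():
    print(f"MTP output {out}: {channels}")
# MTP output 0: ['A0', 'B0', 'C0', 'D0']  ... and so on for outputs 1-3
```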

The example below shows how to optimise the cabling structure of a spine-and-leaf network at the MDA. It uses a leaf switch with a 48 x 10G SFP+ port line card and a spine switch with 4 x 36-port 40G QSFP+ line cards. With an oversubscription ratio of 3:1, the 16 x 10G uplink ports of each leaf switch need to connect to 16 spine switches. Given that each 40G port of the spine switch is used as four 10G ports, each spine switch can connect to 4 x 36 x 4 = 576 leaf switches.
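The sizing in this example reduces to two multiplications, reproduced in the short sketch below using only the numbers quoted in this paragraph (48 downlinks, a 3:1 oversubscription ratio, and four 36-port 40G cards per spine with each port used as four 10G channels).

```python
# Minimal sketch of the sizing arithmetic from the example above.

LEAF_DOWNLINKS = 48            # 10G SFP+ server-facing ports per leaf
OVERSUBSCRIPTION = 3           # 3:1 downlink-to-uplink ratio
leaf_uplinks = LEAF_DOWNLINKS // OVERSUBSCRIPTION
print(leaf_uplinks)            # 16 uplinks, one to each of 16 spine switches

SPINE_CARDS = 4                # 40G QSFP+ line cards per spine switch
PORTS_PER_CARD = 36            # 40G ports per card
CHANNELS_PER_PORT = 4          # each 40G port used as four 10G channels
print(SPINE_CARDS * PORTS_PER_CARD * CHANNELS_PER_PORT)   # 576 leaf switches
```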

If traditional cabling is used to achieve a full fabric mesh of the spine and leaf switches, each 40G QSFP+ port of the spine switch is broken out into 4 x 10G channels through an MTP®-to-LC module in the MDA and then cross-connected, via jumpers, to the corresponding MTP-to-LC modules that connect to the 10G channels of the leaf switches. This traditional method has not been widely used because the cabling system is very complex, the cost is relatively high and it requires a lot of rack space at the MDA. In this scenario, a mesh module is a good way to resolve these problems. As shown in the graphic on the right side of Figure 4, when a mesh module is used in the MDA, the full mesh of the leaf switches is achieved without having to break out the 40G ports of the spine switch into 10G channels via an MTP-to-LC module. This greatly improves the MDA cabling structure by eliminating massive LC-to-LC patch fields and can be of great value to the user.

Advantages and value of the mesh module at the MDA:

·      Density: saves MDA distribution space by 75 per cent

·      MTP connections: reduces the number of jumpers in the MDA by 75 per cent

·      Link loss: decreases link loss by 10 per cent

·      Cost: reduces installation cost by 45 per cent


Conclusion
As network bandwidth requirements for the data centre have risen, the data centre backbone network has been gradually upgraded from 10G to 40G, and will reach 100G in the near future. By using 40G broken out into 4 x 10G now, and 100G into 4 x 25G in the future, the spine-and-leaf network architecture will deliver an economical and efficient network structure for the management of large data distribution. Utilising the mesh module to achieve a full fabric mesh of the spine-and-leaf network supports the current 40G network while ensuring seamless transition to future 100G network capabilities as user demand grows.
