Media releases are provided as is by companies and have not been edited or checked for accuracy. Any queries should be directed to the company itself.
25 March 2010 11:31

How to counter virtualisation and blade server complexity in the network by reducing tiers

New technologies simplify data centre architecture

By AARON CONDON, Acting ANZ Manager, Extreme Networks

While server virtualisation and blade servers are driving consolidation in today’s data centres and changing the way networks are built, they tend to result in more complex network designs. These technologies deliver impressive efficiencies on the server side, yet they also significantly increase the number of network tiers in the data centre.

However, by taking a holistic view of the network, newer architectures can be implemented that both simplify the network and make it more efficient.

Direct-attach architecture, utilising high density modules in the aggregation switch along with unique cabling solutions, eliminates multiple switching tiers within the network. This leads to reduced end-to-end latency, reduced oversubscription in the network, better power efficiency and lower cost. For example, Extreme Networks’ BlackDiamond 8900-G96T-c modules, in conjunction with MRJ21 cable technology from Tyco, provide a comprehensive solution for implementing direct-attach architecture.

With server virtualisation, multiple server instances can be consolidated onto a single server in the form of virtual machines (VMs). The VMs on a single server communicate with each other through a virtual switch (vSwitch) - software running in the virtualisation layer on the server (also called the hypervisor) that functions as a Layer 2 switch.

Blade server technology packs great computing power into a highly compact form factor - a blade server enclosure can hold 8, 16 or 32 blade servers in a single chassis. Each blade server can have multiple Ethernet ports - for example, one or two dedicated ports for LAN traffic, a port for management, and a port for supporting virtual machine mobility. Each port connects into the backplane of the blade chassis and is brought to the front panel using either a blade switch or a pass-through module. The pass-through module simply passes each connection from every server out to the front panel for connectivity to the external network, often presenting a cabling challenge.

For example, with four Ethernet ports per blade server and up to 16 servers in the blade chassis, up to 64 Ethernet cables can be required for each blade chassis. With two or more blade chassis per rack, cabling soon becomes unmanageable. The blade switch module, however, switches traffic locally between servers within a blade chassis and provides a smaller set of uplink ports for connectivity to the external network, significantly simplifying the cabling. Hence the blade switch is gaining popularity as an efficient cabling solution for blade servers.
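The cabling arithmetic above can be sketched in a short script (a minimal illustration; the port, server and chassis counts are the example figures from the text, not fixed properties of any product):

```python
# Cable counts for blade chassis using a pass-through module,
# where every server port needs its own external cable.
ports_per_server = 4      # e.g. two LAN, one management, one VM mobility
servers_per_chassis = 16  # example chassis size from the text
chassis_per_rack = 2

cables_per_chassis = ports_per_server * servers_per_chassis
cables_per_rack = cables_per_chassis * chassis_per_rack

print(cables_per_chassis)  # 64 cables per chassis
print(cables_per_rack)     # 128 cables per rack
```

At two chassis per rack the count already exceeds a hundred individual cables, which is why local switching in the blade switch (or cable consolidation, as described below) matters.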

Oversubscription applies to both network switches and blade switches. While increasing virtualisation is driving greater throughput demands right down to the edge of the data centre (more VMs on a server push more traffic out of the server), the growing oversubscription from additional network tiers can create choke points and bottlenecks, leading to sub-optimal performance.
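The way per-tier oversubscription compounds end to end can be shown with a short calculation (a hypothetical example; the per-tier ratios are illustrative, drawn from the 2:1 and 3:1 figures commonly cited in this context):

```python
# Effective end-to-end oversubscription is the product of per-tier ratios.
# Hypothetical three oversubscribed tiers: blade switch, access, aggregation.
tier_ratios = [2, 2, 3]  # 2:1, 2:1 and 3:1 respectively

effective = 1
for ratio in tier_ratios:
    effective *= ratio

print(f"{effective}:1")  # 12:1 effective oversubscription
```

Even modest per-tier ratios multiply into a large end-to-end figure, so removing tiers reduces oversubscription disproportionately.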

Each tier adds management overhead and troubleshooting complexity, since each switch has to be configured, monitored, maintained and kept up to date with the latest software. Finally, each tier increases the network’s overall cost.

Clearly a different approach to data centre network architecture is needed to take advantage of the benefits of the blade server technology and virtualisation while addressing the issues involved. One such approach follows.

Both the virtual switch and the blade switch have been driven from the server side of the data centre. However, their impact on the network is significant and often overlooked. While larger data centres traditionally have a three-tier network - core, aggregation and access tiers (the access tier is typically a top-of-rack, or TOR, switch) - adding the blade switch tier and the virtual switch tier leads to a five-tier network. The extra tiers create several issues.

Each tier typically adds end-to-end latency. In an environment where applications in industries such as finance, video and content delivery, and high-performance computing demand ever lower latencies, adding tiers to the data centre network can adversely impact application performance. Each tier can also add oversubscription: ratios of 2:1 or 3:1 are common at each tier, but they can be higher.

Direct-attach architecture is based on the premise of connecting blade servers (or any server) directly into a very high density aggregation switch, bypassing both the blade switch and the TOR or access switch. There are two main components:

1. Very high density network aggregation switch modules that provide 96 Ethernet ports on a single I/O switch module. Since access and aggregation tiers are typically added to the network to increase fan-out, these high density modules in a chassis form factor reduce the need for both an access and an aggregation tier in the network. The high density is supported by a high capacity switch fabric and an overall switching capacity of just under 4 Tbps.

2. Cabling technology that consolidates six Ethernet cables and connectors into one. The MRJ21 cable comes in different variants: one (the octopus cable) has six RJ-45 connectors on one end and an MRJ21 connector on the other, while another has MRJ21 connectors at both ends. By aggregating six Ethernet cables into one, the MRJ21 cable provides significant cabling simplification.

As noted earlier, the pass-through module can lead to significant cabling challenges. By using MRJ21 cables, six ports on the pass-through module can be connected to the external network via a single cable, reducing complexity significantly through a 6:1 cable reduction. In effect, the pass-through module in conjunction with MRJ21 cables can replace the blade switch, eliminating a tier of switching from the network.

MRJ21 technology also allows very high density network switches to be built. By using MRJ21 instead of RJ-45 connectors, very high density fan-out can be achieved. For example, the Extreme Networks® BlackDiamond 8900-G96T-c switch blade for the BlackDiamond 8810 chassis uses 16 MRJ21 connectors on a single I/O blade to achieve a fan-out of 96 Ethernet ports per blade. Up to eight of these blades can be installed in a single chassis, providing connectivity for up to 768 GbE ports.
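The fan-out figures follow directly from the connector consolidation, which a quick check makes explicit (the connector and blade counts are the example figures from the text):

```python
# Port fan-out from MRJ21 consolidation: six Ethernet ports per connector.
ports_per_mrj21 = 6
mrj21_per_blade = 16
blades_per_chassis = 8

ports_per_blade = ports_per_mrj21 * mrj21_per_blade       # GbE ports per I/O blade
ports_per_chassis = ports_per_blade * blades_per_chassis  # GbE ports per chassis

print(ports_per_blade)    # 96 ports per blade
print(ports_per_chassis)  # 768 ports per chassis
```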

Using MRJ21 cables to connect ports from the pass-through module of the blade chassis directly into these high density network switch modules can eliminate the TOR or access switch layer, since the high density switch blades on the chassis provide the fan-out required. In effect, the servers become directly attached to the aggregation switch tier, eliminating both the blade switch tier and the TOR or access switch tier.

Advantages of this architecture include: improved overall network latency from eliminating two active switching tiers; significantly reduced oversubscription, since both the TOR or access switch and the blade switch tiers added oversubscription; reduced power consumption from eliminating two switching tiers; reduced management complexity; and lower overall solution cost.

Deploying direct-attach architecture thus enables the data centre to take advantage of newer server technologies while reducing inefficiencies in the network.

For more information:
Aaron Condon
Extreme Networks
www.extremenetworks.com
Phone: +61-2-8912 2401
Mobile: +61-410 537 006
Email: acondon@extremenetworks.com

Extreme Networks’ distributor in Australia is Distribution Central www.distributioncentral.com.au
