

In our January 2007 blade server shoot-out, Dell was the dark horse candidate that posted impressive performance numbers but fell short on features compared to the other solutions. In the intervening few years, Dell has clearly taken the time to polish up its solution.

The Dell PowerEdge M1000e is far more attractive and functional than its predecessors. In today's M1000e, a brand-new set of chassis management tools offers many features suited for day-to-day operations, and the chassis-wide deployment and modification tools are simply fantastic. The downsides include some lack of visibility into chassis environmental parameters and the absence of multichassis management capabilities. Unless you put external management tools to use, each Dell chassis exists as an island.

The main selling points of the Dell blade system are density and price. The M1000e makes a great virtualization platform, but would do well in just about any situation. It doesn't offer some of the expansion of the HP chassis, though it does offer similar features to the IBM solution; if you have no need for internal storage or centralized multichassis management, it's a great solution.

Chassis and blades

The M1000e blade enclosure squeezes 16 half-height blades into a 10U chassis with six redundant hot-plug power supplies, nine hot-plug fan modules, and six I/O module slots that support Dell PowerConnect gigabit and 10G switches, three different Cisco modules (gigabit internals with 10G uplinks), a Brocade 8Gbps FC module, and both Ethernet and 4Gbps FC pass-through modules. If InfiniBand is your flavor, there's a 24-port Mellanox option as well.

On the front of the chassis is a 2-inch color LCD panel and control pad that can be used to step through initial configuration and to perform chassis monitoring and simple management tasks.

The blades used in this test were Dell PowerEdge M610 units, each with two 2.93GHz Intel Westmere X5670 CPUs, 24GB of DDR3 RAM, and two Intel 10G interfaces connected to two Dell PowerConnect 8024 10G switches in the I/O slots on the back of the chassis. If there's only a single switch in the back, only one port will be active per blade; this is a limitation shared by all the chassis tested.

The blades themselves have a very solid, compact feel. They slide easily in and out of the chassis and have a very well-designed handle that doubles as a locking mechanism. The blades are fairly standard, offering two CPU sockets, 12 DIMM slots, two 2.5-inch SAS drive bays driven by a standard Dell PERC RAID controller, two USB 2.0 ports on the front, and a selection of mezzanine I/O cards at the rear to allow for gigabit, 10G, or InfiniBand interfaces. An internal SD card option permits flash booting of a diskless blade, which can come in handy when running embedded hypervisors like VMware ESXi. There's also an SSD option for the local disk.

One drawback to the Dell solution compared to the HP blades is the relative lack of blade options. Dell offers several different models of blades, but they're all iterations of the same basic compute blade with different CPU and disk options. There are no storage blades or virtualization-centric blades. You'll get two or four CPUs, DIMM slots, and two 2.5-inch drive bays in each, and some of the blades offer the internal SD-card option for booting embedded hypervisors such as VMware ESXi.

Unlike the HP and IBM blade systems, Dell's setup doesn't have virtualized network I/O. The 10G pipe to each blade is just that: a raw 10G interface without the four virtual interfaces provided by HP's Virtual Connect and IBM's Virtual Fabric. This means that the onus of QoS, bandwidth limiting, and prioritization falls to the OS running on the blade or to the QoS features present in the PowerConnect 8024 10G modules. On one hand, this is a drawback; on the other, it simplifies management in that the PowerConnect 8024 10G switch is really a switch and can be configured as such.
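Because the raw 10G pipe leaves traffic shaping to the blade's OS (or to the 8024 switches), here is a minimal sketch of what OS-side bandwidth control could look like, assuming a Linux install on the blade with the 10G port exposed as eth0 and the standard tc/HTB tooling; the interface name, the rates, and the iSCSI port used for classification are illustrative assumptions, not details from this review.

    #!/usr/bin/env python3
    # Illustrative only: shape a blade's raw 10G link from the OS using Linux tc/HTB.
    # Assumes root privileges, iproute2 installed, and that the 10G NIC appears as eth0.
    import subprocess

    IFACE = "eth0"  # assumed interface name

    def tc(*args):
        # Run one tc command; raise if the kernel rejects it.
        subprocess.run(["tc", *args], check=True)

    # Root HTB qdisc; anything unclassified lands in class 1:20.
    tc("qdisc", "add", "dev", IFACE, "root", "handle", "1:", "htb", "default", "20")

    # Guarantee 4Gbit (bursting to 6Gbit) for storage traffic; leave the rest best-effort.
    tc("class", "add", "dev", IFACE, "parent", "1:", "classid", "1:10",
       "htb", "rate", "4gbit", "ceil", "6gbit", "prio", "0")
    tc("class", "add", "dev", IFACE, "parent", "1:", "classid", "1:20",
       "htb", "rate", "2gbit", "ceil", "10gbit", "prio", "1")

    # Steer iSCSI (TCP port 3260, chosen purely for illustration) into the guaranteed class.
    tc("filter", "add", "dev", IFACE, "protocol", "ip", "parent", "1:", "prio", "1",
       "u32", "match", "ip", "dport", "3260", "0xffff", "flowid", "1:10")

The same sort of policy could instead be pushed into the PowerConnect 8024 switches' own QoS features; deciding where that enforcement lives is the trade-off the raw-pipe design asks you to make.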
