Monday, April 1, 2013

Cisco UCS Data Strategy

 

image

To better understand how the UCS operates, it is important to take a look at its building blocks.

At the core of the system is the Cisco UCS Manager, which is embedded in the fabric switch and provides centralized management for the entire system. Fabric switches are installed in pairs and are available with 20 or 40 ports of 10 Gigabit FCoE. Fabric extenders are logically part of the fabric switch and are inserted into the blade enclosure, two per enclosure; these devices contain a service processor. The blade chassis is also a logical part of the fabric switch and allows for flexible bay configurations.

Next are the blades themselves. There are two blade types, and they can be mixed within the enclosure. The final component is the mezzanine adapter. There are three adapter options, and these can be mixed as well.

image

The simple design of the enclosure makes switches and blades very easy to install and replace. 1U and 2U fabric switches are supported and can be mixed within the enclosure. The enclosure can accommodate up to eight half-width server blades or up to four full-width server blades. Ejector handles make it easy to switch out blades. Hot-swappable SAS drives are optional as well.

In addition, these switches feature redundant, hot-swappable power supplies and fans.

image

In the back of the fabric switch, a series of 10 Gigabit Ethernet ports is available for connecting the chassis, and there is an expansion bay for added flexibility. Large fans provide ample airflow across the rear of the system, and they are both redundant and hot-swappable. A pair of fabric extenders can also be found here, allowing for high availability and redundancy.

image

The UCS Fabric Switch family is available in two versions. There is the 28-port L2 Fabric Switch, which features 20 fixed 10 Gigabit Ethernet and FCoE ports and one expansion module. The 56-port L2 Fabric Switch has twice the capacity, with 40 fixed 10 Gigabit Ethernet and FCoE ports and two expansion modules.

The UCS Fabric Switch family is built from Nexus 5000 technology but is specialized. It features additional RAM, Flash, and embedded Cisco UCS Manager software. These components are for use within a UCS deployment only and will not support connectivity to other 10 Gigabit Ethernet adapters. The chassis cannot plug into other switches. Note that you cannot upgrade a Nexus 5000 to a UCS switch.

image

Shown here are the three types of fabric switch expansion modules. The first is a Fibre Channel-only module that features eight ports of 1/2/4 Gigabit Fibre Channel connectivity. The second is an Ethernet-only module with six 10 Gigabit Ethernet SFP+ ports. The third combines both: it features four Ethernet ports and four Fibre Channel ports that offer the same connectivity as the other two modules.

For environments requiring heavy use of Fibre Channel, you can use all Fibre Channel expansion modules, and then flip all switch ports to fabric mode for external networking.

image

At the heart of the UCS is the Cisco UCS Manager, which provides a single domain of device management, including adapters, blades, chassis, and LAN and SAN connectivity. It is an embedded manager that can be accessed either through a GUI or through basic CLI commands.

The Cisco UCS Manager features standard application programming interfaces (APIs) for systems management, such as XML, SMASH-CLP, WSMAN, IPMI, and SNMP. The software development kit (SDK) is intended for commercial and custom implementations.
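
As a rough illustration of the XML interface, the sketch below logs in to UCS Manager and lists the blade inventory. The /nuova endpoint and the aaaLogin, configResolveClass, and aaaLogout methods are part of the published UCS Manager XML API, but the host name and credentials here are placeholders, and error handling and certificate verification are omitted for brevity.

```python
# Minimal sketch of talking to the UCS Manager XML API.
# Host name and credentials are placeholders; error handling is omitted.
import requests
import xml.etree.ElementTree as ET

UCSM = "https://ucsm.example.com/nuova"  # placeholder UCS Manager address

# 1. Log in and capture the session cookie returned in the outCookie attribute.
login = requests.post(
    UCSM,
    data='<aaaLogin inName="admin" inPassword="password" />',
    verify=False,
)
cookie = ET.fromstring(login.text).attrib["outCookie"]

# 2. Resolve all objects of class computeBlade (the physical blades).
inventory = requests.post(
    UCSM,
    data=(
        f'<configResolveClass cookie="{cookie}" '
        'classId="computeBlade" inHierarchical="false" />'
    ),
    verify=False,
)
for blade in ET.fromstring(inventory.text).iter("computeBlade"):
    print(blade.attrib.get("dn"), blade.attrib.get("model"))

# 3. Log out to release the session.
requests.post(UCSM, data=f'<aaaLogout inCookie="{cookie}" />', verify=False)
```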

image

Another critical UCS component is the fabric extender. The fabric extender contains an I/O mux, which provides flexible bandwidth allocation between the blades and the fabric switch. It is transparent to the user and is managed completely by the Cisco UCS Manager.

The fabric extender also contains a Chassis Management Controller (CMC), a service processor that controls power supply and fan speeds and performs system monitoring. Algorithms in the CMC determine how fast the fans run based on data collected from the blades. The CMC supports high availability through the second fabric extender and its CMC, and it also plays a key part in the hardware discovery process.
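
The actual fan-speed algorithm is internal to the CMC; the toy sketch below only illustrates the idea of mapping blade temperature readings to a fan duty cycle. The thresholds and the linear ramp are invented for this example, not Cisco's algorithm.

```python
# Toy illustration only: map per-blade temperature readings to a fan duty cycle.
# The thresholds and the linear ramp are invented, not Cisco's actual algorithm.
def fan_duty_cycle(blade_temps_c: list[float]) -> int:
    """Return a fan duty cycle (percent) based on the hottest blade."""
    hottest = max(blade_temps_c)
    if hottest < 40:
        return 30                                      # idle floor
    if hottest > 75:
        return 100                                     # thermal ceiling
    return int(30 + (hottest - 40) / (75 - 40) * 70)   # linear ramp in between

print(fan_duty_cycle([38.0, 52.5, 61.0]))  # -> 72
```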

image

As mentioned earlier, blades are available in half-width and full-width forms. Common to both blade types are a pair of Intel® Nehalem-EP processors, two optional serial-attached SCSI (SAS) hard drives, a blade service processor, blade and hard drive hot-plug support, a stateless blade design, and 10 Gigabit CNA and 10 Gigabit Ethernet adapter options. The difference is that a half-width blade features 12 dual in-line memory module (DIMM) slots and a single dual-port adapter, while the full-width blade features four times the memory, with 48 DIMM slots, and two dual-port adapters.

image

Network connectivity for a server blade is provided in one of three ways.

The Oplin adapter, an Intel card, is the least expensive option but provides only 10 Gigabit DCE (Data Center Ethernet). If you wish to use FCoE, you have to use an operating system that offers an FCoE software stack; currently only SUSE 11, a Linux OS, offers these tools. This is the least expensive but perhaps the least flexible and least powerful of the three choices.

Menlo is our first Converged Network Adapter (CNA). It is called converged because it contains both an application-specific integrated circuit (ASIC) for Fibre Channel and an ASIC for Data Center Ethernet, as well as a third onboard ASIC, called the Menlo ASIC, which combines Fibre Channel and Ethernet communication onto 10 Gigabit Ethernet; the Menlo card therefore manages FCoE. We use cards manufactured by both Emulex (the Menlo-E) and QLogic (the Menlo-Q). The Menlo costs more than the Oplin, but you don't have to worry about using an OS with an FCoE stack; all you need are the appropriate drivers. Menlo is called the "compatibility" adapter in this slide because it has a Fibre Channel chip and a Data Center Ethernet chip on it, along with the Menlo chip that combines the two.

Palo is a virtual interface card (VIC). It differentiates itself from Oplin and Menlo in that it can present what appear to the OS as hardware Host Bus Adapters (HBAs) and Ethernet NICs; it is a virtualized network adapter. Whereas the Menlo card limits you to two Ethernet uplinks and two HBA uplinks, Palo can create up to 128 virtual NICs. A customer could easily create a blade server that looks like it has 40 separate Ethernet and/or Fibre Channel cards in it, and that configuration requires far less wiring and power. Palo is cutting-edge technology and is therefore the most expensive of the three options.

image

Having examined each component of the UCS, we will now look at how it all comes together. The compute chassis offers flexibility in the number and type of blades you wish to install: it can accommodate up to eight half-width blades or four full-width blades. Half-width blades feature a single dual-port adapter, while full-width blades feature a pair of dual-port adapters. These adapters can be virtualized for both single-OS and hypervisor-based systems.

The chassis can also house a pair of fabric extenders, which allow for flexible bandwidth allocation and are transparent to the user. These are managed by the Cisco UCS Manager and provide an uplink for traffic engineering.

At the integrated access layer are the fabric switches. These switches feature either 20 or 40 10 Gigabit Ethernet ports, with one or two expansion modules, respectively.

image

In this diagram, we see the connectivity between a chassis and the system's management nodes. You can theoretically connect up to 40 chassis to the management nodes, depending on how the uplinks are provisioned.

Here, the number of links from the I/O modules up to the management nodes changes. Where we first saw only one cable per I/O module, a customer can connect up to four cables per I/O module to the management nodes. Doing so occupies more ports on the management nodes per chassis, which reduces the number of chassis we can connect from 40 to 10.
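
To make the arithmetic explicit, here is a small back-of-the-envelope calculation using the numbers above: a 40-port fabric switch, with each chassis consuming either one or four uplinks per I/O module.

```python
# Back-of-the-envelope chassis count from the figures above:
# 40 fixed ports on the fabric switch, 1 or 4 uplinks per I/O module.
fixed_ports = 40

for links_per_iom in (1, 4):
    max_chassis = fixed_ports // links_per_iom
    print(f"{links_per_iom} uplink(s) per I/O module -> up to {max_chassis} chassis")

# 1 uplink(s) per I/O module -> up to 40 chassis
# 4 uplink(s) per I/O module -> up to 10 chassis
```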

image

Traditionally, server deployment has involved a great deal of management and duplication of effort. The storage administrator is required to configure logical unit numbers (LUNs), access and switch settings for zoning, VSANs, and quality of service (QoS). The server administrator must configure management for the LAN; upgrade firmware versions for the chassis, BIOS, and adapters; and configure BIOS settings, network interface card (NIC) settings, host bus adapter (HBA) settings, and boot parameters. The network administrator must configure LAN access, including uplinks and VLANs, and configure policies for QoS and access control lists (ACLs).

The problem with this approach is that tasks need to be performed for each server. This inhibits the flexibility of a “pay-as-you-grow” deployment, as admin effort is required every time you add a device. This also may involve system downtime. Under this approach, the server replacement process is complex, and many of these tasks need to be repeated for the replacement server.

image

Server deployment using the UCS approach eliminates many of the inefficiencies of traditional server deployment. Using this approach, you plan and pre-configure once, in a collaborative storage, server, and networking planning and configuration phase.

Hundreds of servers are configured up front by creating server profiles, which specify the firmware bundle, BIOS and adapter settings, boot order and other parameters, and LAN and SAN connectivity. This approach allows for a “pay-as-you-grow” incremental deployment. A server admin can deploy servers any time, and in any increment, without disrupting network activity. Server replacement is as easy as replacing the physical blade.
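
As a purely conceptual picture of what a server profile captures, the sketch below uses an invented dictionary; real profiles are defined and stored in UCS Manager, and the field names and values here are illustrative only.

```python
# Illustrative only: the kinds of settings a UCS server profile carries.
# Field names and values are invented; real profiles live in UCS Manager.
server_profile = {
    "name": "web-server-01",
    "firmware_bundle": "1.0(2d)",           # chassis, BIOS, and adapter firmware package
    "bios_settings": {"hyperthreading": True, "turbo_boost": True},
    "adapters": {"vnics": 2, "vhbas": 2},   # LAN and SAN connectivity
    "boot_order": ["san", "local-disk", "pxe"],
    "vlans": [10, 20],
    "vsan": 100,
}

print(server_profile["name"], "boots from", server_profile["boot_order"][0])
```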

image

The UCS server deployment approach cuts down on required management effort through the creation of server profiles. To upgrade or replace a server, you only need to disassociate the server profile from the existing server and then re-associate the profile to the new server. The existing server can then be retired or re-purposed by simply creating or associating an appropriate profile.
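
The hypothetical sketch below walks through that replacement workflow. The two helpers are stand-ins for the actual disassociate and associate operations performed through the UCS Manager GUI, CLI, or XML API; here they only print what each step would do.

```python
# Hypothetical sketch of the blade-replacement workflow described above.
# The helpers are stand-ins for real UCS Manager operations and only print.
def disassociate_profile(profile: str, blade: str) -> None:
    print(f"Disassociating profile {profile} from blade {blade}")

def associate_profile(profile: str, blade: str) -> None:
    print(f"Associating profile {profile} with blade {blade}")

def replace_server(profile: str, old_blade: str, new_blade: str) -> None:
    disassociate_profile(profile, old_blade)  # free the identity from the old blade
    associate_profile(profile, new_blade)     # new blade inherits firmware, BIOS, boot order, LAN/SAN identity

replace_server("web-server-01", "chassis-1/blade-3", "chassis-2/blade-5")
```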

image

The traditional server deployment approach has resources provisioned for peak capacity in each vertical. A spare node per workload is allocated to handle spikes in activity.

Using server profiles, resources are provisioned only as needed. The same availability is provided, but with fewer unused spare devices, which reduces the hardware required, the power consumed, and the management effort.

image

In a UCS architecture, disaster recovery across sites involves the full infrastructure stack. This includes application failover and dependency matching, OS configuration compatibility, data replication, and LAN/SAN connectivity matching.

One of the most critical aspects of disaster recovery is server compatibility, which requires configuration matching for devices such as NICs and HBAs, firmware version matching, parameter settings matching, and LAN/SAN connectivity matching. Server incompatibility across sites can cause application launch failures at the disaster recovery site.

image

Server virtualization and the hardware state abstraction work independently of one another. The hypervisor, or operating system, is unaware of the underlying hardware state abstraction.

image

UCS takes advantage of the provisioning and high-availability features associated with VMware and vMotion. This includes the capability for live migration of virtual machines across ESX hosts. Policy-based virtual machine migration across ESX hosts is also supported.

The ability to deal with a failed host is also critical. In such an event, virtual machines on the failed device can be restarted on other hosts.

image

Virtual Desktop Infrastructure (VDI) can be compared to thin client computing in that it takes processing off of the end user's device and moves it onto a server. The difference with VDI, unlike the thin client approach, is that the virtual desktop is dedicated to a single end user or mapped to provide the desktop OS and applications to a single client-viewing device.

The VMware VDI packaged solution uses VMware ESX as the underlying virtualization product. In a basic VDI architecture, client devices connect to the data center through a VDM connection server, which may also involve an Active Directory® server. In a more advanced deployment, there may be a load balancer and a demilitarized zone (DMZ) between client devices and the data center. The data center may feature infrastructure servers, application virtualization servers, and boot image management services.

image

This example describes how a typical firm may use several database applications running on different categories of databases. Online Transaction Processing (OLTP) applications handle the up-front transactions. As we move left to right, the Operational Data Store (ODS), Data Warehouse, Data Marts, and Analysis tiers represent different machines and applications that further extract and transform the data into information that management uses to make forecasts, look for trends and patterns, and so on. Companies often use different compute systems at these various levels of the database solution. Understanding where in this system the UCS might fit requires an understanding of the criteria for each of the compute elements at the various levels, and of what kinds of machines and applications the client currently uses.

image

In a three-tier system, the web servers face the customers near the top of the network, application servers support the business logic, and the database machines provide back-end support. Where in this architecture can the UCS system fit?

A firm likely will not spend significant money on web servers. Many companies use inexpensive rack-mounted machines to work as web servers—some for production, some for failover, etc. Instead of buying and racking all these machines, a firm could install VMware on a UCS blade and create a series of virtual machine web servers.

At the middle level, the UCS can act as a transactional server. A low-end blade would do very well here. The client can run VMs or not, depending on their needs.

At the back end, performance and I/O are critical. The UCS offers very low latency, so it makes very good sense to run the database on these boxes. Oftentimes the client uses big-iron systems, or more recently Oracle racks, at the back end; the salesperson should not get discouraged when the client does not want to migrate the back end to UCS, because for every database server there are often many app and web servers, and the UCS system fits very well there.
