Monday, April 1, 2013
Comparison of Cisco UCS C-Series Rack Mount Servers
Cisco UCS C220
URL: http://www.cisco.com/en/US/docs/unified_computing/ucs/c/hw/C220/install/replace.html
DCNSS Exam - Cisco Data Center 3.0
Next Gen Data Center
Cisco has identified three networking environments in which the intelligent network can play a vital role. These are data networking, storage networking, and application networking. These represent three functional areas which in some cases overlap.
The traditional data network infrastructure, the LAN and the WAN, is well established, with IP protocols being ubiquitous. The storage area network (SAN) is increasingly common in the data center, leveraging both Fibre Channel and IP protocols. Application networking is a less familiar term: it refers to embedding specific application platforms in the network that take on low-level, repeatable, and unchanging functions, removing a burden from the application running on the server. An example is security, where the requirement goes beyond typical firewall deliverables.
Cisco’s Intelligent Information Network (IIN) strategy defines a three-to-five-year roadmap for increasing value by embedding greater intelligence in a ubiquitous network. Cisco’s Service-Oriented Network Architecture (SONA) provides the roadmap to build a service-oriented infrastructure (SOI) using Cisco’s Data Center Network Infrastructure (DCNI) products in three main phases: Consolidation, Virtualization, and Automation. These three phases run in parallel, evolve at different speeds for distinct technology areas, and are part of Cisco’s Service Oriented Data Center (SODC) initiative.
Within many data centers, the responsibility for different technology and functional areas is split. You may find different teams dealing with servers, storage, and the data network.
This lack of a holistic approach to the data center can mean that each technology environment deals with scalability, virtualization, management, and security without reference to the other areas in the data center. This can lead to inefficient use of both capital expenditure and expensive data center staff, and it limits the ability to deploy and redeploy resources flexibly across the data center.
Cisco’s strategy of having a converged, intelligent network as the foundation for the data center facilitates a holistic, cross-environment view of the data center, using network-embedded functions that can be effective across all the data center environments. This can mean reduced OpEx for the data center, as the network offers a single point of management for the whole data center. Reducing the number of functions duplicated within different environments can also mean improvements in data center CapEx. As functions become embedded in the network, there is also the added value of having highly resilient, director-class network devices which can not only consolidate the functions but also improve service availability.
This concept is an important element of maximizing the sales potential in the data center.
Application networking is a term used to describe network-embedded applications which are shared or networked between different devices and applications. This provides for a single instance of the function within the network to be made available, end to end, across the data center.
Cisco has two main categories of Application Networking Solutions. Data center solutions provide secure, reliable, high-performing data center infrastructure for application traffic within or across data centers. Branch office and WAN solutions provide LAN-like performance for applications delivered over the WAN, and thus enable IT to consolidate servers/storage within the branch office. Cisco branch solutions for application delivery are deployed symmetrically in both the data center and branch offices.
Data Center Challenge
Today’s enterprise networks face common challenges and problems across the board when looking to deploy application solutions. The lowest common denominator for each category gives us four distinct challenges. The first of these is availability and reliability. Is the application available when needed? Is it reliable or do I have to constantly monitor it and “nurse” it? The second challenge is performance. Does the application perform well or are the end users spending too much time at the water cooler? Security is always a challenge. Who can access the data? The last challenge is cost. How much does it cost to deploy, operate, and maintain the application?
Data Center 3.0
When you consider virtualization across the data center, you always have to start with a physical device or resource. These include servers, switches, and storage. These physical devices and resources are dependent upon a foundation network to communicate. It is this network that Cisco is unifying into a single fabric based on Ethernet, which is a core element of Cisco’s Data Center 3.0 strategy. Within this unified fabric, Cisco has embedded its network services.
In order to virtualize the data center, it is necessary to consolidate the data center resources and then deploy them as logical devices and resources. Server virtualization is accomplished through products such as VMware, Microsoft Virtual Server, and Virtual Iron, each with their own features, functions, benefits, and price points. Virtualizing the data network can be handled in more than one way. Typically, VLANs allow the network to be logically segmented, but with the advent of the Nexus 7000, Cisco can now also deploy virtual domains of switch functionality—effectively virtual switches—also known as virtual device contexts, or VDCs. The Nexus 7000 supports a maximum of 4 VDCs in NX-OS release 4.1.
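To make the idea of carving one physical switch into isolated logical switches concrete, here is a minimal Python sketch (the class, port names, and limit constant are illustrative, not an NX-OS API) of a chassis partitioned into at most four VDCs, each owning its own ports.

```python
# Minimal sketch of partitioning a physical switch into virtual device contexts (VDCs).
# All names here are illustrative; NX-OS enforces this inside the switch OS itself.

MAX_VDCS = 4  # Nexus 7000 limit in NX-OS 4.1, per the paragraph above


class PhysicalSwitch:
    def __init__(self, name, ports):
        self.name = name
        self.free_ports = set(ports)   # ports not yet assigned to any VDC
        self.vdcs = {}                 # VDC name -> set of ports it owns

    def create_vdc(self, vdc_name, ports):
        """Carve out a VDC and give it exclusive ownership of the listed ports."""
        if len(self.vdcs) >= MAX_VDCS:
            raise RuntimeError("VDC limit reached for this chassis")
        ports = set(ports)
        if not ports <= self.free_ports:
            raise ValueError("some ports are already allocated to another VDC")
        self.free_ports -= ports
        self.vdcs[vdc_name] = ports
        return ports


# Usage: one chassis behaves like several independent switches.
n7k = PhysicalSwitch("n7k-1", [f"e1/{i}" for i in range(1, 49)])
n7k.create_vdc("prod", [f"e1/{i}" for i in range(1, 17)])
n7k.create_vdc("dev", [f"e1/{i}" for i in range(17, 25)])
print(sorted(n7k.vdcs))   # ['dev', 'prod']
```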
In the storage networking environment, Cisco’s MDS switches have a well-established technique, VSANs, which allow virtual domains to be created within a physical Fibre Channel fabric, with each domain having all the capabilities and functions supported by the physical switch. Data storage can be virtualized as logical units of capacity. The Storage Services Module (SSM), which can be fitted into an MDS switch, facilitates storage capacity virtualization working with partner products such as EMC Invista.
Having virtualized the devices and resources, the next challenge is to provision these to applications.
The provisioning and management of a virtualized data center is an important component in Cisco’s Data Center 3.0 strategy.
The last element of a coherent, holistic strategy for the virtualized data center is the automation of provisioning. Having consolidated the data center resources, Cisco can contribute to and facilitate virtualization and is in a position to deliver a resource provisioning solution through ecosystem partner products such as BladeLogic.
The integration of the three platforms creates an area of overlap that challenges historical organizational boundaries, because the additional complexity must be dealt with throughout the entire lifecycle.
From a site cost perspective, this has translated into improved environmentals—power, cooling, and space—due to the improved hardware utilization and reduced footprint for a static compute demand.
Platform costs have stayed relatively flat. Storage costs have increased due to the need for SAN and/or NAS solutions. Network costs have increased due to the additional network requirements. Software costs have gone up overall from the addition of the virtualization layer required in the stack. Server consolidation, however, has reduced overall quantity and cost.
Organizational costs have gone up. This is because of new organizational roles, such as virtualization administrators and development roles, and also because of the coordination overhead created by the high complexity and high-touch overlaps.
By setting up and managing virtual connections, a UCS helps solve the most vexing virtualization performance issues. The Cisco approach to virtualization also expands the utilization rate of each processor in the system. In effect, the UCS extends the concept of virtualization out to create pools of shared memory, storage, and compute engine resources.
Greater efficiencies are realized as more VMs are added per server. This translates directly into lower power, cooling, and cost per VM.
This chart illustrates how effective a UCS is with regard to cost efficiency. The bar on the left represents legacy rack-mounted end-of-row deployment. Fibre Channel over Ethernet (FCoE) allows for some cost savings by leveraging 10 Gigabit Ethernet networks while preserving the Fibre Channel protocol. However, with a UCS, the combined features of a unified fabric, elimination of unused resources, and ease of management allow for far greater cost efficiency. The end result is a price/performance ratio for supporting virtual machines that is unrivalled in the industry.
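As a rough illustration of the cost-per-VM argument, the sketch below compares two hypothetical deployments. Every price, wattage, and VM density is a made-up placeholder, not Cisco data; only the shape of the calculation matters.

```python
# Back-of-the-envelope cost-per-VM comparison (all numbers are hypothetical placeholders).

def cost_per_vm(server_cost, servers, vms_per_server,
                watts_per_server, years=3, usd_per_kwh=0.10):
    """Capital cost plus energy cost over the period, divided by total VMs hosted."""
    total_vms = servers * vms_per_server
    capex = server_cost * servers
    energy_kwh = watts_per_server * servers * 24 * 365 * years / 1000.0
    return (capex + energy_kwh * usd_per_kwh) / total_vms


legacy = cost_per_vm(server_cost=6000, servers=40, vms_per_server=8, watts_per_server=450)
ucs = cost_per_vm(server_cost=9000, servers=20, vms_per_server=20, watts_per_server=400)
print(f"legacy rack: ${legacy:,.0f}/VM   unified fabric blades: ${ucs:,.0f}/VM")
```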
Cisco UCS Data Strategy
To better understand how the UCS operates, it is important to take a look at its building blocks.
At the core of the system is the Cisco UCS Manager, which is embedded in the fabric switch. It provides centralized management for the entire system. Fabric switches are installed in pairs and are available with 20 or 40 ports of 10 Gigabit Ethernet/FCoE. Fabric extenders are logically part of the fabric switch and are inserted into the blade enclosure, two per enclosure. These devices contain a service processor. The blade chassis is also a logical part of the fabric switch and allows for flexible bay configurations.
Next are the blades themselves. There are two blade types, and these types can be mixed within the enclosure. The final component is the mezzanine adapter. There are three adapter options, and these can be mixed within the enclosure as well.
The simple design of the enclosure makes switches and blades very easy to install and replace. Both 1U and 2U fabric switches are supported. The enclosure can accommodate up to eight half-width server blades or up to four full-width server blades. Ejector handles make it easy to swap out blades. Hot-swappable SAS drives are optional as well.
In addition, these switches feature redundant, hot swappable power supplies and fans.
In the back of the fabric switch, a series of 10 Gigabit Ethernet ports are available for connecting the chassis, and there is an expansion bay for added flexibility. Large fans provide a large amount of coverage in the rear of the system, and they are both redundant and hot-swappable. In addition, a pair of fabric extenders can be found. These allow for high availability and redundancy.
The UCS Fabric Switch family is available in two versions. There is the 28-port L2 Fabric Switch, which features 20 fixed 10 Gigabit Ethernet and FCoE ports and one expansion module. The 56-port L2 Fabric Switch has twice the capacity, with 40 fixed 10 Gigabit Ethernet and FCoE ports and two expansion modules.
The UCS Fabric Switch family is built from Nexus 5000 technology but is specialized. It features additional RAM, Flash, and embedded Cisco UCS Manager software. These components are for use within a UCS deployment only and will not support connectivity to other 10 Gigabit Ethernet adapters. The chassis cannot plug into other switches. Note that you cannot upgrade a Nexus 5000 to a UCS switch.
Shown here are the three types of fabric switch expansion modules. The first is a Fibre Channel-only module that features eight ports of 1/2/4-Gbps Fibre Channel connectivity. The second is an Ethernet-only module that has six 10 Gigabit Ethernet SFP+ ports. There is also a module that combines both: it features four Ethernet ports and four Fibre Channel ports that offer the same connectivity as the other two modules.
For environments requiring heavy use of Fibre Channel, you can use all Fibre Channel expansion modules, and then flip all switch ports to fabric mode for external networking.
At the heart of the UCS is the Cisco UCS Manager. The Cisco UCS Manager provides a single domain of device management, including adapters, blades, chassis, and LAN and SAN connectivity. It is an embedded manager that can be accessed either through a GUI or basic CLI commands.
The Cisco UCS Manager features standard application programming interfaces (APIs) for systems management, such as XML, SMASH-CLP, WSMAN, IPMI, and SNMP. The software development kit (SDK) is intended for commercial and custom implementations.
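Because the manager exposes an XML API, a login exchange can be scripted directly. The sketch below assumes the aaaLogin method and the /nuova endpoint of the UCS Manager XML API; the host name and credentials are placeholders, and in practice you would verify the method names against the XML API reference or use Cisco's SDK.

```python
# Hedged sketch: logging in to the Cisco UCS Manager XML API over HTTP.
# Host and credentials are placeholders; check method names against the XML API guide.
import urllib.request
import xml.etree.ElementTree as ET

UCSM_URL = "https://ucsm.example.com/nuova"   # assumed XML API endpoint


def ucs_login(username, password):
    body = f'<aaaLogin inName="{username}" inPassword="{password}"/>'
    req = urllib.request.Request(UCSM_URL, data=body.encode(), method="POST")
    with urllib.request.urlopen(req) as resp:
        root = ET.fromstring(resp.read())
    return root.get("outCookie")              # session cookie used by subsequent calls


# cookie = ucs_login("admin", "secret")       # then pass the cookie to later queries
```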
Another critical UCS component is the fabric extender. The fabric extender contains an IO Mux, which provides flexible bandwidth allocation between the blades and the fabric switch. It is transparent to the user and is managed completely by the CAM.
The fabric extender contains a Chassis Management Controller (CMC). This features a service processor that controls power supply and fan speeds as well as system monitoring. Algorithms in the CMC determine how fast the fans run, through data collected from the blades. The CMC supports high availability through another fabric extender and CMC. The CMC also plays a key part in the hardware discovery process.
As mentioned earlier, blades are available in half-width and full-width forms. Common to both blade types are a pair of Intel® Nehalem-EP processors, two optional serial-attached SCSI (SAS) hard drives, a blade service processor, blade and hard drive hot-plug support, a stateless blade design, and 10 Gigabit CNA and 10 Gigabit Ethernet adapter options. The difference is that a half-width blade features 12 dual in-line memory module (DIMM) slots and a single dual-port adapter, while the full-width blade features four times the memory with 48 DIMM slots and two dual-port adapters.
Network connectivity for a server blade is provided one of three ways.
The Oplin adapter, an Intel card, is the least expensive option, but provides only 10 Gigabit DCE (Data Center Ethernet). If you wish to use FCoE, then you have to use an operating system that offers an FCoE software stack. Currently only SUSE 11, a Linux OS, offers these tools. This is the least expensive but perhaps the least flexible and powerful of the three choices.
Menlo is our first Converged Network Adapter (CNA). It's called converged because it contains both an application-specific integrated circuit (ASIC) for Fibre Channel and an ASIC for Data Center Ethernet, as well as a third ASIC onboard, called the Menlo ASIC. The Menlo ASIC combines Fibre Channel and Ethernet communication into 10 Gigabit Ethernet, so the Menlo card manages FCoE. We use cards manufactured by both Emulex (called the Menlo-E) and QLogic (called the Menlo-Q). The Menlo costs more than the Oplin, but you don't have to worry about using an OS with the FCoE stack; all you need are the appropriate drivers. Menlo is called the "Compatibility" adapter in this slide because it has a Fibre Channel chip and a Data Center Ethernet chip on it, along with the Menlo chip that combines the two.
Palo is a virtual interface card (VIC). It differentiates itself from Oplin and Menlo in that it can present what appear to the OS as hardware Host Bus Adapters (HBAs) and Ethernet NICs. It is a virtualized network adapter. Whereas the Menlo card limits you to two Ethernet uplinks and two HBA uplinks, Palo can create up to 128 virtual NICs. A customer could easily create a blade server that looks like it has 40 separate Ethernet and/or Fibre Channel cards in it, and that configuration requires far less wiring and far less power. Palo is cutting-edge technology and is therefore the most expensive of the three options.
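A quick way to keep the three mezzanine options straight is to lay out the comparison as data. The summary below only restates the preceding paragraphs; the relative cost figures are ordinal, not quoted prices.

```python
# Relative comparison of the three mezzanine adapter options described above.
adapters = {
    "Oplin (Intel)": {
        "design": "10 Gigabit Data Center Ethernet NIC",
        "fcoe": "needs an FCoE software stack in the OS",
        "presents": "Ethernet only",
        "relative_cost": 1,
    },
    "Menlo (Emulex Menlo-E / QLogic Menlo-Q)": {
        "design": "CNA: FC ASIC + DCE ASIC + Menlo ASIC",
        "fcoe": "handled on the card; only drivers needed",
        "presents": "2 x Ethernet + 2 x HBA",
        "relative_cost": 2,
    },
    "Palo (virtual interface card)": {
        "design": "virtualized adapter",
        "fcoe": "handled on the card",
        "presents": "up to 128 virtual NICs/HBAs",
        "relative_cost": 3,
    },
}

for name, spec in adapters.items():
    print(f"{name}: {spec['presents']} ({spec['fcoe']})")
```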
Having examined each component of the UCS, we will now look at how it all comes together. The compute chassis offers flexibility in the number and type of blades you wish to install. It can accommodate up to eight half-width blades or four full-width blades. Half-width blades feature a single dual-port adapter, while full-width blades feature a pair of dual-port adapters. These adapters can be virtualized for both single-OS and hypervisor systems.
The chassis can also house a pair of fabric extenders, which allow for flexible bandwidth allocation and are transparent to the user. These are managed by the Cisco UCS Manager and provide an uplink for traffic engineering.
At the integrated access layer are the fabric switches. These switches feature either 20 or 40 10 Gigabit Ethernet ports, with one or two expansion modules, respectively.
In this diagram, we see the connectivity between a chassis and the system's management nodes. You can theoretically connect up to 40 chassis to the management nodes, depending on the fabric switch port count and the number of uplinks used per chassis.
Here, the number of links from the I/O modules going up to the management nodes changes. Where we first saw only one cable going up per I/O module, a customer can connect up to four cables per I/O module to the management nodes. If we do that, then more of the ports on the management nodes are occupied per chassis, which reduces the number of chassis we can use from 40 to 10.
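The chassis-count trade-off above is simple division. The port counts below come straight from the text; the function itself is just illustrative arithmetic.

```python
# How many chassis a fabric switch can support, given the uplinks cabled per I/O module.
def max_chassis(fabric_switch_ports, links_per_iom):
    """Each chassis consumes links_per_iom ports on each fabric switch of the pair."""
    return fabric_switch_ports // links_per_iom


print(max_chassis(40, 1))   # 40 chassis with a single uplink per I/O module
print(max_chassis(40, 4))   # 10 chassis when four uplinks per I/O module are cabled
```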
Traditionally, server deployment has involved a great deal of management and duplication of effort. The storage administrator is required to configure logical unit numbers (LUNs), access and switch settings for zoning, VSANs, and quality of service (QoS). The server administrator must configure management for the LAN; upgrade firmware versions for the chassis, BIOS, and adapters; and configure BIOS settings, network interface card (NIC) settings, host bus adapter (HBA) settings, and boot parameters. The network administrator must configure LAN access, including uplinks and VLANs, and configure policies for QoS and access control lists (ACLs).
The problem with this approach is that tasks need to be performed for each server. This inhibits the flexibility of a “pay-as-you-grow” deployment, as admin effort is required every time you add a device. This also may involve system downtime. Under this approach, the server replacement process is complex, and many of these tasks need to be repeated for the replacement server.
Server deployment using the UCS approach eliminates many of the inefficiencies with traditional server deployment. Using this approach, you plan and pre-configure once. There is a collaborative storage, server, and networking planning and configuration phase.
Hundreds of servers are configured up front by creating server profiles, which specify the firmware bundle, BIOS and adapter settings, boot order and other parameters, and LAN and SAN connectivity. This approach allows for a “pay-as-you-grow” incremental deployment. A server admin can deploy servers any time, and in any increment, without disrupting network activity. Server replacement is as easy as replacing the physical blade.
The UCS server deployment approach cuts down on required management effort through the creation of server profiles. To upgrade or replace a server, you only need to disassociate the server profile from the existing server and then re-associate the profile to the new server. The existing server can then be retired or re-purposed by simply creating or associating an appropriate profile.
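The service profile described in the last few paragraphs is easy to picture as a small record that can be detached from one blade and attached to another. The class and fields below are an illustrative model only, not the actual UCS Manager object schema.

```python
# Illustrative model of a service profile: identity and policy live in the profile,
# not in the physical blade, so moving the profile effectively moves the server.
from dataclasses import dataclass


@dataclass
class ServiceProfile:
    name: str
    uuid: str
    macs: list              # vNIC MAC addresses
    wwns: list              # vHBA world-wide names
    firmware_bundle: str
    bios_settings: dict
    boot_order: list
    blade: str = None       # physical blade currently associated, if any

    def associate(self, blade_slot):
        self.blade = blade_slot

    def disassociate(self):
        self.blade = None


web01 = ServiceProfile("web01", "hypothetical-uuid-0001",
                       macs=["00:25:b5:00:00:01"], wwns=["20:00:00:25:b5:00:00:01"],
                       firmware_bundle="bundle-x", bios_settings={"vt": True},
                       boot_order=["san", "lan"])
web01.associate("chassis-1/blade-3")
web01.disassociate()                    # blade failed or is being retired
web01.associate("chassis-2/blade-5")    # the same identity now runs on new hardware
```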
The traditional server deployment approach has resources provisioned for peak capacity in each vertical. A spare node per workload is allocated to handle spikes in activity.
Using server profiles, resources are provisioned only as needed. The same availability is provided, but there are fewer unused spare devices. This reduces the hardware required, the power consumed, and the management effort.
In a UCS architecture, disaster recovery across sites involves the full infrastructure stack. This includes application failover and dependency matching, OS configuration compatibility, data replication, and LAN/SAN connectivity matching.
One of the most critical aspects of disaster recovery is server compatibility, which requires configuration matching for devices such as NICs and HBAs, firmware version matching, parameter settings matching, and LAN/SAN connectivity matching. Server incompatibility across sites can cause application launch failures at the disaster recovery site.
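A sketch of the compatibility check implied above: before failing over, compare the record describing the production server with what the recovery site offers. The field names and values are hypothetical.

```python
# Hypothetical compatibility check between a primary-site server and its DR counterpart.
def compatibility_gaps(primary, recovery):
    """Return the settings that differ and could block a clean application restart."""
    keys = ("nic_count", "hba_count", "firmware", "vlans", "vsans")
    return {k: (primary.get(k), recovery.get(k))
            for k in keys if primary.get(k) != recovery.get(k)}


primary = {"nic_count": 2, "hba_count": 2, "firmware": "bundle-A", "vlans": [10, 20], "vsans": [100]}
recovery = {"nic_count": 2, "hba_count": 2, "firmware": "bundle-B", "vlans": [10, 20], "vsans": [100]}
print(compatibility_gaps(primary, recovery))   # {'firmware': ('bundle-A', 'bundle-B')}
```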
Server virtualization and the hardware state abstraction work independently of one another. The hypervisor, or operating system, is unaware of the underlying hardware state abstraction.
UCS takes advantage of the provisioning and high-availability features associated with VMware and vMotion. This includes the capability for live migration of virtual machines across ESX hosts. Policy-based virtual machine migration across ESX hosts is also supported.
The ability to deal with a failed host is also critical. In such an event, virtual machines on the failed device can be restarted on other hosts.
Virtual Desktop Infrastructure (VDI) can be compared to thin-client computing in that processing is taken off the end user's device and moved onto a server. The difference with VDI, unlike the thin-client approach, is that the virtual desktop is dedicated to a single end user, or mapped to provide the desktop OS and applications to a single client-viewing device.
The VMware VDI packaged solution uses VMware ESX as the underlying virtualization product. In a basic VDI architecture, client devices connect to the data center through a VDM connection server, which may also involve an Active Directory® server. In a more advanced deployment, there may be a load balancer and a demilitarized zone (DMZ) between client devices and the data center. The data center may feature infrastructure servers, application virtualization servers, and boot image management services.
This example describes how a typical firm may use several database applications running on different categories of databases. Online transaction processing (OLTP) applications handle the up-front transactions. As we move left to right, the operational data store (ODS), data warehouse, data marts, and analysis tier represent different machines and applications that further extract and transform the data into information that management uses to make forecasts, look for trends and patterns, and so on. Companies often use different compute systems at these various levels of the database solution. Understanding where in this system the UCS might fit requires an understanding of the criteria for each of the compute elements at the various levels, and what kinds of machines and applications the client currently uses.
In a three-tier system, the web servers face the customers near the top of the network, application servers support the business logic, and the database machines provide back-end support. Where in this architecture can the UCS system fit?
A firm likely will not spend significant money on web servers. Many companies use inexpensive rack-mounted machines to work as web servers—some for production, some for failover, etc. Instead of buying and racking all these machines, a firm could install VMware on a UCS blade and create a series of virtual machine web servers.
At the middle level, the UCS can act as a transactional server. A low-end blade would do very well here. The client can run VMs or not, depending on their needs.
At the back end, performance and I/O are critical. The UCS offers very low latency, so it makes very good sense to run the database on these boxes. Oftentimes the client uses big-iron systems, or more recently Oracle racks, at the back end. The salesperson should not get discouraged when the client does not want to migrate the back end to UCS, because for every database server there are often many application and web servers, and the UCS fits very well there.
Monday, March 11, 2013
Cisco UCS Sales Specialty Training notes
Data Center Components
B230 M1
* Stateless Computing – Server Profile – MAC, WWN, IP, Firmware, BIOS, VLAN, ACL etc
Service Profile – Personality – MACs, WWNs, UUIDs
Service Profile – Server Resources – vNICs and vHBAs
Service Profile – Behavior – QoS, firmware, etc. – abstracts the hardware
Nexus 2K – Fabric Extender to support 1G connections for FCoE
Nexus 2K & 5K act as a single virtual switch
CNA – Converged Network Adapter?
Nexus 7K – HA, supports up to 4 VDCs
UCS VIC M81KR with FI ?
Removes the software virtual switch – replaces software-based switching on ESX with ASIC-based switching on the FI (VN-Link). The VIC can do VN-Link in hardware or in software, but a CNA only works with the Nexus 1000V.
UCS P81E Virtual Interface Card with UCS servers
Supports up to 128 PCIe vNICs, dynamic provisioning of vNICs and vHBAs, VN-Link
Standard mode: 1:1 mapping of vNIC to PCIe device by the VIC; all traffic is sent to the upstream FI for switching
High-performance mode: bypasses the hypervisor for VM I/O; the VM connects directly to the VIC, eliminating a complete memory copy in the hypervisor, for roughly 50% more performance
VN-Link – moves the data link to the VM layer
Issues addressed by Cisco UCS
UCS Management
Front panel: video, USB, serial
Ethernet connects to the Cisco Integrated Management Controller (CIMC), which runs on the Baseboard Management Controller (BMC)
In-band –
Out-of-band – IPMI 2.0, CLI, web interface
LOM: LAN on Motherboard
N+1 vs. grid redundancy: grid redundancy means twice the non-redundant supply count, surviving the loss of an entire power grid within the DC (see the sizing sketch below)
Inbound and outbound: inbound not redundant, outbound redundant N+1
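The N+1 versus grid note is just power-supply arithmetic. A sizing sketch (wattages are placeholders):

```python
import math


# Illustrative PSU-count arithmetic for N+1 vs. grid redundancy (numbers are placeholders).
def psus_needed(load_watts, psu_watts, mode):
    n = math.ceil(load_watts / psu_watts)   # supplies needed just to carry the load
    if mode == "n+1":
        return n + 1                        # one spare supply
    if mode == "grid":
        return 2 * n                        # a full duplicate set fed from a second grid
    return n                                # non-redundant


print(psus_needed(4500, 2500, "n+1"))    # 3 supplies
print(psus_needed(4500, 2500, "grid"))   # 4 supplies, half on each power grid
```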
UCS Fabric Interconnect / Fabric Switches
Not for non-UCS hardware; a Nexus 5000 cannot be upgraded to a UCS fabric switch
Fabric Extender
IO MUX – flexible bandwidth alloc between blade and fabric switch, transparent to user, managed by CAM.
Chassis Management Controller (CMC) – service processor, HA with another fabric extender/CMC, hardware discovery
Storage Management
Top of Rack, End of Row or Middle of Row
Summary of Top of Rack advantages (Pro’s):
- Copper stays “In Rack”. No large copper cabling infrastructure required.
- Lower cabling costs. Less infrastructure dedicated to cabling and patching. Cleaner cable management.
- Modular and flexible “per rack” architecture. Easy “per rack” upgrades/changes.
- Future proofed fiber infrastructure, sustaining transitions to 40G and 100G.
- Short copper cabling to servers allows for low power, low cost 10GE (10GBASE-CX1), 40G in the future.
- Ready for Unified Fabric today.
Summary of Top of Rack disadvantages (Con’s):
- More switches to manage. More ports required in the aggregation.
- Potential scalability concerns (STP Logical ports, aggregation switch density).
- More Layer 2 server-to-server traffic in the aggregation.
- Racks connected at Layer 2. More STP instances to manage.
- Unique control plane per 48-ports (per switch), higher skill set needed for switch replacement.
10GE CNA
CISCO UCS Power Calculator & TCO Advisor
URL: http://www.myciscocommunity.com
Position of Rack: http://bradhedlund.com/2009/04/05/top-of-rack-vs-end-of-row-data-center-designs/
Q/A
Support Boot from SAN ?