
We had an interesting meeting with Cisco today, where they gave us a presentation on Cisco UCS. I spent a little time digging around and would like to share my understanding with you. As usual, this post should be easy to follow.

What is Cisco UCS?

Yes, you can google the term. But essentially, Cisco UCS is one of the vBlock components, and Cisco decided to sell it separately. With the VCE (VMware, Cisco, EMC) alliance taking shape, it's clear that Cisco plays the blade server and networking roles.

In this UCS system, Cisco provides a new VIC (Virtual Interface Card, which is actually a physical card!) for your blade servers (or your rack servers!), a fabric switch (the Cisco 6100 Fabric Interconnect, which carries both network and SAN traffic), a chassis with blade servers (goodbye, HP & IBM), and one piece of management software (I'm pretty sure it can manage an EMC SAN as well, given the right models and license).

If you are game enough to throw your EMC SAN into the mix and load VMware on the blades, ding! You've got your own vBlock!

What can Cisco UCS do for you?

Surprisingly, the first selling point in today's meeting was not cost savings (they did mention savings once you buy at least 2 chassis and 6 blades...), and it was not performance improvement (which I will get to later). The first selling point is that you will have fewer cables in the datacenter! Interesting, isn't it? Well, let me elaborate on those points one by one.

Fewer cables in the DC

Because you are using blade servers, you would expect fewer cables, since everything goes through the backplane. Cisco has pushed out a new physical card, the VIC (Virtual Interface Card, a very confusing name, isn't it?). You are supposed to have these cards (in pairs for load balancing, presumably) in each of your blade or rack servers (I still need to confirm whether you can install them in servers from other brands). This one card carries both network and SAN traffic.
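To see why "fewer cables" is a selling point at all, here is a back-of-the-envelope sketch. The per-server link counts below are my own illustrative assumptions (separate Ethernet, FC, and management links versus two converged links), not figures from Cisco:

```python
def cables(servers, nics, hbas, mgmt):
    """Total cable count for a set of servers with the given links each."""
    return servers * (nics + hbas + mgmt)

# Traditional rack: 2 Ethernet NICs + 2 FC HBAs + 1 management link each.
traditional = cables(servers=8, nics=2, hbas=2, mgmt=1)

# Unified fabric: 2 converged links carry LAN, SAN, and management traffic.
converged = cables(servers=8, nics=2, hbas=0, mgmt=0)

print(traditional, converged)  # 40 vs 16
```

Even with these modest assumptions, the converged setup cuts the cable count by more than half, which is the effect Cisco is pitching.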

VMDirectPath kicks in with VIC

This is the interesting part. VMDirectPath is a VMware ESX 4 feature that allows your VMs to access a PCIe device on the host directly. With the VIC in your ESX host (you need to download a special Cisco OEM build of ESXi 4), you can map your vmxnet3 adapters straight through to the Cisco 6100 fabric switch, which creates a dynamic port to do a 1:1 port mapping for your data traffic. So you basically ditch the vSS (the traditional local vSwitch) and start to use the fancy vDS. Wait a second: not only do you need to buy Enterprise Plus for all your ESX hosts, you also need to purchase the Cisco Nexus 1000V vDS so that VMware can manage your network I/O and storage I/O, since both go through the same card. According to Cisco's diagram, you will get a 30% network I/O performance increase, if you are using VMware (bye-bye, Hyper-V and Citrix). But yes, that's network I/O only. Why? Because the VMware hypervisor layer handles storage I/O.
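The 1:1 dynamic port idea can be sketched as a toy model: each passed-through vNIC gets its own dynamically created port on the fabric interconnect, and the mapping stays stable for that vNIC. This is purely conceptual; the class and method names are mine for illustration, not a real Cisco or VMware API:

```python
class FabricInterconnect:
    """Toy model of dynamic 1:1 vNIC-to-port mapping on a fabric switch."""

    def __init__(self):
        self.ports = {}      # vNIC id -> dynamic port number
        self.next_port = 1

    def attach(self, vnic_id):
        """Create (or reuse) a dedicated dynamic port for this vNIC."""
        if vnic_id not in self.ports:
            self.ports[vnic_id] = self.next_port
            self.next_port += 1
        return self.ports[vnic_id]

fi = FabricInterconnect()
print(fi.attach("vm1-vmxnet3"))  # 1
print(fi.attach("vm2-vmxnet3"))  # 2
print(fi.attach("vm1-vmxnet3"))  # 1 (same vNIC, same port: the mapping is stable)
```

The point of the 1:1 mapping is that each VM's traffic hits a dedicated switch port instead of being multiplexed through a shared software vSwitch, which is where the claimed network I/O gain comes from.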

With the vSphere 5.0 release later on, you will be able to vMotion with VMDirectPath. That means the VMware hypervisor layer will understand the VIC, present it like a virtual device, and carry the VM across during vMotion.

One Management Software

Oh, yeah. The UCS management software. A one-stop shop for everything, if you buy everything Cisco suggests. It is a basic version of the vBlock software, which, I'm pretty sure, can control your EMC SAN as well, if you purchase the correct models and licenses. VCE claims they have a dedicated team to handle all support calls. It's not bug-free software, but it does help you deploy VMs and locate issues.

Money, Money, money

Well, at the end of the day, cost decides everything. Cisco UCS ain't cheap. You need to buy blades and chassis, and that alone will cost you an arm and a leg. I still need to confirm whether the VIC will work in other servers. But all that cost (assuming you already have your servers, SAN, and FC switch) essentially buys you a 30% network I/O improvement in your VMs and cable aggregation, which most blade systems do anyway. I haven't compared costs between normal blades and Cisco UCS, but that's pretty much what it is.

Conclusion:

Well, if it is time for you to upgrade your ESX hosts, and you have plans to buy blades and chassis, Cisco UCS can be an option for you. Oh, and I almost forgot: that 30% network I/O gain also requires extra VMware Enterprise Plus licenses and the Cisco Nexus 1000V...


One Comment

  1. The UCS looks like a decent solution on the surface. I did some digging a while back and I couldn't find good VMmark numbers; the full disclosures for the UCS systems on the VMware site show them set up with RAID 0 arrays. Next point of contention: the Cisco 1000V. It doesn't scale, and it takes CPU away from virtual machines that may need it (most likely not, but maybe). It has some interesting features, but I am not sure how close to my servers I want the network admins to get. With the 1000V they will need to configure VLANs for the chassis in two places instead of one.

    For the cost, IBM, HP, or Dell make superior products. I have not seen a UCS in production. I like how Cisco overcomes the Intel memory barrier. I have seen more VMware running on Sun hardware. I would like to kick the tires on one of these.

