

What is UCS VIC failover?

To put it simply, each blade can have a VIC card. Each VIC card has two 10 Gbit/s ports, like the Cisco UCS M81KR we are using.

This VIC card handles all network and SAN traffic from the blade to both IOMs. When one upstream path fails, the VIC automatically redirects traffic to the other working interface without an outage.

For more details, please refer to the reference document.


Why do we need to disable UCS VIC failover?

According to the UCS design document:

All Connectivity May Be Lost During Upgrades if vNIC Failover and NIC Teaming Are Both Enabled

All connectivity may be lost during firmware upgrades if you have configured both Enable Failover on one or more vNICs and you have also configured NIC teaming/bonding at the host operating system level. Please design for availability by using one or the other method, but never both.

To determine whether you have enabled failover for one or more vNICs in a Cisco UCS domain, verify the configuration of the vNICs within each service profile associated with a server. For more information, see the Cisco UCS Manager configuration guide for the release that you are running.


UCS VIC failover will cause MAC address conflicts with host-level NIC teaming, including VMware vNIC teaming.

Comparing the two failover solutions, VMware NIC teaming also provides network load balancing and far more control than Cisco VIC failover. Hence, we need to disable VIC failover.
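If you prefer the CLI over the GUI, the same change can be sketched in the UCS Manager CLI. This is only a sketch: the org path and template name are hypothetical examples, and `set fabric a` pins the vNIC template to fabric A with no failover (whereas `a-b` would enable failover to B):

```shell
# UCS Manager CLI sketch (org path and template name are hypothetical):
# disable VIC failover on a vNIC template by pinning it to one fabric.
UCS-A# scope org /
UCS-A /org # scope vnic-templ esx-vnic-a
UCS-A /org/vnic-templ # set fabric a      # "a-b" would mean fabric A with failover to B
UCS-A /org/vnic-templ* # commit-buffer
```

Because the template is an Updating Template, committing this pushes the change to every bound vNIC.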

How to disable VIC failover

It really depends on how you set up your system. In my UCS, I have deployed vNIC templates, so I need to modify the vNIC template first.



Notice the vNIC template type is Updating Template even though the service profile template is an Initial Template. This means the change I make (unticking Enable Failover) will be pushed to the blades immediately.

The good thing is we have set our reboot policy to “User Ack”, so UCS will not reboot the blade immediately. Instead, it puts the request into the Pending Activities list for approval.


Change failover procedure





Now you will be able to schedule the blade reboot at a time of your choosing.








Cisco UCS B series firmware upgrade from 2.0(2q) to 2.0(4a)


Why do we upgrade UCS firmware

This post describes upgrading Cisco UCS B-series firmware from 2.0(2q) to 2.0(4a). The reason for this upgrade is simple: a bug.

There is a Cisco bug in the system which prevents a show tech from being generated. Without the show tech file, I’m not able to diagnose any issues, so the upgrade has become more and more critical for us.

According to Cisco, 2.0(4a) has fixed this issue. I have attached the PDF in the references, so you can download it and take a look. The actual upgrade follows that document closely, with minor twists.


Download firmware

There is no drama here. Just log in with your Cisco account and follow the instructions in the document to download the bundle files.

In my case, I only have UCS B series, so I only downloaded two files.




There ain’t much to do for preparation. My personal suggestion is:

make sure you have enough space on bootflash


Then you can easily upload those two files into the system from your local machine.

Back up your current configuration.


Make sure you have a filename written in the field; otherwise, it may not be able to back up the configuration.
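For reference, the backup can also be kicked off from the UCS Manager CLI. A hedged sketch (the server address, credentials, and target filename are placeholders):

```shell
# UCSM CLI sketch: export an all-configuration backup over SCP.
# Server address, user, and filename below are examples only.
UCS-A# scope system
UCS-A /system # backup scp://admin@192.0.2.10/backups/ucs-all-config.xml all-configuration enabled
UCS-A /system* # commit-buffer
```

The same command accepts other backup types (e.g. full-state) if you want a restorable binary image instead of XML.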



Create Host Firmware Package

This package delivers quite a few firmware updates and is deployed only through service profiles. In other words, your server must be associated with a service profile to receive the firmware.


Depending on your environment, the firmware package can contain different components.


In our system, each UCS blade has one DCE adapter, the M81KR. Following the PDF doc, I didn’t include adapter firmware in the package, but Cisco tech support said I should include it.



BIOS is a must.


Storage Controller:

Because we use RAID-1 local disks for the OS, we need to upgrade that as well.

Board Controller:

Compared with the package version, there is no newer version, so we don’t need to upgrade this one.


Disable Call Home Service




Update Firmware for Adapters, CIMCs, IOMs

Updating firmware just loads the new version into the Backup Version slot. The new version kicks in as the Startup Version once you restart the component.


For the update step alone, you can select ALL; it will not cause any harm.





Activating firmware on adapters and CIMCs

You need to do these steps in order. You can’t select both adapters and CIMCs and hope that clicking OK applies both components at once; it will cause issues. If you somehow did select both, click Cancel.

DO NOT select ALL in the filter to activate everything at once!

Activate firmware for Adapter.


Notice the Activate Status is Pending Next Boot.



Activate CIMCs

The CIMC is a separate component from the data path, so it restarts itself with no disruption to production data.


The CIMCs will become 2.0(4a).


Activating UCS Manager Software

This will cause the console and KVM to restart. Again, no data disruption.


Activating IOM

The IOM is an important module and will cause data disruption; it reboots when you reboot its FI. If you have 2 FIs for redundancy, you can reboot one FI at a time: when you reboot FI-A, IOM-A reboots as well. Therefore, we only load the new version into the Startup Version and wait for the FI reboot.




Activate Fabric Interconnect Firmware

With the Fabric Interconnects, we need to identify which one is the subordinate. We activate the subordinate first, then switch the primary role over to the newly upgraded FI and activate the other one. Make sure your redundancy is actually working; otherwise, you will experience downtime on the blades.
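To identify the subordinate from the CLI, `show cluster extended-state` prints each FI’s role and the HA state; activate whichever FI is reported as SUBORDINATE first. A sketch (output abridged from memory, so treat it as indicative):

```shell
UCS-A# show cluster extended-state
# Expect something along the lines of:
#   A: UP, PRIMARY
#   B: UP, SUBORDINATE
#   HA READY
```

Only proceed when the cluster reports HA READY, or the failover during the FI reboot will take the blades down with it.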

In my personal experience, you can actually give the subordinate FI a reboot before you update the firmware; it cleans up a lot of stuck processes and issues.






If the FI comes up with a status like that, it’s all good to update the other FI.


Check all connections, including network and VIFs.


Essentially, if you see connections on both FI-A and FI-B, it’s right. Just be aware that some command-line syntax changes once you upgrade your version of UCS Manager.
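One way to spot-check the uplinks and FC logins from the FI itself, assuming the FIs run in the default NPV end-host mode (commands sketched; output omitted):

```shell
# Drop into the NX-OS shell of fabric A and eyeball the interfaces.
UCS-A# connect nxos a
UCS-A(nxos)# show interface brief     # uplink and server ports should be up
UCS-A(nxos)# show npv flogi-table     # the blades' vHBAs should appear logged in
```

Repeat on fabric B; if both tables look symmetrical, the redundant paths are healthy.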

You will do the same steps for the other FI, but remember to switch it to the subordinate role first.
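Switching the cluster lead so the already-upgraded FI becomes primary can be sketched from the local-mgmt shell (run this on the current primary):

```shell
# Make fabric B the primary so fabric A becomes subordinate
# and can be safely activated next. Run from the current primary.
UCS-A# connect local-mgmt
UCS-A(local-mgmt)# cluster lead b
```

Give the cluster a minute to converge, then confirm the roles have swapped before activating the second FI.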

Update blade BIOS, LSI logic controller, and others

This is the last step. Before you do anything, you need to make sure your maintenance policy is set up correctly, like this.




Then you need to make sure your host firmware package is attached to the template or service profile.


Once you make the change, a prompt should pop up asking whether to reboot.



Choose No so you can reboot at your own time.


Thank you for reading. Hope it helps



Recently, we finally got upgraded to a new environment, Cisco UCS 2.0. We are all excited about the new toy, but we ran into some design issues which I would like to record here so you can avoid them in the future.


FC uplinks need to be on the right ports.

I think this is basic common knowledge, but clearly we didn’t know. On the Fabric Interconnect, we need to configure FC ports to connect to the upstream FC switch. At first, we put the FC ports in the middle of the switch and the Ethernet uplinks at the end (like ports 31/32). Then we realized it’s not doable once we got into the configuration.

click that to get into FC port configuration

Click Yes

We put the FC links in the middle, which is wrong, and the Ethernet ports at the end. As you can see, there is a slider bar to configure the split. Once you slide it, you will see this.

All ports on the right side of the bar become FC ports. So you can either put FC on the expansion module or you have to move your ports.


UCS Memory is bigger than your hard disk

Well, this actually sounds ridiculous, but it’s one of the reasons why we bought UCS. Our blades have 196 GB of memory and we run VMware on them. We also bought 100 GB SSDs to increase swap file speed. Unfortunately, at purchase time we didn’t realize that to put VM swap files on local disk, we need at least as much disk as memory (196 GB), so the VM swap files can use local disk rather than precious SAN storage. Even with the new vSphere 5 feature (host cache swap to SSD), that function only helps when there is memory contention. On balance, we should have bought some large SAS disks to cover that.
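The sizing mistake is easy to see with a bit of arithmetic: with default settings (no memory reservations), each powered-on VM gets a swap file roughly the size of its configured RAM, so the local datastore needs about as much free space as the blade has memory. A quick sketch with our numbers:

```shell
# Rough local-swap sizing check: swap space needed ~= configured RAM
# when VMs have no memory reservations (our blade: 196 GB RAM, 100 GB SSD).
ram_gb=196
ssd_gb=100
shortfall=$((ram_gb - ssd_gb))
echo "local swap shortfall: ${shortfall} GB"
```

So the 100 GB SSD leaves us roughly 96 GB short before the hypervisor’s own footprint is even counted.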

vMotion is No!

Well, maybe it’s just me; I’m used to vMotioning everywhere. Once I installed the new blades and joined them to our vCenter, I tried to offload my VMs to the new hosts. Then I got this error.

Of course, what you can do is power the VM off and migrate. But that’s an outage, or you have to enable EVC.

All those errors can be avoided easily, but it’s a matter of experience, I guess. Hope it helps.


We had an interesting meeting with Cisco today and they showed us a picture of Cisco UCS. I spent a little time digging around and would like to share my understanding with you. As usual, the post should be easy to comprehend.

What is Cisco UCS?

Yes, you can google the keyword. But essentially, Cisco UCS is one of the Vblock components, and Cisco decided to sell it separately. With the VCE (VMware, Cisco, EMC) alliance surfacing, it’s clear Cisco plays the blade server and network roles.

In this UCS system, Cisco has a new VIC (Virtual Interface Card; it’s actually a physical card!) for your blade servers (or your rack servers!), a fabric switch (Cisco 6100 Fabric Interconnects, which carry both network and SAN traffic), a chassis and blade servers (goodbye, HP and IBM), and one management software suite (I’m pretty sure it can manage EMC SAN as well, given the right licenses and models).

If you are game enough to throw your EMC SAN into it and load VMware on the blades: ding! You’ve got your own Vblock!

What can Cisco UCS do for you?

It’s surprising that the first selling point from today’s meeting is not cost savings (they do mention savings once you buy at least 2 chassis and 6 blades…), nor performance improvement (which I will mention later). The first selling point is that you will have fewer cables in the datacenter! Interesting, isn’t it? Well, let me elaborate on those points one by one.

Less cables in the DC

Because you are using blade servers, you would expect fewer cables, since everything should go through the backbone (FC). Cisco did push out a new physical card, the VIC (Virtual Interface Card; a very confusing name, isn’t it?). You are supposed to have these cards (two per server for load balancing, ideally) in each of your blade or rack servers (I still need to confirm whether you can install them in other brands’ servers). You use this card for both network and SAN traffic.

VMDirectPath kicks in with VIC

This is an interesting one. VMDirectPath is a VMware ESX 4 feature that allows your VMs to directly access PCIe hardware on the host. With this VIC in your ESX host (you need to download a special Cisco OEM version of ESXi 4), you can directly map your vmxnet3 adapters to the Cisco 6100 fabric interconnect, which creates a dynamic port for 1:1 port mapping of your data traffic. You basically ditch the vSS (the traditional local vSwitch) and start using the fancy vDS. Wait a second: not only do you need to buy Enterprise Plus for all your ESX hosts, you also need to purchase the Cisco Nexus 1000V vDS so VMware can manage your network I/O and storage I/O, since they go through the same card. According to Cisco’s diagram, you get a 30% network I/O performance increase if you are using VMware (bye-bye, Hyper-V and Citrix). But yes, that’s network I/O only. Why? Because the VMware hypervisor layer handles storage I/O.

With the vSphere 5.0 release later on, you will be able to vMotion with VMDirectPath, meaning the VMware hypervisor layer will understand the VIC, act on behalf of the VM, and transfer it with vMotion.

One Management Software

Oh yeah, the UCS management software: a one-stop shop for everything, if you buy everything Cisco suggests. It’s a basic version of the Vblock software which, I’m pretty sure, can control your EMC SAN as well, if you purchase the correct models and licenses. VCE claims they have a single team to handle all support calls. It’s not bug-free software, but it does help you deploy VMs and locate issues.

Money, Money, money

Well, at the end of the day, cost decides everything. Cisco UCS ain’t cheap. You need to buy blades and chassis; that alone will cost you an arm and a leg. I still need to confirm whether the VIC works in other servers. But all that cost (assuming you have already got your servers, SAN, and FC switches) buys you 30% network I/O performance in VMs and consolidates your cables, which most blade servers do anyway. I haven’t compared the cost of ordinary blades against Cisco UCS, but that’s pretty much what it is.


Well, if it’s time for you to upgrade your ESX hosts and you have plans to buy blades and chassis, Cisco UCS can be an option for you. Well, yeah, almost forgot: that 30% network I/O comes with extra VMware Enterprise Plus licenses and the Cisco Nexus 1000V…