
Tag Archives: CISCO

Recently, we finally got to upgrade to a new environment, Cisco UCS 2.0. We are all excited about the new toy, but we ran into some design issues which I would like to record here so you can avoid them in the future.


FC Uplink needs to be at right ports.

I think this is basic common knowledge, but clearly we didn't know it. With a Fabric Interconnect, you need to configure FC ports to connect to the upstream FC switch. At first, we put the FC ports in the middle of the switch and the Ethernet uplinks at the end (ports 31/32). Then we realized that's not doable once we got into the configuration.

Click that to get into the FC port configuration.

Click Yes

We put the FC links in the middle, which is wrong, and the Ethernet ports at the end. As you can see, there is a slider bar you drag to configure the split. Once you slide it, you will see this.

All ports on the right side of the bar become FC ports. So you can either put FC on an expansion module, or you have to move your ports.
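The slider rule above can be sketched as a simple check. This is a hypothetical helper (not a Cisco tool) that captures the constraint we ran into: FC ports must form a contiguous block ending at the highest port number, with Ethernet on the left.

```python
# Hypothetical sketch of the unified-port slider constraint on a Fabric
# Interconnect module: Ethernet ports occupy the low-numbered ports on the
# left, FC ports a contiguous block at the high-numbered right end.
def valid_unified_port_split(total_ports, fc_ports):
    """Return True if fc_ports is a contiguous block ending at the last port."""
    if not fc_ports:
        return True  # all-Ethernet layout is always fine
    expected = set(range(total_ports - len(fc_ports) + 1, total_ports + 1))
    return set(fc_ports) == expected

# Our mistake on a 32-port switch: FC links in the middle
print(valid_unified_port_split(32, [15, 16]))   # False: middle of the module
print(valid_unified_port_split(32, [31, 32]))   # True: rightmost ports
```

Had we run this sanity check on paper first, we would have cabled the FC uplinks to ports 31/32 from the start.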


UCS Memory is bigger than your hard disk

Well, this actually sounds ridiculous, but it's one of the reasons we bought UCS. Our blades have 196GB of memory and we run VMware on them. We also bought 100GB SSDs to speed up the swap files. Unfortunately, at the time we purchased, we didn't realize that to put VM swap files on local disk, the disk needs to be at least as large as the memory (196GB), so the swap files can use local disk rather than precious SAN storage. Even the new vSphere 5 feature (swap to host cache on SSD) only helps when we have memory contention. So, on balance, we should have bought some large SAS disks to cover that.
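The sizing rule above is simple arithmetic: a VM's swap file is its configured memory minus any memory reservation, so with no reservations a fully loaded host needs local datastore space roughly equal to its RAM. A minimal sketch (the function name is mine, not a VMware API):

```python
# Hypothetical sizing helper: per-VM swap file size under VMware's default
# behaviour is configured memory minus the memory reservation.
def vm_swap_file_gb(configured_mem_gb, reservation_gb=0):
    """Swap file size in GB for one VM; never negative."""
    return max(configured_mem_gb - reservation_gb, 0)

# With no reservations, VMs using all 196 GB of a blade's RAM need about
# 196 GB of local datastore for swap -- more than one 100 GB SSD can hold.
host_memory_gb = 196
ssd_gb = 100
swap_needed = vm_swap_file_gb(host_memory_gb)
print(swap_needed > ssd_gb)  # True: the SSD is too small for all swap files
```

This is why the local disk should have been sized to at least match the blade's memory.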

vMotion says No!

Well, maybe it's just me, since I'm used to vMotioning everywhere. Once I installed the new blades and joined them to our vCenter, I tried to offload my VMs to the new hosts. Then I got this error.

Of course, what you need to do is power the VM off and migrate it cold. But then that's an outage, or you have to use EVC.

All those errors can be avoided easily, but it's a matter of experience, I guess. Hope it helps.



We had an interesting meeting with Cisco today, and they showed us a picture of Cisco UCS. I spent a little time digging around, and I would like to share my understanding with you. As usual, the post should be easy to comprehend.

What is Cisco UCS?

Yes, you can Google that keyword. But essentially, Cisco UCS is part of the vBlock stack, and Cisco decided to sell it separately. With the VCE (VMware, Cisco, EMC) alliance surfacing, it's clear that Cisco plays the blade server and networking role.

In this UCS system, Cisco has a new VIC (Virtual Interface Card; it's actually a physical card!) for your blade servers (or your rack servers!), a fibre switch (the Cisco 6100 Fabric Interconnect, which carries both network and SAN traffic), a chassis and blade servers (goodbye, HP & IBM), and one piece of management software (which, I'm pretty sure, can manage an EMC SAN as well, given the right license and models).

If you are game enough to throw your EMC SAN into it and load VMware on the blades, then ding! You've got your own vBlock!

What can Cisco UCS do for you?

It's surprising that the first selling point from today's meeting is not savings (they do mention savings once you buy at least 2 chassis and 6 blades.....), and it's not performance improvement (which I will mention later). The first selling point is that you will have fewer cables in the datacenter!! Interesting, isn't it? Well, let me elaborate on those points one by one.

Less cables in the DC

Because you are using blade servers, you would expect fewer cables, since everything goes through the backbone (FC). Cisco did push out a new physical card, the VIC (Virtual Interface Card, a very confusing name, isn't it?). You are supposed to have these cards (in pairs for load balancing, surely?) in each of your blade or rack servers (I still need to confirm whether you can install them in other brands' servers). You use this one card for both network and SAN traffic.

VMDirectPath kicks in with VIC

This is the interesting part. VMDirectPath is a VMware ESX 4 feature that allows your VMs to directly access a PCIe device on the host. With the VIC in your ESX host (you need to download a special Cisco OEM build of ESXi 4), you can map your vmxnet3 adapters directly to the Cisco 6100 fabric switch, which creates a dynamic port for 1:1 port mapping of your data traffic. So you basically ditch the vSS (the traditional local vSwitch) and start using the fancy vDS. Wait a second: not only do you need to buy Enterprise Plus licenses for all your ESX hosts, you also need to purchase the Cisco Nexus 1000V vDS so VMware can manage your network I/O and storage I/O, since they go through the same card. According to Cisco's diagram, you will get a 30% network I/O performance increase if you are using VMware (bye bye, Hyper-V and Citrix). But yes, that's network I/O only. Why? Because the VMware hypervisor layer still handles storage I/O.

With the vSphere 5.0 release later on, you will be able to vMotion with VMDirectPath. That means the VMware hypervisor layer will understand the VIC, present it to the VM like any other device, and carry it through vMotion.

One Management Software

Oh yeah, the UCS management software. A one-stop shop for everything, if you buy everything Cisco suggests. It's a basic version of the vBlock software, which, I'm pretty sure, can control your EMC SAN as well, if you purchase the correct models and licenses. VCE claims they have a dedicated team to handle all support calls. It's not bug-free software, but it does help you deploy VMs and locate issues.

Money, Money, money

Well, at the end of the day, it's cost that decides everything. Cisco UCS ain't cheap. You need to buy blades and chassis, and that alone is going to cost you an arm and a leg. I still need to confirm whether the VIC will work in other servers. But all those costs (assuming you already have your servers, SAN, and FC switch) buy you 30% network I/O performance in your VMs and cable aggregation, which most blade systems do anyway. I haven't compared the cost of normal blades against Cisco UCS, but that's pretty much what it is.


Well, if it's time for you to upgrade your ESX hosts, and you have plans to buy blades and chassis, Cisco UCS can be an option for you. Well, yeah, I almost forgot: that 30% network I/O comes with extra VMware Enterprise Plus licenses and the Cisco Nexus 1000V.....

Well, I had an opportunity today to attend EMC Inform 2010, held at the Crown building. This Inform was hosted by VMware, EMC, and Cisco, or as they like to call themselves, "VCE".

This Inform is about pushing vPLEX technology, vBlock, and 100% virtualisation into the market. They brought along a few companies, including Melbourne IT, to testify to their new technology.

Shall we push ourselves to 100% virtualisation?

Basically, VMware is still trying to push 100% virtualisation to the public and to more companies, despite the fact that most companies only virtualise about 75% of their production. The reason people don't virtualise the remaining 25% is understandable: virtualisation needs a certain amount of overhead resource to run the virtual kernel, and that's inevitable.

Software like SQL Server, Exchange, and ERP systems still runs on physical boxes, because the major advantage of virtualisation, consolidation, is not a gain you get when you virtualise those servers. According to one of the CTOs who spoke during the Inform, the best practice for virtualising those servers is to give each a dedicated box to run on. Meaning you are not only paying for the Microsoft licenses, you also pay the extra VMware license cost. So why would you virtualise the remaining 25% of production?

The answer is DR. According to the EMC CTO, EMC is ready to implement VPLEX Local and VPLEX Metro between two SAN systems (for now). Once you can duplicate the large caches of your SAN across two geographic locations, VMware Site Recovery Manager (SRM) can kick in and make VMware DR easy and possible. Meaning, if you want your Exchange, SQL, and ERP to have good DR, you need to virtualise 100% of your production, although you have to pay quite a bit for the extra licenses, SAN, and datacenter rent. So it's really up to the business and how far they want to cover their bottom line.

Does VPLEX actually work?

Well, I believe it does. It doesn't matter whether you call it a virtual storage layer, VM teleportation, or VPLEX; it does help you Storage vMotion a VM from one SAN to another (over a distance of less than 100km) in just 7 minutes. Yes, we can do it without VPLEX, but it would take up to 1.6 hours!
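A quick back-of-envelope check of those vendor-quoted numbers (7 minutes versus 1.6 hours for the same migration; both figures are from the presentation, not my own measurement):

```python
# Sanity check on the claimed VPLEX speedup for one storage migration:
# 7 minutes with VPLEX versus roughly 1.6 hours without it.
with_vplex_min = 7
without_vplex_min = 1.6 * 60  # 1.6 hours = 96 minutes

speedup = without_vplex_min / with_vplex_min
print(round(speedup, 1))  # roughly 13.7x faster
```

So the claim amounts to roughly an order-of-magnitude improvement, which is plausible if VPLEX pre-mirrors the data so the "migration" is mostly a cutover rather than a full copy.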

I personally reckon the best speech during this Inform was not from EMC or VMware; it was from the CTO of Melbourne IT. Melbourne IT has more than 5,000 servers, and half of them are virtual machines. They are a web hosting company that generates 5TB of Internet traffic per day. One story the CTO gave us was about the website of the CFA (Victoria Country Fire Authority). Victoria had very bad bushfires in 2009. On the first day of the fires, only a few thousand people visited the website. After a few days, it became national news, so visitor numbers increased from a few thousand to tens of thousands. A few more days later, it became global news; it's as if most people in the world thought the whole of Australia was on fire. The website had 20 million visitors in one day! That volume of traffic crashed the hosting server easily, because Melbourne IT never expected that kind of load. Within only 11 minutes, though, Melbourne IT was able to migrate the whole VM to another datacenter via the SAN replication link. The VM was transferred to a much larger datacenter capable of handling the traffic, and it has stayed alive since then. Once the traffic returned to normal, Melbourne IT transferred the VM back to the low-cost, lower-speed tier. Awesome, isn't it? The SAN link they use has 50ms latency over a distance of less than 100km.

What is vBlock?

Accelerate the journey to private cloud with integrated packages combining best-of-breed Cisco, EMC, and VMware technologies. These packages are jointly tested and supported to deliver optimal performance and reduce operational expenses.

Vblock 2 delivers high-performance, large-scale virtualization across data centers of large enterprise customers. Vblock 1 delivers performance and virtualization for large or midsized companies and is suitable for data centers of any size, including remote-office locations. Vblock 0 is an ideal platform for companies that want an entry level configuration for small data centers, remote locations, or as a test/development platform.

In short, vBlock is the central management of the "VCE" system. It uses one application to manage and control multiple layers. As EMC (storage), Cisco (backbone), and the VMware kernel try to talk to each other, we need a system to manage all the running jobs. Also, vBlock can be used by a private cloud to communicate with a public or third-party cloud. To prove this case, one of the guys from Optus gave a presentation about one of their clients (a university) integrating with the Optus cloud. But he didn't give us details of exactly which tier-1 applications run on it, so I don't really buy it.


EMC Inform is really a single-company show (VMware belongs to EMC), and they are trying to push VPLEX and VMware DR to market. As for VPLEX, I believe you can use it to replicate a datacenter, and VMware SRM is possible, but not cheap. For the cloud, I guess we still have to wait and see how it goes over the next 5 years.

Please leave your comments if you like.

P.S.: I just got a new domain for my blog so you can remember it. It is