
Tag Archives: EMC


I have been thinking recently about what and where I will be in three years in terms of my career path. Going through the certificates I already hold, I have a strong feeling that I don’t have any cloud certificates. So here is the question.

I have the VCP certificate; do I need anything else?

First of all, you need to be aware that we are talking about cloud certificates here, not virtualization certificates. Virtualization is the building block of cloud, but on its own it can’t represent the concept of IT as a Service. We need something that covers the general concept and helps us convert the business model from the traditional enterprise software licence to a user-consumption, department-consumption model, or as I now call it, IT as a Service.

The reason is that we need to fully understand the usage model and the details of each department and each piece of software in the business, so that one day we can break off some pieces and shift them to the public cloud.

Now, back to my own topic: certificates.

EMC

So far, I have found only one set of cloud certificates, which is EMC CIS.

The path to getting all the certificates is as follows:

 

Become an IT professional who demonstrates cross-domain expertise and focus on designing cloud-based IT service solutions that drive business transformations for the enterprise and service provider organizations. This course is for those assessing, architecting, and designing IT-as-a-Service solutions as part of the transformation and optimization of virtual data centers into cloud-based IT-as-a-Service environments. Prepare for your Expert-level Cloud Architect Certification.
Exam and Practice Test
Expert: E20-918 (to be announced)
Specialist: E20-018
Associate: E20-002 or E20-001

 

 

 

To be honest, the cost of that training is huge. You are basically looking at $3,000 just for video training and $5,000 for lab training, which is only available in limited regions. As the parent company of VMware, EMC believes it has earned the right to be the first in the industry to issue cloud certificates. But do you really want to get EMC certificates?

 

Cisco

Cisco has been pushing hard on virtualization and doing extremely well. Its flagship product is UCS, which has earned respect and become the default blade system any company would want to have. Cisco certificates are nothing new in IT, so here is what’s new from Cisco on the cloud side.

CloudVerse

This is newly released by Cisco, and it is how Cisco pictures itself in the cloud business. Since there is no doubt about the networking part, and with help from VMware, I’m quite sure Cisco will become a true leader in cloud certification.

However, there is no specific cloud certificate from Cisco yet, so the UCS certificates are the ones you can get.

Interestingly enough, you not only need to pass the Cisco exams, you must also hold VMware certificates. That shows how strong the relationship between Cisco and VMware is.

VMware

Here we go. There is no way I could leave VMware out. But the tricky thing is that even after so much effort from VMware on virtualization and cloud, there is no VMware cloud certificate!

My guess is that VMware is still working out how to make Cloud Director work the way it should. All the other components have been built but are not mature yet. We should expect to see some sort of VMware cloud certificate in the next two or three years.

Others:

Citrix:

I have never been a fan of Citrix. In my mind it is complicated, not user friendly, consumes too many resources, adds administration overhead, and is too expensive with too many licences. The only reason we still stick with Citrix is XenDesktop, which is great over low bandwidth. Apart from that, I don’t see the attraction.

There are no Citrix cloud certificates yet, but I believe they will arrive pretty soon.

Microsoft:

Microsoft keeps going its own way with its definition of cloud. It seems it doesn’t like to share whatever technology it is using. Hyper-V 3 finally brings in a distributed virtual switch, but it still lacks hardware vendor support. Windows Azure is moving forward slowly, with a few companies doing dev and test on it. Office 365 could be a good option, but it charges too much and is limited in customization. Leaving your product in a black box that you can’t manage and whose inner workings you don’t know is a scary strategy to take.

 

Well, as usual, drop me a line and let me know what I have missed.

Reference:

http://education.emc.com/guest/certification/framework/ca/itasaservice.aspx

http://2and2is5.wordpress.com/2010/04/01/cisco-data-center-ucs-specialist-certification/

http://mylearn.vmware.com/portals/certification/

 

 

 


First of all, I need to point out that I work for neither EMC nor VMware. I won’t do anything like Chad at Virtual Geek and say, “Yes, it’s true. EMC really IS #1 for VMware.” What I’m going to talk about is purely from a customer’s point of view, a tech who favours neither EMC nor NetApp. I just want to draw a picture of EMC storage and the related VMware technologies in front of you, so we can put everything on the table and discuss it.

As usual, any comments and discussions are very welcome here.

EMC Storage – Celerra, CLARiiON and Celerra Unified Storage

Well, back a few years ago, there were three product lines in EMC storage: Celerra, CLARiiON and Symmetrix. After years of virtualization development, the line between Celerra and CLARiiON has become really blurry. Celerra used to be dedicated to NAS file systems; it provided NAS only. CLARiiON (like the CX3 series) mainly focused on block-level storage such as FC or iSCSI. Now, with the new Celerra Unified Storage product line coming out, I don’t think anyone would buy the old systems any more, because the new Unified Storage provides both Celerra and CLARiiON in one box. EMC calls these block-enabled Celerra systems (NS-120, 480, 960, etc.). However, as you may know, the technology used in NAS is quite different from the technology used in block storage. If you are as confused as I was, please read the rest of this article; I hope it helps clear your mind. This article focuses on Celerra Unified Storage only.

EMC Deduplication and Compression

As everyone knows, one of the key elements of storage is disk capacity. How to utilize disk space, tier down unused data and files, and compress them into less space becomes a major factor when we select storage. Even after years and years of research, EMC still insists that deduplication should happen only at the file level, not the block level. So what does that mean for customers who bought Celerra Unified Storage? It means you can only use block compression if you use block storage (FC or iSCSI, as part of the CLARiiON technology), and you can only use file-level compression and file-level deduplication when you use NAS (as part of Celerra, i.e. NFS for VMs or NFS/CIFS for file systems). In other words, how you divide your LUNs and what kind of block or file system you use will dramatically impact your system. Let’s break down these technologies and see what they are.
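To keep that split straight in my own head, here is a minimal sketch (purely my own illustration, not an EMC tool; the dictionary and function names are made up) of which space-efficiency feature goes with which access protocol on a Celerra Unified box, based only on the breakdown above:

```python
# My own illustration: map how a datastore/LUN is accessed to the
# space-efficiency features described above for Celerra Unified Storage.
FEATURES_BY_ACCESS = {
    "FC":    ["block compression"],                                   # CLARiiON (block) side
    "ISCSI": ["block compression"],                                   # CLARiiON (block) side
    "NFS":   ["file-level compression", "file-level deduplication"],  # Celerra (NAS) side
    "CIFS":  ["file-level compression", "file-level deduplication"],  # Celerra (NAS) side
}

def available_features(access_protocol: str) -> list[str]:
    """Return the space-efficiency features available for a given protocol."""
    return FEATURES_BY_ACCESS.get(access_protocol.upper(), [])

if __name__ == "__main__":
    for proto in ("FC", "iSCSI", "NFS"):
        print(proto, "->", available_features(proto))
```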

EMC compression

As I mentioned above, depending on what kind of system (NAS or block) you use, Celerra will compress the data in different ways. Let’s talk about block-level compression first.

Block level compression

As the name indicates, this compression only works on FC or iSCSI LUNs. The block size it works on is 64 KB, and each block is processed independently. The typical result is as much as 2x compression while using a modest amount of CPU.
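To get a feel for what “each 64 KB block is compressed independently” means, here is a small Python sketch (my own illustration with zlib, nothing to do with the array’s internals; the sample data and function name are made up) that chops a buffer into 64 KB chunks, compresses each chunk on its own and reports the overall ratio:

```python
import zlib

BLOCK_SIZE = 64 * 1024  # 64 KB, the unit each block is compressed in

def compress_per_block(data: bytes) -> tuple[int, int]:
    """Compress data one 64 KB block at a time; return (original, compressed) sizes."""
    compressed_total = 0
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        compressed_total += len(zlib.compress(block))  # each block stands alone
    return len(data), compressed_total

if __name__ == "__main__":
    # Repetitive sample data compresses well; random data would not.
    sample = b"virtual machine disk block " * 200_000
    original, compressed = compress_per_block(sample)
    print(f"{original} B -> {compressed} B, ratio {original / compressed:.1f}x")
```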

Note:

By default, a CX4-480 can run 8 concurrent compression threads. When all the threads run at the same time, CPU consumption depends on the compression rate (speed) setting: Low (about 15% CPU), Medium (30~50% CPU) and High (60~80% CPU).

How block compression works

1. Initial compression – This occurs when you enable compression. It compresses the entire LUN and can’t be paused in the middle, but it can be disabled during the process without any damage being done.

2. Compression of new data – When new data is written, it is written uncompressed and then compressed asynchronously, and this continues until you disable compression. By default, compression starts once 10% of the LUN’s user capacity or 10 GB of new data has been written, provided the total amount of new data is larger than 1 GB (see the sketch just after this list). It does use some SP cache memory for swapping. When you compress a LUN, the LUN is automatically migrated to a thin LUN in a different pool if it is a regular RAID-group LUN. If it is already a thin LUN, it remains in the same pool.

3. Decompression when compression is disabled – If the original LUN was a thin LUN, it remains a thin LUN. If the original LUN was a thick LUN or a RAID-group LUN, it writes zeros to the unallocated capacity until full while remaining a thin LUN. The system will pause the process at 90% and stop it at 98% if the LUN has filled up too much.
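As a rough illustration of the trigger described in point 2 (this is my reading of the thresholds expressed as code, not anything from EMC; the function name is made up), the decision to start compressing newly written data could look like this:

```python
GB = 1024 ** 3

def should_start_compression(new_data_bytes: int, lun_user_capacity_bytes: int) -> bool:
    """Start compressing when 10% of the LUN's user capacity or 10 GB of new
    data has been written, provided at least 1 GB of new data has accumulated."""
    hit_threshold = (new_data_bytes >= 0.10 * lun_user_capacity_bytes
                     or new_data_bytes >= 10 * GB)
    return hit_threshold and new_data_bytes > 1 * GB

if __name__ == "__main__":
    print(should_start_compression(12 * GB, 500 * GB))  # True: over the 10 GB mark
    print(should_start_compression(2 * GB, 500 * GB))   # False: under both thresholds
```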

Limits of block compression

The following cannot be compressed:
  • Private LUNs (including write intent logs, clone private LUNs, reserved LUNs, metaLUNs and component LUNs)
  • Snapshot LUNs
  • Celerra iSCSI or file-system LUNs (personally, I don’t think that’s right; I’m confirming with EMC now)
  • A LUN that is already being migrated, expanded or shrunk
  • A mirrored LUN replicating to a storage system running pre-release-29 FLARE code

Interaction of compression with other features

Basically, a compressed LUN is transparent to other operations such as replication and migration. That said, it’s better not to migrate or copy data while compressing at the same time; it is always easier on the SAN to enable compression after the migration.

How to set up compression?

All you need to do is connect to the Celerra Unified Storage control station with your web browser. You can run Unisphere Manager directly from the SAN, or you can install Unified Manager on a Windows server and connect to your box. Compression is a licensed feature, and you should have it available directly from the console. Unlike the Celerra NAS side, there is no VMware plug-in available for compression, so you need to use Unisphere to do the job.

There is no VMware plug-in?

It is very interesting that the Celerra Unified NAS side got a VMware plug-in while the CLARiiON side got nothing. I reckon vSphere may use the VAAI API to offload cloning from the host to the SAN, but then why doesn’t it work for the Celerra NAS side? If anyone can answer this question, it would be appreciated.

To be continued……



I had a chance to get involved in an EMC pre-sales meeting today. During the meeting, the EMC pre-sales engineer introduced F.A.S.T v1 and v2 to us. I did know what FAST was before, but this presentation really opened my eyes, and the engineer was also able to answer a few of my questions about NetApp deduplication vs EMC compression. I will bring the details into this post. However, because there isn’t much data available on the Internet, I had to draw an ugly diagram to help me express my ideas. I may make mistakes, so please feel free to point them out.

What is EMC FAST v1?

As you can see from the full name of F.A.S.T (Fully Automated Storage Tiering), it’s about tiering your storage automatically. As you may know, traditional SAN storage contains FC disks and SATA disks. FC is fast and expensive, and SATA is cheaper but slow for random reads and writes. As the SAN administrator in the company, your job is to give the right LUNs to the appropriate servers to meet the SLA requirements.

F.A.S.T basically does the following things:

1. It adds an EFD (Enterprise Flash Disk) layer.

As we all know, SSDs (solid state disks) are up to 100 times faster than FC disks. They come in two types, SLC (single-level cell) and MLC (multi-level cell), and all SSDs have a limited write life. So how does EMC manage to overcome these issues?

These EFDs are made of SLC SSDs, not MLC, meaning they are faster than MLC SSDs. As you may have heard, SSDs wear out easily. The reason an SSD gets damaged (the same as your USB flash drive) is rewriting the same location repeatedly. On a normal system you write the first block of the flash disk, wipe it out and write it again, so the first block is used too many times and wears out. An EMC EFD won’t use the same spot twice until it has used up all the other available spots in the SSD.

Each EFD has three areas: a cache area (the fastest), a normal storage area and a hotspot area. All data is written to the fast cache first and then to normal storage. A spot in normal storage is discarded after being reused a number of times, and the drive starts using spots in the hotspot area to avoid potentially bad spots. The same applies to the cache area: if a spot there is damaged, the drive starts using spots in the normal area. According to EMC, the EFDs carry a five-year warranty.
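The wear-levelling idea (“don’t use the same spot twice until every other spot has been used”) can be sketched in a few lines. This is purely a conceptual illustration of round-robin wear levelling, with a made-up class name, not how EMC’s firmware is actually written:

```python
class WearLevelledFlash:
    """Toy model: hand out cells round-robin so no cell is rewritten
    until every other available cell has been written once."""

    def __init__(self, num_cells: int):
        self.num_cells = num_cells
        self.next_cell = 0
        self.write_counts = [0] * num_cells

    def write(self, data) -> int:
        cell = self.next_cell
        self.write_counts[cell] += 1
        self.next_cell = (self.next_cell + 1) % self.num_cells  # move on, never reuse early
        return cell

if __name__ == "__main__":
    flash = WearLevelledFlash(num_cells=4)
    for i in range(10):
        flash.write(f"block-{i}")
    print(flash.write_counts)  # wear is spread evenly: [3, 3, 2, 2]
```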

2. It adds a virtual LUN layer

A virtual LUN isolates the host from the actual storage details. The host doesn’t need to know which physical LUNs (FC, EFD, SATA) it is operating on. With virtual LUN technology, the real work of FAST can happen underneath, inside the SAN layer.
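A virtual LUN is essentially a level of indirection. The toy sketch below (my own simplification, with invented names) shows why the array can shuffle data between physical tiers without the host’s view of the LUN changing:

```python
class VirtualLun:
    """Toy indirection layer: the host always addresses the virtual LUN,
    while the array is free to remap it to a different physical tier."""

    def __init__(self, name: str, physical_tier: str):
        self.name = name
        self._physical_tier = physical_tier  # e.g. "EFD", "FC", "SATA"

    def read(self, block: int) -> str:
        # The host never sees which tier actually serves the I/O.
        return f"{self.name}: block {block} served from {self._physical_tier}"

    def remap(self, new_tier: str) -> None:
        # FAST can move the data underneath; the host keeps the same LUN.
        self._physical_tier = new_tier

if __name__ == "__main__":
    lun = VirtualLun("LUN_42", "SATA")
    print(lun.read(0))
    lun.remap("EFD")  # FAST promotes the LUN; the host path is unchanged
    print(lun.read(0))
```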

3. It automatically moves LUNs between tiers

This is what FAST is for. F.A.S.T can automatically (or manually) move your LUNs to a different tier. Busy, high-demand LUNs move to the fastest tier (EFD) or to FC. Low-priority LUNs can be shifted to SATA to save the fast tiers for SLA requirements.

What is FAST v2?

We briefly introduced FAST v1 above. After EMC pushed this technology to its customers, it discovered that most customers actually bought lots of FC disks instead of SATA disks, because FAST v1 operates at the LUN level: every time it moves something, it has to move the whole LUN, which is slow and inefficient. So FAST v2 came to life.

FAST v2 made some big changes.

1. Let’s make a pool

Well, basically, you need to create a pool first. A pool is a combination of resources from different tiers. For example, you can make a pool that has 3x EFD, 4x FC and 5x SATA, all RAID 5. Then you can create LUNs on this pool. Each LUN will be built across all the tiers instead of sitting on one.

2. Let’s move 1 GB data segments

In FAST v1 we move the whole LUN, which takes a very long time and may not even be effective. With this version of FAST, data is moved in 1 GB segments as the smallest unit of operation. That means if one LUN gets hit very hard, the system will use the fast cache to hold the data and start moving the busiest segment from SATA to EFD. It will move the other segments later, according to the utilization of the LUNs.
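To contrast the two generations, here is a small sketch (again just my own illustration of the idea with made-up names and numbers, not EMC’s algorithm) of FAST v2 promoting only the hottest 1 GB segments of a LUN instead of relocating the whole LUN:

```python
def pick_segments_to_promote(segment_io_counts: dict[int, int], budget_segments: int) -> list[int]:
    """FAST v2 idea: rank a LUN's 1 GB segments by I/O load and promote
    only the busiest ones to EFD, up to a movement budget."""
    ranked = sorted(segment_io_counts, key=segment_io_counts.get, reverse=True)
    return ranked[:budget_segments]

if __name__ == "__main__":
    # Hypothetical I/O counts per 1 GB segment of a single LUN.
    io_counts = {0: 120, 1: 8_500, 2: 40, 3: 9_900, 4: 310}
    print(pick_segments_to_promote(io_counts, budget_segments=2))  # [3, 1]
```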

EMC compression vs NetApp deduplication

I had an interesting conversation with the EMC engineer. EMC preaches block-level compression for all its systems instead of deduplication the way NetApp does. The compression and decompression can be done on the fly, adding about 5% performance overhead, which you may not notice. However, it gives you almost a 50% space saving, compared with a deduplication ratio that is only about 30% most of the time. As for SP utilization, compression costs about 5% while dedup costs around 20% of the CPU.
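Plugging those figures into a quick back-of-the-envelope calculation (the 50% and 30% savings and the 5% vs 20% CPU costs are the numbers quoted in the conversation, not my own measurements; the 10 TB raw figure and function name are made up for the example):

```python
def effective_capacity(raw_tb: float, space_saving: float) -> float:
    """Logical data that fits on raw_tb of disk at a given space-saving ratio."""
    return raw_tb / (1 - space_saving)

if __name__ == "__main__":
    raw = 10.0  # TB of usable disk (hypothetical figure for the example)
    # Figures quoted by the EMC engineer: ~50% saving from block compression
    # at ~5% SP CPU, versus ~30% saving from dedup at ~20% SP CPU.
    print(f"Compression: {effective_capacity(raw, 0.50):.1f} TB logical, ~5% CPU")   # 20.0 TB
    print(f"Dedup:       {effective_capacity(raw, 0.30):.1f} TB logical, ~20% CPU")  # 14.3 TB
```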

EMC is very cautious about CPU utilization on its storage. They reckon normal utilization should be around 25% of a single CPU, so if one of your SPs fails, the load on the remaining SP becomes 50%. They don’t want deduplication eating too much CPU at this point, at least not with current CPU horsepower. According to them, CPUs will be much more powerful in two years, which will allow not only deduplication and compression but even running VMs (like WAN acceleration appliances) directly on the array. In short, EMC is quite a conservative company, but it does provide awesome technology, especially for the long run.

Please leave your comments if you want.

-Silver


Well, I had an opportunity today to attend EMC Inform 2010, which was held at the Crown building. This Inform was put on by VMware, EMC and Cisco, or as they like to call themselves, “VCE”.

This Inform was about pushing VPLEX technology, Vblock and 100% virtualisation into the market. They brought along a few companies, including Melbourne IT, to testify to their new technology.

Shall we push ourselves to 100% virtualisation?

Basically, VMware is still trying to push 100% virtualisation to the public and to more companies, regardless of the fact that most companies only virtualize 75% of their production. The reason people don’t virtualize the remaining 25% is understandable: virtualization needs a certain amount of resource overhead to run the virtual kernel, and that’s inevitable.

Some software, like SQL Server, Exchange and ERP systems, still runs on physical boxes, because the major advantage of virtualization, consolidation, is no longer a goal you can achieve when you virtualize those servers. According to one of the CTOs who spoke during the Inform, the best practice for virtualizing those servers is to give them a dedicated box to run on. That means you not only pay for the Microsoft licences, you also need to pay the extra VMware licence cost. So why would you virtualize the remaining 25% of production?

The answer is DR. According to the EMC CTO, EMC is ready to implement VPLEX Local and VPLEX Metro technology between two SAN systems (for now). Once you can duplicate your SAN’s large caches in two different geographic locations, VMware Site Recovery Manager (SRM) can kick in and make VMware DR easy and practical. That means if you want your Exchange, SQL and ERP systems to have good DR, you need to virtualize 100% of your production, although you will have to pay quite a bit for extra licences, SAN and datacenter rent. So it’s really up to the business and how far they want to cover their bottom line.

Does VPLEX actually work?

Well, I believe it does. It doesn’t matter whether you call it a virtual storage layer, VM teleportation or VPLEX; it does let you Storage vMotion a VM from one SAN to another (less than 100 km apart) in just 7 minutes. Yes, we can do that without VPLEX, but it would take up to 1.6 hours!

I personally reckon the best speech during this Inform was not from EMC or VMware; it was from the CTO of Melbourne IT. Melbourne IT has more than 5,000 servers and half of them are virtual machines. They are a web-hosting company that generates 5 TB of Internet traffic per day. One story the CTO of Melbourne IT gave us was about the website of the CFA (Victoria’s Country Fire Authority). Victoria had very bad bushfires in 2009. On the first day of the bushfires, only a few thousand people visited the website. After a few days it became national news, so visitor numbers increased from a few thousand to ten thousand. After a few more days it became global news; it was as if most people in the world thought the whole of Australia was on fire. The website had 20 million visitors in one day! That amount of traffic easily crashed the hosting server, because Melbourne IT had never expected that kind of load. Within only 11 minutes, Melbourne IT was able to migrate the whole VM to another datacenter via a SAN replication link. The VM was transferred to a much larger datacenter capable of handling the heavy traffic, and it has been alive since then. Once the traffic went back to normal, Melbourne IT transferred the VM back to the low-cost, slower tier. Awesome, isn’t it? The link they have between the SANs has 50 ms latency and spans less than 100 km.

What is vBlock?

Accelerate the journey to private cloud with integrated packages combining best-of-breed Cisco, EMC, and VMware technologies. These packages are jointly tested and supported to deliver optimal performance and reduce operational expenses.

Vblock 2 delivers high-performance, large-scale virtualization across data centers of large enterprise customers. Vblock 1 delivers performance and virtualization for large or midsized companies and is suitable for data centers of any size, including remote-office locations. Vblock 0 is an ideal platform for companies that want an entry level configuration for small data centers, remote locations, or as a test/development platform.

In short, Vblock is the central management of the “VCE” system. It uses one application to manage and control multiple layers. With EMC (the storage), Cisco (the backbone) and the VMware kernel all trying to talk to each other, we need a system to manage all the running jobs. Vblock can also be used to let a private cloud communicate with a public or third-party cloud. To prove this case, one of the guys from Optus gave a presentation about one of their clients (a university) integrating with the Optus cloud, but he didn’t give us any details about exactly which tier-1 applications are running on it, so I don’t really buy it.

Summary:

EMC Inform is really a single-company show (VMware belongs to EMC), and they are trying to push VPLEX and VMware DR to the market. With VPLEX, I believe you can use it to replicate a datacenter, and VMware SRM is possible but not cheap. As for the cloud, I guess we still have to wait and see how it goes over the next five years.

Please leave your comments if you like.

P.S. I just got a new domain for my blog so you can remember it. It is

http://geeksilverblog.com

Reference:

http://www.emc.com/solutions/application-environment/vblock/vblock-infrastructure-packages.htm


I just read tons of information about EMC World 2010 in Boston. EMC has developed lots of new technology and plans for 2010. Let’s check them out briefly.

1. VPlex active-active storage

This technology is basically designed for distant SANs to synchronize with each other. It makes running a VMware DR site in parallel possible, since the second SAN doesn’t have to redo everything the first SAN did: the second (and any other) SAN understands what has changed at the storage level and makes changes accordingly. And it supports asynchronous replication.

Compared with NetApp (which claims it can do long-distance SAN clustering as well), EMC says it can synchronize more than two SANs.

2. EMC FAST 2, block-level compression and a common management console

As for the FAST 2 technology, I didn’t see many new features; it just utilizes the flash cache “better”. Block-level compression does sound interesting (essentially, it compresses out all the zeros at the block level), but how come EMC still doesn’t support block-level deduplication the way NetApp does? According to EMC, block-level compression serves daily production better because it takes less time and is easier, while block-level deduplication is better suited to backup.

The common management console is designed only for users who own both CLARiiON and Celerra systems, so you will have a centralized management tool to work with.

3. New Backup support (DD Boost).

Finally, this is good news for VMware owners: we can speed up our backups by up to 50%. By installing this software on your backup server (so far, Backup Exec and NetBackup from Symantec), your backup software gets metadata about your SAN data and understands where and how your data is stored. It can make deduplication happen at the SAN level (not the VM level).

4. EMC understands VMware disk operations better

So this is another deep integration. It not only supports the vStorage APIs, it also supports thin-provisioning stun (it pauses a VM if the VM exceeds its disk space, so other VMs won’t crash), fast/full clones (SAN-level clones instead of VM-level clones), and write zero / write same (SAN-level snapshots or clones and SAN-level eager-zeroed-thick disks).

Conclusion:

From this brief review, EMC has started to integrate VMware technology at the core level for cloning, copying and backup. It is making progress on synchronizing distant SANs (more than two), but most users will still wait for best practices to come out. However, EMC still doesn’t support block-level deduplication, which personally leaves me very confused. I guess that’s the crack where NetApp finds its opening.