Contemplating IT

Storage 2012
Tony Asaro | www.voicesofit.com/blogs/blog1.php/2012/01/18/storage-2012 | 2012-01-18

One of the most important changes in the storage world is that the last group of startups that "made it" had huge exits. 3PAR, Data Domain and Isilon all went public and then were acquired for over $2 billion each. Compellent and EqualLogic were in the billion-dollar range. And even BlueArc did well with roughly a $600 million buyout. Not too shabby. And there were a number of smaller storage acquisitions with very attractive valuations compared to their revenue traction.

So what does this mean? Well, it is important for end users because they now buy these innovative solutions from market leaders (in some cases this is good and in some cases it sucks). It has also changed the value of the next crop of storage startups, many of whom are getting extremely attractive valuations from investors. Hey, it's only money, and it may be worth the risk if it leads to a billion-dollar exit.

But are there still new categories and innovations to justify the next, next generation?

SSD Storage
One of the biggest areas of VC investment is SSD storage system startups. On the surface you might just shrug your shoulders and conclude that the traditional storage system guys can simply add SSDs and match the value proposition of the startups. That is why it is essential that the SSD storage system vendors do more than just provide SSD and a menu of features. Instead, they need to leverage SSD to provide value that the other storage vendors cannot.

What might this include? High performance is of course essential, but it must come with a compelling price/performance metric. For example, the price per IOPS should be less than $1.00. Most traditional disk-based storage systems are far above this - $3.00, $5.00 per IOPS and higher.
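To make that metric concrete, here is a minimal sketch of the price-per-IOPS calculation. All prices and IOPS figures are hypothetical, since no vendor numbers are quoted above:

```python
# Hypothetical price-per-IOPS comparison; figures are illustrative, not vendor quotes.

def price_per_iops(system_price_usd, sustained_iops):
    """Dollars spent per IOPS delivered by a storage system."""
    return system_price_usd / sustained_iops

ssd_system = price_per_iops(200_000, 250_000)  # e.g. $200K all-SSD system, 250K IOPS
hdd_system = price_per_iops(150_000, 30_000)   # e.g. $150K disk-based system, 30K IOPS

print(f"SSD system: ${ssd_system:.2f} per IOPS")  # ~$0.80 - under the $1.00 bar
print(f"HDD system: ${hdd_system:.2f} per IOPS")  # ~$5.00 - in the $3.00-$5.00+ range
```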

Capacity optimization is extremely important as well. SSD solutions need to drive down capacity costs so they are closer to spinning media in price per GB. Technologies such as thin provisioning are important but are already common among HDD-based storage systems. Primary deduplication plays a major role here. There is some controversy around primary dedupe impacting performance and even data integrity, so it will be essential that both of these issues are addressed. Keep in mind that most new storage technologies faced the same controversies (e.g. differential snapshots and thin provisioning) before those features became mainstream.

I am a big fan of tiering for SSD. The pure SSD players have a disadvantage in this regard, because data doesn't require high performance for its full life-cycle. Therefore having some way to move data to a lower storage tier is vital. Interpret this any way you want - the lower tier can be HDD, a third-party storage system, the cloud or a dedupe tier - but you have to make the economics work. It all comes down to money.

Quality of Service (QoS) is another capability that makes sense and can be combined with tiering. If you can assign performance SLAs on a per application basis then you can leverage SSD far more effectively. It is about using the resource and not wasting it. Again, this concept is not new but has not truly been embraced by the market. QoS + SSD can create a very visible value proposition that traditional storage systems can't match.

Cloud Storage - A Wide Open Landscape
I love it when the hype cycle is at the stage cloud storage is in right now. It is simultaneously over-hyped and under-hyped. Just when real solutions are gaining traction, it is losing its sex appeal.

Cloud Storage is real and is making incredible progress. And there are lots of approaches today: pure cloud storage solutions; cloud storage solutions that focus on unstructured data; others on transactional block storage; application-specific services and solutions; services that focus on individual users; consumer-oriented solutions; etc.

PB Storage
There are medium-sized companies that have hundreds of TBs today. That means that in a year or two they will have over a PB. And of course there are large enterprises that have 10, 20 or more PB today. So we are now in the PB storage era. This is a massive amount of capacity that has its own set of challenges just based on sheer size. Storing all of this data on a single logical storage system is easier said than done. How do you maintain any reasonable performance and address the challenges of data integrity that come with massive scale? How do you protect it all? Where does it live? How much space and power does it require? And how the heck will you ever migrate all of that data between systems? And let's not forget cost - always a factor - especially with IT budgets being flattened or reduced.

Big Data
I was just at an event and the question was raised as to whether people were going to be using the cloud to store big data. I suggested that you first need to define big data. Big data means different things to different people, just like "data" in and of itself means different things depending on who you are talking to. To a movie studio data means video and audio. To a bank data means financial information. And on and on. Big Data is as varied as data. The importance of big data is not the adjective "Big" or the noun "Data" but the verb "Analyze". It is how we get use out of that data that matters. Like most things in IT this is not really a revelation but a point of clarity. We have been analyzing certain data forever, but now we need to turn to other forms of data (and there is lots and lots of it) in order to get better use out of it. Analyzing data can lead to a new product breakthrough, improved services, streamlined finances, uncovered business risks, etc. Most importantly, it can make IT far more strategic than it is today. And that is a change that is long overdue.

So how does storage fit into all of this? There are vendors that have put together database and storage systems into a single solution in order to optimize performance for running analytics. There are storage systems that have been optimized for database analytics performance (e.g. SSD storage systems). But Big Data analytics is still emerging and could stumble and fall, because the value proposition has to be tangible and easily consumed. I know of very few big data analytics projects that have been landscape-changing beyond improving query response times.

What needs to happen (and the process has sort of begun) is the convergence of worlds that are otherwise disparate. You need to bring the business analytic world together with the IT infrastructure world and make something that is best-in-class in both.

Unstructured Data
The constant and relentless trend has been that unstructured data far outstrips any other data type in capacity growth, and this reality is perpetual. Yet most of the new storage startups are focused only on SAN-based solutions. Where are the next-generation NAS guys? If you are out there, I hope you are focused on more than just a new scale-out file system or architecture, because that's not what matters most. If you want the recipe for success - BIG SUCCESS - I have it. Give me a call.

Data Protection and Recovery
Not only does backup suck but so does replication. This space needs total reinvention, but here is the catch - it isn't friggen easy. It is friggen hard. The challenge is not just technological but also one of business execution. You must overcome incumbency. You must be able to educate customers with a legacy mindset. You must ensure that you don't break anything as you attempt to fix everything. Additionally, there have been no uber-big wins in this arena. But whoever can solve this problem and execute in the market can be the next billion-dollar baby.

Note - I purposely left out vendor names in this blog entry.

Nirvanix + IBM + Cerner = Enterprise Cloud Storage Validated
Tony Asaro | www.voicesofit.com/blogs/blog1.php/2011/10/17/nirvanix-ibm-cerner-enterprise-cloud-sto | 2011-10-17

There has been a ton of press coverage on the Nirvanix and IBM OEM partnership, including an extremely enthusiastic article from Forbes that implies IBM's deal with Nirvanix drove its stock to a 52-week high. That is really powerful if it is the case. Each of the myriad articles pretty much says the same thing: this is a BIG deal.

Here is what the IBM OEM with Nirvanix means in the real world:

- IBM Global Services is a powerhouse and is certainly able to drive some enormous opportunities. It will be interesting to see some big wins based on this relationship. IBM could potentially change the whole cloud storage game based on their execution and go-to-market.

- Perhaps more importantly, this relationship validates Nirvanix to large corporations, organizations and government entities that might otherwise have been interested in the solution but hesitant because of Nirvanix's size.

- Not only does this help to validate Nirvanix but Cloud storage as well! Let's not forget there is still a great deal of uncertainty out in the market. Again, IBM GSS lessens Cloud FUD a great deal.

- IBM Global Services, a well-respected and extremely successful business, most likely did intensive due diligence, and this should accelerate the decision-making process for other companies looking at Nirvanix.

- Additionally, it raises Nirvanix above the crowd and gives them visibility to new markets and customers that weren't paying attention to them before.

The day after the IBM announcement there was another one with Cerner, the large healthcare services provider, which will deliver Nirvanix-based cloud storage for the healthcare industry. This is also a big deal, pairing a very strong vertical industry with a major player. It was perhaps overshadowed a bit but in many ways is just as important. The amount of unstructured content in the healthcare industry is massive, and having a focused partner to drive this market is critical to being a leader.

No Competition.
You can argue all you want with me, but Nirvanix really has no viable competition. And you really should pay attention to storage companies that provide unique value (e.g. Data Domain, Isilon). Yes, there are "ish" competitors - solutions that sort of appear to be competitive products but really aren't. The cloud services from Amazon, Microsoft and Google are not competitive. Nirvanix is focused on the Enterprise, and these other services are essentially focused on individuals and not companies. In other words, a techno-geek will use Amazon S3 for some project or service, but an Enterprise-class IT department isn't going to use it for storing its unstructured data long term. Beyond the technology pros and cons of the different cloud offerings, there is the extremely valuable and essential issue of building relationships with vendors, which matters to the Enterprise. They want people they can work with and negotiate services and terms with - people who understand issues in the data center, provide personalized service and let customers influence the direction and vision of the solution. You are not going to get that with Amazon, et al.

The large storage vendors are, for all intents and purposes, absent from this market. And I mean all of them - Dell, EMC, HDS, HP, IBM Storage, NetApp and Oracle have nothing really to speak of. Interestingly, one of the next big things in storage is going to be cloud, and none of them have anything real - at best something that is cloud-ish.

Why This Matters
Cloud storage will not replace the data center. However, it is an essential part of the overall set of services that every IT professional should be considering and planning for going forward. Unstructured data is growing at a far more rapid rate than any other data type and it will continue to do so. And the majority of this data is dormant, consuming endless amounts of Tier 1 and Tier 2 storage on premises or at some co-location facility. The impact on capital and operational expenditures is extreme and will only get worse over time. This is an unsustainable situation that requires a resolution. Additionally, cloud storage offers a great platform for replicated and backup data as well. Furthermore, there is an opportunity for new businesses to emerge using cloud storage as their foundation - this is already happening, but we are at the tip of the iceberg. And Nirvanix is the only cloud storage vendor that is building true momentum for this market.

Here is a list of all of the articles on the Nirvanix and IBM deal if you want to check them out:

www.forbes.com/sites/siliconangle/2011/10/12/ibm-upgrades-cloud-play-with-startup-nirvanix-stock-hits-52-week-high/

wikibon.org/blog/ibm-outperforms-even-apple-smarter-planet-big-data-and-cloud-power-the-future/

www.datamobilitygroup.com/IBM_Nirvanix_OEM.php

neovise.com/nirvanix-and-ibm-disrupt-the-storage-industry-with-enterprise-cloud-storage

techcrunch.com/2011/10/12/ibm-announces-new-smartcloud-services-partnership-with-nirvanix/

www.thebiggertruth.com/2011/10/ibm-oems-nirvanix-cloud-storage-and-why-everyone-should-care/?utm_source=dlvr.it&utm_medium=twitter

siliconangle.com/blog/2011/10/12/ibm-puts-amazon-google-on-notice-with-nirvanix-oem-partnership-cloud-leadership-is-about-scale-advantage/

www.ssg-now.com/ibm-global-services-integrates-nirvanix-cloud-storage-in-smartcloud-enterprise-service/

www.datacenterknowledge.com/archives/2011/10/12/ibm-beefs-up-public-cloud-services-for-enterprises/

www.dcig.com/2011/10/cerner-and-ibm-send-industry-message.html

www.storage-switzerland.com/Blog/Entries/2011/10/14_Calling_IT_-_Nirvanix_Keeps_Rolling.html

www.eweek.com/c/a/Data-Storage/IBM-Engages-Nirvanix-to-Supply-HighEnd-Enterprise-Cloud-Storage-514026/?kc=rss&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+RSS%2Ftech+%28eWEEK+Technology+News%29

www.theregister.co.uk/2011/10/13/ibm_oems_nirvanix/

www.infostor.com/backup-and_recovery/cloud-storage/nirvanix-deal-to-expand-cloudsmart-storage.html?utm_medium=twitter&utm_source=twitterfeed

HDS and BlueArc Finally Tie the Knot
Tony Asaro | www.voicesofit.com/blogs/blog1.php/2011/09/09/hds-and-bluearc-finally-tie-the-knot | 2011-09-09

One of the first things I thought of when I heard the news about HDS acquiring BlueArc was that it reminded me of a guy who has been dating the same woman for years and years, and everyone always asks him - "when are you going to propose to her?" And for years he would always answer - "what's the rush?" Interestingly, it sometimes takes another guy to come around and show interest before the first guy realizes that he is about to lose a really good thing. Rumor has it that IBM made the first offer on BlueArc and that drove HDS to finally "propose". And the rest of us are shaking our heads saying "it is about time".

You have to understand that file storage is growing far faster than block and there are only a finite number of players that have any credible solutions. NetApp is the king. EMC just became a lot more interesting with Isilon. And if NetApp is the king then Microsoft is the pope. And everybody else kind of sucks in terms of revenue and footprint.

BlueArc has a great file system and competitive NAS solution but their biggest challenge is having the resources to scale their business. HDS provides additional resources to make this happen. Additionally, HDS has been focusing on SAN storage primarily and adding the BlueArc folks gives them the best independent team in the NAS world.

So think about what I said earlier. File storage growth is eclipsing all other storage types. And there are fundamentally only three players. HDS just made a move that could make them the fourth. BlueArc is an Enterprise-class NAS solution and that is who HDS sells to. What does IBM do now? HP? Oracle? Dell? They have no real answer to the fastest growing segment in storage. HP, IBM and Dell all have file system solutions but none of them can address Tier 1 NAS.

Interestingly, out of a bevy of vendors and startups, only three NAS players have really been successful: NetApp, Isilon and BlueArc. And arguably only NetApp has been able to do it all on their own. Compare this to the massive number of SAN-based storage system vendors that have achieved amazing success. And yet, file storage is experiencing massive growth and will continue to do so for the foreseeable future.

This is the biggest acquisition by HDS, and they have yet to prove they know how to leverage their resources - sales, marketing, operations and engineering - to create a meaningfully accelerated trajectory for BlueArc, especially when compared to Dell with EqualLogic, EMC with Data Domain (and others) or even HP with LeftHand. And since HDS has OEMed BlueArc for several years, will owning the company change the dynamics in any substantive way? Hopefully HDS has an integration plan that will make this a home run for them. I think this is a smart and overdue move for HDS, but you know what I'm saying.

I think Mike Gustafson did a great job and the purported $600 million acquisition price is a big win. BlueArc has a great file system and their solution is arguably the only Enterprise-class NAS, based on its scalability and performance. They really were the last man standing in the NAS marketplace and HDS did the right thing by finally becoming betrothed.

Primary Dedupe: The Next Big Thing in Storage
Tony Asaro | www.voicesofit.com/blogs/blog1.php/2011/06/29/primary-dedupe-the-next-big-thing-in-sto | 2011-06-29

I have been pounding the drum on dedupe in primary storage for a very long time and I am surprised that the market hasn't acted more quickly. This capability is even easier to quantify than snapshots and thin provisioning and yet its adoption has been slow.

The reason for implementing primary dedupe is as clear as day. Data growth is ridiculous and never-ending. The math is simple to grasp:

• Primary storage is growing at a CAGR of 60%.
• If you have 10 TB of data today that means you will have 16 TB next year at this time.
• In five years this will turn into 104 TB.
• If you have 100 TB of data today you will have 1.04 PB in five years.
• And since most storage systems have about 40% capacity utilization then you are talking about 250 TB of capacity to store 100 TB of data and 2.5 PB of capacity to store 1 PB of actual data.

Let us do the dedupe math:

• If you just get a 4-to-1 ratio then 10 TB of data is reduced to 2.5 TB of data.
• Based on a 60% CAGR in one year you will have 4 TB and in five years it will be about 26 TB. Compare that to 104 TB in five years!
• If you get a 10-to-1 ratio then you will only have about 10 TB in five years versus 104 TB! That is an order of magnitude difference in the actual data being stored. And those dedupe ratios are achievable in virtualized environments.

I know it sounds too good to be true but even with a modest dedupe ratio the economics are simple to quantify and justify.
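Here is a minimal sketch of that arithmetic so you can plug in your own numbers; the 60% CAGR and the 4-to-1 / 10-to-1 ratios are the assumptions from the bullets above, not measured results:

```python
# Project data growth at a given CAGR, with and without dedupe.
# Assumptions mirror the bullets above: 60% CAGR, 4:1 or 10:1 dedupe ratios.

def projected_capacity_tb(start_tb, cagr, years, dedupe_ratio=1.0):
    """Stored capacity after compound growth, reduced by the dedupe ratio."""
    return (start_tb / dedupe_ratio) * (1 + cagr) ** years

start_tb, cagr = 10, 0.60
for ratio in (1, 4, 10):
    print(f"{ratio}:1 dedupe -> {projected_capacity_tb(start_tb, cagr, 5, ratio):.0f} TB after five years")

# Output: roughly 105 TB with no dedupe, ~26 TB at 4:1 and ~10 TB at 10:1,
# matching the ~104 TB, 26 TB and 10 TB figures in the bullets above.
```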

The strange thing is that we really don’t have wide adoption of primary dedupe. It is a no-brainer technology that very few storage vendors have actually implemented. NetApp has a distinct advantage over other storage vendors and is actually winning business because of their dedupe technology. To be candid, NetApp dedupe does have a number of limitations and yet none of their major competitors have stepped up to answer the call.

There are signs that other storage vendors are stepping up. Dell acquired Ocarina and IBM bought Storwize. Additionally, Permabit is a vendor that has primary dedupe technology and there are a number of vendors they are working with. I predict they will be acquired shortly and that will leave every other storage vendor out in the cold. However, none of these technologies have made their way into the market yet. A startup called Nimble Storage is growing like crazy, and while they don't actually have dedupe they do have in-line data compression, and even with that they have a measurable cost-per-GB advantage over their competition. Data compression is good. Dedupe is better. And data compression combined with dedupe is the best.

I could be cynical and conclude that storage vendors don't want to implement primary dedupe because it would cost them money. But I doubt that is the case, because it is inevitable and it is already costing them money since they are losing business over it every day. I think the reason is that primary dedupe is really hard to implement. Therefore the vendor that does it best will have a clear advantage over all of the others.

NetApp gained leadership for many years in great part because of their snapshot technology. 3PAR was acquired for an unprecedented price in great part because of their thin provisioning technology. The jury is still out on which storage vendor will be the primary dedupe leader but whoever it is will inevitably experience great success. And it will change the industry for the better.

Nirvanix: Cloud Storage for the Enterprise (For Real)
Tony Asaro | www.voicesofit.com/blogs/blog1.php/2011/02/14/nirvanix-cloud-storage-for-the-enterpris | 2011-02-14

Nirvanix - the Enterprise Cloud Storage company - has a new management team led by CEO Scott Genereux. Interestingly, Scott spent much of his career selling storage systems to Enterprise customers. I think that is an important distinction since Nirvanix refers to itself as Enterprise Cloud Storage. It requires someone who understands the challenges of the Enterprise versus someone who is an expert at selling web services. Steve Zivanic is the new VP of Marketing and he worked shoulder-to-shoulder with Scott when they were at Hitachi Data Systems together. This team knows how to sell and market even when they are up against a powerhouse like EMC.

True to form, Steve Zivanic has been on a passionate campaign to get the word out about Nirvanix. There is a wave of articles and blogs all over the web about how well they are doing. You can go to the Nirvanix website (www.Nirvanix.com) to see they have 700 customers - some of which are big, big companies storing lots and lots of data. That is all good stuff, but there are plenty of sources for that information. Here is my take on some of the coolest and most valuable things about Nirvanix:

1. They have an end-to-end solution that makes it extremely easy for companies and organizations to use Cloud Storage. Nirvanix provides an easy-to-manage complete solution with front-end usability, security, reliability, performance, multi-tenancy and global access, combined with back-end controls, reporting, management and analytics. It isn't just a storage system that scales and supports HTTP - which is what so many other vendors tout as their Cloud Platform. Rather, Nirvanix provides a holistic solution designed specifically for businesses to utilize without being cloud experts. This is unique and, I believe, what the market really needs to deploy Cloud Storage for the Enterprise.

2. You can utilize the Nirvanix public cloud storage or you can take their software and actually implement your own private cloud storage service. There are a number of large companies and organizations that want to enable their own private clouds and this meets that need. Nirvanix is also a great solution for managed service providers and this could be a whole new channel for them. The world will be divided in three ways - private cloud storage usage; a hybrid of public and private cloud storage usage; and pure public storage cloud usage. Nirvanix supports all three models.  

3. What is also very powerful is Nirvanix can be integrated with heterogeneous file systems. Essentially it can work with third-party storage systems – which is great for users that want to use existing storage. It is also great for storage vendors – they can partner with Nirvanix and still use their own storage systems. Nirvanix does have its own storage technology but their real value is in the Cloud Storage application – the web services that live above the storage.

4. Nirvanix is field proven.  They have been in the market for a number of years and those 700 plus customers are evidence that it works. This minimizes the risk that every IT professional needs to consider when making a decision to go forward with emerging technologies like Cloud Storage.

Another trend in Cloud Storage for the Enterprise is using open source file systems to build your own. The more technology minded IT professionals working for large organizations are considering Hadoop and Gluster and other extensible, scalable file systems. These companies could leverage Nirvanix instead of building the services, tools and overall applications needed to implement Cloud Storage.

Cloud Storage is a next generation IT infrastructure that is already changing the landscape.  This will continue to happen in obvious and non-obvious ways.  The obvious ways include using cloud storage for backup, DR and archiving. The non-obvious ways include new business models and applications being developed specifically for the cloud.  

Nirvanix is on the curve up and will only get stronger over time.  They have new leadership in place and a board and investors that are charged up and passionate.  

Since there is a climate of acquisition, one can only assume that Nirvanix should be of real interest to a number of different major IT vendors. Anyone going against EMC should consider partnering with Nirvanix since it can take on Atmos from both a public and a private cloud perspective. What should be of concern to the various "big guys" is that when Nirvanix gets bought out, it will be a two-horse race in a market that is inevitable and massive.

The End of NAS
Tony Asaro | www.voicesofit.com/blogs/blog1.php/2011/01/09/the-end-of-nas | 2011-01-09

All indications are that file storage will consume the vast majority of capacity in the coming years. IDC research recently forecasted that file data will eclipse all other data types by a six-to-one ratio in terms of capacity consumption by the year 2014. My work with large IT organizations verifies this - they have petabytes of file storage and the growth rates are alarming. This growth will temporarily drive the increase of NAS but will ultimately lead to a shift in how we implement, manage and protect file storage.

It is unrealistic to believe that having dozens, hundreds and in the near future thousands of NAS systems is sustainable. One company I’m working with has over 600 NAS systems and based on their growth this will double in the next couple of years. Think about the millions of dollars they spend on hardware, software, backup, maintenance fees, migrations, professional services and operations. And the costs will just continue to rise inevitably and perpetually.

Scale-out NAS will play a major role in the new NAS landscape but it is not a panacea. What is needed is better file management that provides control and capabilities that live “above” and independently of the storage infrastructure.

The demise of NAS will not be complete, nor will it occur overnight. Rather, it will happen in time and in stages.

The first, “easiest” and biggest inefficiency and money-sucker in file storage is dormant data. Universally IT professionals agree that 60% to 80% of their file data has not been accessed in a year or more. What is that stale, unused data doing on Tier 1 NAS? Migrating dormant files to Tier 3 NAS could save millions of dollars and reduce operational complexities. If only 20% of file data is active at any given time then reducing your Tier 1 NAS storage by 60% to 80% would have an immediate and major impact on your data center. If you have 100 NAS systems you can reduce this number to 20 to 40 instead. Yes, you still have to acquire Tier 3 NAS, but these systems are significantly lower in cost than Tier 1 and you can change your protection and management policies further reducing cost.

Another important issue is that not all file data is stored on NAS. There are a growing number of files residing within content management systems like SharePoint. Additionally, there is a massive amount of file data stored on SAN storage, either front-ended by file servers or with local file systems. There are also files being stored on DAS. And as virtual desktop infrastructure (VDI) grows in popularity, there will be an increasing number of "local" client files residing on block storage. All of the file storage inefficiencies apply to these environments as well, including dormancy.

What is needed is a file management solution that can discover and identify dormant data and move it from Tier 1 to Tier 3 storage transparently, reliably and quickly. Additionally, it must be able to replicate and recover data at a file level.

I am not proposing an archiving solution - IT professionals don't want stubs, which consume address space and break during NAS migrations. What is needed is a solution that migrates files and only requires a remount with new shares. Yes, this is a little inconvenient but well worth it, especially if you are moving dormant files - no one is accessing them anyway!

How does this lead to the end of NAS? Migrating dormant files to lower tiers reduces the investment in Tier 1 NAS by up to 80%. That organization with 600 Tier 1 NAS systems can get down to 120 Tier 1 NAS systems, resulting in millions of dollars in savings, and it ends the ridiculous number of NAS systems they buy and manage going forward.

As time goes on the strategic significance of Tier 1 NAS will be reduced and less and less file data will be stored on them. NAS will become NFS and CIFS appliances focused more on scalability, ease of use and cost effectiveness. Additionally, file servers with local storage, SAN-attached file servers, SAN-attached local file systems, VDI clients and CMS systems will grow in size. And intelligent file management platforms will play an increasingly valuable role in managing file data across all of these systems for the entire data center and the cloud.

What is missing is the file management solution, but they are coming. More importantly, we need a new way to think about solving the growth of file storage, because throwing more storage infrastructure at the problem is untenable.

Dell and Compellent: The Implications
Tony Asaro | www.voicesofit.com/blogs/blog1.php/2010/12/15/dell-and-compellent-the-implications | 2010-12-15

I believe the impact of Dell and Compellent will be significant. What Dell probably knows, and many of their competitors are unaware of, is how good the Compellent storage system is and how loyal their customers and channel partners are.

There are a number of misunderstandings around Compellent storage. Here is my take on Compellent after talking to a number of their customers:

- Not Just for SMB. Compellent isn't just an SMB storage system. It competes with the high-end of the EMC CLARiiON and the low-end of the DMX. I am working with them and ESG to show performance numbers that I believe will surprise and impress a lot of people that don't know how high Compellent scales. Obviously performance is only one factor in addressing the high end but I think it is well accepted that Compellent is HA and has a rich set of features that meets the needs of mission-critical applications.

- Cloud Storage. Compellent is a great utility storage solution - aka cloud storage - and is being used by a number of service providers. I would argue that Compellent has a better cloud storage system than others because of its Data Progression technology that offers the best price/performance for IOPS and Capacity. This is essential in a cloud or utility offering!

- Very Little EqualLogic Overlap. Customers that want iSCSI will probably lean towards EqualLogic. However, the FC market is actually much bigger than iSCSI. Additionally, Compellent supports FCoE - which is the future of FC. That is the obvious difference. Compellent is more flexible in terms of capacity options as well. Additionally, Compellent does scale higher than EqualLogic, and is actually replacing DMX and USP-V systems. I doubt EqualLogic is doing that anywhere. I would say that the overlap between these two products is less than 20%.

- Scale Out. It turns out that Compellent has its own unique strategy for scaling out that takes advantage of a new technology they have called Live Volume. It creates active mirrors of data across multiple Storage Centers. As a result, they can scale performance in a true linear fashion regardless of the type of I/O being generated. I haven't found a cache coherent cluster architecture that can make the same claim.

This was a smart and necessary move by Dell. Oftentimes customers buy storage and servers as a bundle. HP and IBM have very few sales people focused on selling storage and rely on channel partners and large customers that buy servers and storage as a package. And even though both vendors have very little storage DNA they still sell billions of dollars of the stuff. Dell does this as well but they have to split the profit with EMC in a large number of these transactions. Now Dell will be able to sell storage and server bundles using products they own and as a result will get a lot more margin and account control. Additionally, having a great storage system gives Dell a competitive advantage when selling their servers. And make no mistake - Compellent has a great SAN storage system.

And what about the high-end of the market? As I said above, I think Compellent already is selling into part of this market and with its scale-out strategy will address even more of the high-end market. However, EMC and HDS will continue to dominate the top of the pyramid. By the way that is a place that 3PAR doesn't compete today either (if you think otherwise you are mistaken). Dell will be able to address 70% of the SAN market with the combination of EqualLogic and Compellent - both great products within their segments. Dell won't be able to address the 5% of the market that is uber high-end or at the low-end with these two product lines. But who cares about the uber high end - it is a pain in the ass and it isn't growing.

However - other than NetApp and now EMC with Isilon - most of the other storage vendors have yet to figure out their NAS strategies. And if they don't soon then everyone but these two vendors will have a rude awakening in 2012. However, with HP / 3PAR and Dell / Compellent - the SAN storage market just got a lot more competitive and interesting.

What's In Store 2011
Tony Asaro | www.voicesofit.com/blogs/blog1.php/2010/12/01/what-s-in-store-2011 | 2010-12-01

How do we top 2010 in the storage universe in the coming year? 3PAR and Isilon were the late-2010 big stories with their multi-billion dollar acquisitions. The battle in the storage arena is afoot and there is a great deal at stake.

Storage is arguably the stickiest infrastructure in the data center and users will always buy more and more. There is no dominant leader with a majority share of the market, and there’s still measurable product differentiation. 3PAR isn’t competitive with EqualLogic, and both are very different from Isilon. But even when you compare competitive products like 3PAR and EMC DMX, there are still major differences. 3PAR is far easier to use and has better capacity optimization technology, while EMC DMX scales to higher levels of performance, supports mainframe and has an interoperability matrix that would break your foot if it fell on it.

As we enter 2011 it’s important to leave behind the heady enthusiasm of the major events in storage and get back to reality. I talk to IT professionals all the time and their number one priority isn’t scale-out architectures or any other new-fangled technology. Their job #1 is making sure their storage systems work all the time without blowing up, slowing down or losing data. It turns out that even in this day and age not all storage systems fulfill those fundamental standards all the time.

It's also important to understand that cache-coherent clusters or scale-out architectures aren't panaceas. They're like any other technology that comes with its own set of pros and cons. All cache-coherent architectures have trade-offs that invariably impact performance and management. If you implement a scale-out storage solution for performance, make sure it'll work for your I/O workloads because it isn't a black-and-white proposition. For example, Isilon is great for large files and streaming data but not nearly as good for smaller transaction-oriented I/O. The overhead created by having a shared "brain" across lots of nodes requires rapid communication, and the more transactions that occur the slower the system will respond. That's not a knock on Isilon by any means, but it's important to understand what it's great at and what it's not so great at.

One of the most important trends affecting storage is the unbridled growth of file data. In a recent report, IDC predicted that file data is going to eclipse all other data types by several factors in the next few years. I agree with this based on what I’m seeing out in the field. I’m working with companies that literally have petabytes of file storage and new files continue to surface like the BP oil spill. That puts NetApp in the driver’s seat and leaves Dell, Hitachi, HP, IBM and Oracle at a major disadvantage. EMC has much more of a fighting chance with Isilon in its portfolio. Which begs the question: Is Isilon worth $2.2 billion? If you believe that the lion’s share of all networked storage capacity will be file, you bet it is.

How we deal with all that growth is an unavoidable issue. Throwing more storage at the problem isn’t sustainable. That’s why in 2011 storage optimization is going to play an increasing role in how we manage storage. Tried and true technologies such as thin provisioning need to be implemented to a greater degree. Data compression and data deduplication will find their way into primary storage systems. And perhaps the most compelling “new” capability is automated tiering of storage at a sub-LUN level. EMC and HDS have announced it and 3PAR released their version in 2010. However, since Compellent is the only storage system vendor that has years of experience and thousands of customers supporting this technology, they’re well positioned to be the new coolest kid on the block (pun intended). Automated tiering, if done efficiently and reliably, can significantly change the economics of storage.

I also predict that now that storage is “cool” so are the people that write about it. Okay, maybe that’s pushing it.
