
Solid State Storage: Enterprise State Of Affairs

Here In A Flash!

It's been a crazy few years in the flash storage space. Things really started taking off around 2006, when NAND flash and Moore's Law got together. By 2010 it was clear that flash storage was going to be a major part of your storage makeup in the future. It may not be NAND flash specifically, though; it will be some kind of solid state memory, not spinning disks.

Breaking The Cost Barrier.

For the last few years, I've told people to price out the cost of IO, not the cost of storage. Flash storage was mainly a niche product solving a niche problem, like speeding up random-IO-heavy workloads. Now, with the cost of flash storage at or below standard disk-based SAN storage, with all the same connectivity features and the same software features, I think it's time to put flash storage on the same playing field as our old stalwart SAN solutions.

Right now, at the end of 2012, you can get a large amount of flash storage. There is still a perception that it is too expensive and too risky to build out all-flash storage arrays. I am here to show that cost, at least, isn't as limiting a factor as you may believe. Traditional SAN storage with spinning disks can run you from 5 dollars to 30 dollars a Gigabyte. You can easily get into an all-flash array in that same range.

Here’s Looking At You Flash.

This is a short list of flash vendors currently on the market. I've thrown in a couple of non-SAN types and a couple of traditional SANs that have integrated flash storage. Please don't email me complaining that X vendor didn't make this list or that Y vendor has different pricing. All the pricing numbers were gathered from published sources on the internet: the vendors' own websites, published costs from TPC executive summaries, and official third-party price listings. If you are a vendor and don't like the prices listed here, then publicly publish your price list.

There are two cost metrics I always look at: dollars per Gigabyte of raw capacity and dollars per Gigabyte of usable capacity. The first number is pretty straightforward. The second metric can get tricky in a hurry. On a disk-based SAN it pretty much comes down to what RAID or protection scheme you use. Flash storage almost always introduces deduplication and compression, which can muddy the waters a bit.
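To keep the math honest as we go through the list, here's a sketch of the calculation I'm doing for each array. The quote numbers and the RAID 10 overhead in it are hypothetical, not from any specific vendor:

```python
# Sketch of the two cost metrics discussed above; the quote numbers
# here are hypothetical, not any particular vendor's pricing.

def dollars_per_gb(price_dollars: float, capacity_gb: float) -> float:
    """Cost per Gigabyte for a given capacity figure."""
    return price_dollars / capacity_gb

# Hypothetical quote: a 2.5 Terabyte (2,560 GB) array for 25,000 dollars.
price = 25_000.0
raw_gb = 2_560.0

# Usable capacity under a RAID 10 style protection scheme: half the raw space.
usable_gb = raw_gb / 2

print(f"raw:    {dollars_per_gb(price, raw_gb):.2f} dollars/GB")     # 9.77
print(f"usable: {dollars_per_gb(price, usable_gb):.2f} dollars/GB")  # 19.53
```

Deduplication and compression push the usable number the other way, which is exactly why vendor claims based on compressible data need a skeptical eye.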

Fibre Channel/iSCSI vendor list

Nimbus Data

Appearing on the scene in 2006, they have two products currently on the market: the S-Class storage array and the E-Class storage array.

The S-Class seems to be their lower-end entry but does come with an impressive software suite, and it provides both 10GbE and Fibre Channel connectivity. Looking around at the cost for the S-Class, I found a 2.5 Terabyte model for 25,000 dollars. That comes out to 9.77 dollars per Gigabyte of raw space. The E-Class is their super-scalable and fully redundant unit. I found a couple of quotes that put it in at 10.00 dollars a Gigabyte of raw storage. Already we have a contender!

Pure Storage

In 2009 Pure Storage started selling their flash-only storage solutions. They include deduplication and compression in all their arrays and factor that into their cost per Gigabyte. I personally find this a bit fishy, since I always like to test with incompressible data as a worst case for any array, which would drive their cost back up. They claim between 5.00 and 10.00 dollars per usable Gigabyte, and I haven't found any solid source of public pricing on their array yet to confirm or dispute this number. They also have a generic "compare us" page on their website that is at best misleading and at worst plain lies. Since they don't call out any specific vendor on their comparison page, it's hard to pin them down for falsehoods, but you can read between the lines.

Violin Memory

Violin Memory started in earnest around 2005, selling not just flash-based but memory-based arrays. Very quickly they transitioned to all-flash arrays. They have two solutions on the market today. The 3000 series allows some basic SAN-style setups but also supports direct attachment via external PCIe channels; it comes in at 10.50 dollars a Gigabyte raw and 12.00 dollars a Gigabyte usable. The 6000 series is their flagship product, and the pricing reflects it: at 18.00 dollars per Gigabyte raw it is getting up there on the price scale. Again, not the cheapest, but they are well established and are used and resold by HP.

Texas Memory Systems/IBM

If you haven't heard, TMS was recently purchased by IBM. They are based in Houston, TX, and I've always had a soft spot for them. They were also the first non-disk-based storage solution I ever used. The first time I put a RamSan in and got 200,000 IOPS out of the little box, I was sold. Of course, it was only 64 Gigabytes of space and cost a small fortune. Today they have a solid flash-based Fibre Channel and iSCSI attached lineup. I couldn't find any pricing on the current flagship RamSan 820, but the 620 has been used in TPC benchmarks and is still in circulation. It is a heavyweight at 33.30 dollars a Gigabyte of raw storage.

Skyera

A new entrant into this space, they are boasting some serious cost savings. They claim 3.00 dollars per usable Gigabyte on their currently shipping product. The unit also includes options for deduplication and compression, which can drive the cost down even further. It is also a half-depth 1U solution with a built-in 10GbE switch. They are working on a fault-tolerant unit, due out in the second half of next year, that will up the price a bit but add Fibre Channel connectivity. They have a solid pedigree, as they are made up of the guys who brought the SandForce controllers to market. They aren't a proven company yet, and I haven't seen a unit or been granted access to one either. Still, I'd keep an eye on them. At those price points and with that crazy small footprint, it may be worth taking a risk on them.

IBM

I'm putting the DS3524 in a separate entry to give you some contrast. This is a traditional SAN frame that has been populated with all SSD drives. With 112 200-Gigabyte drives and a total cost of 702,908.00 dollars, it comes in at just over 31.00 dollars a Gigabyte of raw storage. On the higher end, but still in the price range I generally try to stay in.

SUN/Oracle

I couldn't resist putting the Sun F5100 in the mix. At 3,099,000.00 dollars it is the most expensive array I found listed. It has 38.4 Terabytes of raw capacity, giving us an 80.00 dollars per Gigabyte price tag. Yikes!

Dell EqualLogic

Well before the 3Par deal fell apart, Dell gobbled up EqualLogic, a SAN manufacturer that focused on iSCSI solutions. This isn't a flash array; I wanted to add it as contrast to the rest of the list. I found a 5.4 Terabyte array with a 7.00 dollar per Gigabyte raw storage price tag. Not horrible, but still more expensive than some of our all-flash solutions.

Fusion-io

What list would be complete without the current king of the PCIe flash hill, Fusion-io? I found a retail price listing for their 640 Gigabyte Duo card at 19,000 dollars, giving us 29.00 dollars per usable Gigabyte. Stepping down to the next card, the 320 Gigabyte Duo at 7,495.00 dollars, ups the price to 32.20 dollars per usable Gigabyte. They are wicked fast, though 🙂

So Now What?

Armed with a bit of knowledge, you can go forth and convince your boss and storage team that a SAN array fully based on flash is totally doable from a cost perspective. It may mean taking a bit of a risk, but the rewards can be huge.


Fusion-io, Flash NAND All You Can Eat

Fusion-io has announced general availability of the new Octal. This card is the largest single flash-based device I've ever seen. The SLC version has 2.56 Terabytes of raw storage and the MLC version a whopping 5.12 Terabytes. This thing is a behemoth. The throughput numbers are also impressive: both versions read at 6.2 Gigabytes a second using a 64KB block, you know, the same size as an extent in SQL Server. They also put up impressive write numbers, with the SLC version doing 6 Gigabytes a second and the MLC clocking in at 4.4 Gigabytes a second.
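As a back-of-the-envelope check on that read number (treating the Gigabytes as binary GiB, which is my assumption, not Fusion-io's stated unit), 6.2 Gigabytes a second at a 64KB block size works out to roughly a hundred thousand transfers per second:

```python
# How many 64KB transfers per second is 6.2 Gigabytes a second?
# (Assumes binary units: 1 GB = 1024**3 bytes, 1 KB = 1024 bytes.)
bandwidth_bytes_per_sec = 6.2 * 1024**3  # 6.2 GB/s read throughput
block_bytes = 64 * 1024                  # one 64KB block, one SQL Server extent

transfers_per_sec = bandwidth_bytes_per_sec / block_bytes
print(f"{transfers_per_sec:,.0f} transfers/sec")  # ~101,581
```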

There is a market for these drives, but you really need to do your homework first. This is basically four ioDrive Duos or eight ioDrives on a single PCIe 2.0 x16 slot. It requires a lot of power, more than the PCIe slot can provide, so it needs additional power connectors: two 6-pin and one 8-pin. EDIT: According to John C., you only need to use either the two 6-pin connectors OR the single 8-pin. These are pretty standard on ATX power supplies in high-end desktop machines but very rarely available in your HP, Dell or IBM server, so check whether you have any extra power leads in your box first.

Also, remember that you have to have a certain amount of free memory for the ioDrive to work. They have done a lot of work in the latest driver to reduce the memory footprint, but it can still be significant. I would highly recommend setting the drive up to use a 4K page instead of a 512-byte page. Even then, you will still need a minimum of 284 megabytes of RAM per 80 gigabytes of storage. On the MLC Octal that comes to about 18 gigabytes of RAM you need to have available per card. To be honest, if you are slotting one of these bad boys into a server it won't be a little dual-processor pizza box. On the latest HP DL580 G7 you can have as much as 512 gigabytes of RAM, so carving off 18 gigabytes of that isn't such a huge deal.
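The arithmetic behind that 18 gigabyte figure is simple enough to sketch out from the rule of thumb above:

```python
# Driver RAM estimate from the rule of thumb above:
# 284 megabytes of host RAM per 80 gigabytes of flash (with 4K pages).
MB_PER_80GB = 284.0

def driver_ram_gb(flash_gb: float) -> float:
    """Host RAM in gigabytes needed for a card of the given raw capacity."""
    return flash_gb / 80.0 * MB_PER_80GB / 1024.0

# MLC Octal: 5.12 terabytes = 5,120 gigabytes of raw flash.
print(f"{driver_ram_gb(5120):.2f} GB of RAM")  # 17.75 GB, i.e. the ~18 GB above
```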

Lastly, you will actually see several drives on your system; each one will be a 640 gigabyte drive. If you want one monster drive you will have to stripe them at the OS level. The downside of that is losing TRIM support, which is a boon to the overall performance of the drive, but not a complete deal breaker. EDIT: John C. is correct; you don't lose TRIM when striping with the default Windows RAID stripe on Windows Server 2008 R2. I'm waiting for confirmation from Symantec on whether that is also the case with Veritas Storage Foundation, since that is what I am using to get software RAID 10 on my servers.

I don't have pricing information on these yet, but I'm guessing it's like a Ferrari: if you have to ask, you probably can't afford it.

Fusion-io, What It Takes To Be On The Cutting Edge


I recently had the privilege of talking with David Flynn, former CTO, founder and newly minted CEO of Fusion-io, about how Fusion-io was born, what they have built and the future of the company. Fusion-io is a newcomer to the enterprise storage space and has come out of the gates in a flash. In the last two years they have shown up with some impressive hardware, managed to draw Steve Wozniak into the fold and shown some explosive growth, touting IBM, Dell and HP as adopters of the ioDrive.

Fusion-io is in its fourth year now, employing around 250 people. The first two years were spent in design-and-build mode. In their first year of revenue Fusion-io did well into the double-digit millions, and they recently closed out their second year of sales at over 500% growth.

Wes Brown – “How did Fusion-io and the ioDrive come about?”

David Flynn – "The product is something that came out of a hybrid of my work building large-scale, high-performance computing systems; at one point we had three of the fastest computers in the world, based on Linux commodity clustering. During that time, this was the early 2000s, I recognized that memory was the single most expensive part of these supercomputers. It was around that same time that DRAM density growth stalled, missed a whole cycle, and has been growing at a much slower rate since then. Memory kind of reached a power density limit; you can lithograph a smaller transistor, but you can't cool them. Memory reached a capacity density barrier due to the thermal limitations. Next, I went to another company and met Rick White, co-founder of Fusion-io. We went and built a tiny security device that ran Linux on a tiny CPU. The curious thing about this device was that we were using a new kind of memory for the storage: NAND flash. It was the darndest thing that this little CPU and system running Linux actually felt faster in many ways than these big supercomputers. It boiled down to the storage being on NAND flash. The idea for Fusion-io came out of that combination, and a realization that NAND flash as a new type of memory could offset memory and solve the problem of RAM density growth. So, while everybody else is thinking of NAND flash as a way of building faster disk drives, we said let's integrate NAND flash where it's so fast it can offset the need for putting in large-capacity memory; so, not a faster disk drive but a higher density, higher capacity memory device."

WB – “Why did you and Rick wait so long to bring these ideas to market?”

DF – “In 2006 Fusion-io was born. It wasn’t possible until that time frame. DRAM was the density king and the price king. You could get higher performance and capacity than you could from NAND flash before then.”

WB – “You have had several rounds of venture capital funding, is Fusion-io planning on another round or is the cash and sales pipeline good enough?”

DF – “We don’t expect to have to raise another round of financing.”

David and I talked about the role of CEO at Fusion-io and the previous people to hold that post. I was curious why a co-founder and very technical guy would assume the mantle of CEO at this point.

Don Basile, the first CEO at Fusion-io, led them through their Series A and B funding rounds and went on to become CEO at Violin Memory Systems. This left a vacuum, and David Bradford was promoted from within to fill the role, bringing in Steve Wozniak as Chief Scientist; Bradford also oversaw the phenomenal growth during this last year. Flynn was recommended by Bradford after a stint as CTO in which he managed quite a bit of the day-to-day operations at the company. David went on to say that Marc Andreessen, now an investor through Andreessen Horowitz, was one of the tipping points that led him to the CEO chair. David pointed out that part of Marc's investment model is backing founder-CEOs: he believes they have the moral authority, know where all the moving parts are, and are generally very good in that role.

We then talked about what was coming down the product pipeline from Fusion-io.

WB – “Last year double density was promised but delayed, what was the hold up in expanding the product line beyond the ioDrive Duo?”

DF – “It would have to be limited resources in the company; we were just overwhelmed with growing the company. We are at 250+ people today, this time last year we were at 70 people. We have made a large investment engaging OEM’s like IBM and HP and partners like Dell. ”

WB -“So, how did Fusion-io get these major OEM’s to include Fusion-io in their server line?”

DF – "This is a good way to put it: performance was the way to get people's attention, and capacity is a good thing. But what seals the deal and makes it an enterprise product isn't the performance or the capacity, it is the reliability of the product. That it doesn't corrupt your data, doesn't fail and lose the data, and doesn't wear out too quickly. That is what allowed us to win the major OEM relationships."

WB -“Fusion-io did a big test with the Octal at the end of last year, is this something that will see the light of day as a product?”

DF – "The ioDrive Octal is set to go into general production and availability soon. Last year we announced it as a science project because it was custom built for some specific applications, but we have decided to productize it. It will have five Terabytes of capacity, one million IOPS and the equivalent bandwidth of sixteen FC4 ports."

There is no pricing available on the ioDrive Octal, the new high-density ioDrive or the ioDrive Duo yet. There are servers on the market rated to handle up to four cards in a single server. If you need capacity and speed, I can't imagine a better way to get it.

WB – “Is Fusion-io planning to go public?”

DF – "We've been building the company to be a self-standing company. We believe in our go-to-market strategy: a direct enterprise sales force alongside OEMs; we do direct sales but fulfill through OEMs."

DF – "We view ourselves, just to give you the simplest way to describe what Fusion-io is, as being to flash chips what EMC is to disk drives. We aggregate flash chips to build infrastructure usable and valuable to enterprise customers, and because they are flash chips it allows us to miniaturize it and go inside the box instead of in a whole rack of boxes. We are building a new subsystem: not a memory subsystem in the traditional sense and not a storage subsystem, but a fusion of the two. It is deployed through an OEM strategy because it does have to be in the box to offer the best density metrics. At the end of the day our value is to take the cheapest flash chips and make them into the highest-value infrastructure for folks to build on. That's not just performance or capacity density, it's also the reliability and manageability of it."

WB – “With that said, is Fusion-io planning an IPO or not?”

– laughter from David and me –

DF -“We are here to build a successful company and won’t speculate about an IPO at this time.”

In the second part of the interview, David gets down deep and technical about the ioDrive: what it is and isn't, and how the magic is made.