
Pliant Technology, Enterprise Flash Drives For Your SQL Server: Part 2

Adding In Others For Contrast

In our first part we introduced Pliant and the LS 300 drive. In part 2 we get down to the details. To give a better idea where you stand with the setup described last time, I’m throwing in two other storage setups for contrast. The first is a RAID 10 array made up of twelve 500GB 7200 RPM drives attached via SATA II controllers; in a RAID 0 configuration I was able to get 800MB/sec in sequential throughput, so it isn’t horrible, just not “enterprise” worthy. The second is a Patriot Torqx 128GB based on the Indilinx Bigfoot SSD controller, not the greatest SSD on the consumer market, but Indilinx was the king of the previous generation. I will be using the LSI controller just like I did for the Pliant LS 300.

Patriot Torqx Specifications:
Available in 64GB, 128GB and 256GB capacities
Interface: SATA I/II
RAID Support: 0, 1, 0+1
256GB and 128GB: Sequential Read: up to 260MB/s, Sequential Write: up to 180MB/s
MTBF: >2,500,000 Hours
Data Retention: 5 years at 25°C
Data Reliability: Built in BCH 8, 12 and 16-bit ECC
10 Year Warranty

RAID support? I’m not sure what they are saying here other than don’t put this drive in a RAID 5 or RAID 6 setup at all. Mean time between failures (MTBF) is a pretty useless number; I would rather have seen a maximum write life or writes-per-day metric. It has built-in ECC error checking, which doesn’t surprise me at all since this is an MLC based drive. A 10 year warranty, yep, 10 YEARS! This was one of the reasons I bought this drive, and I’m glad I did: it has already been replaced once.

The Setup

Since we are just testing storage systems I’m not as concerned with the host machine; it is more than up to the task of generating IOs. I used Iometer 2008.06.18-RC2 for testing along with my trusty Iometer SQL Server IO Patterns File. After the test runs I used my other tool, the Iometer output parser and importer, to process the results and import them into a SQL Server table (a rough sketch of querying those imported results follows the pattern descriptions below). The tests consisted of two different patterns. These two patterns are close to what I’ve seen in the real world and are loosely based on the Intel database test pattern. I run these tests at different queue depths with a single worker.
OLTP Heavy Read:
A mix of 8KB and 64KB size requests, with 90% of them being read requests and 10% being write requests. This test is 100% random access.

OLTP Moderate Read:
A mix of 8KB and 64KB size requests, with 65% of them being read requests and 35% being write requests. This test is 100% random access.
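As a rough illustration of how the imported results can be sliced, here is a minimal sketch; the table and column names (dbo.IometerResults and friends) are hypothetical stand-ins for whatever the importer actually creates, not the real schema.

-- Minimal sketch, assuming the importer loaded results into a hypothetical
-- table dbo.IometerResults; names here are illustrative, not the real schema.
SELECT  TestName,                       -- e.g. 'OLTP Heavy Read'
        QueueDepth,
        AVG(IOsPerSec)     AS avg_iops,
        AVG(MBPerSec)      AS avg_mb_per_sec,
        AVG(AvgResponseMs) AS avg_response_ms
FROM    dbo.IometerResults
GROUP BY TestName, QueueDepth
ORDER BY TestName, QueueDepth;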

Lots And Lots of Graphs

This first set is OLTP Heavy Read at a queue depth of 1. Average Response Time is in milliseconds (ms).

It is interesting to see the Torqx drive actually performing better than the Pliant drive. Since this is an extremely light, mostly read workload, we can assume the Torqx is tuned more towards that kind of load. The hard disks put in a respectable showing, for hard disks.

OLTP Heavy Read at a queue depth of 4. Average Response Time is in milliseconds (ms).

As soon as we put any kind of load on them, the Pliant drive just walks away from the other two drives. The Torqx is still five times faster than the RAID 10 setup.

OLTP Heavy Read at a queue depth of 8. Average Response Time is in milliseconds (ms).

Again, as the workload ramps up the Pliant really ends up in a category all its own. We are still in a decent zone for the RAID setup, but the single Torqx drive is still four to five times faster.

OLTP Heavy Read at a queue depth of 32. Average Response Time is in milliseconds (ms).

Now we are pushing past the bounds of the SATA based Torqx and the SATA based RAID setup. The Pliant drive just keeps getting faster, jumping from 13,000 IO/sec to 22,000 IO/sec. Response times are still very impressive as well.

OLTP Heavy Read at a queue depth of 128. Average Response Time is in milliseconds (ms).

This is what we would call a “worst case scenario” for the RAID setup. With only 12 drives and 128 outstanding IOs we are at a queue length of more than 10 per drive, and the response times show it, with the average being 110ms. Even the Torqx drive can’t shed the IO load at this point, while the Pliant drive pushes past 26,000 IO/sec and inches up on 500MB/sec as well. That last statement is accurate: since this is a dual-port drive, even though it is a SAS 300 drive it is able to use both ports for reads and writes. I did run the test up to 256 outstanding IOs, but the Pliant drive had capped out and was starting to add some response time. The RAID array and the Torqx drive were getting so slow that the Pliant drive was hard to see on the average response time graph.

This second set is OLTP Moderate Read at a queue depth of 1. Average Response Time is in milliseconds (ms).

This workload is much more write intensive and the Pliant LS 300 jumps out in front very quickly. Even at a queue depth of 1 it is shaming the Torqx on write performance. The RAID array is performing pretty well, with lower than expected response times.

OLTP Moderate Read at a queue depth of 4. Average Response Time is in milliseconds (ms).

Quickly the Pliant drive starts to walk away with this contest. It clearly has much more capacity for write workloads than the Torqx or RAID array.

OLTP Moderate Read at a queue depth of 8. Average Response Time is in milliseconds (ms).

Here we are again at the end of the road for the RAID array. The Torqx drive is holding on, but response times are getting long; it is only managing a twofold increase in performance over the RAID array.

OLTP Moderate Read at a queue depth of 32. Average Response Time is in milliseconds (ms).

Now things are just embarrassing for the RAID array and the Torqx drive; both show that write-heavy workloads aren’t the best fit for them. Again, the Pliant drive is starting to post response times in the millisecond range, but at 320MB/sec and 18,000 IO/sec I would have to call that a fair trade.

OLTP Moderate Read at a queue depth of 128. Average Response Time is in milliseconds (ms).

At last we have hit a wall with the RAID array and the Torqx drive. With the Torqx drive posting numbers that are less than two times the RAID array, it is starting to show its real weaknesses. The Pliant drive, however, is pulling a solid 22,000 IO/sec and creeping up on 430MB/sec of throughput. All of this from a single SAS 3.5″ drive.

Final Thoughts

I’ve had the Pliant LS 300 in my lab for quite a while now, along with the Patriot Torqx and this particular RAID array setup. All three have been running hard during the last three months. The Pliant drive did show some signs of slowing down as it settled into the workloads. The RAID array lost three drives total and, as I stated earlier, the first Torqx drive I had gave up the ghost in the first month. I’ve said it before, and I will say it again: if you need an enterprise drive then buy an enterprise drive! Don’t get a drive that has a SATA interface and is dressed up like it is ready for the big show. I can say without a doubt that the Pliant LS 300 is one of the finest solid state disks I’ve ever worked with.

SQLDIY: Statistics Information Including Last Update

One of the most important aspects of keeping your system running well is updating and managing your statistics. Since the optimizer relies on statistics to determine the best execution plan, it is imperative that you keep an eye on them and maintain them just like you would indexes. One of the disappointing things in SQL Server 2005/2008 is the deprecation of some of the functionality of the sysindexes system view, in particular the rowmodctr column, since there is no equivalent in any of the new sys schema DMVs. With that said, the GatherStatisticsInformation stored procedure listed here is missing that bit of information. What I do include is the last time the statistics were updated for a particular index. It is a rare thing to see information from SQL Server take a step backwards. Hopefully they will correct this mistake and add the modified row count back into a DMV that is accessible.
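For a quick look at statistics age without running the full procedure, STATS_DATE can be queried directly; this is just a minimal sketch of the idea, not the GatherStatisticsInformation script itself.

-- Minimal sketch: last update date for every statistic in the current database.
SELECT  sch.name AS schema_name,
        o.name   AS table_name,
        s.name   AS stats_name,
        STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM    sys.stats   AS s
JOIN    sys.objects AS o   ON o.object_id   = s.object_id
JOIN    sys.schemas AS sch ON sch.schema_id = o.schema_id
WHERE   o.is_ms_shipped = 0
ORDER BY last_updated;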

Series to Date SQLDIY: Manage and Monitor SQL Server Yourself

Link to the script Gather Information On Statistics

I’m hosting all my scripts from this series on GitHub as I do with all my open source projects.

As always, if you find any bugs please let me know and I will correct them!

Your Homework:

Statistics Used by the Query Optimizer in Microsoft SQL Server 2008

SQL Server 2008 Statistics Jose Barreto (blog|twitter)

Lies, Damned Lies and Statistics Elisabeth Redei

How Stale are my Statistics? Kendra Little (blog|twitter)

And everything on SQL Skills

SQLDIY: Gathering A Data Dictionary

In this installment we are looking at assembling a basic data dictionary from the column level metadata stored in SQL Server. This one is a little different from the rest. You can still do quite a bit with the information provided, but to leverage it to the fullest requires a bit of developer buy-in, or a bit more work for whoever is your schema designer. Extended properties have been available to you since the stone ages of SQL Server. Two fine fellows, Mark Chaffin and Brian Knight (blog|twitter), expound on the power of extended properties in their article on Managing Metadata in SQL Server 2005. It is well worth the read. Using a couple of stored procedures you can store some very rich metadata about every object in your database.

Here are some examples of how to add descriptions to your objects.

-- Adding a description to a table
EXEC sp_addextendedproperty
    @name = N'MS_Description',
    @value = N'New Description for the Server table',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Server';
GO

-- Adding a description to a column
EXEC sp_addextendedproperty
    @name = N'MS_Description',
    @value = N'Login is trusted',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Server',
    @level2type = N'COLUMN', @level2name = N'Trusted';
GO

-- Updating that column description
EXEC sp_updateextendedproperty
    @name = N'MS_Description',
    @value = N'Login is windows authenticated',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Server',
    @level2type = N'COLUMN', @level2name = N'Trusted';
GO

Pretty simple, yet I’ve always received push back from both developers and database administrators. To me knowledge is power, and building the documentation directly into the database metadata is an extremely powerful tool to have handy. At any point I can query sys.extended_properties on an object and find its usage, without having to guess from the object name what it may or may not actually represent.

A second use case for having the metadata and a history of changes is pretty straightforward: checking for schema changes. If you don’t want to implement DDL triggers or use a third party tool to watch for schema changes, this is another handy tool in your tool belt. I’ve used this data to track the schema deployed across servers to make sure they are always in sync, and to send an alert if they are not.
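As a simple illustration of that sync check, if the gathered column metadata from two servers lands in two tables of the same shape, EXCEPT will show the drift; the snapshot table names below are hypothetical, not part of the series scripts.

-- Minimal sketch: rows returned exist on server A but not on server B.
-- dbo.ServerA_Columns and dbo.ServerB_Columns are hypothetical snapshot tables.
SELECT schema_name, table_name, column_name, data_type, max_length
FROM   dbo.ServerA_Columns
EXCEPT
SELECT schema_name, table_name, column_name, data_type, max_length
FROM   dbo.ServerB_Columns;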

A couple of quick notes for you: when you are querying sys.extended_properties, minor_id is the column_id if you are looking at a class of 1. See sys.extended_properties on TechNet.
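For example, a query along these lines (a minimal sketch, not the series script) pulls back the MS_Description values for tables and their columns.

-- Minimal sketch: MS_Description properties for objects and columns.
-- class = 1 covers objects and columns; for a column minor_id is the column_id,
-- for the table itself minor_id is 0, hence the LEFT JOIN to sys.columns.
SELECT  o.name   AS object_name,
        c.name   AS column_name,   -- NULL for the table-level description
        ep.value AS description
FROM    sys.extended_properties AS ep
JOIN    sys.objects AS o
        ON o.object_id = ep.major_id
LEFT JOIN sys.columns AS c
        ON c.object_id = ep.major_id
       AND c.column_id = ep.minor_id
WHERE   ep.class = 1
  AND   ep.name  = N'MS_Description';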

This one is a two-in-one, enjoy!

Series to Date SQLDIY: Manage and Monitor SQL Server Yourself

Link to the script Gather Table Level Metadata

Link to the script Gather Database Level Metadata

I’m hosting all my scripts from this series on GitHub as I do with all my open source projects.

As always, if you find any bugs please let me know and I will correct them!

The Austin Fall Classic, SQLSaturday is Coming To Austin, TX

Come Get Your Learn On!

You heard right, on October 1st 2011 you need to be in Austin, TX. We have finalized our location and spot on the SQLSaturday list. After a rough start to the year I’ve managed to get my core team together and in action, settle my personal life down a bit, and get this train back on the tracks. The call for speakers is open; this is your chance to start your speaking career. If you wish to volunteer for any part of the event please send an email to Michael.Kaplan@sqlposse.com. If you want to sponsor this event, take a look at the sponsor page and let us know, or email me directly at wes.brown@sqlposse.com. For more information please go to the SQLSaturday #97 page.

Expanding In Austin

To serve the needs of the SQL Server community in Austin, we are starting a second meeting time and place. We are talking with Experis, a leading technology recruiting firm, to use their offices at 301 Congress Ave. Response to the second meeting space and time has been very positive and I look forward to watching this monthly event grow. As soon as we are confirmed I will send out the inaugural meeting notification.

Watch out! Here Comes the POSSE

The SQL POSSE that is. To support our efforts to grow the SQL Server community in central Texas and the surrounding area I am proud to announce the formation of the Professional Organization of SQL Server Educators. Our purpose is to support local chapters with things like SQLSaturday or just helping out at the local user group level. Currently, CACTUSS and SALSSA are affiliated with POSSE and in the coming weeks I will be reaching out to other user groups to see if they are interested in pooling some resources. POSSE doesn’t replace your user group, it is a support facility to help get things done. POSSE is currently filing for federal non-profit status and state non-profit status in Texas. We should be fully recognized around the time of our SQLSaturday event. Once we have gone through this I will post full details on what we did, how we did it and what it cost to get it done. After that I’ll be more than happy to schedule a conference call with chapter leaders to discuss the details and answer questions. I’ve asked for years for anyone who has done this to share and got back cricket noises so I figured it was time to put up or shut up.

It’s been a busy summer and it will be a busy fall too!

SQLDIY: Tracking Space Usage At the Index Level

I am a fan of information, the more the merrier. Anytime I’m tasked with tracking a production server over the long haul I try to gather as many data points as I can. You can capture database sizes, but when your database blows up in size it’s nice to be able to drill down and find out whether it was a new index or correlates with a code change at a point in time. In this script I’m falling back on sp_spaceused to gather that information. I could query several DMVs to get the same or more information. I could also put in DDL triggers for tracking things like schema changes, if I need to-the-minute detail and alerting to go along with it. This particular view is more of a “close enough” look at your database in some detail. As we get into more detailed index tracking you will see more specific numbers on row counts, index size and other index specific details. You may see a pattern emerging here: my goal is to gather data in a timely and consistent fashion. We will also gather more detailed metadata on schema later as well.

Series to Date SQLDIY: Manage and Monitor SQL Server Yourself

Link to the script Gather Table Statistics

I’m hosting all my scripts from this series on GitHub as I do with all my open source projects.

As always, if you find any bugs please let me know and I will correct them!

UPDATE:

Aaron Bertrand suggested that I take a look at sys.dm_db_partition_stats instead of sp_spaceused, and as usual he was spot on. By moving to sys.dm_db_partition_stats I was able to cut the table loop out completely and speed up performance quite a bit. The new version is up.
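For reference, the DMV approach looks something like the sketch below; this is just an illustration of rolling partition-level counts up to one row per index, not the actual Gather Table Statistics script.

-- Minimal sketch: space and row counts per index from sys.dm_db_partition_stats.
SELECT  OBJECT_NAME(ps.object_id)       AS table_name,
        i.name                          AS index_name,
        SUM(ps.row_count)               AS row_count,
        SUM(ps.reserved_page_count) * 8 AS reserved_kb,
        SUM(ps.used_page_count) * 8     AS used_kb
FROM    sys.dm_db_partition_stats AS ps
JOIN    sys.indexes AS i
        ON i.object_id = ps.object_id
       AND i.index_id  = ps.index_id
GROUP BY ps.object_id, i.name
ORDER BY reserved_kb DESC;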