We have covered the Hard Disk and the System Bus. This time around we will cover disk controllers and host bus adapters.
In The Beginning…
There were three distinct components to your IO subsystem: the disk, the controller, and the host bus adapter. Today there are still three distinct components, but the arrangement has changed. The physical disk we have covered and you know about. What you may not realize is that the disk controller is actually the circuit board on the back of the hard drive. In the past this board may have been an add-in card, a backplane that the drives plugged into, or even an add-in card with the hard disk mounted on it! It took a little time for the configuration we take for granted today to settle out. Once the form factor for the hard drive and the controller was settled, there was still the issue of what a host bus adapter was supposed to do. Some of you may remember the old days of MFM, RLL, and proprietary disk layouts: having to do a low-level format, setting the interleave, even having to park the drive when you were done with the computer. Those days are long gone. Now, low-level formatting is done at the factory, there is no need for interleaving, and all drives auto-park. Whoa, what a time warp. All of these things were eliminated mostly due to advancements in disk controllers and host bus adapters.
The Disk Controller
The card that slots into your system and is connected via cable to your hard drive isn’t the disk controller. The disk controller resides on the hard drive and handles all the low-level operations. From spinning the platters and moving the heads to transferring the data, the disk controller does most of the heavy lifting. Once the data has been read, it finally makes its way down the wire to the host bus adapter.
Interface Protocols
There have been several data encoding and signaling schemes over the years. We have touched on MFM and RLL as the first widespread standards used early on. The two standards that have stood the test of time are IDE/ATA and SCSI. These standards can also be implemented on top of network protocols like IP and Fibre Channel. There are ATA implementations on FC and IP, but neither is as popular as SCSI. Fibre Channel is pretty much the domain of Storage Area Networks (SANs), which we will cover in a future article.
A breakdown of speeds.
Bus Type | Speed (MB/sec)
ATA/133 | 133
SATA 150 | 150
SATA/SAS 300 | 300
SATA/SAS 600 | 600
SCSI U160 | 160
SCSI U320 | 320

Alternate SCSI/ATA implementations

Bus Type | Speed (MB/sec)
Fibre Channel 1GFC | 106
Fibre Channel 2GFC | 212
Fibre Channel 4GFC | 425
Fibre Channel 8GFC | 850
iSCSI Gigabit Ethernet | 125
iSCSI 10 Gigabit Ethernet | 1250
A modern spinning disk would have a hard time using even ATA/133’s available bandwidth all by itself. The older parallel ATA (PATA) and SCSI standards are giving way to their newer serial counterparts, SATA and SAS. The previous generation had several marked differences between them. ATA could only have two drives per channel while SCSI could have up to 15. ATA was unidirectional, only able to read or write to the drive at any given moment, while SCSI was bidirectional. These differences have carried over to the new standards as well.
If you have a SAS HBA, it will accept both SAS and SATA drives. Another great feature is the reliability of the connectors. Both ATA and SCSI relied on large ribbon cables and, in the case of SCSI, termination at the end of the cable chain. I have been kept up at night troubleshooting faulty SCSI cabling, working down the chain trying to figure out which drive was causing the problem or whether it was a termination issue. The new cables are much smaller and are all point to point; no daisy chaining or termination issues to worry about. The last boon is the addition of expanders for SAS, or port multipliers for SATA-only arrays. The old SCSI standard with 15 drives in a single chain was limiting, and 15 drives could easily saturate a single U320 channel. The biggest SCSI RAID HBAs usually shipped with 4 channels; in contrast, the new SAS HBAs may have 4 times that many. With SAS expanders you can aggregate SAS channels and have more drives in a single chain. Under the SAS 300 standard, roughly 4 drives will saturate a single channel, so hanging a 4-drive expander off one channel makes the most of the available bandwidth. You can also have up to 128 drives on an edge expander and up to an astounding 16,384 SAS devices in a single SAS domain. This gives you a lot of flexibility when it comes to configuring your storage and utilizing the bandwidth available.
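To put some numbers behind the expander idea, here is a quick back-of-the-envelope sketch in Python. The 80 MB/sec per-drive figure is an assumption for illustration, not a measurement.

```python
# Back-of-the-envelope math: how many streaming drives does it take to
# saturate a single SAS 300 channel? The per-drive figure is an assumption.
SAS_300_CHANNEL_MBPS = 300   # one SAS 300 channel, MB/sec
DRIVE_SEQ_MBPS = 80          # assumed sustained throughput of one spinning disk

print(SAS_300_CHANNEL_MBPS / DRIVE_SEQ_MBPS)   # ~3.75 drives to fill the channel

# Hang a 4-drive expander off that one channel: the channel, not the drives,
# becomes the ceiling.
offered = 4 * DRIVE_SEQ_MBPS                   # 320 MB/sec offered by the drives
usable = min(offered, SAS_300_CHANNEL_MBPS)    # 300 MB/sec actually delivered
print(offered, usable)
```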
As you plan your configuration you must be mindful of how many channels you have, what kind of bus the HBA uses, and how much bandwidth is available through the entire stack. For example, if you have a PCIe RAID controller with 28 SAS 300 ports, that is a theoretical 8.4 gigabytes per second of available bandwidth. If each drive can deliver 80 megabytes per second and you use one drive per port with no expanders, that works out to about 2.2 gigabytes per second. If the HBA isn’t plugged into at least a PCIe 2.0 x8 slot or a PCIe 1.0 x16 slot, you aren’t going to get that 2.2 GB/sec of throughput. You should still get the IOs, but sustained throughput will be limited. Just because an HBA says it can support 108 drives doesn’t mean you will get all the throughput of those drives. You may have an HBA that only supports PCIe 1.0 and only has 4 lanes, for a total of 1 GB/sec of throughput to the system. Again, you get the IO increase, and for SQL Server sometimes that is exactly what you are after.
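Here is the same arithmetic for the whole stack, using the example figures above (28 ports, an assumed 80 MB/sec per drive) and the commonly quoted theoretical numbers of 250 MB/sec per PCIe 1.0 lane and 500 MB/sec per PCIe 2.0 lane.

```python
# Whole-stack estimate: the slowest link in the chain sets your sustained
# throughput. Figures mirror the example above and are theoretical maximums.
PORTS = 28
SAS_300_MBPS = 300        # per port
DRIVE_MBPS = 80           # assumed sustained MB/sec per spinning disk
PCIE1_LANE_MBPS = 250     # PCIe 1.0, per lane
PCIE2_LANE_MBPS = 500     # PCIe 2.0, per lane

def stack_throughput(drives_per_port, lane_mbps, lanes):
    hba_ceiling = PORTS * SAS_300_MBPS                    # 8,400 MB/sec
    drives_offered = PORTS * drives_per_port * DRIVE_MBPS
    pcie_ceiling = lane_mbps * lanes
    return min(hba_ceiling, drives_offered, pcie_ceiling)

# One drive per port behind a PCIe 2.0 x8 slot: the drives are the limit.
print(stack_throughput(1, PCIE2_LANE_MBPS, 8))   # 2240 MB/sec
# Same drives behind a PCIe 1.0 x4 HBA: the slot caps you at 1,000 MB/sec.
print(stack_throughput(1, PCIE1_LANE_MBPS, 4))   # 1000 MB/sec
```

Either way the math is the same: take the minimum of what the drives can offer, what the HBA channels can carry, and what the PCIe slot can move.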
Host Bus Adapter
This is what most people think of as the disk controller or controller card. In its simplest form it transfers data between the system board and the hard disk controller. Of course there are other things that can happen on the HBA. It can have integrated RAID functions, additional caching, or other features that are not appropriate to implement at the disk controller level. There are several types of HBAs, from the one built into your computer’s motherboard to high-end SAS RAID controllers.
Cache, Disk Controllers, and HBA’s
Almost all enterprise-class HBAs have caching as an option or built into the card. This is of particular interest to us and to SQL Server. Your data will be safe; SQL Server guarantees this over all else. In my post on capturing IO patterns I discuss why and how SQL Server does this and the concept of stable media. SQL Server assumes that it is talking to a single physical disk and opens the data files in such a way that write caching isn’t used even if it is available. SAS and SCSI drives normally honor this request. But one of the options more advanced HBAs offer is the ability to use the cache and still have stable media. This is usually accomplished through a battery backup unit mounted on the card that keeps the cache memory alive during a system failure. Some controllers will gladly let you shoot yourself in the foot by letting you turn the write cache on without a battery and enable the local write cache on the disk drive as well. In that situation, if you have a sudden power failure, data loss is going to happen if there are any writes in flight at the time. Currently, there isn’t a disk drive on the market with a battery-backed cache that I know of. There is a new possibility of using fast NAND flash instead of DRAM to act as the cache on drives and HBAs. Since NAND is non-volatile it doesn’t need constant power. To make up for NAND being slower than a DRAM cache, they are making the caches two or more times the size.
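To illustrate what “opening a file so write caching isn’t used” looks like, here is a minimal Python sketch for a POSIX system using the standard O_DSYNC flag. It shows the write-through concept only; it is not how SQL Server itself opens its data files (on Windows it uses the equivalent write-through flags), and the file name is made up.

```python
import os

# Minimal sketch (POSIX): open a file so each write is pushed to stable
# media before write() returns. Illustrates the write-through idea above;
# not SQL Server's actual code. The path is hypothetical.
path = "example.dat"

# O_DSYNC is not exposed on every platform, so fall back to O_SYNC.
sync_flag = getattr(os, "O_DSYNC", os.O_SYNC)
fd = os.open(path, os.O_WRONLY | os.O_CREAT | sync_flag, 0o644)
try:
    # With the sync flag set, this call does not return until the data has
    # reached stable media (or a cache the controller reports as stable,
    # such as a battery- or flash-backed HBA cache).
    os.write(fd, b"durable bytes")
finally:
    os.close(fd)
```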
Just in case you haven’t had a chance to peek into your servers, here is an assortment of HBAs from yesterday and today.
Until Next Time
I hope you know a little more about HBAs now and have a better understanding of what they are and what they do.