
DB2 LUW Performance: I/O Optimization and Solid State Disk (SSD)

February 2, 2010, 2:28 pm
Posted by Scott in DB2 Performance Best Practices
IBM Data Management Magazine Q1 2010 arrived today and I was excited to find an article on "Unleashing the value of Solid State Drives." Of course, I was thrilled to see DBI's new ad as well (grin). The article puts forth several good ideas about I/O tuning and begs for greater detail. Whether you have SSD or not, the principal ideas of the article still apply to "faster storage" versus "slower storage". Let's take a deeper look...
First, let's be clear: using SSD with DB2 does not mean that you will no longer have to tune your database. As organizations adopt SSD, we will see a shift of databases becoming more CPU-bound instead of I/O-bound - and we already work with plenty of customers that have CPU-bound databases due to large bufferpools masking physical design flaws. (Read more about bufferpool hit ratios and folly.) CPU saturation will cause some organizations to waste cash on unnecessary hardware upgrades, and pinched bottom lines will send more DBA jobs to the cheapest labor markets... but I digress.

The author, Sunil Kamath, says that DBAs and application developers "spend tremendous amounts of time and money designing and optimizing applications for efficient I/O." I'm not sure I agree, though I would agree that the effort should be made. DB2 Automatic Storage and SAN storage systems have made it more challenging, if not impossible, to pursue I/O fine-tuning. We frequently find that DBAs are "given" one or two large file systems by the SAN administrator and told "not to worry about performance" - and any hope of separating I/O for indexes, tablespaces, hot data, cold data, cold beer (just checking to make sure you are reading), tempspace(s), and recovery log files is lost. Well, maybe SSD drives will renew inspiration to do real I/O tuning.

I find the statistics in Figure 1 fascinating - they show how much faster SSD drives are than 15K rpm HDD drives. I think I need to get an SSD drive for my laptop. I know a number of customers and clients that could put their entire database on a 200GB SSD drive, but there are plenty of larger databases out there as well. For your convenience, here is a recap of Figure 1:

Approximate metrics as seen from the application:

Metric                     15,000 rpm HDD      Solid State Disk (SSD)
IOPS                       150-300             1,000-20,000
Response Time (read)       5-7 ms              1-3 ms
I/O Bandwidth              30-60 MB/sec        200-700 MB/sec
IOPS/GB                    1-3                 20-100
Power (watts)              4-6                 5-8 (+25%)

For the sake of argument, let's suppose your organization is able to procure a limited quantity of SSD drive space - but not enough storage to house the entire database. Or, if SSD isn't in your near-term future, let's hope that you at least have access to two tiers of HDD storage - faster HDD (call it FHDD) and slower HDD (SHDD). Which database objects (tablespaces and indexspaces) should you place on the SSD or FHDD, and which objects should be left to suffer the fate of SHDD?

Figure 2 (not shown here - refer to the article) shows how identifying high-use data can help maximize the performance of SSDs (or FHDDs). Most interestingly, the author indicates that an 8X performance improvement was achieved by placing only 30% of the database data on SSD. If the remaining 70% of database data (and indexes) is placed onto SSD as well, the transaction throughput improvement grows to 12X. So, clearly the "art of the game" is to figure out which 30% of the database to put on SSD to achieve the best performance at the lowest cost.

Using range partitioning to isolate current "hot" data in tablespaces separate from older "cooler" data makes intuitive sense, but not all data lends itself to range partitioning. (Note: the author references a stored procedure for moving table data from one tablespace to another - SYSPROC.ADMIN_MOVE_TABLE - but be aware that this stored procedure isn't available until you reach DB2 9.7.)
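If your data does lend itself to ranges, the idea looks something like the minimal, hypothetical sketch below. The table, column, partition, and tablespace names (ts_sales_hot, ts_sales_cold) are made up purely for illustration - the point is that the current range lands in a tablespace whose containers live on SSD or FHDD, while older ranges live on slower storage:

    -- Hypothetical example only: names, ranges, and tablespaces are illustrative.
    CREATE TABLE sales (
        sale_id    BIGINT        NOT NULL,
        sale_date  DATE          NOT NULL,
        amount     DECIMAL(12,2)
    )
    PARTITION BY RANGE (sale_date)
    (
        PARTITION p2009 STARTING ('2009-01-01') ENDING ('2009-12-31') IN ts_sales_cold,  -- older, cooler data on SHDD
        PARTITION p2010 STARTING ('2010-01-01') ENDING ('2010-12-31') IN ts_sales_hot    -- current, hot data on SSD/FHDD
    );

Older partitions can later be detached or moved to cheaper storage without disturbing the current, hot range.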

Here is a very important point that I will repeat just in case you skimmed this part of the article:

SSDs are best suited for random I/Os, where they can improve performance over 15K rpm HDDs by more than 100X (measured in IOPS), compared to 10X improvements for sequential I/O over HDDs.

So, here we are again, talking about the importance of differentiating random I/O from sequential I/O. For the longest time, with respect to OLTP databases, it has always been - and remains - a critically important best practice to separate random I/O from sequential I/O in buffer pools to achieve memory optimization and the best possible performance. Random I/O gets along with Sequential Prefetch I/O about as well as Democrats get along with Republicans in Washington DC, or about as well as smokers get along with non-smokers, or about as well as pro-life advocates get along with ... again, I digress.
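In DB2 terms, that separation usually means giving randomly read objects and sequentially scanned objects their own buffer pools (and tablespaces). Here is a minimal sketch with made-up names (bp_random, bp_sequential, ts_orders_ix, ts_history) and sizes chosen purely for illustration - not a recommendation for your workload:

    -- Sketch only: sizes, page sizes, and object names are assumptions.
    -- A tablespace can only be assigned to a buffer pool with a matching page size.
    CREATE BUFFERPOOL bp_random     SIZE 200000 PAGESIZE 8K;   -- serves mostly synchronous (random) reads
    CREATE BUFFERPOOL bp_sequential SIZE 50000  PAGESIZE 8K;   -- absorbs prefetch (scan) traffic

    ALTER TABLESPACE ts_orders_ix BUFFERPOOL bp_random;        -- randomly read index tablespace
    ALTER TABLESPACE ts_history   BUFFERPOOL bp_sequential;    -- mostly scanned data tablespace

This way, a large scan flooding bp_sequential cannot wash frequently re-read random pages out of bp_random.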

Cold, warm, and hot: identifying in-demand data

I feel vindicated. I'm excited. For years I've been blogging and talking about Tablespace Metrics to measure I/O performance characteristics, and finally - at last - it seems someone at IBM was listening. Thank you, Sunil, for acknowledging and documenting KPIs that I've been preaching about for the last ten years and incorporating into DBI Performance Management Solutions for DB2 LUW.

But, of course, in the usual IBM way, I duly note that you made up some new names for the old metrics just to make things interesting. "Sequentiality Ratio" - that's a beauty. My spell checker doesn't like it, but what you are talking about here is the Asynchronous Read Percentage (affectionately named ARP for short), which is the percentage of prefetch asynchronous reads over total physical reads. When we subtract ARP from 100%, we obtain the Synchronous Read Percentage (or SRP) which is the percentage of reads that are performed synchronously (which is synonymous with "randomly"). Yes, I first blogged about the SRP KPI metric way back in 2005 on www.db2mag.com - back when DB2 had an identity of its own. Please read the blog post on SRP if you haven't yet done so. The SRP and ARP can be, and should be, computed for the database, for each bufferpool, and for each tablespace.
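If you want to see these two KPIs per tablespace, here is a sketch of the arithmetic against the DB2 9.x snapshot administrative view SYSIBMADM.SNAPTBSP. It counts only data and index reads to keep the example short; verify the monitor element names in your release:

    -- Sketch: ARP = async physical reads / total physical reads; SRP = 100 - ARP.
    SELECT tbsp_name,
           pool_data_p_reads + pool_index_p_reads          AS total_p_reads,
           pool_async_data_reads + pool_async_index_reads  AS async_p_reads,
           DECIMAL(100.0 * (pool_async_data_reads + pool_async_index_reads)
                   / NULLIF(pool_data_p_reads + pool_index_p_reads, 0), 5, 2) AS arp_pct,
           DECIMAL(100.0 - 100.0 * (pool_async_data_reads + pool_async_index_reads)
                   / NULLIF(pool_data_p_reads + pool_index_p_reads, 0), 5, 2) AS srp_pct
    FROM   SYSIBMADM.SNAPTBSP
    ORDER BY srp_pct DESC;

Tablespaces that sort to the top (high SRP, with plenty of physical reads) are the random-I/O candidates the article says benefit most from SSD.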

This begs another important question: if "Page velocity" (Physical Reads per Second) indicates that a tablespace object is very warm or hot, why? Are scans occurring (high ARP or Sequentiality Ratio) that are driving high I/O rates because indexes are missing? If SSD drives deliver 100X performance improvement for randomly read data (high SRP), does anyone agree that it might be worthwhile to add some missing indexes to mitigate costly scans?

Page velocity (or Physical Reads per Second) is a rate. Rates provide great entertainment value, but they are rarely helpful for performance tuning. Rates vary by hour of the day, day of the week, and other business cycles, and one cannot master database performance optimization by watching a speedometer. Since the article gives a shameless marketing plug for IBM Performance Expert, it should also be noted that this "tool" is dominated by rates and, in my opinion, comes up quite short in the meaningful, helpful KPI department: it is missing dozens of important averages, cost formulas, and ratios. See our DB2 LUW Performance Tools Comparison.

Access Latency = (Access Time) / (Total Physical I/Os). Since the article's formulas focus on read I/O operations, "Access Latency" is yet another name for Overall Read Milliseconds, or ORMS for short. Read the blog post on ORMS and you will find a helpful SQL snapshot command. ORMS should be computed for the database overall, for each bufferpool, and for each tablespace.
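For the per-tablespace flavor, a minimal sketch against SYSIBMADM.SNAPTBSP looks like this (again counting only data and index reads, and assuming pool_read_time is reported in milliseconds, as the snapshot monitor documents):

    -- Sketch: ORMS = total buffer pool physical read time (ms) / total physical reads.
    SELECT tbsp_name,
           pool_data_p_reads + pool_index_p_reads AS total_p_reads,
           DECIMAL(DOUBLE(pool_read_time)
                   / NULLIF(pool_data_p_reads + pool_index_p_reads, 0), 9, 2) AS orms_ms
    FROM   SYSIBMADM.SNAPTBSP
    ORDER BY orms_ms DESC;

Tablespaces with both high ORMS and high SRP float to the top of the SSD shopping list.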

The article's Weighting Factor multiplies Access Density (which is influenced by rates) by Access Latency; the higher the weighting factor, the greater the preference, or need, for SSD (or FHDD).

To the contrary, rather than fall victim to the volatility of rates, I assert that I/O costs should be considered over extended periods of time. Costs are computed by dividing raw counts (such as buffer pool data physical reads) by the number of transactions (Commits Attempted + Rollbacks Attempted). The Weighting Factor should be given by the formula:

  • WF = ((Buffer pool data + index + XDA + temporary physical reads) / (Commits + Rollbacks)) X (Access Latency (aka ORMS))
Further, since randomly/synchronously read objects should be given preferential treatment for SSD or FHDD, the politically adjusted Weighting Factor should be multiplied by the Synchronous Read Percentage (SRP) to arrive at an optimized ranking for preferential SSD treatment.
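Pulling the pieces together, here is a sketch of that ranking against the DB2 9.x snapshot administrative views SYSIBMADM.SNAPTBSP and SYSIBMADM.SNAPDB. The transaction count comes from the database snapshot and the per-tablespace counters from the tablespace snapshot; verify the exact monitor element names (particularly the XDA and temporary read counters) in your release:

    -- Sketch: ssd_rank = (physical reads per transaction) x ORMS x SRP, per tablespace.
    WITH db AS (
        SELECT commit_sql_stmts + rollback_sql_stmts AS trx
        FROM   SYSIBMADM.SNAPDB
    ),
    ts AS (
        SELECT tbsp_name,
               pool_data_p_reads + pool_index_p_reads + pool_xda_p_reads
                 + pool_temp_data_p_reads + pool_temp_index_p_reads   AS p_reads,
               pool_async_data_reads + pool_async_index_reads         AS async_reads,  -- approximation of async activity
               pool_read_time                                         AS read_ms
        FROM   SYSIBMADM.SNAPTBSP
    )
    SELECT ts.tbsp_name,
           DECIMAL(DOUBLE(ts.p_reads) / NULLIF(db.trx, 0), 12, 2)                     AS reads_per_trx,
           DECIMAL(DOUBLE(ts.read_ms) / NULLIF(ts.p_reads, 0), 12, 2)                 AS orms_ms,
           DECIMAL(100.0 - 100.0 * ts.async_reads / NULLIF(ts.p_reads, 0), 5, 2)      AS srp_pct,
           DECIMAL( (DOUBLE(ts.p_reads) / NULLIF(db.trx, 0))
                  * (DOUBLE(ts.read_ms) / NULLIF(ts.p_reads, 0))
                  * (100.0 - 100.0 * ts.async_reads / NULLIF(ts.p_reads, 0)), 15, 2)  AS ssd_rank
    FROM   ts, db
    ORDER BY ssd_rank DESC;

Since snapshot counters are cumulative, run this at the start and end of a representative business period and difference the counters if you want costs over an interval rather than since database activation.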

Yes, it is probably also a good idea to use SSD or FHDD for TEMPSPACE - on this point we agree, and it remains desirable to have TEMPSPACE isolated from data and index containers on separate drives.

Oh heck. Never mind. Just go out and blow a few million bucks on SSD drives and CPU upgrades, plus a side helping of 512GB of memory, plus double your database license costs to cover all of those new CPUs. It'll be fast, and you can put the machine in a closet and forget about it. DB2 does, after all, allegedly tune itself. If it fails to tune itself such that business needs are met, just write your hardware and database vendor(s) another big check. And, as noted in the article's Figure 1, power consumed by SSD drives is 25% higher than HDD drives, but the biggest power consumers in a system are the CPUs - so get ready to increase your energy bills by 24-44% as well. Watch this YouTube video to see the correlation between CPU busy and energy consumption.


A Shameless Marketing Moment

Using DBI's Brother-Panther®, you can have access to ALL of the IMPORTANT tablespace KPI metrics with just one mouse click:

Brother-Panther Tablespace Performance Analysis

All of the KPIs you need to make optimized SSD and FHDD decisions

With just one more mouse click, you can discover all of the Tables contributing I/O to a tablespace and quickly learn which tables are the I/O heavy hitters along with which tables have I/O problems:

Brother-Panther Table Performance Analysis

OLTP Tables with Table Rows Read per Transaction (TBRRTX) > 10 have I/O problems

Give us a third mouse click, and we will show you which SQL/XML statements are driving I/O to any selected table:

Statements, and their aggregate and relative costs, driving I/O to a selected table

This is Statement Cost Aggregation: total and relative costs for the statements contributing I/O to the selected table. Please notice the absence of rates (bah-humbug).

Grant us just one more mouse click, and we'll give you a physical design solution (or solutions) that will likely reduce your overall workload cost by 30-90%, lower your CPU utilization, increase your SRP, lower your I/O costs, and measurably improve application response times and SLA attainment.

That's it. If you work with DBI, we'll help you lower your IT costs and improve application performance in fewer than a half dozen (6) mouse clicks. If we made database tuning any easier or more fun, we'd have to reclassify our company with the Internal Revenue Service (IRS, or your local taxing authority for non-US readers) as a shrink-wrapped video game provider. Look for the Nintendo Wii Edition of our software next year.

Get started with a Proof of Concept. CONTACT DBI or give us a call toll free in the US at 1-866-773-8789 or international at 1-512-249-2324.

Thanks for reading. Hope this helped you learn about DB2 and I/O performance. This blog post took eight hours to write.

Kind regards,
Scott Hayes
President & CEO, DBI
IBM GOLD Consultant and Data Management Champion
Host of The DB2Night Show
