The ARSS for this database was 4.08, meaning that the typical SELECT statement returns about four rows. This is indicative of an OLTP database, and the participant confirmed that this database is indeed OLTP.
The IREF for this database was 180, meaning that DB2 picks up and evaluates 180 rows to return just one row on average. This is indicative of scans, and scans in an OLTP database are bad (or evil, as my daughter would say).
The SRP for this database was a dreadfully low 7.09%. This further indicates that a great deal of asynchronous prefetch I/O is occurring (93% prefetch!). When the SRP is low in an OLTP database, it means that indexes are missing or sub-optimally defined.
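As a quick sketch of how these three metrics fall out of the snapshot counters (the counter values below are hypothetical, chosen only to reproduce the numbers above; the formulas are as described in this series: ARSS is rows selected per SELECT statement, IREF is rows read per row selected, and SRP is synchronous reads as a percentage of all physical reads):

```python
# Hypothetical snapshot counter values (illustrative only).
rows_selected = 4_080
select_stmts = 1_000
rows_read = 734_400
sync_reads = 709           # synchronous data + index physical reads
total_phys_reads = 10_000  # all physical reads, synchronous + async prefetch

arss = rows_selected / select_stmts        # Average Result Set Size
iref = rows_read / rows_selected           # rows read per row returned
srp = sync_reads / total_phys_reads * 100  # Synchronous Read Percentage

print(f"ARSS={arss:.2f}  IREF={iref:.0f}  SRP={srp:.2f}%")
```

With these inputs the sketch reproduces ARSS 4.08, IREF 180, and SRP 7.09% — the remaining 93% of physical reads are asynchronous prefetch.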
Number of Transactions Completed (TXCNT): over 7,000,000
The average transaction ran 3.88 SELECT statements. This is on the low side for worldwide averages in my experience; typical SELTX values for OLTP range from 3 to 15. The value of 3.88 means the transactions are small in work scope.
The average transaction ran 1.41 DML statements. This is within the worldwide normal range of 0.5 to 4. On average, 3-4 SELECT statements are accompanied by 1-2 INSERT/UPDATE/DELETE statements. This is good because the units of work are small.
On average, each transaction in this database performed two sorts. Here is another indicator of potential "opportunity for improvement". This cost metric can be used to verify tuning success.
On average, each transaction performed 0.0022 sort overflows. The good news here is that very few of the sorts (2 per TX) are overflowing SORTHEAP memory. Still, SORT is a four-letter word and sorts are EVIL - they burn up CPU cycles even if performed in memory. This cost metric can be used to verify tuning success.
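The per-transaction averages above are all simple ratios of workload counters to the transaction count. A minimal sketch, again using hypothetical counters scaled to reproduce the values reported:

```python
# Hypothetical counters (illustrative only), scaled to match the post.
commits = 10_000          # transaction count proxy
select_stmts = 38_800
dml_stmts = 14_100        # INSERT + UPDATE + DELETE statements
total_sorts = 20_000
sort_overflows = 22

seltx = select_stmts / commits     # SELECTs per transaction     -> 3.88
dmltx = dml_stmts / commits        # DML per transaction         -> 1.41
sorts_tx = total_sorts / commits   # sorts per transaction       -> 2.0
ovf_tx = sort_overflows / commits  # sort overflows per TX       -> 0.0022
```

Tracking these ratios before and after an index change is what makes them useful as tuning verification metrics.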
The average transaction read 2,853.78 rows to fetch just 15.86 rows. This is very inefficient, as the IREF metric indicated, and CPU consumption on this machine is much higher than it needs to be.
The average transaction is performing 626.86 logical bufferpool page reads. As noted in an earlier blog post, this cost metric is one of the best cost measurements for verifying your tuning success: bufferpool logical page reads are directly proportional to CPU time consumed.
This database is performing 249.42 logical index page reads for every transaction. Since 249.42 is much greater than SELTX x 4 x 1.5 (23.29), this indicates that there are a significant number of leaf page scans occurring (DB2 is using an index but MatchCols = 0). Leaf page scans are expensive and CPU costly. BPLITX provides another excellent cost measurement to verify your tuning success.
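The leaf-page-scan heuristic above can be expressed directly. This is a rule-of-thumb check, not a definitive test (the post's 23.29 presumably comes from an unrounded SELTX):

```python
seltx = 3.88     # SELECT statements per transaction
bplitx = 249.42  # logical index page reads per transaction

# Rule of thumb: a well-matched index access should need roughly
# SELTX x 4 index pages (root, intermediate, leaf, plus slack),
# with a 1.5x allowance.
threshold = seltx * 4 * 1.5            # ~23.28
leaf_scans_likely = bplitx > threshold # far exceeded here -> True
```

When BPLITX dwarfs the threshold, DB2 is likely using indexes with MatchCols = 0, scanning leaf pages rather than probing them.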
The average time to perform a physical read for this database was 0.151ms. This is very good, and perhaps one of this database's saving graces. However, 93% of the I/O is prefetch asynchronous I/O, which tends to be much faster than synchronous I/O.
The average time to perform a physical write for this database was 1.84ms. This is good. 97.29% of writes are being performed asynchronously. This is also good.
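The I/O timing averages are again ratios of snapshot counters (total I/O time over I/O operations); the counter values below are hypothetical, chosen to reproduce the figures above:

```python
# Hypothetical counters (illustrative only).
pool_read_time_ms = 1_510
phys_reads = 10_000
pool_write_time_ms = 18_400
phys_writes = 10_000
async_writes = 9_729

avg_read_ms = pool_read_time_ms / phys_reads        # -> 0.151
avg_write_ms = pool_write_time_ms / phys_writes     # -> 1.84
async_write_pct = async_writes / phys_writes * 100  # -> 97.29
```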
The average time to complete a sort was 0.28ms, and each transaction spent 0.569ms sorting. While these sorts are small and fast, they are still burning CPU cycles unnecessarily if they can be eliminated. 0.11% of sorts are overflowing the SORTHEAP; as a percentage, this is good.
The average lock duration was 60.6ms (Total Lock Wait Time divided by number of Lock Waits), and each transaction suffered 0.085ms of lock wait time (Total Lock Wait Time divided by number of Transactions). While this isn't bad, zero time spent waiting on locks is ideal. I expect that some of this lock wait time could be reduced if better indexes were available to reduce the number of scans. Lock waits are a symptom of poorly performing SQL, not the problem itself.
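The two lock metrics follow directly from the definitions given in parentheses; a sketch with hypothetical counters sized to match the numbers above:

```python
# Hypothetical counters (illustrative only).
lock_wait_time_ms = 606_000
lock_waits = 10_000
commits = 7_129_412  # transaction count

avg_lock_ms = lock_wait_time_ms / lock_waits    # -> 60.6
lock_wait_per_tx = lock_wait_time_ms / commits  # -> ~0.085
```

Note how the same total wait time looks alarming per lock wait but trivial per transaction; both views are worth watching.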
The CATHR was 99.98%. This is excellent. The Catalog Cache is not an AUTOMATIC tuning (STMM) participant.
The PKGHR was 99.04%. This is excellent.
The database had closed 4,648 files. The participant had increased MAXFILOP from the default value of 64 up to 4096, but 4096 is not large enough. Other participants in the class still had the default value of MAXFILOP 64 in place and were, no surprise, victims of files being closed. Closing files burns up CPU cycles and slows down your SQL performance.
The bufferpool index hit ratio was 98.58% and the overall bufferpool hit ratio was 86.52%. While these hit ratios look good, remember that bufferpool hit ratios can be very misleading (give you a false sense of security and tuning success) when scans are occurring in the bufferpools.
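The hit ratios are computed from logical and physical page-read counters; the values below are hypothetical, chosen to reproduce the ratios above:

```python
# Hypothetical counters (illustrative only).
index_logical = 1_000_000
index_physical = 14_200
data_logical = 1_000_000
data_physical = 255_400

# Hit ratio = fraction of logical reads satisfied without physical I/O.
index_hit = (index_logical - index_physical) / index_logical * 100  # -> 98.58
overall_hit = ((index_logical + data_logical)
               - (index_physical + data_physical)) \
              / (index_logical + data_logical) * 100                # -> 86.52
```

A scan touching recently prefetched pages inflates these ratios, which is exactly why they can mislead when scans are rampant.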
The APPR for this database was 26.01. APPR cannot exceed the EXTENTSIZE for tablespaces, and the average/typical EXTENTSIZE for tablespaces in this database was 32. Therefore, the value of 26 is good and means that prefetch is working effectively. THE BAD NEWS, HOWEVER, is that this OLTP database is satisfying its read requests with asynchronous prefetch I/O 93% of the time!!!
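APPR is the average number of pages brought in per prefetch request; a sketch with hypothetical counters matching the figure above:

```python
# Hypothetical counters (illustrative only).
async_pages_read = 260_100    # pages read by the prefetchers
async_read_requests = 10_000  # prefetch requests issued
extentsize = 32               # typical tablespace EXTENTSIZE here

appr = async_pages_read / async_read_requests  # -> 26.01
# APPR cannot exceed EXTENTSIZE; a value near EXTENTSIZE (26 vs 32)
# means each prefetch request is pulling in nearly a full extent.
```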
This database contains hundreds of tables. We did a quick scan of the Tables snapshot and found more than 15 tables having Rows Read per Transaction greater than 10. Some TBRRTX values exceeded 100. This is further evidence that SCANS are prolific in this database.
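That quick scan amounts to dividing each table's Rows Read counter by the transaction count and flagging anything over 10. A sketch with hypothetical table names and counters:

```python
commits = 10_000
# Hypothetical per-table Rows Read counters from a Tables snapshot.
rows_read_by_table = {
    "ORDERS": 1_200_000,
    "CUSTOMER": 85_000,
    "CODES_LOOKUP": 2_500_000,
}

# TBRRTX = rows read per transaction, per table; flag suspects over 10.
suspects = {tbl: rr / commits
            for tbl, rr in rows_read_by_table.items()
            if rr / commits > 10}
print(suspects)  # ORDERS and CODES_LOOKUP flagged; CUSTOMER passes
```

Tables that surface in this list are the first candidates for index analysis.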
Summary

This database is suffering from several mosquito swarm infestations. The numbers indicate that many needed indexes are missing, and some of those that exist could be better defined. The absence of an optimal index physical design is contributing to excessive scans, leaf page scans, unnecessary sorts, and lock wait time. Based on evaluations of hundreds of databases from around the world, my instinct tells me that CPU utilization is about 50-80% higher on this machine than it needs to be. While response time performance is reported to be fairly good presently, it could be improved 10-25% or more by reducing CPU consumption with an improved physical design. Furthermore, if more application users are added, the abnormally high CPU consumption may cause this database to fail to scale.
The participant needs to perform SQL Cost Aggregation to identify and isolate problematic costly statements that are performing the scans, then cure these opportunities for improvement with high quality indexes. Also, read this blog post about Index Cardinality.
More Interesting Reading
The Shameless Marketing Moment
DBI customers typically enjoy 30-90% CPU savings and improved response times within just a few hours of using DBI's integrated performance management tools for DB2 LUW.
If you find all of the measurements and analysis above to be helpful, you don't have to crunch the numbers by hand. DBI's Brother-Panther™ provides industry leading, best of breed, statement workload analysis capabilities plus computes all of the metrics discussed in this blog post. You will quickly see the statements that have high aggregate costs, and you can pass individual statements or entire costly workloads to the IBM Design Advisor for rapid solutions. After you have implemented a physical design change with the intent of improving performance, Brother-Panther's Performance Trend Charts with integrated, correlated, and plotted change event histories make it easy to verify your performance improvement success. No tool on the market makes the full performance management life cycle of identify, isolate, cure, and verify easier, nor can any other tool deliver better performance results.
Just for Fun
I found a web site where I can get my Credit Score for FREE. Yes, truly FREE. There are no hidden costs, no monthly charges, and they don't even ask for a credit card number. Visit: http://www.creditkarma.com/. My score is 786.
With kindest regards,
President & CEO, DBI
IBM GOLD Consultant
DBI is an IBM Advanced, Industry Optimized, Business Partner
DBI is an Oracle Technology Network Partner
DBI provides products and services to over 2,000 customers worldwide
Your Performance IS Our Business