In case you are wondering (many have asked), yes, I will be attending the IBM Information on Demand Conference in Las Vegas next week. DBI is not attending as a vendor this year because last year we gave away 10,000 T-shirts to obtain a few dozen good leads. IDUG provides a much more targeted forum for DBI, and we'll be supporting IDUG's 20th Anniversary in Dallas TX in a big way. Meanwhile, you will find me in DB2 LUW sessions and DBI is presenting at the "Demo 'Til You Drop" event Wednesday night. I hope to see you there.
Now, on to those very good questions...
What approaches to tuning would reduce [the BPLRTX] metric?
To reduce BPLRTX, you need to improve the physical design so that DB2 can retrieve result sets while doing less work (or, the same work more efficiently). To achieve this, you need to understand which statements in the workload have the highest costs of execution (costs include CPU time, sort time, I/O time, rows read, and rows written). Once the statements with the highest aggregate and relative costs are known, you should implement Indexes (or MQTs or MDCs) that will enable DB2 to find result sets using fewer Logical Read instructions. The only "cure" for high BPLRTX is an improved physical design that better supports the workload.
A Quick Commercial Break
Be sure to join DBI's free Webinar on Tuesday October 30th at 10:30am CST. The Webinar "DB2 LUW Index Physical Design & DBI Performance Solutions: Your Roadmap to Becoming a Performance Hero" covers How to Analyze Statement Workloads and DB2 LUW Index Design Techniques and Best practices. To register, visit DBI's web site: www.Database-Brothers.com.
Next Question: Would increasing the number of block pages in the BPs help [improve BPLRTX]?
No. But, if your bufferpools have a lot of scans occurring in them, then setting the number of block pages to about 3% of the bufferpool size could help make your page scanning more efficient. Of course, in this case, you are addressing the symptom and not the problem - you don't want scans if you can avoid them, especially in an OLTP database! This takes us back to the Synchronous Read Percentage (SRP) metric. If the SRP is less than 80% for a bufferpool, then this bufferpool would benefit from NUMBLOCKPAGES being set to approximately 3% of the bufferpool size. Do not over-allocate NUMBLOCKPAGES, as performance degradation can actually occur!
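The 3% rule of thumb can be sketched in a few lines of Python. The function name is mine, and I'm assuming DB2's documented requirement that NUMBLOCKPAGES be a multiple of the bufferpool's BLOCKSIZE - verify against your DB2 version before applying:

```python
def suggested_numblockpages(bufferpool_pages, blocksize=32):
    """Suggest a NUMBLOCKPAGES value of roughly 3% of the bufferpool size,
    rounded down to a multiple of BLOCKSIZE (DB2 requires NUMBLOCKPAGES
    to be a multiple of BLOCKSIZE).  Hypothetical helper, not a DB2 API."""
    target = int(bufferpool_pages * 0.03)     # ~3% of the bufferpool
    return (target // blocksize) * blocksize  # snap down to a BLOCKSIZE multiple
```

For a 100,000-page bufferpool with the default 32-page block size, this suggests 2,976 block pages - just under the 3% target, and evenly divisible into blocks.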
Greg Marino's Comments include this important thought...
The reality is, reducing Logical I/O begins at the individual SQL statement level. While there are server level parameters that can impact the query plans generated by the optimizer (such as BP size, Sort Heap size, Optimization Class, and more), the best approach to reducing Logical I/O (IMHO) is to start by gathering a list of the most expensive SQL statements sorted by Logical I/O cost and TUNE THEM. SQL tuning is an art and would require an entire book to explain the different ways of HOW to Tune SQL.
Thanks Greg - you're spot on correct. In this blog, we're covering the contents of a 1-2 day class, a few key points at a time. First, we look at what to measure and how to identify problems and efficiencies. Once we've assessed a database's health and efficiency, and possibly identified problem areas, then we'll move on to solving the identified "opportunities for improvement". SQL analysis and tuning is an extensive topic that will probably consume dozens of blog posts in the future. In the interim, here are two quick tips:
1) Be sure to attend DBI's Webinars. These cover, in part, SQL workload analysis and physical design tuning.
2) In the absence of any fancy tools that do comprehensive aggregation and workload analysis, the easiest "poor man's method" for finding inefficient SQL is to compute the Index Read Efficiency (IREF) for each statement. The ratio of Rows Read to Rows Selected/Fetched reveals quite a bit about a statement's efficiency. Statements with high IREF cause high BPLRTX.
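The IREF ratio is simple enough to compute by hand, but here it is as a small Python sketch (the function name is mine):

```python
def index_read_efficiency(rows_read, rows_selected):
    """IREF: the ratio of Rows Read to Rows Selected/Fetched for a statement.
    A value near 1 means indexes are doing their job; large values flag
    statements that scan far more rows than they return."""
    if rows_selected == 0:
        return float(rows_read)  # nothing fetched: every row read is overhead
    return rows_read / rows_selected
```

A statement that reads 250,000 rows to return 50 has an IREF of 5,000 - a strong candidate for a new index, and a likely contributor to high BPLRTX.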
Marco Bartolli asks the next question: What value or range should we be aiming for with BPLRTX?
Greg Marino added some good comments in his reply:
It should be driven down to the minimum possible. What is the minimum? That depends on the SQL being run. You should strive to reduce Logical I/O per SQL statement. How? Via SQL tuning. We've had SQL that runs in the millions of Logical I/O that we've been able to get down to just 4-digit Logical I/O.
To get an idea of your "best case goal" for BPLRTX, we need to look at a few data points:
1) The average number of SELECTs per Transaction (SELTX)
2) The average number of Inserts, Updates, and Deletes per Transaction (DMLTX)
3) The average number of Statements per Transaction (STMTTX) = SELTX + DMLTX
4) The average number of Index NLEVELS in the database: "Select avg(NLEVELS) from SYSCAT.INDEXES"
5) Round the average NLEVELS up to the nearest whole number, giving AVGNLEVEL
Finally, your "best case goal" for BPLRTX is computed by the formula: BPLRTX GOAL = STMTTX x (AVGNLEVEL + 1)
This goal provides the minimum possible, best case, value for BPLRTX by multiplying the average number of statements per transaction by the (rounded up) average Index NLEVELS plus 1 to cover a logical read to the data page.
By way of example, if an average transaction does 8 SELECT statements and 2 DML statements, and if the average NLEVEL is 3, then best case BPLRTX GOAL = (8+2) x (3+1) = 40 Bufferpool Logical Reads per Transaction.
Now, let's be honest with each other. The world isn't perfect, the application's SQL is probably far from perfect, and the database's indexes aren't perfect either. So, a realistic goal would add a 50% slop factor to the ideal goal - this gives us a BPLRTX goal of 60 in the previous example.
When should you sound the BPLRTX fire alarm? If your BPLRTX value is more than twice the realistic BPLRTX goal (120 or greater in this example) and your database is OLTP, then you've probably got a tuning crisis brewing - don't upgrade your hardware! Tune the SQL with a physical design that supports the database workload instead!
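Putting the best-case formula, the 50% slop factor, and the fire-alarm threshold together, the arithmetic above can be sketched as follows (the function name and return structure are mine):

```python
import math

def bplrtx_goals(seltx, dmltx, avg_nlevels, slop=0.5):
    """Compute the best-case, realistic, and alarm thresholds for
    Bufferpool Logical Reads per Transaction (BPLRTX)."""
    stmttx = seltx + dmltx              # statements per transaction
    avgnlevel = math.ceil(avg_nlevels)  # round average index NLEVELS up
    best = stmttx * (avgnlevel + 1)     # +1 covers the read of the data page
    realistic = best * (1 + slop)       # allow a 50% slop factor
    alarm = realistic * 2               # above this, sound the fire alarm
    return best, realistic, alarm
```

Plugging in the example from above - 8 SELECTs, 2 DML statements, average NLEVELS of 3 - yields a best-case goal of 40, a realistic goal of 60, and an alarm threshold of 120.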
The shameless marketing moment... When your back is against the wall, and you don't know who to call, give DBI a ring, and we'll make your databases sing. This little poem was not approved by our marketing department, but I hope you enjoyed it...
DBI is pleased to provide a new service offering Emergency Performance Help. In 4 hours or less, we'll work with you remotely (via GoToMeeting, NetMeeting, Webex, or equivalent - there's no time to get on a plane) to diagnose your database's "opportunities for improvement" and develop solutions for immediate, measurable, performance improvements. This service is guaranteed. In the highly unlikely event that we don't deliver, you don't pay. I suggest you bookmark the Emergency Performance Help link so you can easily find us when you need us.
Just for Fun...
My wife is a health nut and she's starting to rub off on me a bit. We've recently learned that avocados are supposed to be good for you. I tried eating one and thought it was gross, but I have found that I do enjoy fresh homemade guacamole! Here's the best recipe I've found so far, with a few of my own tuning tweaks that make it healthier and yummier:
- 6 avocados
- 1/2 teaspoon extra virgin olive oil
- 2 teaspoons of lime juice
- 1/2 - 1 teaspoon of Tabasco® depending on your taste
- 3 tablespoons of finely minced white onion
- 1/2 - 3/4 cup of fresh chopped cilantro
- 1/2 teaspoon seasoned salt, such as Lawry's®
- 3 plum tomatoes, diced
Until next time,