National and International Data Insecurity

In the early 1990s, client/server computing became popular. Organizations linked multiple smaller computers together in a network to improve speed and access to data, and to lower their costs. Data Warehouses and Data Marts were built on open systems using distributed databases. Organizations raced to acquire virtual mountains of data to improve the quality and speed of decisions. Data was made widely available throughout organizations - it is as if some even hang neon "Hot Now" lights in their hallways to alert privileged data consumers that fresh data is ready for decision analysis. The problem is that during this Information Technology race to make data widely available to privileged users and decision makers, insufficient attention was paid to security and accountability. The data buffet has been opened, and no one is watching what data consumers consume.



DB2 Magazine Blog and DB2 Performance

Hello readers,

Have you visited DB2 Magazine online lately? They've made several improvements to their online content. Not only can you find the current print articles, but also extra "online only" articles, a new community wiki, and new blogs.

Speaking of blogs, I have become one of the contributors to the DB2 Magazine blog. In my first set of posts, I am teaching from excerpts of my IDUG Education Seminar "DB2 LUW Performance Diagnostic Lab". If you're interested in learning about DB2 LUW performance and tuning, please visit the DB2 Magazine blog.

Want to follow along here? First you'll need to prepare for the class by collecting some performance data from your DB2 database. You can find the class preparation instructions at www.database-brothers.com/db2mag/GettingReady-IBMDB2LUW-Performance-Diagnosis-Lab.pdf.

What type of database do you have?

You might think you have an OLTP (transactional) database. Or you might think you have a Data Warehouse database. But what does your database think? How is it, or the queries within it, really performing?

The Average Result Set Size

Transactional databases tend to process small result sets (the actual number of rows retrieved for a given SELECT statement). Data Warehouse databases tend to process large result sets, often returning hundreds or thousands of rows for a given SELECT statement. My rule of thumb for the tipping point between OLTP and Data Warehouse is an average result set size (ARSS) of 10. If the ARSS is less than or equal to 10, the database is behaving like an OLTP database. If the ARSS is greater than 10, your database is behaving like a Data Warehouse database. If the ARSS is just a little bit greater than 10, you may have an OLTP database with some concurrent decision support (DW) queries running.
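
To make this rule of thumb concrete, here is a minimal Python sketch of the classification. The function name and the exact "just a little bit greater" band (10 to 15) are my own illustrative assumptions, not part of the seminar material:

    def classify_by_arss(arss: float) -> str:
        """Classify workload by Average Result Set Size (ARSS).

        Thresholds follow the rule of thumb above; the 10-15 "mixed"
        band is an illustrative assumption, not a hard rule.
        """
        if arss <= 10:
            return "OLTP: behaving like a transactional database"
        elif arss <= 15:  # "just a little bit greater" than 10 (assumed band)
            return "Mixed: OLTP with concurrent decision support (DW) queries"
        else:
            return "DW: behaving like a Data Warehouse database"

    print(classify_by_arss(4))    # small result sets -> OLTP
    print(classify_by_arss(12))   # slightly above 10 -> mixed workload
    print(classify_by_arss(250))  # large result sets -> Data Warehouse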

The ARSS Formula

Using a database snapshot (e.g. dbsnap2.txt), simply divide the number of rows selected by the number of SELECT statements (ROWS_SELECTED / SELECT_SQL_STMTS).
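
If you'd like to compute the ARSS straight from the snapshot file you collected, here is a minimal Python sketch. It assumes the snapshot text contains monitor lines such as "Rows selected = 1234567" and "Select SQL statements executed = 98765"; if your DB2 version labels these counters differently, adjust the two label strings accordingly:

    import re
    import sys

    def arss_from_snapshot(path: str) -> float:
        """Compute Average Result Set Size from a DB2 database snapshot file.

        Assumes monitor lines of the form "Label ... = value", e.g.
        "Rows selected = 1234567" and
        "Select SQL statements executed = 98765".
        """
        rows_selected = None
        select_stmts = None
        pattern = re.compile(r"^(.*?)\s*=\s*(\d+)$")
        with open(path) as f:
            for line in f:
                m = pattern.match(line.strip())
                if not m:
                    continue
                label, value = m.group(1).strip(), int(m.group(2))
                if label == "Rows selected":
                    rows_selected = value
                elif label == "Select SQL statements executed":
                    select_stmts = value
        if rows_selected is None or select_stmts in (None, 0):
            raise ValueError("Could not find both counters in the snapshot")
        return rows_selected / select_stmts

    if __name__ == "__main__":
        # Usage: python arss.py dbsnap2.txt
        print(f"ARSS = {arss_from_snapshot(sys.argv[1]):.1f}")

Run it against the snapshot you collected during the class preparation step, then apply the rule of thumb above to see how your database is really behaving.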

You can learn more about the Average Result Set Size (ARSS) from Brother-Eagle's Advice, or you can download, install, and run Brother-Eagle Standard Edition (get it FREE) to have this and several other metrics computed for you automatically.

FREE is a very good price. :-)

Until next time,
Scott

