TechTip: Database Blocking Made Simple
Tips & Techniques - Database
Written by Kent Milligan   
Friday, 01 April 2011 00:00

Take advantage of record block size special values to boost application performance during sequential data processing.


One of the most common recommendations to improve the performance of database-intensive applications is to increase the amount of database blocking. This performance advice especially holds true when the application program is performing sequential data processing.


For example, an application that needs to process all of the orders in a table will read the first order in the table, perform the required processing for that order, and move on to the next order, repeating the same process until all of the orders have been handled. If there are 1,000 orders in the file, this application will end up submitting 1,000 individual requests to the DB2 for i engine to retrieve a record.
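To see why this matters, here is a small Python sketch (purely illustrative, not a DB2 for i API) that counts the engine round trips needed to read all the orders one record at a time versus in blocks:

```python
# Hypothetical sketch: count the requests sent to the database engine
# when reading 1,000 orders row by row versus in blocks.
# (Illustrative only; not an actual DB2 for i interface.)

def requests_needed(total_records: int, block_size: int) -> int:
    """Number of engine round trips to read total_records in blocks."""
    return -(-total_records // block_size)  # ceiling division

print(requests_needed(1000, 1))    # row at a time -> 1000 requests
print(requests_needed(1000, 100))  # blocks of 100 -> 10 requests
```

Cutting 1,000 round trips down to 10 is where the performance gain comes from.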


While this single-record-at-a-time approach works perfectly fine from a functional point of view, performance can be lacking, particularly as your database tables grow in size. When you know the application is going to perform processing on a set of rows, it would be much more efficient to have DB2 retrieve a block of those rows on a single request instead of submitting individual requests.


Luckily, database blocking is relatively easy for IBM i developers thanks to the Override with Database File (OVRDBF) command, as shown by the following example:


OVRDBF FILE(TABLE33) SEQONLY(*YES 100)

This OVRDBF command specifies that when inserting records into or retrieving records sequentially from the specified file, TABLE33, DB2 should attempt to transfer 100 records on a single request. This database blocking override applies to both native record-level access and SQL requests. Update and delete requests are the only operations that cannot benefit from blocking.


While the command itself is relatively easy to use, computing the optimal number of records for the SEQONLY parameter adds complexity for the developer. Because the internal buffers used by DB2 for i have a page size of 4K, the ideal value is one that yields a group of records whose total size is a multiple of 4K. Before using the command, the developer first had to retrieve the record length of the table and then determine the number of records that would produce a block size that is a multiple of 4K. For example, if a table had a record length of 512 bytes and the developer wanted DB2 to use a 32K blocking size, dividing 32K by 512 gives 64 records as the value to pass on the SEQONLY parameter. This calculation had to be repeated each time a different file was accessed or a different blocking factor was desired.
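The manual arithmetic described above can be sketched as a small helper (hypothetical Python, just to make the calculation concrete):

```python
# Sketch of the manual SEQONLY calculation the article describes:
# pick the record count whose total size matches the desired block
# size, which should be a multiple of the 4K page size.
# (Hypothetical helper, not part of any IBM i API.)

def seqonly_records(record_length: int, block_bytes: int) -> int:
    """Records per block for a target block size (e.g., 32K = 32768 bytes)."""
    if block_bytes % 4096 != 0:
        raise ValueError("block size should be a multiple of 4K")
    return block_bytes // record_length

print(seqonly_records(512, 32 * 1024))  # -> 64 records for a 32K block
```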


Recent IBM PTFs move the calculation responsibility from the developer to DB2. With this new functionality, the developer specifies a special record block size value on the SEQONLY parameter, which causes DB2 to calculate the number of records that are needed to meet the requested block size. These new values are available for the SEQONLY parameter:


  • *BUF32K                  
  • *BUF64K                  
  • *BUF128K
  • *BUF256K
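Conceptually, these special values let DB2 do the record-count arithmetic itself. A rough model of that behavior (the value names come from the list above; the computation is a hypothetical illustration, not actual DB2 internals):

```python
# Rough model of the new special values: DB2 derives the record count
# from the requested buffer size and the file's record length.
# (Hypothetical model of internal behavior, not actual DB2 code.)

BUFFER_SIZES = {
    "*BUF32K": 32 * 1024,
    "*BUF64K": 64 * 1024,
    "*BUF128K": 128 * 1024,
    "*BUF256K": 256 * 1024,
}

def records_for_special_value(special: str, record_length: int) -> int:
    """Records DB2 would block together for a given special value."""
    return BUFFER_SIZES[special] // record_length

print(records_for_special_value("*BUF128K", 512))  # -> 256 records
```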


If developers would like an application to use a record block size of 128K when sequentially accessing the ORDERS file, they no longer have to spend time on record-size calculations; they simply issue the following command:


OVRDBF FILE(ORDERS) SEQONLY(*YES *BUF128K)

The new record block size special values are available on the IBM i 6.1 and 7.1 releases by loading the following PTFs:


  • IBM i 6.1 Database Group PTF—SF99601 Version #16 & PTF SI41423
  • IBM i 7.1 Database Group PTF—SF99701 Version #5 & PTF SI41478


Utilizing database record blocking to boost the performance of applications with sequential data processing has never been easier.


Kent Milligan
About the Author:

Kent Milligan is a senior DB2 consultant on the DB2 for i Center of Excellence team within the IBM Lab Services Power Systems Delivery Practice.  His primary responsibility is helping software developers use the latest DB2 technologies and port applications from other databases to DB2 for i. After graduating from the University of Iowa in 1989, Kent spent the first eight years of his IBM career as a member of the DB2 development team in Rochester. He speaks and writes regularly on various DB2 for i relational database topics.

Last Updated on Wednesday, 30 March 2011 10:38