Benchmark test execution

In the most common type of benchmark testing, you choose a configuration parameter and then run the test repeatedly with different values for that parameter until you find the value that produces the greatest benefit.

A single test should include repeated execution of the application (for example, five or ten iterations) with the same parameter value. This enables you to obtain a more reliable average performance value against which to compare the results from other parameter values.

The first run, called a warmup run, should be considered separately from subsequent runs, which are called normal runs. The warmup run includes some startup activities, such as initializing the buffer pool, and consequently, takes somewhat longer to complete than normal runs. The information from a warmup run is not statistically valid. When calculating averages for a specific set of parameter values, use only the results from normal runs. It is often a good idea to drop the high and low values before calculating averages.
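
For example, one possible way to script a single test is sketched below; run_workload.sh stands in for your own benchmark driver, and the run count is arbitrary. The script discards the warmup run, drops the high and low values from the normal runs, and averages what remains.

  runs=5
  results=$(mktemp)
  for i in $(seq 1 $runs); do
      start=$(date +%s)
      ./run_workload.sh                 # placeholder for your own benchmark driver
      elapsed=$(( $(date +%s) - start ))
      if [ "$i" -gt 1 ]; then           # treat the first run as the warmup run
          echo "$elapsed" >> "$results"
      fi
  done
  # Drop the high and low values, then average the remaining normal runs.
  sort -n "$results" | sed '1d;$d' | \
      awk '{ sum += $1; n++ } END { if (n) printf "Average elapsed time: %.1f seconds\n", sum / n }'
  rm -f "$results"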

For the greatest consistency between runs, ensure that the buffer pool returns to a known state before each new run. Testing can cause the buffer pool to become loaded with data, which can make subsequent runs faster because less disk I/O is required. The buffer pool contents can be forced out by reading other irrelevant data into the buffer pool, or by de-allocating the buffer pool when all database connections are temporarily removed.
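
For example, a minimal sketch of the second approach, assuming a database named SAMPLE, removes all connections, deactivates the database so that its buffer pools are deallocated, and then reactivates it before the next run:

  db2 force application all          # disconnect all applications (asynchronous,
                                     # so allow it a moment to complete)
  sleep 10
  db2 deactivate database SAMPLE     # deallocates the buffer pools
  db2 activate database SAMPLE       # the next run starts with an empty buffer pool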

After you complete testing with a single set of parameter values, you can change the value of one parameter. Between each iteration, perform the following tasks to restore the benchmark environment to its original state:

  • If the catalog statistics were updated for the test, ensure that the same values for the statistics are used for every iteration.
  • If the test data is updated during testing, restore it to a consistent state between iterations. You can do this by:
    • Using the restore utility to restore the entire database. The backup copy of the database contains its previous state, ready for the next test.
    • Using the import or load utility to restore an exported copy of the data. This method enables you to restore only the data that has been affected. After reloading, run the reorg and runstats utilities against the tables and indexes that contain this data, as shown in the sketch that follows this list.
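
For example, assuming that the database is named SAMPLE, that a backup image from before the test exists, and that the affected table is MYSCHEMA.ORDERS with an exported copy in orders.ixf (the names and the backup timestamp are all placeholders), the two approaches might look like this:

  # Approach 1: restore the entire database from the pre-test backup image.
  db2 restore database SAMPLE taken at 20240101120000 without prompting

  # Approach 2: reload only the affected table from an exported copy, then
  # reorganize it and refresh its statistics.
  db2 connect to SAMPLE
  db2 import from orders.ixf of ixf replace into MYSCHEMA.ORDERS
  db2 reorg table MYSCHEMA.ORDERS
  db2 runstats on table MYSCHEMA.ORDERS with distribution and detailed indexes all
  db2 connect reset
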
In summary, follow these steps to benchmark test a database application:
Step 1
Leave the Db2® registry, database and database manager configuration parameters, and buffer pools at their standard recommended values, which can include:
  • Values that are known to be required for proper and error-free application execution
  • Values that provided performance improvements during prior tuning
  • Values that were suggested by the AUTOCONFIGURE command (a sketch of its use follows this list)
  • Default values; however, these might not be appropriate:
    • For parameters that are significant to the workload and to the objectives of the test
    • For log sizes, which should be determined during unit and system testing of your application
    • For any parameters that must be changed to enable your application to run
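
For example, a minimal sketch of using the AUTOCONFIGURE command against a hypothetical SAMPLE database displays its recommendations without applying them; change APPLY NONE to APPLY DB AND DBM if you want the suggested values to be applied:

  db2 connect to SAMPLE
  # Show recommended configuration values for a mixed workload that can use up
  # to 80 percent of server memory, without changing anything (APPLY NONE).
  db2 autoconfigure using mem_percent 80 workload_type mixed apply none
  db2 connect reset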

Run your set of iterations for this initial case and calculate the average elapsed time, throughput, or processor time. The results should be as consistent as possible, ideally differing by no more than a few percentage points from run to run. Performance measurements that vary significantly from run to run can make tuning very difficult.
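
One convenient way to drive these iterations is the db2batch benchmarking tool that is shipped with Db2. A minimal sketch, assuming a database named SAMPLE and a file queries.sql that contains the test statements (both placeholders; option names can vary by Db2 version), is:

  # Run the statements in queries.sql against SAMPLE, reporting detailed timing
  # for each statement and writing the results to results.out.
  db2batch -d SAMPLE -f queries.sql -i complete -r results.out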

Step 2
Select one and only one method or tuning parameter to be tested, and change its value.
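
For example, if the parameter under test were the sortheap database configuration parameter on a database named SAMPLE (both hypothetical choices), the change might look like this:

  # Change only the single parameter being tested; leave all other settings alone.
  db2 update db cfg for SAMPLE using SORTHEAP 2048
  # Some parameters take effect online; others require all applications to
  # disconnect and the database to be deactivated and reactivated first.
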
Step 3
Run another set of iterations and calculate the average elapsed time, throughput, or processor time.
Step 4
Depending on the results of the benchmark test, do one of the following:
  • If performance improves, change the value of the same parameter and return to Step 3. Keep changing this parameter until the maximum benefit is shown.
  • If performance degrades or remains unchanged, return the parameter to its previous value, return to Step 2, and select a new parameter. Repeat this procedure until all parameters have been tested.