
TechTip: Biting the Bullet: Options for DB2 for i Automated Testing


This tip brainstorms a few approaches for automatically testing various aspects of DB2 for i code.

 

In the first tip, I showed various ways to test DB2 functions, procedure results, and triggers for logic accuracy and security. However, the tip never covered how these tests, once written, are to be processed. In particular, questions such as these need further discussion:

  • What coding habits create easily testable code?
  • How do I set up test data?
  • How do I automatically run the tests?

This tip's title phrase is "Biting the Bullet" for good reason; testing requires quite a bit of undesirable work. This tip assumes that you don't have a testing framework in place and makes suggestions for how to create one (although it is possible to use a Java- or .NET-based framework to do this kind of testing).

What Coding Habits Create Easily Testable Code?

Some code segments are easy to test, and others aren't. What can be done to make code as easy as possible to test? Here are a few suggestions:

 

Parameterize Your SQL Objects Instead of Relying on Special Registers

When writing code to be testable, don't depend on special registers (e.g., CURRENT_DATE or CURRENT_SERVER) within the code segment because their values can vary. Rather, parameterize the code so that these values can be passed in as parameters when needed. When testing, passing hard-coded values to the parameters makes the expected result easy to control.

 

Consider the following simple function that relies on the CURRENT_DATE register to get a daily shipment total:

 

CREATE OR REPLACE FUNCTION ADVWORKS.GET_CURRENT_SHIPMENT_VALUE()
RETURNS DEC(17,4)
BEGIN
RETURN (
    SELECT SUM(SUBTOTAL)
      FROM SalesOrderHeader
     WHERE CAST(SHIPDATE AS DATE) = CURRENT_DATE);
END;

 

The problem with this is that the value will change every day, making the function difficult to test (especially if your test data is old!). The correct way to code it is shown here:

 

CREATE OR REPLACE FUNCTION ADVWORKS.GET_CURRENT_SHIPMENT_VALUE
    (@SHIPDATE DATE DEFAULT CURRENT_DATE)
RETURNS DEC(17,4)
LANGUAGE SQL
BEGIN
RETURN (
    SELECT SUM(SUBTOTAL)
      FROM SalesOrderHeader
     WHERE CAST(SHIPDATE AS DATE) = @SHIPDATE);
END;

 

In this case, the application can always use CURRENT_DATE to invoke the function, and the testing code can pass one or more specific known dates as hard-coded values. When choosing data to test, pick a ship date for which the total is difficult to calculate because of what happened during the day (things such as canceled shipments, credits, etc.). The simple function here doesn't show anything like that, but real application functions are more complex.
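
For instance, a minimal test sketch might compare the function's result for a known date against a hand-verified total. The date, expected value, and failure text below are hypothetical and would come from your own test data set:

-- Hypothetical test: the expected total was verified by hand against
-- the known test data for 2013-07-15.
VALUES CASE
    WHEN ADVWORKS.GET_CURRENT_SHIPMENT_VALUE(DATE('2013-07-15')) = 125043.5000
        THEN 'PASS'
        ELSE 'FAIL: GET_CURRENT_SHIPMENT_VALUE for 2013-07-15'
END;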

 

Parameterizing values will help you write "deterministic" code; that is, the output of the code will always be the same for a given set of inputs. Deterministic code is ideal for testing.

 

Write Modular Code

Writing code to do a small, specific, understandable task always makes testing easier because the inputs and outputs are well understood. For example, I'd much rather test individual subprocedures in an RPG service program than a behemoth monolithic OPM RPG program whose subroutines and variables can't be isolated. The same holds true with SQL: I'd rather test several smaller procedures and functions than one huge procedure. When too much code is lumped into one large program, it's hard to assess whether the various logic pieces do their jobs correctly under various circumstances.

 

The only problem with modular coding in SQL is that it often suffers from performance problems. User-defined functions (UDFs) and complex "reusable" views are two primary examples where performance penalties are incurred. Regarding UDFs, when the logic is relatively simple, an expression performs better than a UDF (even though expressions can be more difficult to maintain in the long run). Regarding reusable views, they often include many table references that are not consistently required by the host query. I generally avoid them because DB2 can end up wasting time doing unnecessary joins, etc. Instead, I often prefer to code queries without views, taking a "code maintenance hit" by specifying base tables and criteria over and over in various application queries. There are no easy answers, so choose carefully!
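
To illustrate the UDF trade-off, the two queries below (using a hypothetical NET_PRICE function and a hypothetical DISCOUNT column) return the same result, but the inline expression typically performs better because DB2 doesn't have to invoke a function for every row:

-- UDF version: easier to maintain, but invoked once per row
SELECT SALESORDERID, ADVWORKS.NET_PRICE(SUBTOTAL, DISCOUNT)
  FROM SalesOrderHeader;

-- Expression version: logic repeated in each query, but faster
SELECT SALESORDERID, SUBTOTAL - DISCOUNT
  FROM SalesOrderHeader;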

 

Also, code that requires temp tables can be difficult to test, so document the temp table well enough that a tester can create and populate it before running the code. For example, if a stored procedure expects a temp table to be present, I document the temp table definition in the procedure's header comments.
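
As a sketch, here's what a tester would run before calling such a procedure, copied straight from the procedure's header comments (the procedure name, table layout, and sample values are all hypothetical):

-- Create and populate the temp table the procedure expects:
DECLARE GLOBAL TEMPORARY TABLE SESSION.PICK_ITEMS (
    ORDER_ID INT      NOT NULL,
    ITEM_ID  INT      NOT NULL,
    QTY      DEC(9,2) NOT NULL)
ON COMMIT PRESERVE ROWS
WITH REPLACE;

INSERT INTO SESSION.PICK_ITEMS VALUES (70001, 5, 2.00);

-- Now the code under test can be invoked:
CALL ADVWORKS.PROCESS_PICK_LIST();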

 

I'll digress to RPG and client/server for a moment here. Many times, the click of a GUI button or the press of a 5250 function key is tied directly to specific code. For testing, it's important that a UI function not be tied directly to code. Instead, the UI function should call a subprocedure or method (passing all relevant parameters) so that the code it runs is testable by a mechanism other than a user. It's more work, but it'll be worth it when your testing is underway. The point is that code that's directly callable is the easiest to test. Where possible, make sure your methods/procedures/functions/subprocedures are independent and callable as standalone units.

 

Don't Write Code That Depends on Hard-Coded Library/Schema References

Relying on a specific library/schema reference can often derail testing by subjecting code to the wrong data schema (say, a production schema instead of a test schema) or the wrong code object schema. In particular with DB2 for i, when using the system naming convention, make sure the library list is set up correctly at the beginning of the job (using code, a particular user profile or job description, etc.). 

 

Make sure the SQL CURRENT_SCHEMA and PATH registers are set correctly so that unqualified objects can use either test or production data, depending on the library list. When writing code using the *SQL naming convention, remember that unqualified data object references are bound at compile time, so you will have to maintain multiple copies of the compiled object (e.g., one object per dev, test, and production environment).
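
For example, a test setup script for dynamic SQL might point both registers at the test environment (the ADVWORKS_TEST library name below is hypothetical):

SET CURRENT SCHEMA ADVWORKS_TEST;         -- unqualified tables and views
SET PATH = "SYSTEM PATH", ADVWORKS_TEST;  -- unqualified functions and procedures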

 

Likewise, IFS file and folder paths should not be hardcoded in an application. A table, data area, global variable, environment variable, etc. can be used to store this information.
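
One option is an SQL global variable; a sketch (with hypothetical names and paths) might look like this:

-- Created once per environment, with an environment-specific default:
CREATE VARIABLE ADVWORKS.EXPORT_ROOT VARCHAR(256)
    DEFAULT '/appdata/exports';

-- Application and test code reference the variable instead of a
-- hard-coded path; a test setup script can override it:
SET ADVWORKS.EXPORT_ROOT = '/testdata/exports';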

How Do I Set Up Test Data?

I suspect that for most application teams, getting test data is by far the biggest stumbling block for testing. Many times, continually updating a test environment with live data is infeasible due to lack of disk space or other resources. Further, once the test data is in place, a bad code change or a test that purposely modifies data can create the need to refresh the data all over again. In fact, for testing to be feasible, a frequent and timely restoration of data is a requirement.

 

Remember this: when testing, you don't need an entire copy of your application data. What you need is a subset of the static data. If your application has 20 years of history from 1994–2014, chances are most of the data prior to 2014 will be unchanging. Further, the years 2012–2013 are likely to have many examples of the data scenarios you want to test against.

 

For a given application, a first step to writing reusable tests is to obtain a subset of the application's data. This usually involves all of the master data (item master, customer master, etc.) and a subset of the transaction data (purchase orders, customer orders, transaction history, etc.). Further, I clear out many auditing/application log tables, which tend to grow large over time but usually aren't needed. It isn't an easy task to extract a useful slice of data.
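
As a starting point, recent transaction data can often be carved out with CREATE TABLE ... AS. This is a sketch; the test library name and date cutoff are illustrative:

-- Copy only recent shipments into the test library:
CREATE TABLE ADVWORKS_TEST.SalesOrderHeader AS
    (SELECT *
       FROM ADVWORKS.SalesOrderHeader
      WHERE SHIPDATE >= '2012-01-01')
    WITH DATA;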

 

Once the test data has been isolated, the libraries should be backed up to tape or save files, where they can be easily accessed again in the future. Remember, before running tests, this data will need to be restored to make sure it is in a known state.
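
On IBM i, the save and restore can even be scripted from SQL via QSYS2.QCMDEXC. This is a sketch; the library and save file names are hypothetical, and the save file must already exist:

-- Save the pristine test data once:
CALL QSYS2.QCMDEXC('SAVLIB LIB(ADVWORKS_TEST) DEV(*SAVF) SAVF(QGPL/TESTBASE)');

-- Restore it to a known state before each test run:
CALL QSYS2.QCMDEXC('RSTLIB SAVLIB(ADVWORKS_TEST) DEV(*SAVF) SAVF(QGPL/TESTBASE)');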

 

Yes, the test data will grow old over time, but it's always possible to supplement it with new data (such as a new sales order or two) to help satisfy new testing requirements. In other words, test data can be enhanced as needed, but the additions will need to be backed up as part of the base data.

Alternatively, a test script is free to add new data as required for a test. For example, a script can add a special type of expedited shipment request to the data, process the shipment (for example, examining a tracking number), and evaluate the results. Whether the data can be added "on the fly" by the unit test or be placed in the base data set depends on the complexity of the data.

 

Another way to test in DB2 (when journaling is active) is to run a test within a transaction that can be rolled back. That way, the changes can be made and validated, and the data reset to its initial state.
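
A sketch of that pattern follows, assuming the tables are journaled and the script runs under commitment control; the column names and values are hypothetical:

-- Add test-specific data and exercise the code under test:
INSERT INTO ADVWORKS.SalesOrderHeader (SALESORDERID, SHIPDATE, SUBTOTAL)
    VALUES (99001, CURRENT_TIMESTAMP, 100.2500);

-- ... call the procedure under test and validate its output here ...

-- Put the data back exactly as it was:
ROLLBACK;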

 

One drawback to developing and testing with a small subset of data is that performance problems in the code will often slip by. A query that runs well against 15,000 rows in a test library may die a horrible death against millions of rows in production. Performance tests aren't the goal here, but it is something to be aware of.

How Do I Run the DB2 Tests I Write?

Testing can be done in many ways, ranging from manually running your own scripts after you've made a code change to full-blown automated testing that runs nightly or even whenever a code change is moved from development to test. In the absence of a framework, you're on your own to write a processing engine (which is relatively painless) that can run all of the tests and return results.

 

If you're writing your own processor, listed below are some major features and requirements to consider:

  • A minimal yet complete set of data that encompasses the set of business rules to be tested must be created. For IBM i applications, this would typically be one or more libraries and perhaps some IFS files. As discussed in the prior section, this is by far the biggest challenge, but it's extremely rewarding when in place.
  • The first task done by the engine would be to restore the test schemas and IFS folders so that the data is in a known state when the tests begin.
  • SQL scripts (or embedded SQL programs or both) that execute a test must be written (some sample scripts were shown in Part 1).
  • An engine for processing the scripts or programs is required (a minimal sketch of such an engine follows this list). This would typically be a program that reads through a list of SQL source members or embedded SQL programs to execute. The order of the tests should be controllable by name (alphabetical order), category, or some user-defined ordering scheme (because you may find you want certain tests to run before others). Further, the engine must record the status of each test when it completes (pass/fail). Finally, when the tests are completed, the engine should be able to report the status of each test along with a description for each failure.
  • Some tests will need to modify data. One test's data modification can ruin another test's expectation for the state of the data! For this reason, a test that changes data should be able to give some kind of feedback to the processing engine, indicating that the data should be restored when it's done. Whether a single table or the whole data set should be restored is up to the developer (assuming the processing engine can handle this level of complexity).
  • Be mindful of the order in which the tests are executed. Ideally, tests that perform read-only operations should be executed together while the data is in main storage. Restoring data will force the data to be dropped from main storage and reloaded.
  • Depending on the testing load, the engine can be started each night or directly after developers move changes into the test environment.
  • The test processing engine should have its library list established correctly before starting the processing.
  • In the event of an unhandled exception, such as a decimal data error, the engine should be able to capture and respond to any message inquiries on the job's message queue in order to keep the process going (counting the test as failed).
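
To make the engine idea concrete, here's a minimal sketch of a homegrown test registry, results log, and driver procedure. Everything here (the TESTFW schema, the table layouts, and the procedure) is hypothetical; a real engine would add ordering categories, data-restore handling, and richer failure messages:

-- Registry of tests; each row holds one SQL statement to execute
CREATE TABLE TESTFW.TEST_CASE (
    TEST_ID      INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    TEST_NAME    VARCHAR(128)  NOT NULL,
    RUN_ORDER    INT           NOT NULL,
    TEST_SQL     VARCHAR(4096) NOT NULL,
    RESTORE_DATA CHAR(1)       NOT NULL DEFAULT 'N');

-- Pass/fail log written by the driver
CREATE TABLE TESTFW.TEST_RESULT (
    TEST_ID INT       NOT NULL,
    RUN_TS  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    STATUS  CHAR(4)   NOT NULL,
    MESSAGE VARCHAR(1024));

-- Driver: run every test in order; an SQL error marks the test FAIL
-- but doesn't stop the run
CREATE OR REPLACE PROCEDURE TESTFW.RUN_ALL_TESTS()
LANGUAGE SQL
BEGIN
    FOR EACH_TEST AS
        SELECT TEST_ID, TEST_SQL
          FROM TESTFW.TEST_CASE
         ORDER BY RUN_ORDER
    DO
        BEGIN
            DECLARE V_STATUS CHAR(4) DEFAULT 'PASS';
            DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
                SET V_STATUS = 'FAIL';
            EXECUTE IMMEDIATE TEST_SQL;
            INSERT INTO TESTFW.TEST_RESULT (TEST_ID, STATUS)
                VALUES (TEST_ID, V_STATUS);
        END;
    END FOR;
END;

Note that the driver only records whether each statement executed cleanly; each test's own SQL would log its specific assertions and failure descriptions to the results table.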

More Acknowledgements on the "Ugly" Side of Testing

If you think this is starting to sound like too much work, your intuition serves you well. Here are some more things to think about:

  • Accept the fact that some important code in a legacy application will need to be refactored into modular units before it can be considered testable. In DB2, you may need to break apart procedures into multiple pieces (again, carefully watching for performance problems). In procedural programs, there are often embedded subroutines that just need to be ripped out and recoded as subprocedures so that they can be tested externally.
  • To get started, select the most important business applications that have code conducive to testing.
  • Since unit tests require careful planning, you won't be able to create them all at once. Unit test libraries build up over the years. As the test count grows, a full run can take several hours.
  • It may take some effort to convince management that writing unit tests is a worthwhile endeavor. It comes down to this: a developer can spend valuable time fixing bugs and data after an application fails (and costs the business money) or spend the time up front writing tests. These tests may seem like a waste at first, but as they grow and start identifying bugs, you'll realize they're like a good insurance policy that protects an important asset: the business app(s). Moreover, once a large number of tests are in place, it becomes easier to make difficult changes to the application. How many times have developers just said, "No way. That change is too drastic"? Testing can help!
  • Write tests while developing code when the possible scenarios and problems are fresh in your mind.
  • Unit and quality test failure messages should be as specific as possible.
  • Sometimes a legitimate application change will cause test scripts to fail. In these cases, the tests will have to be reviewed and modified to match the new logic changes.

The Impetus for Change

I used to scoff at the idea of taking time to set up a data environment and write tests for code, especially database code. My current position as a SQL Server DBA for a large ERP application changed my mind. There were several times I made seemingly safe, innocent changes that the unit tests (written by other developers) revealed to be disasters.

 

These particular changes probably would've made it past QA testing and landed in our clients' production environments. The problem was that some areas of the application had recently been changed and consequently conflicted with what I did, but neither I nor the other developers knew it.

 

Now that we have many unit tests implemented, I have much greater confidence about making complicated database changes. I recently dared to overhaul a primary, poorly designed large table within the application. I looked through the code and thought I had completed all the necessary changes. Sure enough, the unit tests caught about five things I missed. Without the tests, I probably would've shied away from this risky yet needed improvement.

 

Modern applications are usually too complicated to tackle without some sort of assistance. Implementing unit testing safeguards against regressions from application changes and also allows developers unfamiliar with the app to participate in making code changes because there will often be feedback if something breaks. Tests can also serve as a crude training tool because they demonstrate how code is expected to run.

 

The same techniques illustrated here for DB2 for i can also be applied to high-level languages (HLLs). Implementing RPG, C, COBOL, and DB2 unit tests doesn't alleviate the need for QA and UI testing. Unit tests are just one piece of the testing pie that developers should get in the habit of doing. Software is never cheaper to maintain and more pleasurable to use than when it's relatively bug-free (and can be safely modified).

Michael Sansoterra is a DBA for Broadway Systems in Grand Rapids, Michigan.

