This tip brainstorms a few approaches for automatically testing various aspects of DB2 for i code.
In the first tip, I showed various ways to test DB2 functions, procedure results, and triggers for logic accuracy and security. However, the tip never covered how these tests, once written, are to be processed. In particular, questions such as these need further discussion:
- What coding habits create easily testable code?
- How do I set up test data?
- How do I automatically run the tests?
This tip's title phrase is "Biting the Bullet" for good reason; testing requires quite a bit of undesirable work. This tip assumes that you don't have a testing framework in place and makes suggestions for how to create one (although it is possible to use a Java- or .NET-based framework to do this kind of testing).
What Coding Habits Create Easily Testable Code?
Some code segments are easy to test, and other code segments aren't. What can be done to make code as easy as possible to test? Here are a few suggestions:
Parameterize Your SQL Objects Instead of Relying on Special Registers
When coding something to be testable, don't depend on special registers (e.g., CURRENT_DATE or CURRENT_SERVER) within the code segment because their values can vary. Rather, parameterize the code so that these values can be passed in as parameters when needed. When testing, hard-coded values can then be passed to the parameters so that the expected result is known in advance.
Consider the following simple function that relies on the CURRENT_DATE register to get a daily shipment total:
CREATE OR REPLACE FUNCTION ADVWORKS.GET_CURRENT_SHIPMENT_VALUE()
RETURNS DEC(17,4)
LANGUAGE SQL
BEGIN
  RETURN (
    SELECT SUM(SUBTOTAL)
      FROM SalesOrderHeader
     WHERE CAST(SHIPDATE AS DATE) = CURRENT_DATE);
END;
The problem with this is that the value will change every day, making it difficult to test (especially if your test data is old!). The correct way to code it is shown here:
CREATE OR REPLACE FUNCTION ADVWORKS.GET_CURRENT_SHIPMENT_VALUE
  (@SHIPDATE DATE DEFAULT CURRENT_DATE)
RETURNS DEC(17,4)
LANGUAGE SQL
BEGIN
  RETURN (
    SELECT SUM(SUBTOTAL)
      FROM SalesOrderHeader
     WHERE CAST(SHIPDATE AS DATE) = @SHIPDATE);
END;
In this case, the application can always invoke the function with CURRENT_DATE, and the testing code can pass one or more specific known dates as hard-coded values. When choosing data to test, pick a ship date whose total is difficult to calculate because of what happened during the day (canceled shipments, credits, etc.). The simple function here doesn't show anything like that, but real application functions are more complex.
Making values parameterized will help you write "deterministic" code—that is, the output of the code will always be the same for a given set of inputs. Deterministic code is ideal for testing.
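To illustrate, here is a minimal sketch of what an automated check against the parameterized function might look like. The ship date and the expected total are hypothetical values assumed to be known in the test data:

-- Hypothetical test: 2013-06-15 and 125000.0000 are assumed known values in the test data
SELECT CASE
         WHEN ADVWORKS.GET_CURRENT_SHIPMENT_VALUE(DATE('2013-06-15')) = 125000.0000
              THEN 'PASS'
         ELSE 'FAIL'
       END AS TEST_RESULT
  FROM SYSIBM.SYSDUMMY1;

Because the function is deterministic, the same call returns the same total every time the base test data is restored, so the script can be rerun indefinitely.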
Write Modular Code
Writing code to do a small, specific, understandable task always makes testing easier because the inputs and outputs are well understood. For example, I'd much rather test individual subprocedures in an RPG service program than try to test a behemoth monolithic OPM RPG program whose subroutines and variables can't be isolated. The same holds true with SQL: I'd rather test several smaller procedures and functions than one huge procedure. When too much code is lumped into one large program, it's hard to assess whether the various pieces of logic do their jobs correctly under various circumstances.
The only problem with modular coding in SQL is that it often suffers from performance problems. User-defined functions (UDFs) and complex "reusable" views are two primary examples where performance penalties are incurred. Regarding UDFs, when the logic is relatively simple, an expression performs better than a UDF (even though expressions can be harder to maintain in the long run). Regarding reusable views, they often include many table references that are not consistently required by the host query. I generally avoid them because DB2 can end up wasting time doing unnecessary joins. Instead, I often code queries without views, accepting a “code maintenance hit” by specifying base tables and criteria over and over in various application queries. There are no easy answers, so choose carefully!
Also, code that requires temp tables can be difficult to test, so make sure the temp table is well-documented so a tester can create and populate it before running the code. For example, if a stored procedure expects a temp table to be present, I document the temp table definition in the procedure's header comments.
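For illustration, here is a minimal sketch of what a test script might run before calling such a procedure, based on the definition documented in the procedure's header comments. The procedure name, table name, and sample rows are all hypothetical:

-- Create and load the temp table the procedure expects (definition copied from its header comments)
DECLARE GLOBAL TEMPORARY TABLE SESSION.PICK_LIST
   (ORDER_ID INTEGER NOT NULL,
    ITEM_ID  INTEGER NOT NULL,
    QUANTITY DEC(9,2) NOT NULL)
   WITH REPLACE;

INSERT INTO SESSION.PICK_LIST VALUES (43659, 776, 1), (43659, 777, 3);

CALL ADVWORKS.PROCESS_PICK_LIST();   -- hypothetical procedure under test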
I'll digress to RPG and client/server for a moment here. Many times, the click of a GUI button or press of a 5250 function key is tied directly to specific code. For testing, it's important that a UI function is not tied directly to code. Instead, the UI function should call a subprocedure or method (passing all relevant parameters) so that the code it runs is testable by a mechanism other than a user. It's more work, but it'll be worth it when your testing is underway. The point is that code that's directly callable is the easiest to test. Where possible, make sure your methods/procedures/functions/subprocedures are independent and callable as standalone units.
Don't Write Code That Depends on Hard-Coded Library/Schema References
Relying on a specific library/schema reference can often derail testing by subjecting code to the wrong data schema (say, a production schema instead of a test schema) or the wrong code object schema. In particular with DB2 for i, when using the system naming convention, make sure the library list is set up correctly at the beginning of the job (using code, a particular user profile or job description, etc.).
Make sure the SQL CURRENT SCHEMA and PATH special registers are set correctly so that unqualified object references resolve to either test or production data as intended. When writing code that uses the *SQL naming convention, remember that unqualified data object references in static (embedded) SQL are bound at compile time, so you will have to maintain multiple copies of the compiled object (e.g., one per development, test, and production environment).
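For example, a test job could establish its environment with statements like these before any tests run (the library names are hypothetical):

-- *SQL naming: point unqualified references at the test environment
SET CURRENT SCHEMA = TESTDATA;          -- unqualified tables and views
SET PATH = TESTCODE, CURRENT PATH;      -- unqualified functions and procedures

-- System naming: set the job's library list instead
CALL QSYS2.QCMDEXC('CHGLIBL LIBL(TESTDATA TESTCODE QGPL)');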
Likewise, IFS file and folder paths should not be hardcoded in an application. A table, data area, global variable, environment variable, etc. can be used to store this information.
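As one example, a global variable keeps the folder out of the application code; the test and production schemas can each hold their own copy with an appropriate default. The names and path here are hypothetical:

-- Created once per environment with the path appropriate to that environment
CREATE OR REPLACE VARIABLE ADVWORKS.EXPORT_FOLDER VARCHAR(256)
   DEFAULT '/appdata/test/export/';

-- Application and test code both read the variable instead of a hard-coded path
SELECT ADVWORKS.EXPORT_FOLDER FROM SYSIBM.SYSDUMMY1;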
How Do I Set Up Test Data?
I suspect that for most application teams, getting test data is by far the biggest stumbling block for testing. Many times, continually updating a test environment with live data is infeasible due to lack of disk space or other resources. Further, once the test data is in place, a bad code change or a test that purposely modifies data can create the need to refresh the data all over again. In fact, for testing to be feasible, you must be able to restore the data to a known state quickly and frequently.
Remember this: when testing, you don't need an entire copy of your application data. What you need is a subset of the static data. If your application has 20 years of history from 1994–2014, chances are most of the data prior to 2014 will be unchanging. Further, the years 2012–2013 are likely to have many examples of the data scenarios you want to test against.
For a given application, a first step to writing reusable tests is to obtain a subset of the application's data. This usually involves all of the master data (item master, customer master, etc.) and a subset of the transaction data (purchase orders, customer orders, transaction history, etc.). Further, I clear out many auditing/application log tables, which tend to grow large over time but usually aren't needed. It isn't an easy task to extract a useful slice of data.
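As a simple illustration of the extraction step, a slice of one transaction table could be copied into a test library like this (the library names and date column are hypothetical, and a real extract must keep related tables consistent with each other):

-- Copy only recent sales orders into the test library
CREATE TABLE TESTDATA.SalesOrderHeader AS
   (SELECT *
      FROM PRODDATA.SalesOrderHeader
     WHERE ORDERDATE >= '2012-01-01')
   WITH DATA;

Keep in mind that CREATE TABLE AS copies only the column definitions and data; keys, indexes, constraints, and triggers would need to be re-created separately.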
Once the test data has been isolated, the libraries should be backed up to tape or save files, where they can be easily accessed again in the future. Remember, before running tests, this data will need to be restored to make sure it is in a known state.
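For instance, the prepared test library could be saved to a save file once and then restored to that known state at the start of every test run. A minimal sketch using hypothetical names (QSYS2.QCMDEXC runs a CL command from SQL):

-- One-time setup: create the save file and save the prepared test library
CALL QSYS2.QCMDEXC('CRTSAVF FILE(QGPL/TESTDTASAV)');
CALL QSYS2.QCMDEXC('SAVLIB LIB(TESTDATA) DEV(*SAVF) SAVF(QGPL/TESTDTASAV)');

-- Before each test run: restore the library to its known state
CALL QSYS2.QCMDEXC('RSTLIB SAVLIB(TESTDATA) DEV(*SAVF) SAVF(QGPL/TESTDTASAV)');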
Yes, the test data will grow old over time, but it's always possible to enhance the data with a new piece of data (such as a new sales order or two) to help satisfy new testing requirements. In other words, test data can be enhanced as needed, but it will need to be backed up as part of the base data.
Alternatively, a test script is free to add new data as required for a test. For example, a script can add a special type of expedited shipment request to the data, process the shipment (for example, examining a tracking number), and evaluate the results. Whether the data can be added "on the fly" by the unit test or be placed in the base data set depends on the complexity of the data.
Another way to test in DB2 (when journaling is active) is to run a test within a transaction that can be rolled back. That way, the changes can be made and validated, and the data can then be reset to its initial state.
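Here is a minimal sketch of that pattern, assuming the job is running under commitment control and the tables involved are journaled; the procedure name, status column, and key value are hypothetical:

-- Make a change as part of the test
CALL ADVWORKS.CANCEL_SHIPMENT(43659);   -- hypothetical procedure under test

-- Validate the effect
SELECT CASE WHEN STATUS = 'CANCELED' THEN 'PASS' ELSE 'FAIL' END AS TEST_RESULT
  FROM ADVWORKS.SalesOrderHeader
 WHERE SALESORDERID = 43659;

-- Put the data back the way it was
ROLLBACK;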
One drawback to developing and testing with a small subset of data is that performance problems in the code will often slip by. A query that runs well against 15,000 rows in a test library may die a horrible death against millions of rows in production. Performance tests aren’t the goal here, but it is something to be aware of.
How Do I Run the DB2 Tests I Write?
Testing can be done in many ways, ranging from manually running your own scripts after you've made a code change to full-blown automated testing that runs nightly or even whenever a code change is moved from development to test. In the absence of a framework, you're on your own to write a processing engine (which is relatively painless) that can run all of the tests and return results.
If you're writing your own processor, listed below are some major features and requirements to consider:
- A minimal yet complete set of data that encompasses the set of business rules to be tested must be created. For IBM i applications, this would typically be one or more libraries and perhaps some IFS files. As discussed in the prior section, this is by far the biggest challenge, but it's extremely rewarding when in place.
- The first task done by the engine would be to restore the test schemas and IFS folders so that the data is in a known state when the tests begin.
- SQL scripts (or embedded SQL programs or both) that execute a test must be written (some sample scripts were shown in Part 1).
- An engine for processing the scripts or programs is required. This would typically be a program that reads through a list of SQL source members or embedded SQL programs to execute. The order of the tests should be controllable by name (alphabetical order), category, or some user-defined ordering scheme (because you may find you want certain tests to run before others). Further, the engine must record the status of each test when it completes (pass/fail). Finally, when the tests are completed, the engine should be able to report the status of each test along with a description for each failure. (A minimal sketch of the kind of registry and results tables such an engine might use appears after this list.)
- Some tests will need to modify data. One test's data modification can ruin another test's expectation for the state of the data! For this reason, a test that changes data should be able to give some kind of feedback to the processing engine, indicating that the data should be restored when it's done. Whether a single table or the whole data set should be restored is up to the developer (assuming the processing engine can handle this level of complexity).
- Be mindful of the order in which the tests are executed. Ideally, tests that perform read-only operations should be executed together while the data is in main storage. Restoring data forces it to be dropped from main storage and reloaded.
- Depending on the testing load, the engine can be started each night or directly after developers move changes into the test environment.
- The test processing engine should have its library list established correctly before starting the processing.
- In the event of an unhandled exception, such as a decimal data error, the engine should be able to capture and respond to any inquiry messages on the job’s message queue in order to keep the process going (counting the test as a failure).
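To make the engine idea a bit more concrete, here is a minimal sketch of the registry and results tables a home-grown engine might use; every name here is hypothetical:

-- What to run and in what order
CREATE TABLE TESTLIB.TEST_REGISTRY
   (TEST_ID       INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    TEST_NAME     VARCHAR(128) NOT NULL,
    CATEGORY      VARCHAR(30)  NOT NULL,
    RUN_SEQUENCE  INTEGER      NOT NULL,            -- user-defined ordering within a category
    SOURCE_MEMBER VARCHAR(10),                      -- SQL script member (or program) to execute
    MODIFIES_DATA CHAR(1)      NOT NULL DEFAULT 'N' -- 'Y' = restore test data afterward
   );

-- One row per test execution
CREATE TABLE TESTLIB.TEST_RESULT
   (TEST_ID     INTEGER   NOT NULL,
    RUN_TS      TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP,
    STATUS      CHAR(4)   NOT NULL,                 -- 'PASS' or 'FAIL'
    FAIL_REASON VARCHAR(500)
   );

The engine itself would read TEST_REGISTRY ordered by CATEGORY and RUN_SEQUENCE, execute each script or program, log a row to TEST_RESULT, and restore the test data whenever a completed test has MODIFIES_DATA = 'Y'.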
More Acknowledgements on the "Ugly" Side of Testing
If you think this is starting to sound like too much work, your intuition serves you well. Here are some more things to think about:
- Accept the fact that some important code in a legacy application will need to be refactored into modular pieces before it can be considered testable. In DB2, you may need to break apart procedures into multiple pieces (again, carefully watching for performance problems). In procedural programs, there are often embedded subroutines that just need to be ripped out and recoded as subprocedures so that they can be tested externally.
- To get started, select the most important business applications that have code conducive to testing.
- Since unit tests require careful planning, you won't be able to create them all at once. Unit test libraries build up over the years. As the test count grows, the execution time can grow to several hours.
- It may take some effort to convince management that writing unit tests is a worthwhile endeavor. It comes down to this: a developer can spend valuable time fixing bugs and data after an application fails (and costs the business money) or spend the time up front writing tests. These tests may seem like a waste at first, but as they grow and start identifying bugs, you'll realize they're like a good insurance policy that protects an important asset: the business app(s). Moreover, once a large number of tests are in place, it becomes easier to make difficult changes to the application. How many times have developers just said, "No way. That change is too drastic."? Testing can help!
- Write tests while developing code when the possible scenarios and problems are fresh in your mind.
- Unit and quality test failure messages should be as specific as possible.
- Sometimes a legitimate application change will cause test scripts to fail. In these cases, the tests will have to be reviewed and modified to match the new logic changes.
The Impetus for Change
I used to scoff at the idea of taking time to set up a data environment and write tests for code, especially database code. My current position as a SQL Server DBA for a large ERP application changed my mind. Several times I made seemingly safe, innocent changes that the unit tests (written by other developers) revealed to be disasters in the making.
These particular changes probably would've made it past QA testing and landed in our clients' production environment. The problem was that some areas of the application had been recently changed and consequently conflicted with what I did, but neither of us knew it.
Now that we have many unit tests implemented, I have much greater confidence about making complicated database changes. I recently dared to overhaul a large, central, and poorly designed table within the application. I looked through the code and thought I had completed all the necessary changes. Sure enough, the unit tests caught about five things I missed. Without the tests, I probably would've shied away from this dangerous yet needed improvement.
Modern applications are usually too complicated to tackle without some sort of assistance. Implementing unit testing safeguards against regressions when the application changes and also allows developers unfamiliar with the app to participate in making code changes, because there will often be feedback if something is broken. Tests can also serve as a crude training tool, as they demonstrate how the code is expected to behave.
The same techniques illustrated here for DB2 for i can also be applied to high-level languages (HLLs). Implementing RPG, C, COBOL, and DB2 unit tests doesn't alleviate the need for QA and UI testing. Unit tests are just one piece of the testing pie that developers should get in the habit of doing. Software is never cheaper to maintain and more pleasurable to use than when it's relatively bug-free (and can be safely modified).