Disy Tech-Blog

TDD for PL/SQL Development

Setting up a Testing Framework with JUnit

18.05.2016 | Karsten Schnitter

Are you extensively developing routines in PL/SQL? We share our experience in setting up a TDD environment to aid the process.

Disy Spatial Workbench

For extensive spatial calculations we have developed our own framework, the Disy Spatial Workbench. It consists of a library of PL/SQL packages and a Java-based batch engine that facilitates the execution of database routines and statements using these packages. The PL/SQL routines range from general utilities to highly specialized algorithms. They can be used for ad-hoc analyses as well as within the full framework. With this setup the Disy Spatial Workbench enables very complex analyses, one example being the noise level mapping for the German Federal Railway Authority.

As the PL/SQL routines are widely used, we want to ensure their correct behavior with automated tests. To this end we developed our own test framework. A lot of useful functionality is already contained in the Disy Spatial Workbench, so we built on that foundation.

Let’s have a look at the ideas!

Test Requirements

The most basic requirement is an Oracle database where the PL/SQL routines do their work. Now we need a framework that helps run tests against that database. We decided to use JUnit and our Java libraries to develop our own test framework.

Basic Test Building Blocks

These are the basic steps our testing framework should provide:

  • deploy the current version of the tested packages
  • create tables and insert test data
  • run the tested routines
  • access the result data and assert its correctness
  • clean up the database for the next test runs

These are the functional requirements for the testing framework. But there are also some non-functional requirements that need addressing:

  • reproducibility
  • short test running time
  • suitability for continuous integration (Jenkins)
  • execution of multiple tests in parallel
  • debugging of failing tests

Depending on your project you might want to drop one or the other of these requirements. For example, the ability to run multiple test instances at the same time might not be needed. Nonetheless, our testing framework addresses all of these issues.

Setting up the Database

The major decision for the test framework is which structures should be used within the database. At first glance one could think:

“Why don’t I just create one test user and run all test instructions inside one transaction? After the test I roll back the transaction and everything should be fine.”

This standard procedure for many database tests cannot be applied in our scenario, since we want to deploy the current version of our PL/SQL packages: if the PL/SQL routines were faulty, they would leave the test schema in a broken state. Furthermore, the restriction to one transaction is too narrow for our scenario. Finally, different developers may need different versions of the PL/SQL code, so they should at least have a database user each. For our test framework we concluded that we needed to do more than this.

We decided to create a new user/schema for each test. After the test the schema is dropped. The advantages of this choice are:

  • every test starts with a clean and consistent state of the database
  • running multiple tests in parallel is possible
  • no clean up for the next test is required
  • test schemas are available for debugging without blocking other tests
  • seamless continuous integration

However there are some disadvantages with this approach:

  • every test needs to create the full schema including all required tables and packages
  • SYS DBA access or similar rights are required for creating new users

Always creating the full schema can be time consuming and slow down the TDD experience. We compromised by creating only the tables required for the current test. Most of the time during schema creation is then spent deploying the packages under test, but this step needs to be done in any case. Using a privileged account to create a new user was no problem in our case, as we were already using a dedicated database server for development.
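Schema names like WB_TEST_059606 shown below are derived from a sequence value. The zero-padded format can be sketched in Java; the class and method names are our own illustration, not the actual Workbench code:

```java
// Sketch: derive a unique test schema name from a database sequence value.
// The WB_TEST_ prefix and six-digit zero padding match the example name
// WB_TEST_059606 used below; the class name is hypothetical.
public class TestSchemaNames {

  static final String PREFIX = "WB_TEST_";

  // In production the value would come from SELECT <seq>.NEXTVAL FROM DUAL.
  public static String forSequenceValue(long sequenceValue) {
    return String.format("%s%06d", PREFIX, sequenceValue);
  }
}
```

For example, `forSequenceValue(59606)` yields `WB_TEST_059606`.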

The statements executed to create a test schema look like the following:

CREATE BIGFILE TABLESPACE WB_TEST_059606 
    DATAFILE '/opt/oracle/oradata/testdb1/WB_TEST_059606.DBF' 
    SIZE 10M REUSE AUTOEXTEND ON NEXT 10M MAXSIZE 100M
CREATE USER WB_TEST_059606 IDENTIFIED BY WB_TEST_059606 
    DEFAULT TABLESPACE WB_TEST_059606 TEMPORARY TABLESPACE temp
GRANT CREATE SESSION to WB_TEST_059606
GRANT CREATE TABLE to WB_TEST_059606
GRANT CREATE PROCEDURE to WB_TEST_059606
GRANT CREATE SEQUENCE to WB_TEST_059606
GRANT CREATE TRIGGER to WB_TEST_059606
GRANT CREATE TYPE to WB_TEST_059606
GRANT CREATE LIBRARY to WB_TEST_059606
GRANT CREATE OPERATOR to WB_TEST_059606
GRANT CREATE INDEXTYPE to WB_TEST_059606
GRANT UNLIMITED TABLESPACE to WB_TEST_059606
GRANT DEBUG CONNECT SESSION TO WB_TEST_059606
GRANT DEBUG ANY PROCEDURE TO WB_TEST_059606
GRANT CREATE VIEW TO WB_TEST_059606
GRANT CREATE ANY DIRECTORY TO WB_TEST_059606

As you can see, we create rather small tablespaces. We can tailor the rights of the created schema to our needs; for example, we might grant access to tables of certain other users. The name WB_TEST_059606 is generated from a sequence in the SYS schema. Before these statements are executed, additional checks verify that the user does not already exist and that enough disk space is available.

Tearing down a test schema is just as simple:

DROP USER WB_TEST_059606 CASCADE
DROP TABLESPACE WB_TEST_059606 
    INCLUDING CONTENTS AND DATAFILES CASCADE CONSTRAINTS
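Generating both statement sets from the schema name is straightforward. A minimal Java sketch (class and method names are our own, the grant list is abbreviated, and the datafile path would be configurable in a real implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: render the creation and teardown statements for a test schema.
// The statement texts mirror the examples above; the grant list is
// abbreviated and would be tailored to the tested packages.
public class TestSchemaStatements {

  public static List<String> createStatements(String name) {
    List<String> statements = new ArrayList<>();
    statements.add("CREATE BIGFILE TABLESPACE " + name
        + " DATAFILE '/opt/oracle/oradata/testdb1/" + name + ".DBF'"
        + " SIZE 10M REUSE AUTOEXTEND ON NEXT 10M MAXSIZE 100M");
    statements.add("CREATE USER " + name + " IDENTIFIED BY " + name
        + " DEFAULT TABLESPACE " + name + " TEMPORARY TABLESPACE temp");
    for (String privilege : new String[] {
        "CREATE SESSION", "CREATE TABLE", "CREATE PROCEDURE",
        "CREATE SEQUENCE", "CREATE TRIGGER", "CREATE TYPE",
        "UNLIMITED TABLESPACE"}) {
      statements.add("GRANT " + privilege + " TO " + name);
    }
    return statements;
  }

  public static List<String> dropStatements(String name) {
    List<String> statements = new ArrayList<>();
    statements.add("DROP USER " + name + " CASCADE");
    statements.add("DROP TABLESPACE " + name
        + " INCLUDING CONTENTS AND DATAFILES CASCADE CONSTRAINTS");
    return statements;
  }
}
```

Keeping statement generation separate from execution also makes the pre-flight checks (user existence, disk space) easy to slot in before anything is run.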

Implementing the Test Framework

We implemented our testing framework with the help of JUnit. This enables us to easily run the tests with Maven on our build server or from any Java IDE.

Test Setup via ExternalResource

Setting up a test environment is a showcase for JUnit’s ExternalResource rule. We created our own implementation of that abstract class, which creates a test database schema in the before() method and drops it in the after() method. Essentially it executes the statements from the previous sections with some refinements. It can be configured to deploy packages, create tables, and insert test data.
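A dependency-free sketch of such a rule: the real class extends org.junit.rules.ExternalResource and runs the statements over JDBC, while here the before()/after() pair is shown standalone and the statement executor is a plain Consumer, so the lifecycle is visible without a database. All names are illustrative:

```java
import java.util.List;
import java.util.function.Consumer;

// Sketch of a schema-per-test rule. In the real framework this class
// would extend org.junit.rules.ExternalResource; the executor is
// pluggable here so the lifecycle can be exercised without a database.
public class TestSchemaRule {

  private final Consumer<String> executor;
  private final List<String> packagesToDeploy;
  private String schemaName;

  public TestSchemaRule(Consumer<String> executor, List<String> packages) {
    this.executor = executor;
    this.packagesToDeploy = packages;
  }

  // Invoked by JUnit before the test (class rule) or before each test
  // method (method rule): create the schema and deploy the packages.
  public void before(String name) {
    this.schemaName = name;
    executor.accept("CREATE USER " + name + " IDENTIFIED BY " + name);
    for (String pkg : packagesToDeploy) {
      executor.accept(pkg); // CREATE OR REPLACE PACKAGE ... source text
    }
  }

  // Invoked by JUnit after the test: drop the whole schema again.
  public void after() {
    executor.accept("DROP USER " + schemaName + " CASCADE");
  }
}
```

Because the executor is injected, the same rule can be pointed at different database users for different developers.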

Using a JUnit rule has the benefit that it can be used either as a class rule, where it is invoked only once, or as a method rule, where it is invoked for every test method. In our case each invocation creates a new test schema. Using it as a class rule is beneficial when testing table-based functions that rely on the presence of some table but do not perform any updates or inserts.

We extended our rule to give access to the database schema itself. Any actions you might want to perform within a test, such as adding additional data or querying results, can be done with the help of our rule.

Example

Let’s look at a simple PL/SQL function that linearly interpolates between two given points.

CREATE OR REPLACE PACKAGE DSW_GEOM_UTIL AS

FUNCTION interpolate_point_at(
  in_start MDSYS.SDO_GEOMETRY,
  in_end MDSYS.SDO_GEOMETRY,
  in_ratio NUMBER)
  RETURN MDSYS.sdo_geometry;

END DSW_GEOM_UTIL;
CREATE OR REPLACE PACKAGE BODY DSW_GEOM_UTIL AS

FUNCTION interpolate_point_at(
  in_start MDSYS.SDO_GEOMETRY,
  in_end MDSYS.SDO_GEOMETRY,
  in_ratio NUMBER)
  RETURN MDSYS.sdo_geometry
AS
  out_array MDSYS.SDO_ORDINATE_ARRAY := MDSYS.SDO_ORDINATE_ARRAY();
BEGIN
  FOR l_index IN 1..in_start.get_dims()
  LOOP
    out_array.extend(1);
    out_array(l_index) := 
        in_ratio * in_end.sdo_ordinates(l_index) +
        (1 - in_ratio) * in_start.sdo_ordinates(l_index);
  END LOOP;
  RETURN MDSYS.SDO_GEOMETRY(in_start.sdo_gtype, 
                            in_start.sdo_srid, 
                            NULL, 
                            MDSYS.SDO_ELEM_INFO_ARRAY(1,1,1), 
                            out_array);
END interpolate_point_at;

END DSW_GEOM_UTIL;
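The arithmetic inside the loop is plain per-ordinate linear interpolation, out = ratio · end + (1 − ratio) · start. A Java reference version (our own, for comparison only, not part of the Workbench) makes this explicit:

```java
// Reference implementation of the per-ordinate linear interpolation
// performed by interpolate_point_at: out = ratio*end + (1 - ratio)*start.
// At ratio 0 it returns the start point, at ratio 1 the end point.
public class Interpolation {

  public static double[] interpolate(double[] start, double[] end, double ratio) {
    double[] out = new double[start.length];
    for (int i = 0; i < start.length; i++) {
      out[i] = ratio * end[i] + (1 - ratio) * start[i];
    }
    return out;
  }
}
```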

This is a very basic analytic function that does not need any tables for its execution. For testing it is sufficient to deploy the package in our test schema and then execute a statement like the following:

  SELECT DSW_GEOM_UTIL.interpolate_point_at(
            MDSYS.SDO_GEOMETRY(2001, 
                               25832, 
                               NULL, 
                               MDSYS.SDO_ELEM_INFO_ARRAY(1,1,1), 
                               MDSYS.SDO_ORDINATE_ARRAY(0.0, 0.0)),
            MDSYS.SDO_GEOMETRY(2001, 
                               25832, 
                               NULL, 
                               MDSYS.SDO_ELEM_INFO_ARRAY(1,1,1), 
                               MDSYS.SDO_ORDINATE_ARRAY(10.0, 0.0)),
            0.5) 
  FROM DUAL;

So let’s see what the tests for this function may look like:

@ClassRule
public static DSWTestSchema schema =
    new DSWTestSchema(DswPackages.DSW_GEOM_UTIL);

@Test
public void interpolateAtZeroGivesStartPoint() throws Exception {
  List<BigDecimal> resultOrdinates = schema.executeFunction(
      "DSW_GEOM_UTIL.interpolate_point_at("
          + "MDSYS.SDO_GEOMETRY("
              + "3301, 25832, NULL,"
              + "MDSYS.SDO_ELEM_INFO_ARRAY(1, 1, 1),"
              + "MDSYS.SDO_ORDINATE_ARRAY(0.0, 0.0, 0.0)),"
          + "MDSYS.SDO_GEOMETRY("
              + "3301, 25832, NULL,"
              + "MDSYS.SDO_ELEM_INFO_ARRAY(1, 1, 1),"
              + "MDSYS.SDO_ORDINATE_ARRAY(10.0, 20.0, 30.0))"
          + "0)"
      + ".sdo_ordinates",
      sqlArrayToListTransformer);
  assertThat(resultOrdinates, containsCoordinates(0.0, 0.0, 0.0));
}

@Test
public void interpolateAtOneGivesEndPoint() throws Exception {
  List<BigDecimal> resultOrdinates = schema.executeFunction(
      "DSW_GEOM_UTIL.interpolate_point_at("
          + "MDSYS.SDO_GEOMETRY("
              + "3301, 25832, NULL,"
              + "MDSYS.SDO_ELEM_INFO_ARRAY(1, 1, 1),"
              + "MDSYS.SDO_ORDINATE_ARRAY(0.0, 0.0, 0.0)),"
          + "MDSYS.SDO_GEOMETRY("
              + "3301, 25832, NULL,"
              + "MDSYS.SDO_ELEM_INFO_ARRAY(1, 1, 1),"
              + "MDSYS.SDO_ORDINATE_ARRAY(10.0, 20.0, 30.0)),"
          + "1)"
      + ".sdo_ordinates",
      sqlArrayToListTransformer);
  assertThat(resultOrdinates, containsCoordinates(10.0, 20.0, 30.0));
}

@Test
public void interpolateAtMidpoint() throws Exception {
  List<BigDecimal> resultOrdinates = schema.executeFunction(
      "DSW_GEOM_UTIL.interpolate_point_at("
          + "MDSYS.SDO_GEOMETRY("
              + "3301, 25832, NULL,"
              + "MDSYS.SDO_ELEM_INFO_ARRAY(1, 1, 1),"
              + "MDSYS.SDO_ORDINATE_ARRAY(2.0, 4.0, 7.0)),"
          + "MDSYS.SDO_GEOMETRY("
              + "3301, 25832, NULL,"
              + "MDSYS.SDO_ELEM_INFO_ARRAY(1, 1, 1),"
              + "MDSYS.SDO_ORDINATE_ARRAY(10.0, 20.0, 30.0)),"
          + "0.5)"
      + ".sdo_ordinates",
      sqlArrayToListTransformer);
  assertThat(resultOrdinates, containsCoordinates(6.0, 12.0, 18.5));
}

Clearly our test framework provides a very concise way to write tests. In this case the function under test does not change the state of the database apart from deploying the package. Therefore we used our database rule as a class rule, so that only one schema is created for all tests. Notice that some utility classes are used to extract the result data and assert its correctness. They too belong to our test framework as support classes.

The total running time of all these test cases on our test database is about 30 seconds. The vast majority of that time is spent creating the new schema and deploying the PL/SQL packages. However, the response time is fast enough to enable test-driven development of the PL/SQL routines.

Summary and Outlook

We showed how a test infrastructure for PL/SQL can be implemented using Java and JUnit. Having decided to create a small schema for every test run, we use a JUnit rule for setting up the database. This allows controlling the schema creation and grouping test cases that can run within the same schema. Creating tables and inserting test data is easily achieved.

Building on top of this basic idea, our test framework was expanded to provide the central database abstraction for the whole Disy Spatial Workbench. This allows executing batch jobs within integration tests while checking for connection leaks. Even complex jobs consisting of multiple steps, each executing different PL/SQL or plain SQL statements, are tested that way.

The development of our testing framework was greatly sped up by the fact that most of the functionality is also needed in the production code. However, you can implement all the functionality using JUnit alone or with the help of, e.g., Spring. When using a JPA provider like Hibernate, you would create the test schema when setting up the DataSource.

Overall, the development of complex database routines was greatly helped by this test framework. Even in its smallest implementation, as presented in the example, it allows unit testing of PL/SQL routines and of plain SQL statements alike. If bugs are encountered, stopping at a breakpoint allows inspecting the test schema to identify and eradicate the bug. While running times may be a little longer, they stay within an acceptable time frame.


The title image Wolfsburg, Volkswagen, Crashtest, VW Käfer-Karosserie was taken in 1973 by Lothar Schaak and published under CC-BY-SA 3.0 DE. We cropped the image and republish it under CC-BY-SA 3.0 DE.