Unit Test Automation Platform
Unit Test Automation Using Testing Framework Gallio
The aim is to define a framework for test cases. The software we are aiming to use for this purpose is called Gallio. This testing framework should be flexible enough to arrange test cases wherever they are needed according to the business logic. The test cases themselves consist of one or more unit tests. It should be possible to group unit tests as well as test cases and re-use them in other test cases. Several unit tests already exist and might be re-used and arranged in a different context. Once set up, these test cases can run automatically during the testing process and thus cover the testing of basic functionality quickly and reliably, without users or programmers having to perform it over and over again. It should also be possible to run different test scenarios in parallel.

There are several areas where recurring functionality has to be tested. Many of the functions depend on access rights, so users with various privileges should be set up beforehand. The tests described below should be run with privileged users first, in order to make sure that the corresponding actions work in general; at a later stage, users with more restrictive access rights can be used.

Gallio claims to handle all of this and should be thoroughly investigated. This task involves the setup and configuration of Gallio, getting familiar with unit tests, arranging unit tests into test cases, performing tests with different settings, developing unit tests further, and documenting the steps needed to configure Gallio as well as listing the pros and cons.
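Gallio itself targets .NET, so real tests would be written in C# against its MbUnit-style attributes. The grouping and re-use idea described above can nevertheless be sketched language-neutrally; here is a minimal illustration in Python's unittest terms, where the same unit tests are arranged into different test cases (suites). All class and test names are illustrative, not part of OpenPetra or Gallio:

```python
import unittest

# Hypothetical existing unit tests; the bodies are placeholders only.
class LoginTests(unittest.TestCase):
    def test_login_succeeds(self):
        self.assertTrue(True)  # stands in for a real login check

class ScreenTests(unittest.TestCase):
    def test_main_screen_opens(self):
        self.assertTrue(True)  # stands in for a real screen check

def build_test_case(test_classes):
    """Group existing unit tests into a larger, re-usable test case."""
    suite = unittest.TestSuite()
    for cls in test_classes:
        suite.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(cls))
    return suite

# The same unit tests re-arranged into two different test cases:
smoke_suite = build_test_case([LoginTests])
full_suite = build_test_case([LoginTests, ScreenTests])
```

The point is only the arrangement: a unit test is written once and then referenced from as many test cases as the business logic requires.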
OPENING SCREENS
This is a good place to start. A test case should be defined that opens all available screens one after the other without errors. A scan through the menu items should open the corresponding screens or produce an adequate error message. Since this also depends on the user's permissions, the test should be set up within one module at a time. The interaction between modules is then covered by the rights management tests.
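The menu scan could be sketched as a small driver that walks one module's menu and records every screen that fails to open. The functions `get_menu_items` and `open_screen` are assumptions standing in for whatever client API is actually available, not the real OpenPetra interface:

```python
# Sketch of a menu-scan test for one module; the two callables passed in
# are hypothetical stand-ins for the real client API.

def scan_module_screens(get_menu_items, open_screen, module):
    """Try to open every screen of one module; collect failures."""
    failures = []
    for item in get_menu_items(module):
        try:
            screen = open_screen(item)
            screen.close()
        except Exception as exc:  # any uncaught error fails this menu item
            failures.append((item, str(exc)))
    return failures
```

A run over the "Partner" module would then either return an empty list (all screens opened) or pinpoint exactly which menu items broke.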
MAINTAIN TABLES
The maintain table screens in OpenPetra are the basic means of entering reference data. Most of the time they provide functionality to <ADD>, <EDIT> and <DELETE> records. Test cases should be defined for each action and each maintain table screen of a given module. If possible, a list of maintain table screens could be generated on request and used as input to cycle through each set of actions. If an action cannot be performed, a check on the corresponding error message is needed. If the exception raised matches the desired outcome, the test case should be considered to have finished successfully. It might, for example, be acceptable that a record cannot be deleted because it has cross references to current data records.
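The cycle of actions with expected-error handling could look roughly like this sketch. `run_action`, the screen list, and the error texts are all assumptions for illustration; the essential idea is that an expected exception counts as a pass:

```python
# Cycle Add/Edit/Delete over a list of maintain-table screens.
# A failure only counts if the raised error was not the expected one.

def run_maintain_table_tests(screens, run_action, expected_errors):
    """Run each action on each screen; pass on success or on the
    expected error (e.g. delete blocked by cross references)."""
    results = {}
    for screen in screens:
        for action in ("add", "edit", "delete"):
            try:
                run_action(screen, action)
                results[(screen, action)] = "ok"
            except Exception as exc:
                expected = expected_errors.get((screen, action))
                if expected is not None and expected in str(exc):
                    results[(screen, action)] = "ok (expected error)"
                else:
                    results[(screen, action)] = "failed: " + str(exc)
    return results
```

Feeding in a generated list of maintain table screens would then produce one result per screen and action, with deliberate restrictions (such as protected deletes) reported as expected rather than as failures.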
VALIDATIONS
Although the data types determine which values are allowed to be entered, there might not be a validation check in place to handle invalid input correctly. The goal is to scan the screen for input fields and try to provoke an uncaught exception during saving. A set of special characters should be checked against each field's data type.
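One way to drive this is to feed a fixed set of special inputs into every field and distinguish a clean validation error (the desired behaviour) from any other, uncaught exception. The field model and `save_value` callable are illustrative assumptions:

```python
# Probe input fields with special characters; a ValueError is treated as a
# proper validation rejection, anything else as an uncaught exception.

SPECIAL_INPUTS = ["'", '"', ";--", "<script>", "\u0000", "ÄÖÜß", " " * 50]

def probe_fields(fields, save_value):
    """Return (field, value, exception) triples where saving raised
    something other than a clean validation error."""
    uncaught = []
    for field in fields:
        for value in SPECIAL_INPUTS:
            try:
                save_value(field, value)
            except ValueError:
                pass  # clean validation rejection: desired outcome
            except Exception as exc:
                uncaught.append((field, value, type(exc).__name__))
    return uncaught
```

An empty result means every field either accepted the input or rejected it through proper validation; anything returned marks a missing validation check.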
RIGHTS MANAGEMENT
Profiles for users with different privileges should be provided beforehand. The users' effective permissions then need to be tested against the granted privileges. A good starting point is a user with read-only permissions who tries to create and save data. Once these tests are more advanced, test users with different roles in the various modules can be checked for their permissions when invoking a screen from another module.
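At its core this is a comparison of granted versus effective permissions: exactly the granted actions must succeed, everything else must be denied. The permission names and the `attempt_action` callable below are illustrative assumptions, not OpenPetra's real rights model:

```python
# Compare granted privileges against what the system actually allows.
# attempt_action(action) -> True if the action succeeded for this user.

def check_permissions(granted, attempt_action):
    """Return the actions whose outcome contradicts the granted set."""
    violations = []
    for action in ("read", "write", "delete"):
        allowed = attempt_action(action)
        if allowed != (action in granted):
            violations.append(action)
    return violations
```

For the read-only user described above, `granted` would be `{"read"}`, and any successful write or delete would show up as a violation.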
PARTNER EDIT AND PARTNER FIND
Create a partner with basic information. Check whether it can be found via the Partner Find functionality, or directly in the database, and ensure the entries are as expected. If so, add more attributes to the partner record and check that they are saved correctly.
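The test is a create/find/extend round trip. The `PartnerStore` class below is a hypothetical in-memory stand-in for Partner Edit and Partner Find, used only to show the shape of the assertions; the real test would go through the OpenPetra screens or database:

```python
# In-memory stand-in for Partner Edit / Partner Find, for illustration only.

class PartnerStore:
    def __init__(self):
        self._partners = {}
        self._next_key = 1

    def create(self, **fields):
        """Partner Edit: create a partner, return its key."""
        key = self._next_key
        self._next_key += 1
        self._partners[key] = dict(fields)
        return key

    def find(self, **criteria):
        """Partner Find: return keys of all matching partners."""
        return [k for k, p in self._partners.items()
                if all(p.get(f) == v for f, v in criteria.items())]

    def update(self, key, **fields):
        """Partner Edit: add or change attributes of a partner."""
        self._partners[key].update(fields)

    def get(self, key):
        return self._partners[key]
```

The round trip then reads: create with basic data, find by that data and compare, update with extra attributes, read back and compare again.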
TEST POPULATION
Some tools already exist to generate random test data for the partner module. This generation of test records needs to be examined in detail and, if possible, a similar mechanism should be put in place to generate finance data. Since this is too much to start with, a concept for saving and restoring the provided test data should be developed first. Key users and testers will want to modify records for their own purposes and keep that information. So far, an empty database is delivered with an OpenPetra standalone installation. It should be possible to save and restore specific records for testing purposes. Different approaches should be investigated: copying and anonymizing production databases, setting up a test database with regular backups, loading external files, and a concept for restoring the data if the database schema changes. Special attention must be paid to primary keys when importing saved or predefined records: their former primary keys might already be in use due to a randomly generated test population.
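The primary key problem can be handled by remapping colliding keys on import and recording the mapping, so that references between imported records can be rewritten consistently. This is a minimal sketch with a flat record shape assumed for illustration:

```python
# Remap primary keys of imported records whose original key is already
# taken (e.g. by a randomly generated test population).

def import_records(existing_keys, records):
    """Assign fresh keys to colliding records; return the imported
    records and the old-key -> new-key mapping."""
    next_key = max(existing_keys, default=0) + 1
    key_map = {}
    imported = []
    for rec in records:
        old = rec["key"]
        if old in existing_keys or old in key_map.values():
            key_map[old] = next_key  # collision: assign a fresh key
            next_key += 1
        else:
            key_map[old] = old       # original key is still free
        imported.append({**rec, "key": key_map[old]})
        existing_keys.add(key_map[old])
    return imported, key_map
```

The returned `key_map` is the important part: any foreign keys in the saved data set would have to be translated through it so the restored records still reference each other correctly.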