Working with NUnit tests


Instructions

We use the open source application NUnit for our tests.

  • Download NUnit from https://launchpad.net/nunitv2/. Choose the latest version and NOT the one for .NET v1.1. At the time of writing the download choice is NUnit-2.6.0.12051.msi.
  • Install this file on your PC using the default options
  • Build the entire Open Petra solution
    • either by running nant generateSolution once; then running nant compile will compile the tests for OpenPetra (and re-running it will recompile the tests)
    • or by doing the same thing using the Developer's Assistant
  • You will find the resulting tests in dlls in the directory delivery/bin/ along with all the dlls that make up the Open Petra run-time.

Now you are ready to start testing...

  • Open NUnit (which you installed as part of the msi above) from the start menu
    • Note: run the 32 bit version of NUnit if you are on Win 7 (found in %ProgramFiles(x86)% and usually has "x86" in the executable name).
  • Choose File | Open Project to open the dll that you want to test, eg. Tests.Common.dll.
  • You can now run a single test, or run all tests at once, depending on which item in the tree you select

If all your test(s) pass, you will see a nice green bar across the top of NUnit. If any test fails the bar will be red. You will (usually) get helpful pointers to the part of the test that failed.

Important Note: If you are running a simple test or a server test you probably don't need to have the server console running at all. If you are running a client test, you will definitely have to start the server console before you run the test. If you forget to do this, the 'error' message that you get when the test fails (as it will) is not very helpful and unlikely to remind you to start the server - until you have seen this unhelpful message for the twentieth time!

Available Tests

  • We have quite simple function tests, eg. in csharp\ICT\Testing\Common (which tests a lot of the functionality of Ict.Common)
  • We have tests for string operations (TStringHelper), Database operations, CSV&XML Input/Output, etc.
  • We have tests to prove the correct operation of the server and database
  • We have tests to prove the correct operation of several of the client screens
  • We have a test to validate 'printed' output
    • The test in csharp\ICT\Testing\Common\Printing does not use the NUnit framework, but allows the user to see how an HTML file gets printed to paper or PDF.

File System Organisation for Tests

Test DLLs are organised in our folder structure beneath csharp\ICT\Testing. Most tests are in a lib sub-folder, which is then further categorised by module (MCommon, MFinance, MPartner etc). You should further separate out specific server tests from client tests.

Anatomy of a Test Class

Start by reading the NUnit help, as well as looking at some existing examples of Open Petra tests. You will soon discover that test methods carry attributes such as [SetUp], [Test] etc. In fact all your public methods will need an attribute. This is for setting up the testing environment, cleaning it up afterwards, defining which method is a test and so on. You will also probably have a few private helper methods for doing repetitive tasks in your code.

Each [Test] method needs to set up its data, perform the action to be tested and then probe the result(s) using one or several of the 'Assert' overloads such as Assert.IsTrue() or Assert.AreEqual(). A good test method will have a few lines of code to set up the test, a call into the class being tested, and then a number of assert lines that check for successful execution. It is also good practice to separate out the data initialisation into a private helper method, so that it is easy to read from the code what the test is doing.
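To make this concrete, here is a minimal sketch of the shape such a test class takes. The class, method and helper names are invented for this example (real examples live under csharp\ICT\Testing); only the NUnit attributes and Assert calls are the real API.

using System;
using NUnit.Framework;

[TestFixture]
public class TSampleTest    // hypothetical fixture name
{
    [SetUp]
    public void Init()
    {
        // runs before each [Test]: set up the testing environment
    }

    [TearDown]
    public void TearDownTest()
    {
        // runs after each [Test]: clean it up again
    }

    [Test]
    public void TestStringReversal()
    {
        // set up the data (in a real test, ideally via a private helper method)
        string Input = "abc";

        // perform the action to be tested (in a real test, a call into the class being tested)
        char[] Chars = Input.ToCharArray();
        Array.Reverse(Chars);
        string Result = new string(Chars);

        // probe the results using one or several of the Assert overloads
        Assert.AreEqual("cba", Result);
        Assert.IsTrue(Result.Length == Input.Length);
    }
}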

Writing New Tests

When you are writing a new test you need to think about the following.

  • Can this test be added as a new test class (as a new .cs file) to an existing test DLL?
  • If you decide that there are a large number of tests to write on a new subject, then you will probably decide to start a new project DLL. In that case, where will the new project live in our Testing hierarchy?
  • What data will the new test need to work with (if any)? All our tests so far use the 'demo' database, which starts with a limited amount of content. That is normally a good thing for testing as it allows you to set up the starting condition that you want. However, you will need to consider whether your test needs a particular table to start in an empty state, whether it can start from any prior data condition, and whether it can delete any data that it adds.
  • Having decided what the working data is going to look like, it is quite a good idea to create a testData.cs file (perhaps as a partial class of the main test class) and define the data handling methods that you will make use of in that file. This helps to keep your test file cleaner. See for example csharp\ICT\Testing\lib\MFinance\ExchangeRates\CorporateDataSet.cs, which is a partial class of csharp\ICT\Testing\lib\MFinance\ExchangeRates\CorporateRate.test.cs; a small sketch of the idea follows this list.
  • Plan carefully exactly what tests need to be carried out in order to prove the functionality of the target code being tested. Think about how to make sure that the maximum amount of the target code can be forced to run. This will mean arranging to run a nice test that succeeds, but also to set up conditions where the target code is made to run with 'bad' input conditions and prove that it handles those correctly.
  • You also need to be sure that when a test fails the system responds appropriately - provide appropriate Exception handling to ensure that this happens.
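Here, as promised above, is a small sketch of the testData partial-class idea. All the names are hypothetical; CorporateDataSet.cs in the ExchangeRates project is the real-world example to copy.

using NUnit.Framework;

// MyModuleTestData.cs - the hypothetical data-handling half of the test class
public partial class TMyModuleTest
{
    // create the rows that the [Test] methods expect to find
    private void CreateTestData()
    {
        // insert or update the working data here
    }

    // remove whatever CreateTestData() added, leaving the database as it was found
    private void DeleteTestData()
    {
        // delete the working data here
    }
}

// MyModule.test.cs - the main test class is the other half of the same partial class
[TestFixture]
public partial class TMyModuleTest
{
    [Test]
    public void TestSomething()
    {
        CreateTestData();
        // ... perform the action being tested and Assert on the results ...
        DeleteTestData();
    }
}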

You may well be able to copy an existing .cs file and use it as the basis of your new test class. If you need to start a new project (DLL) then

  • decide where it will be located in the existing file structure and create a new folder for it.
  • add a typical test code file to the folder (copy an existing one and then delete all the content except maybe the Setup and TearDown parts).
  • set the namespace to match the location you have chosen.
  • include at least the using statements indicated below

Now using Nant or the Developer's Assistant, generate the full solution (if you are already up to date this can be with a minimal compile). Load the Open Petra Testing solution. You will have a new project created for you with some initial project references.

As you write your tests you will likely need to add further project references. You can work out what these references are from the target code. You can then use the IDE to add a reference to your test project, but make sure to add a using statement as well. Then you can be sure that when you re-generate the whole solution and the projects are re-created, the references will all be correct.

Here are the minimum required references for a server project:

using System;
using System.Data;
using NUnit.Framework;
using Ict.Testing.NUnitPetraServer;
using Ict.Common.Data;
using Ict.Common.DB;
using Ict.Common.Remoting.Server;
using Ict.Common.Remoting.Shared;
using Ict.Petra.Server.App.Core;

Here are the likely candidates for a client test:

using System;
using System.Windows.Forms;

using NUnit.Extensions.Forms;
using NUnit.Framework;

using Ict.Testing.NUnitForms;
using Ict.Testing.NUnitPetraClient;
using Ict.Testing.NUnitTools;

using Ict.Common;
using Ict.Common.IO;
using Ict.Common.Controls;
using Ict.Petra.Client.CommonControls;
using Ict.Petra.Client.CommonForms;


Conditions for [Test] Methods

  • It must be possible to execute each [Test] Method on its own;
  • The [Test] Methods must have no interdependencies on each other;
  • The [Test] Methods need to be able to run in any order;
  • If [Test] Methods access the database
    • they have to set the relevant parts of the database up so that the test can always run correctly - again, the test needs to be able to run on its own (a short sketch of this follows the list).
    • ideally, they should also undo any changes they have made to the database.
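As a short sketch of what these conditions mean in practice (with made-up fixture and helper names), each [Test] method establishes its own starting state rather than relying on another test having run before it:

using NUnit.Framework;

[TestFixture]
public class TMyIndependentTests    // hypothetical fixture
{
    [Test]
    public void TestCreateRecord()
    {
        ResetWorkTable();        // hypothetical helper: bring the table to a known state
        // ... create a record and Assert that it exists ...
    }

    [Test]
    public void TestDeleteRecord()
    {
        ResetWorkTable();        // does NOT rely on TestCreateRecord having run first
        CreateSingleRecord();    // hypothetical helper: sets up its own record to delete
        // ... delete the record and Assert that it is gone ...
    }

    // ResetWorkTable() and CreateSingleRecord() would be private helpers that
    // talk to the database, as described under 'Writing New Tests' above
    private void ResetWorkTable() { }
    private void CreateSingleRecord() { }
}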

Server Tests

Background

If you only want to write a server test, you don't need to understand the content of this background section. You can skip directly to #Authoring Server Tests

We have a full server wrapped up in a DLL, Ict.Testing.NUnitServer.dll (project in csharp\ICT\Testing\NUnitPetraServer). This server allows a user to log in and then access the WebConnector functions, database access, and basically everything else inside the server.

Authoring Server Tests

In some respects, server tests are easier to write than client tests. We have already seen that it is usually not even necessary to run the server console separately in order to execute a server test. The test itself is probably simpler to write and the number of separate 'Asserts' required to prove success is probably smaller.

You are likely setting up the database to contain known data and asserting that you can retrieve the correct data set from it. Or you are confirming that given a specific data set you can successfully update the database without errors or conflicts.
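To make that concrete, here is a hedged sketch of a server test fixture. The connector class, method names and config path are assumptions based on the existing server tests, so copy the exact set-up calls from the Gift or CrossLedger projects listed below.

using System;
using System.Data;
using NUnit.Framework;
using Ict.Testing.NUnitPetraServer;

[TestFixture]
public class TMyServerTest    // hypothetical fixture name
{
    [TestFixtureSetUp]
    public void Init()
    {
        // connect the in-process server once for the whole fixture; the class
        // name and config path are assumptions - copy them from an existing server test
        TPetraServerConnector.Connect("../../etc/TestServer.config");
    }

    [TestFixtureTearDown]
    public void TearDownTest()
    {
        TPetraServerConnector.Disconnect();
    }

    [Test]
    public void TestLoadData()
    {
        // set up known data, call a WebConnector or DataAccess method,
        // then Assert on the returned data, e.g.:
        //   DataTable Result = SomeWebConnector.GetAvailableLedgers();
        //   Assert.IsTrue(Result.Rows.Count > 0);
    }
}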

Existing Server Test Samples

  • The project in csharp\ICT\Testing\MFinance\server\Gift has a simple test for DataAccess and for a WebConnector.
  • The project in csharp\ICT\Testing\MFinance\server\CrossLedger has a simple test for adding and removing ledgers of various types and testing the server method: GetAvailableLedgers().

Client Forms Tests

Background

If you only want to write a client test, you don't need to understand the content of this background section. You can skip directly to #Authoring Client Forms Tests

We make use of NUnitForms (http://nunitforms.sourceforge.net/). The project is alive, even though the last release is a couple of years old. We have our compiled dll of the latest VCS snapshot in our own VCS, in csharp\ThirdParty\NUnit.

We have made small modifications to NUnitForms so that it works for our generated forms with all the random Layout control names blurring the path to controls. The modifications are maintained in csharp\ThirdParty\NUnit\src.

NUnitForms is an extension to NUnit to enable NUnit to run tests on Windows Forms and access control properties. You do not need to install NUnitForms. NUnit can automatically access the DLL.

Our DLL Ict.Testing.NUnitClient.dll (project in csharp\ICT\Testing\NUnitPetraClient) is a full OpenPetra client; it allows a demo user to log in to the separately running server and then open OpenPetra screens as if it were a normal user. There is currently no mock-up server.

Authoring Client Forms Tests

Basically, these tests are integration tests. Each test will likely call the Show method on the screen being tested. You can then access each individual control on the screen and check its value, change its value, push buttons and generally interact exactly as you would if you were using the screen yourself. You can even cope with pop-up dialogs and check their content and click one of the dialog buttons. For each type of control on the screen there is a Tester object. So for example 'ButtonTester'. Objects exist for all the standard Windows controls and Open Petra has created objects for its own specialised controls like auto-populated combo boxes and Petra Date controls. Every Tester object has a Properties property which returns the actual object itself. Here are some examples of code you might use.
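Putting this together, the overall shape of a forms test might look like the hedged sketch below. The screen class and control names are invented for the example, and the fixture set-up that connects the Petra client and logs in is omitted (copy it from an existing client test such as the ExchangeRates project).

using System;
using System.Windows.Forms;
using NUnit.Extensions.Forms;
using NUnit.Framework;

[TestFixture]
public class TMyScreenTest : NUnitFormTest    // hypothetical fixture
{
    // the fixture set-up (not shown) would connect the client and log in,
    // as the existing client tests do

    [Test]
    public void TestEditAndCheck()
    {
        // TFrmHypotheticalScreen stands in for the generated screen under test
        TFrmHypotheticalScreen frm = new TFrmHypotheticalScreen(null);
        frm.Show();

        // interact with the controls through their Tester objects
        ButtonTester btnNew = new ButtonTester("btnNew");
        btnNew.Click();

        TextBoxTester txtDescription = new TextBoxTester("txtDetailDescription");
        txtDescription.Properties.Text = "Created by the unit test";

        // check the result directly on the control
        Assert.AreEqual("Created by the unit test", txtDescription.Properties.Text);

        frm.Close();
    }
}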

If you want to simulate clicking a button -

ButtonTester btnNew = new ButtonTester("btnNew");
btnNew.Click();

If you want to access an actual control's properties, or to set its value -

TCmbAutoPopulated cmbFromCurrency = (new TCmbAutoPopulatedTester("cmbDetailFromCurrencyCode")).Properties;
Assert.AreEqual("EUR", cmbFromCurrency.GetSelectedString());

This is slightly more readable than

TCmbAutoPopulatedTester cmbFromCurrency = new TCmbAutoPopulatedTester("cmbDetailFromCurrencyCode");
Assert.AreEqual("EUR", cmbFromCurrency.Properties.GetSelectedString());

especially when there are many asserts; it also more closely resembles the code that is being tested.

For the grid you may well decide to create both a 'tester' and a grid object, since you need the tester to SelectRow and an object to check the number of rows, for instance. Also bear in mind that the tester for a grid works with a paged grid, which is a sub-class of the standard grid with a different set of properties and methods; you can cast a paged grid to a standard grid though. Most of our screens use standard grids. Here is some sample code.

TSgrdDataGrid grdDetails = (TSgrdDataGrid)(new TSgrdDataGridPagedTester("grdDetails")).Properties;
Assert.AreEqual(10, grdDetails.Rows.Count);
TSgrdDataGridPagedTester grdTester = new TSgrdDataGridPagedTester("grdDetails");
grdTester.SelectRow(1);

Data Access From Client Forms Tests

You will likely need to load and save data to the database within your test method. There is a standard base class (SerialisableDS) that you can use for this purpose. It has simple LoadAll() and SaveChanges() methods.

All you need to do is include your variant of the class code below inside your own test class. (Change the table to suit your test table.)

        private class FMainDS : SerialisableDS
        {
            public static ADailyExchangeRateTable ADailyExchangeRate;

            public static void LoadAll()
            {
                ADailyExchangeRate = new ADailyExchangeRateTable();
                SerialisableDS.LoadAll(ADailyExchangeRate, ADailyExchangeRateTable.GetTableDBName());
            }

            public static bool SaveChanges()
            {
                TTypedDataTable TableChanges = ADailyExchangeRate.GetChangesTyped();
                
                return SerialisableDS.SaveChanges(ADailyExchangeRate, TableChanges, ADailyExchangeRateTable.GetTableDBName());
            }
        }

Now the LoadAll() and SaveChanges() methods can be called within your [Test] methods. Also, the coding style is similar to that of most screens in that you work with constructs such as FMainDS.TableName.Rows.Count.
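For example, here is a hedged sketch of a [Test] method using these methods; the interaction with the screen itself is elided and the test name is invented.

[Test]
public void TestAddDailyExchangeRate()
{
    // load the current content of the table via the serialisable data set above
    FMainDS.LoadAll();
    int InitialRowCount = FMainDS.ADailyExchangeRate.Rows.Count;

    // ... open the screen, create a new rate through the UI and save it ...
    // (rows added directly to FMainDS.ADailyExchangeRate could instead be
    // written back with Assert.IsTrue(FMainDS.SaveChanges());)

    // reload and check that exactly one row was added
    FMainDS.LoadAll();
    Assert.AreEqual(InitialRowCount + 1, FMainDS.ADailyExchangeRate.Rows.Count);
}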

Running Client Forms Tests

You should by now have realised that running a client forms test is no different from running a simple test or a server test. Load the test DLL into NUnit, select the test (or all of them) and click Run. But don't forget to start the server first! You will find that compiling your test requires that the server is NOT running but running your test requires that it is. So you will become quite used to the workflow of starting the server, running the test, stopping the server, editing the code and round again...

Existing Client Test Samples

  • The project in csharp\ICT\Testing\MFinance\GLForm creates GL Batches, posts them, and does some checks.
  • The project in csharp\ICT\Testing\MFinance\ExchangeRates runs very extensive tests on both the corporate and daily exchange rate screens. It also uses the serialisable Data Set described above. It tests user interaction, button clicking, validation, dialogs, import of data and many other features and is a good (though extensive) example to look at.


Modal Client Forms Tests

Most of the screens in Open Petra are modeless. This means that NUnit Forms treats them as a standard top-level window and there are no difficult issues to deal with. These screens can raise modal dialogs such as message boxes and the forms test framework has a simple way of checking the content of these boxes and clicking one of the buttons to close it and continue.

However, when the screen you wish to test is a modal screen rather than a simple message box, the forms test framework has to be used in a different way.

NUnit Forms uses a one-shot user-defined delegate function to handle modal forms. The help documentation (such as it is) will tell you that this is the 'modern' way of working. However, for complex screens like the Daily Exchange Rate screen when used modally, this delegate model breaks down. We found it necessary to use the previous model, which the NUnit Forms authors have marked as 'obsolete' but which actually works very well and is no more difficult to implement; arguably it even gives rise to code that is easier to read.

If you need to write a NUnit Forms test for a modal dialog, you should look at the test we have written in ICT/Testing/lib/MFinance/ExchangeRates for more details. Reproduced here is the comment from the ModalTest C# file.

        /// <summary>
        /// Performing tests on a modal dialog needs a different approach from tests on a simple class or a modeless screen.
        /// Here are some tips to help design your test.
        ///
        /// NUnitForms has a recommended method for handling modal dialogs.  It involves specifying a one-shot delegate that gets invoked
        /// when the modal screen is displayed.  This works well for Message Boxes and the like but I could not get it to work with my
        /// daily exchange rate screen.  Fortunately NUnitForms previously used a different method for working with dialogs, and although it
        /// has been marked as [Obsolete], it still works!  So the modal tests written here all use this 'old' method successfully.
        ///
        /// The [Test] code involves creating a public void method as usual.  In this method you
        ///    perform some initialisation
        ///    call the NUnitForms method:  ExpectModal("AFormName", AHandlerMethodName)
        ///    instantiate your modal form: DialogResult result = myDialog.ShowModal()
        ///    test the return values after the dialog has closed.
        ///
        /// Then you create another public void method (AHandlerMethodName()) and in there you do the Assert's and button pushes to test the
        /// behaviour of the dialog itself.
        ///
        /// There are some things you need to know about coding this dialog handler:
        /// * Problems with the Activation Event (RunOnceOnActivation). On Windows Jenkins, this is not run at the right time.
        ///     We are now calling the method RunBeforeActivation() before ShowDialog inside the dialog.
        /// * You need to be sure that you close the dialog so that control returns to the [Test] method.
        ///     If the dialog does not close the whole Unit Test will hang indefinitely.  The effect of this requirement is that, if any of
        ///     your Assert tests fail the dialog remains open and needs you to manually close it.  (Possibly this is a Debug build behaviour
        ///     and a release build of the DLL may behave differently, but I have not checked that.)
        ///
        ///     Your code will probably have a few button clicks and a number of Assert's and then will close the dialog with a button click.
        ///     The requirement to ensure that the dialog closes even if an Assert fires means that you need to enclose all the actions in the dialog
        ///     inside a try/catch block, so that you can close the dialog in the catch or a finally block.
        /// * In order to return the Assert to the main [Test] routine, your catch block needs to save the exception information in a global variable.
        ///     The catch block then needs to return a different DialogResult (I use Abort).  Then the main [Test] method knows whether the dialog
        ///     closed with an error or not.
        ///
        /// If you look at the code here you will see that I use a couple of helper methods to
        /// * Call the modal handler method.  Since this method is marked as [Obsolete] I use a #pragma statement to turn off the resultant compiler warning.
        /// * Handle the exception message from the try/catch block.  This formats the message ready for the main method to use in an Assert.Fail().
        ///    Note that the message is constructed differently depending on whether the exception arose from an Assert or from a generic error in your code.
        /// </summary>

Your modal screen test will do the following

  • Set up the test and initialise any data
  • Instantiate the modal screen and set any pre-run properties
  • Tell NUnit Forms the name of your modal handler method
  • Call ShowDialog

Control will then pass to the handler method, which will be running in a different thread, though this need not concern you.

  • Do all your work in a try/catch block
  • Now you will click buttons and edit controls on the modal form
  • Check the behaviour using Assert's
  • If all is well you will finish by closing the dialog and returning a DialogResult - probably OK or Cancel
  • If an Assert failed, or there was some general program exception, you will drop into the catch block where you must examine the exception, save it and close the dialog with a result such as 'Abort'.

Control now passes back to the original ShowDialog line.

  • Examine the DialogResult and if necessary raise an Assert.Fail with the stored message that was previously caught.
  • Perform any final checks that the data is as you expect now that the dialog has closed.

The key thing to understand is that you must be sure to successfully close the modal dialog even if a test assert fails. Normally when an Assert fails your test code stops and control passes back to NUnit, which will leave the dialog hanging open. We want the dialog to close and then fail the test.
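To tie these steps together, here is a hedged sketch of the whole pattern. The dialog class, control names and the exact ExpectModal overload are assumptions, and the real test wraps the obsolete call and the exception handling in helper methods, so copy the precise wiring from the ExchangeRates modal test.

using System;
using System.Windows.Forms;
using NUnit.Extensions.Forms;
using NUnit.Framework;

[TestFixture]
public class TMyModalTest : NUnitFormTest    // hypothetical fixture
{
    private TFrmHypotheticalDialog FDialog = null;    // placeholder for the modal screen under test
    private Exception FModalException = null;

    [Test]
    public void TestModalDialog()
    {
        FModalException = null;

        // tell NUnitForms which handler to run when the named dialog appears;
        // in the real test this call sits in a helper wrapped in a #pragma,
        // because the overload is marked [Obsolete]
        ExpectModal("TFrmHypotheticalDialog", HandleModalDialog);

        FDialog = new TFrmHypotheticalDialog(null);
        DialogResult result = FDialog.ShowDialog();

        // the handler closes the dialog with Abort if anything went wrong inside it
        if (result == DialogResult.Abort)
        {
            Assert.Fail("The modal handler failed: " + FModalException.Message);
        }

        Assert.AreEqual(DialogResult.OK, result);
        // ... perform any final checks now that the dialog has closed ...
    }

    public void HandleModalDialog()
    {
        try
        {
            // click buttons, edit controls and Assert on the dialog's behaviour,
            // finishing with a click that closes the dialog with OK or Cancel
            ButtonTester btnOK = new ButtonTester("btnOK");
            btnOK.Click();
        }
        catch (Exception e)
        {
            // remember the failure and make sure the dialog still closes,
            // otherwise the whole unit test would hang
            FModalException = e;
            FDialog.DialogResult = DialogResult.Abort;
        }
    }
}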

Remember that you don't just want all your tests to succeed. You also want to be sure that when a test fails the system responds appropriately.

Debugging

Just a final note about debugging your test. Most of the time you will not need to debug your test code because your 'Asserts' will give you enough information about what is going on. Furthermore you can use Console.WriteLine() in your test code and the output will appear in NUnit on the 'Text Output' tab. However it is perfectly possible to single step debug through your test code by 'attaching' your IDE to NUnit-agent.exe (not NUnit.exe) in the list of running processes. Then when you run the particular test in NUnit you will hit your breakpoint.

When You Have Finished

When you have finished coding your test you will be ready to commit it to trunk. But before you do there are a few more things to think about, especially if you want the test included in the nightly build and test run.

  • In order to include your test you will need to edit the ICT.Testing.build file in the csharp/ICT/Testing folder and include your DLL in one of the appropriate sections. If the test is a utility test, include it in the 'test-common' section. If it is a server test, include it in the 'test-server' section. If it is a forms test be sure to include it in the 'test-client' section because that will ensure that the OpenPetra server is launched first before running the tests in this section.
  • If your test project includes any .sql files, be sure that you have added those to your branch. My Bazaar setup specifically excludes .sql files from its list of candidate files that are under source code control, so although all my tests worked on my PC, the build on our Windows Jenkins build server failed because it did not have all the files to run the tests!
  • Once you have added your DLL to the Ict.Testing.build file, run 'nant test' from the root folder of your branch. Make sure that your test (along with all the others) gives you a BUILD SUCCEEDED result.
  • One more thing - examine the text output from 'nant test'. (You can generate this by running nant test > ..\nant.log having set the current directory to your branch root. Then read the file in the parent folder.) Did your test emit any messages in its output that include the word 'error' or 'warning'? If so it will be necessary to get someone with access to the Windows Jenkins build server to modify the output parser to be aware that this message does not represent a failure. For this reason, try to write your test so that any TLogging() or Console.WriteLine() messages do not include 'Error' or 'Warning'!

When you think you are ready to commit your branch to trunk, it is probably best to commit everything to your development branch first and ask someone to make a test run on the Windows Jenkins build server against your development branch. That way you can confirm that

  • your build succeeds
  • your output conforms to the server's rules for measuring complete success.

Once your test is being run on the Windows Jenkins build server every 24 hours, we will find out if someone commits a change to the OpenPetra project that breaks your test!

Have fun!