Version 121 (modified by Mike A, 13 years ago)



"A bug is a test case you haven't written yet"
"Unit Tests allow merciless refactoring"

As it is a disaster management tool, bugs in Sahana that lead to aid/rescue workers in the field receiving incorrect or incomplete information could potentially lead to disaster victims losing their lives. Therefore, Sahana developers need to place more emphasis on testing than might be required in other types of projects.

This page describes both our current approach and our BluePrint for future options.

Test-Driven Development is a programming style which says that you first write your test cases (from the specs) and then proceed to make them pass.
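As a toy sketch of the test-first workflow (fizzbuzz is a generic illustration, not Sahana code):

```python
# Step 1: write the test first. At this point it fails,
# because fizzbuzz() does not exist yet.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Step 2: write just enough code to make the test pass.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 3: run the test and confirm it passes.
test_fizzbuzz()
```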

For manual tests, the database can be pre-populated with random data using gluon.contrib.populate

Functional Tests

"Building the Right Code"

We have integrated Selenium into /static/selenium so that Functional Tests can be run easily. For notes on how to create your own test cases, see CreatingTestCasesFirst.

Tests can be developed using Selenium IDE, if desired.

See the Documentation for details on how to use this test suite.


  • Some wrapper scripts for Win32 attached
  • Improving the stability of Selenium tests:
  • Lots of useful tips:
  • Autocomplete is fiddly to test as need to trigger specific events:
    # Enter the search String
    sel.type("gis_location_autocomplete", "SearchString")
    # Trigger the event to get the AJAX to send
    sel.fire_event("gis_location_autocomplete", "keydown")
    # Wait for the popup menu
    for i in range(60):
        try:
            if "SearchString" == sel.get_text("css=ul.ui-autocomplete li:first-child a"):
                break
        except:
            pass
        time.sleep(1)  # needs "import time"; poll once per second
    else:
        self.fail("time out")  # self is the enclosing unittest.TestCase
    # Select the Result
    sel.fire_event("css=ul.ui-autocomplete li:first-child a", "mouseover")"css=ul.ui-autocomplete li:first-child a")

Systers' approach:

Alternative Options:

Unit Tests

"Building the Code Right"

Unit tests test the system at the level of units of source code a.k.a. code blocks. Not to be confused with functional or acceptance/customer tests; unit tests verify rather than validate.

Remember: don't write tests to prove your code works, write tests to show when and where it fails; a successful test is one that uncovers a bug.


Benefits of unit tests include:

  • stability and reliability - vital for scaling developer teams due to the need for a stable trunk,
  • self documenting system - with documentation that can't drift.
  • simplification of integration testing (because all the parts are known to work, only interfaces between them need testing),
  • better system design by:
    • easing refactoring (changes can be more rapidly verified not to create new bugs),
    • promoting refactoring (dividing units into testable pieces also creates more reusable pieces),
    • standing in for specification documents (i.e. describing by example what the code needs to do).


However, unit tests have costs:

  • it takes time to develop and maintain the tests - unit tests are not a free lunch,
  • testing requires discipline - sloppy tests can be as bad as, or worse than, sloppy code: false positives waste time, false negatives waste even more. However, in most cases, any test that runs the code is better than no test at all.

Test Runner

nose is a python module/command which finds and runs tests. nose is suitable as it has been widely adopted in the python community, allows standard test specifications and is highly configurable and customisable, without confusing "magical" behaviour.

Web2py poses a severe problem for any external test runner: application code is provided with a global environment by web2py, and it expects that environment to be present. Therefore, the test runner must reproduce that environment and provide it to the tests. This means that we can't use any external test runner that doesn't know about web2py.

To cope with web2py, a wrapper and a plugin have been written, which means that nosetests isn't run in the normal way from the command line, although the aim is to stay as close to it as possible. For example, extra arguments are simply passed on to nose by the wrapper command, so the command's argument format is the same as that of the original nosetests command.

To run the unit tests in Sahana Eden via nose, there is a command in the tests/ folder within the application. Note: This command is not installed outside of the application as it is designed for developers, who may have several versions of Sahana Eden at any one time.

Writing unit tests

When writing unit tests, note that nose uses a convention over configuration approach when finding tests. i.e. if it looks like a test, then nose assumes it is a test.

To make your test look like a test, there are two options:

  1. easy: write a procedure whose name starts with "test_"
  2. advanced: inherit from unittest.TestCase and write your tests as described in the unittest documentation
# examples:

def test_addition():
    assert 1 + 1 == 2

import unittest

class AdditionTest(unittest.TestCase):
    def test_positive(self):
        assert 1 + 1 == 2

    def test_negative(self):
        assert -1 + -1 == -2

Currently these tests are stored in /tests. Inside the tests/ folder, mirror the folder structure of the module being tested. We know this duplication is not ideal; this may well change in future, with tests placed nearer the code being tested.


  • One unittest.TestCase per unit (procedure, method or function). This makes test output easy to follow, as it is what unittest expects. I.e. don't test several units in one TestCase.
  • You can just use plain asserts instead of the cumbersome assertEquals etc. methods, which often don't do quite what you need; nose can introspect the stack to produce meaningful failure messages.
  • Avoid inheritance trees with unittests as the unittest.TestCase isn't really designed for inheritance, more as a convenient place to put the tests.
  • Break test methods up into smaller methods. Non-test code can be extracted out into external procedures/functions which are easier to reuse.
  • Tests should complete quickly, i.e. much less than a second. Remember they will be run thousands of times. A test that takes a minute to run might therefore cost days of developer time over its lifetime. To do this, encode precise failure cases in unit tests, don't do things like chuck random data at a unit.
  • In fact eliminate all randomness. A failure must be consistently reproducible to be fixable.
  • Read the nose documentation regarding writing tests:
  • Read this too:
  • Information on doing UnitTests in Web2Py:
  • Don't copy and paste actual output as expected result output in a unit test. This defeats the purpose of the unit tests, as they now test nothing and just become a maintenance headache. Instead, use what you expect. The only case for doing this is for a back-to-back test with old code when no unit tests exist.
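To illustrate the points above about plain asserts and encoding precise, deterministic failure cases (clamp is a made-up example function, not Sahana code):

```python
def clamp(value, lo, hi):
    """Clamp value into the inclusive range [lo, hi]."""
    return max(lo, min(hi, value))

# Precise, hand-picked edge cases: no randomness, so any failure
# is reproducible, and plain asserts let nose report the values.
def test_clamp_below():
    assert clamp(-5, 0, 10) == 0

def test_clamp_inside():
    assert clamp(5, 0, 10) == 5

def test_clamp_above():
    assert clamp(15, 0, 10) == 10
```

nose will pick these up automatically because the function names start with "test_".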

Running unit tests

Assume our current directory is web2py/, which contains the applications folder. We want to test our trunk application.

To run all unit tests:


Currently these tests are stored in /tests. The nose plugin will look for tests in this folder. To select tests, give a file or folder path relative to the tests folder.

For example, there is a gis folder containing tests for the GIS mapping code. To run these particular tests:

./application/trunk/tests/ gis

Particular Python version: the test command runs the tests with whichever Python version ran it. It will use /usr/bin/python if run as an ordinary command, as above. To select a different Python version (e.g. Python 2.5), run:

/path/to/python2.5 ./application/trunk/tests/

This may be useful for testing against different Python versions, and also against different sets of installed modules. N.B. No testing has been done with virtualenv as of this time.

If our current working directory happens to be the tests/ folder, we can run the tests more directly:


This is the most minimal version of the command. This is because the test command is designed to be run from anywhere, i.e. it does not need to be put on the PATH.


  • Run specific tests while you are working on something.
  • Always run all tests and make sure they all pass before a merge to a branch like trunk which must be kept stable.
    • If they don't pass, fix or leave out the code that is causing the failure.
    • If the tests themselves are broken due to e.g. an API change, update them. Never paste actual output as expected output.
    • Don't comment out or remove a test just because it fails. For that matter, if you come across a bug, but don't have time to fix it, it is a good practice to write a test that shows it failing. It's much better than a TODO comment, because it can't be forgotten about, and it will probably be a lot more useful than a bug report for whoever will eventually fix it.

Javascript unit tests

Selenium enables functional testing, but not really unit testing, other than reporting the results of JavaScript test runs.

Selenium RC is great due to its ability to handle JavaScript, and also because Selenium IDE can generate tests (export as Python).
The IDE output needs to be modified to work with Web2Py.
NB Custom functions (e.g. for Random) cannot be shared with the Functional Tests (custom=JS) but the rest of the tests can be.
ToDo: Port the storeRandom function from JS to Python:

import random
print "test_%i@xxxxxxxx" % random.randint(1, 10000)
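One possible shape for the port, sketched as a reusable helper (the function name, prefix and domain arguments are placeholders, not actual Sahana names):

```python
import random

def store_random(prefix="test", domain="example.com"):
    """Generate a random test email address.

    Sketch of a Python port of the JS storeRandom helper; the
    prefix and domain defaults are placeholder values.
    """
    return "%s_%i@%s" % (prefix, random.randint(1, 10000), domain)
```

Note the earlier caveat about randomness in unit tests: a helper like this belongs in functional test fixtures, not in assertions.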

CherryPy's WebTest is good for in-process testing and can be run from a browser-less server (easier for CI integration).
These tests are stored in /tests/webtest.
NB These are a work-in-progress...need to enable access to Web2Py environment (db, etc) using:

from gluon.shell import exec_environment

Or it could be written as a 'Controller' and called from the CLI:

python -S appname -M -R 

Another similar option could be Pylons' WebTest

Doc Tests

Agile documentation which can be run using Web2Py's Admin UI.
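For instance, a minimal doctest looks like this (a generic sketch, not Sahana code):

```python
import doctest

def slugify(text):
    """Lowercase a string and join its words with hyphens.

    The docstring doubles as documentation and as a test:

    >>> slugify("Hello World")
    'hello-world'
    >>> slugify("  one   two  ")
    'one-two'
    """
    return "-".join(text.lower().split())

if __name__ == "__main__":
    # Run every doctest in this module; silent when all pass
    doctest.testmod()
```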

To extend these to web applications, we have a module which uses wsgi_intercept & CherryPy's WebTest: modules/s3/
This can be used from Controllers like:

def login():
    """ Login
    >>> from applications.sahana.modules.s3.s3test import WSGI_Test
    >>> test=WSGI_Test(db)
    >>> '200 OK' in test.getPage('/sahana/%s/login' % module)
    True
    >>> test.assertHeader("Content-Type", "text/html")
    >>> test.assertInBody('Login')
    """

This works fine, although if an assert fails then the UI gets stuck :/
The 'db' access part isn't yet working.

Note that Web2Py uses a big doctest at the end of each file: def test_all()

Continuous Integration

We are starting to use Jenkins to run the Functional Tests automatically.

Other options:


