
Testing

Functional Tests

"Building the Right Code"

We use Selenium for this: DeveloperGuidelines/Testing

Hints:

  • CreatingTestCasesFirst
  • Improving the stability of Selenium tests: http://googletesting.blogspot.com/2009/06/my-selenium-tests-arent-stable.html
  • Lots of useful tips: http://www.eviltester.com
  • Autocomplete is fiddly to test as you need to trigger specific events:
    import time

    # Enter the search string
    sel.type("gis_location_autocomplete", "SearchString")
    # Trigger the event to make the AJAX request fire
    sel.fire_event("gis_location_autocomplete", "keydown")
    # Wait for the popup menu
    for i in range(60):
        try:
            if "SearchString" == sel.get_text("css=ul.ui-autocomplete li:first-child a"):
                break
        except Exception:
            # the menu entry may not exist yet
            pass
        time.sleep(1)
    else:
        self.fail("timed out waiting for the autocomplete menu")
    # Select the result
    sel.fire_event("css=ul.ui-autocomplete li:first-child a", "mouseover")
    sel.click("css=ul.ui-autocomplete li:first-child a")
    time.sleep(4)
    

Unit Tests

"Building the Code Right"

Unit tests test the system at the level of units of source code, a.k.a. code blocks. They are not to be confused with functional or acceptance/customer tests: unit tests verify rather than validate.

Remember: don't write tests to prove your code works, write tests to show when and where it fails; a successful test is one that uncovers a bug.
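
For example, a minimal sketch (mean is a hypothetical unit under test, not Sahana Eden code): a test aimed at the input most likely to break the code uncovers a real bug where a happy-path test would not:

def mean(values):
    # hypothetical unit under test
    return sum(values) / len(values)

def test_mean_of_empty_list():
    # probe the boundary where the code is most likely to fail:
    # this uncovers a ZeroDivisionError that a happy-path test
    # such as mean([2, 4]) == 3 would never find
    # (assuming an empty list should average to 0)
    assert mean([]) == 0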

Pros:

  • stability and reliability - vital for scaling developer teams due to the need for a stable trunk,
  • self-documenting system - with documentation that can't drift,
  • simplification of integration testing (because all the parts are known to work, only the interfaces between them need testing),
  • better system design by:
    • easing refactoring (changes can be more rapidly verified not to create new bugs),
    • promoting refactoring (dividing units into testable pieces also creates more reusable pieces),
    • standing in for specification documents (i.e. describing by example what the code needs to do).

Cons:

  • it takes time to develop and maintain the tests - unit tests are not a free lunch,
  • testing requires discipline - sloppy tests can be as bad as or worse than sloppy code - false positives waste time, false negatives waste even more time. However, in most cases, any test that runs the code is better than no test at all.

Test Runner

nose is a Python module/command which finds and runs tests. nose is suitable as it has been widely adopted in the Python community, allows standard test specifications, and is highly configurable and customisable without confusing "magical" behaviour.

Web2py poses a particular problem for any external test runner: application code is provided with a global environment by web2py, and it expects that environment to be present. The test runner must therefore reproduce that environment and provide it to the tests. This means that we can't use any external test runner that doesn't know about web2py.

To cope with web2py, a wrapper and a plugin have been written, which means that nosetests isn't run in the normal way from the command line, although the aim is to stay as close to it as possible. For example, extra arguments are simply passed on to nose by the wrapper command, so the command's argument format is the same as that of the original nosetests command.

To run the unit tests in Sahana Eden via nose, there is a nose.py command in the tests/ folder within the application. Note: this command is not installed outside of the application, as it is designed for developers, who may have several versions of Sahana Eden at any one time.
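
Since extra arguments pass straight through to nose, standard nose options work as usual. For example (a hypothetical invocation; -v and -x are ordinary nose flags for verbose output and stopping at the first failure, and the path assumes the web2py/ working directory used in the examples below):

./applications/trunk/tests/nose.py -v -x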

Writing unit tests

When writing unit tests, note that nose uses a convention-over-configuration approach to finding tests, i.e. if it looks like a test, then nose assumes it is a test.

To make your test look like a test, there are two options:

  1. easy: write a procedure whose name starts with "test_"
  2. advanced: inherit from unittest.TestCase and write your tests as described in http://docs.python.org/library/unittest.html
# examples:

def test_addition():
    assert 1 + 1 == 2

import unittest

class AdditionTest(unittest.TestCase):
    def test_positive(self):
        assert 1 + 1 == 2

    def test_negative(self):
        assert -1 + -1 == -2

Currently these tests are stored in tests/. Mirror the folder structure of the module being tested inside the tests/ folder. We know this duplication is not ideal; this may well change in the future, with tests placed nearer the code being tested.
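
For example, a hypothetical layout (the module name is illustrative only):

modules/mymodule.py                # the unit being tested
tests/mymodule/test_mymodule.py    # its tests, mirroring the module's folder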

Advice:

  • One unittest.TestCase per unit (procedure, method or function). This makes test output easy to follow, as it is what unittest expects. I.e. don't test several units in one TestCase.
  • You can just use plain asserts instead of the cumbersome self.assertEquals etc. methods, which often don't do quite what you need; nose can introspect the stack to make meaningful messages (see the sketch after this list).
  • Avoid inheritance trees with unit tests, as unittest.TestCase isn't really designed for inheritance; treat it more as a convenient place to put the tests.
  • Break test methods up into smaller methods. Non-test code can be extracted into external procedures/functions, which are easier to reuse.
  • Tests should complete quickly, i.e. in much less than a second. Remember they will be run thousands of times: a test that takes a minute to run might therefore cost days of developer time over its lifetime. To achieve this, encode precise failure cases in unit tests; don't do things like throw random data at a unit.
  • In fact, eliminate all randomness. A failure must be consistently reproducible to be fixable.
  • Read the nose documentation regarding writing tests: http://readthedocs.org/docs/nose/en/latest/writing_tests.html
  • Read this too: http://ivory.idyll.org/articles/nose-intro.html
  • Information on doing UnitTests in Web2Py: http://www.web2py.com/AlterEgo/default/show/260
  • Don't copy and paste actual output as the expected output in a unit test. This defeats the purpose of the unit tests, as they then test nothing and just become a maintenance headache. Instead, use what you expect. The only case for doing this is a back-to-back test against old code when no unit tests exist.
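
A minimal sketch illustrating plain asserts plus an extracted non-test helper (the names are illustrative, not from the Sahana Eden code base):

def build_record(name):
    # non-test helper, extracted so that several tests can reuse it
    return {"name": name, "status": "new"}

def test_new_record_has_new_status():
    record = build_record("example")
    # a plain assert suffices: nose introspects the stack
    # to produce a meaningful failure message
    assert record["status"] == "new"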

Running unit tests

Assume our current directory is web2py/, which contains the applications folder. We want to test our trunk application.

To run all unit tests:

./applications/trunk/tests/nose.py

Currently the tests are stored in tests/, and the nose plugin will look for tests in this folder. To select a subset of tests, give a file or folder path relative to the tests/ folder.

For example, there is a gis folder containing tests for the GIS mapping code. To run these particular tests:

./applications/trunk/tests/nose.py gis

Particular Python version: nose.py runs the tests with whichever Python version ran it. It will use /usr/bin/python if run as an ordinary command, as above. To select a different Python version (e.g. Python 2.5), run:

/path/to/python2.5 ./applications/trunk/tests/nose.py

This may be useful for testing against different Python versions, and also against different sets of installed modules. N.B. No testing has been done with virtualenv as of this time.

If our current working directory happens to be the tests/ folder, we can run the tests more directly:

./nose.py

This is the most minimal version of the command. It works because nose.py is designed to be run from anywhere, i.e. it does not need to be put on the PATH.

Advice

  • Run specific tests while you are working on something.
  • Always run all tests and make sure they all pass before a merge to a branch like trunk which must be kept stable.
    • If they don't pass, fix or leave out the code that is causing failure.
    • If the tests themselves are broken due to e.g. an API change, update them. Never paste actual output as expected output.
    • Don't comment out or remove a test just because it fails. For that matter, if you come across a bug but don't have time to fix it, it is good practice to write a test that shows it failing (see the sketch after this list). That is much better than a TODO comment, because it can't be forgotten about, and it will probably be far more useful than a bug report for whoever eventually fixes it.
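
A minimal sketch of such a failing test, assuming a hypothetical bug in which parse_phone() strips leading zeros (both names are illustrative):

def parse_phone(value):
    # hypothetical buggy implementation: loses the leading zero
    return str(int(value))

def test_parse_phone_keeps_leading_zero():
    # documents the known bug by failing until it is fixed;
    # harder to forget than a TODO comment or a bug report
    assert parse_phone("0123456") == "0123456"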

Javascript unit tests

Selenium enables functional testing, but not really unit testing beyond reporting the results of JavaScript test runs.

Selenium RC is useful due to its ability to handle JavaScript, and because the Selenium IDE can generate tests (export as Python).
The IDE output needs to be modified to work with Web2Py.
NB Custom functions (e.g. for Random) cannot be shared with the Functional Tests (custom=JS), but the rest of the tests can be.
ToDo: Port the storeRandom function from JS to Python:

import random

# generate a random test email address (Python 2 print syntax)
print "test_%i@xxxxxxxx" % random.randint(1, 10000)

CherryPy's WebTest is good for in-process testing and can be run from a browser-less server (easier for CI integration).
These tests are stored in /tests/webtest.
NB These are a work in progress... we still need to enable access to the Web2Py environment (db, etc.) using:

from gluon.shell import exec_environment
env = exec_environment('the_model_file.py')
# env now exposes the objects defined by that model, e.g. env.db

Or the tests could be written as a 'Controller' and called from the CLI:

python web2py.py -S appname -M -R yourscript.py 
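
A minimal sketch of such a script (my_table is a hypothetical table name); because -M executes the models first, db is available inside the script:

# yourscript.py
# run with: python web2py.py -S appname -M -R yourscript.py
count = db(db.my_table.id > 0).count()  # my_table is hypothetical
assert count >= 0
print "my_table has %i rows" % count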

Another similar option could be Pylons' WebTest.

Doc Tests

Agile documentation which can be run using Web2Py's Admin UI.

To extend these to web applications, we have a module which uses wsgi_intercept and CherryPy's WebTest: modules/s3/s3test.py
This can be used from Controllers like this:

def login():
    """ Login
    >>> from applications.sahana.modules.s3.s3test import WSGI_Test
    >>> test=WSGI_Test(db)
    >>> '200 OK' in test.getPage('/sahana/%s/login' % module)
    True
    >>> test.assertHeader("Content-Type", "text/html")
    >>> test.assertInBody('Login')
    """

This works fine, although if an assert fails then the UI gets stuck :/
The 'db' access part isn't yet working.

Note that Web2Py uses a big doctest at the end of each file: def test_all()

Continuous Integration

We are starting to use Jenkins to run the Functional Tests automatically.

We could integrate this into GitHub using BuildHive.

Other options: http://stackoverflow.com/questions/225598/pretty-continuous-integration-for-python

Load Testing

We recommend using Tsung.


BluePrintTesting

DeveloperGuidelines
