"A bug is a test case you haven't written yet"
"Unit Tests allow merciless refactoring"
Sahana is a disaster management tool: bugs that lead to aid/rescue workers in the field receiving incorrect or incomplete information could cost disaster victims their lives. Sahana developers therefore need to place more emphasis on testing than might be required in other types of project.
This page describes our current approach to testing, as opposed to our BluePrint for future options.
Test-Driven Development is a programming style in which you first write your test cases (from the specs) and then write code to make them pass.
For manual tests, the database can be pre-populated with random data using gluon.contrib.populate.
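For example, from a web2py shell (where db and the models are defined) — a sketch only, assuming the application is named "eden" and has a db.person table:

```python
# Run inside a web2py shell, e.g.:
#   python web2py.py -S eden -M
# (the application name and table name here are assumptions for illustration)
from gluon.contrib.populate import populate

populate(db.person, 100)   # insert 100 rows of random data
db.commit()
```

populate generates values based on the table's field types, so the random data is at least type-valid.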
- TestCases - User Testing - List of things to test
Building the Right Code
We have integrated Selenium Remote Control (http://seleniumhq.org/projects/remote-control/) into /static/selenium so that Functional Tests can be run easily. For notes on how to create your own test cases, see CreatingTestCasesFirst.
Tests can be developed using Selenium IDE, if desired.
See the Documentation for details on how to use this test suite.
- Some wrapper scripts for Win32 attached
- Improving the stability of Selenium tests: http://googletesting.blogspot.com/2009/06/my-selenium-tests-arent-stable.html
- Lots of useful tips: http://www.eviltester.com
- Autocomplete is fiddly to test, as specific events need to be triggered:
```python
# Enter the search String
sel.type("gis_location_autocomplete", "SearchString")
# Trigger the event to get the AJAX to send
sel.fire_event("gis_location_autocomplete", "keydown")
# Wait for the popup menu
for i in range(60):
    try:
        if "SearchString" == sel.get_text("css=ul.ui-autocomplete li:first-child a"):
            break
    except:
        pass
    time.sleep(1)
else:
    self.fail("time out")
# Select the Result
sel.fire_event("css=ul.ui-autocomplete li:first-child a", "mouseover")
sel.click("css=ul.ui-autocomplete li:first-child a")
time.sleep(4)
```
- List of Tests: http://systers.org/systers-dev/doku.php/master_checklist_template
- GSoC project: http://systers.org/systers-dev/doku.php/svaksha:patches_release_testing_automation
"Building the Code Right"
Unit tests test the system at the level of units of source code a.k.a. code blocks. Not to be confused with functional or acceptance/customer tests; unit tests verify rather than validate.
- stability and reliability - vital for scaling developer teams due to the need for a stable trunk,
- self documenting system - with documentation that can't drift.
- simplification of integration testing (because all the parts are known to work, only interfaces between them need testing),
- better system design by:
- easing refactoring (changes can be more rapidly verified not to create new bugs),
- promoting refactoring (dividing units into testable pieces also creates more reusable pieces),
- standing in for specification documents (i.e. describing by example what the code needs to do).
- it takes time to develop and maintain the tests - unit tests are not a free lunch,
- sloppy tests are as bad as, or worse than, sloppy code - unit tests are not a silver bullet.
nose is a python module/command which finds and runs tests. nose is suitable because it has been widely adopted in the python community, allows standard test specifications, and is highly configurable and customisable without confusing "magical" behaviour.
Web2py poses a severe problem for any external test runner: web2py provides application code with a global environment, and the code expects that environment. The test runner must therefore reproduce that environment and provide it to the tests. This means we can't use any external test runner that doesn't know about web2py.
To cope with web2py, a wrapper and a plugin have been written, which means nosetests isn't run in the normal way, although we aim to stay as close to it as possible. Extra arguments are passed straight through to nose by the wrapper command, so the command's argument format is the same as that of the original nosetests command.
To run the unit tests in Sahana Eden via nose, there is a nose.py command in the tests/ folder within the application. Note: This command is not installed outside of the application as it is designed for developers, who may have several versions of Sahana Eden at any one time.
Writing unit tests
When writing unit tests, note that nose uses a convention-over-configuration approach to finding tests: if it looks like a test, nose assumes it is a test.
To make your test look like a test, there are two options:
- easy: write a procedure whose name starts with "test_"
- advanced: inherit from unittest.TestCase and write your tests as described in http://docs.python.org/library/unittest.html
```python
# examples:

def test_addition():
    assert 1 + 1 == 2

import unittest

class AdditionTest(unittest.TestCase):
    def test_positive(test):
        assert 1 + 1 == 2
    def test_negative(test):
        assert -1 + -1 == -2
```
Currently these tests are stored in /tests. Inside the tests/ folder, duplicate the folder of the module being tested. We know this duplication is not ideal; this may well change in future so that tests are placed nearer the code being tested.
- One unittest.TestCase per unit (procedure, method or function). This makes test output easy to follow as it is what unittest expects. I.e. don't test several units in one TestCase.
- You can just use asserts instead of the cumbersome test.assertEquals etc. methods which often don't do what you need. nose can introspect the stack to make meaningful messages.
- Avoid inheritance trees with unittests, as unittest.TestCase isn't really designed for inheritance; treat it more as a convenient place to put the tests.
- Break test methods up into smaller methods. Non-test code can be extracted out into external procedures/functions which are easier to reuse.
- Tests should complete quickly, i.e. much less than a second. Remember they will be run thousands of times. A test that takes a minute to run might therefore cost days of developer time over its lifetime. To do this, encode precise failure cases in unit tests, don't do things like chuck random data at a unit.
- In fact eliminate all randomness. A failure must be consistently reproducible to be fixable.
- Read the nose documentation regarding writing tests: http://readthedocs.org/docs/nose/en/latest/writing_tests.html
- Read this too: http://ivory.idyll.org/articles/nose-intro.html
- Never use actual output as expected result output in a test. This defeats the purpose of the unit tests, as they now test nothing and just become a maintenance headache. Instead, use what you expect.
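As a sketch of these guidelines in practice — one TestCase per unit, plain asserts, fixed (non-random) inputs — using a hypothetical slugify helper as the unit under test:

```python
import unittest

def slugify(name):
    """Hypothetical unit under test: lower-case a name and join with hyphens."""
    return "-".join(name.lower().split())

class SlugifyTest(unittest.TestCase):
    # One TestCase for this one unit; plain asserts; deterministic inputs
    def test_multi_word(self):
        assert slugify("Port au Prince") == "port-au-prince"

    def test_single_word(self):
        assert slugify("Haiti") == "haiti"
```

The expected results are written out by hand from the spec, not pasted from the function's actual output.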
Running unit tests
Assume our current directory is web2py/, which contains the applications folder. We want to test our trunk application.
To run all unit tests:
Currently these tests are stored in /tests; the nose plugin will look for tests in this folder. To select particular tests, give a file or folder path relative to the tests folder.
For example, there is a gis folder containing tests for the GIS mapping code. To run these particular tests:
Using a particular Python version: nose.py runs the tests with whichever Python version ran it. It will use /usr/bin/python if run as an ordinary command as above. To select a different Python version (e.g. Python 2.5), run:
This may be useful for testing different Python versions as well as different installed modules. N.B. No testing has been done with virtualenv as of this time.
If our current working directory happens to be the tests/ folder, we can run the tests more directly:
This is the most minimal version of the command, because nose.py is designed to be run from anywhere, i.e. it does not need to be on the PATH.
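Putting the above together, the invocations look roughly like this (a sketch only; the application name "trunk" and the "gis" test folder follow the examples above, and your paths may differ):

```sh
# From web2py/: run all unit tests
python applications/trunk/tests/nose.py

# Run only the tests in the gis folder
python applications/trunk/tests/nose.py gis

# With a specific Python version (e.g. 2.5)
python2.5 applications/trunk/tests/nose.py

# From within the tests/ folder itself
python nose.py
```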
- Run specific tests while you are working on something.
- Always run all tests and make sure they all pass before a merge to a branch like trunk which must be kept stable.
- If they don't pass, fix or leave out the code that is causing failure.
- If the tests themselves are broken due to e.g. an API change, update them. Never paste actual output as expected output.
The IDE output needs to be modified to work with Web2Py.
NB Custom functions (e.g. for Random) cannot be shared with the Functional Tests (custom=JS) but the rest of the tests can be.
ToDo: Port the storeRandom function from JS to Python:
```python
import random
print "test_%i@xxxxxxxx" % random.randint(1, 10000)
```
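A minimal Python version of that snippet, wrapped as a reusable helper, might look like this (the e-mail domain here is a placeholder, since the original snippet elides it):

```python
import random

def store_random(prefix="test"):
    # Python port of the Selenium IDE storeRandom helper:
    # generate a random test e-mail address.
    # NB "example.com" is a placeholder domain, not from the original.
    return "%s_%i@example.com" % (prefix, random.randint(1, 10000))
```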
- AlterEgo entry on testing in Web2Py
- Web2Py Slice on Unit Testing
CherryPy's WebTest is good for in-process testing and can be run from a browser-less server (easier for CI integration).
These tests are stored in
NB These are a work-in-progress...need to enable access to Web2Py environment (db, etc) using:
```python
from gluon.shell import exec_environment
env = exec_environment('the_model_file.py')
```
Or could write as a 'Controller' & call from CLI:
```sh
python web2py.py -S appname -M -R yourscript.py
```
Another similar option could be Pylon's WebTest
Agile documentation which can be run using Web2Py's Admin UI.
- e.g. http://127.0.0.1:8000/admin/default/test/sahana/default.py
To extend these to web applications, we have a module which uses wsgi_intercept & CherryPy's WebTest:
This can be used from Controllers like:
```python
def login():
    """ Login

    >>> from applications.sahana.modules.s3.s3test import WSGI_Test
    >>> test = WSGI_Test(db)
    >>> '200 OK' in test.getPage('/sahana/%s/login' % module)
    True
    >>> test.assertHeader("Content-Type", "text/html")
    >>> test.assertInBody('Login')
    """
```
This works fine, although if an assert fails then the UI gets stuck :/
The 'db' access part isn't yet working.
Note that Web2Py uses a big doctest at the end of each file:
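The general pattern is a module whose docstrings contain doctests, with a doctest runner at the bottom of the file. A generic sketch of the idea (not web2py's actual code):

```python
def double(x):
    """Return twice x.

    The docstring doubles as a test:

    >>> double(2)
    4
    """
    return 2 * x

if __name__ == "__main__":
    # web2py source files end with a doctest runner like this
    import doctest
    doctest.testmod()
```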
We are starting to use Jenkins to run the Functional Tests automatically.
- Infrastructure support docs
Other options: http://stackoverflow.com/questions/225598/pretty-continuous-integration-for-python
Attachments:
- Regression tests.pdf (482.2 KB)
- Source for the PDF
- Interactive Wrapper for Selenium