= Testing =
[[TOC]]

''"A bug is a test case you haven't written yet"''

''"Unit Tests allow merciless [http://diveintopython.org/refactoring/refactoring.html refactoring]"''

This page describes our current approach, as opposed to our [wiki:BluePrintTesting BluePrint for future options].

Test-Driven Development is a programming style in which you first write your test cases (from the [BluePrint specs]) and then proceed to make them pass.

== Introduction ==
Selenium provides the ability to test Sahana Eden as users see it - namely through a web browser. This gives us end-to-end Functional Testing, although it can also be used for Unit Testing.

We are building our framework around the new !WebDriver API, despite having some legacy code in the older format:
 * http://seleniumhq.org/docs/appendix_migrating_from_rc_to_webdriver.html#why-migrate-to-webdriver

The tests are stored in {{{eden/modules/tests}}}.

== Installation of the testing environment on your machine ==
In order to execute the automated Selenium-powered test scripts, you must install the Selenium WebDriver bindings into Python.

Download and install the latest Selenium package:
{{{
wget http://pypi.python.org/packages/source/s/selenium/selenium-2.22.1.tar.gz
tar zxvf selenium-2.22.1.tar.gz
cd selenium-2.22.1
python setup.py install
}}}

== Running / Executing Automated Test Scripts ==
Before running the Selenium scripts, you should put your database into a known state:
{{{
clean
}}}

For the whole test suite, it is assumed that you are using:
{{{
deployment_settings.base.prepopulate = ["IFRC_Train"]
}}}

Run the whole test suite for the Eden application:
{{{
cd web2py
python web2py.py -S eden -M -R applications/eden/modules/tests/suite.py
}}}

Run a class and all tests in that class:
{{{
cd web2py
python web2py.py -S eden -M -R applications/eden/modules/tests/suite.py -A -C mytestclass
}}}

Run just one test within a class:
{{{
cd web2py
python web2py.py -S eden -M -R applications/eden/modules/tests/suite.py -A -C mytestclass -M mytestmethod
}}}

=== Command Line Arguments ===
A number of command line arguments have been added, and more are on the way. To see the latest list of available options, use the {{{--help}}} switch:
{{{
python modules/tests/suite.py --help
}}}

Important options include:
 * {{{-C}}}: the class to run
 * {{{-M}}}: the test method within a class to run; when you use this option, either combine it with {{{-C}}} or give the method in the format {{{class.method}}}

If you have HTMLTestRunner installed, a nicely formatted HTML report will be generated; should you want to disable this, use the {{{--nohtml}}} option. The HTML report will be written to the path given by the {{{--html_path}}} switch, which by default will be the web2py folder, since that is where the test scripts are run from. The file name will have a timestamp appended to it; if you prefer just a date, use the {{{html_name_date}}} option.

The {{{--suite}}} option will run certain predefined sets of tests. At the moment it supports '''smoke''', which runs a test looking for broken URLs; otherwise it runs all the tests. If a class or method is selected, this option is ignored.

== Writing / Creating your own test scripts ==
We aim to make it as easy as possible to write additional tests which can be plugged into the testing suite and/or executed separately.
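For orientation, here is a minimal, self-contained sketch of what a WebDriver-based test can look like. The class name, URL, element IDs and messages below are illustrative assumptions, not the project's actual conventions; the real tests run inside the web2py environment via {{{suite.py}}} and reuse shared helpers, so use the canonical example referenced below as your actual template.

{{{
# Standalone WebDriver sketch - class name, URL, IDs and messages are
# assumptions for illustration only. Eden's real tests live under
# eden/modules/tests, are collected by suite.py and reuse shared helpers
# rather than plain unittest.
import unittest
from selenium import webdriver

class CreateOrganisation(unittest.TestCase):

    def setUp(self):
        # Any installed browser driver works; Firefox needs no extra setup
        self.browser = webdriver.Firefox()

    def test_create_organisation(self):
        b = self.browser
        b.get("http://localhost:8000/eden/org/organisation/create")
        # Locating by ID is the most stable strategy (see the notes on
        # selectors further down) - this ID is assumed for the example
        b.find_element_by_id("org_organisation_name").send_keys("Example Org")
        b.find_element_by_css_selector("input[type=submit]").click()
        # Crude check that the submission went through
        self.assertTrue("Example Org" in b.page_source)

    def tearDown(self):
        self.browser.quit()

if __name__ == "__main__":
    unittest.main()
}}}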
The canonical example is: {{{eden/modules/tests/org/org_create_organisation.py}}}

New tests should be stored in a subfolder per module, adding the folder name to {{{eden/modules/tests/__init__.py}}} and creating an {{{__init__.py}}} in the subfolder.

The key is to make tests which are as robust (least fragile) as possible with respect to:
 * State (we should be able to run individual tests easily, with each test checking its current state as required)
 * Deployment_Settings
 * Localisation
 * Theme

This suggests refactoring tests to centralise common elements into a library, so that fixes need only happen in one place.

There are a number of possible selectors to use to find your elements; the 'ID' may be the most stable (as in the sketch above), so don't be afraid of patching the code to add IDs where you'd like to be able to reach elements reliably:
 * http://selenium.googlecode.com/svn/trunk/docs/api/py/selenium/selenium.selenium.html

We separate the data out into a separate file, so that it is easy to change and the tests can be re-run with different data sets (a minimal sketch of this pattern is given at the end of this page).

== !ToDo ==
 * Store results in a format suitable for use by CI
 * Namespacing of tests
 * Include per-test timings
 * Integrate this into !GitHub using [https://buildhive.cloudbees.com BuildHive]
 * Run from Nose?
   * http://blog.shiningpanda.com/2011/12/introducing-selenose.html

== See Also ==
 * TestCases - List of things to test
 * http://code.google.com/p/selenium/wiki/PageObjects - Style suggestion

Systers' approach:
 * http://systers.org/systers-dev/doku.php/automated_functional_testing
 * List of Tests: http://systers.org/systers-dev/doku.php/master_checklist_template
 * GSoC project: http://systers.org/systers-dev/doku.php/svaksha:patches_release_testing_automation

Alternative Options:
 * http://zesty.ca/scrape/
 * [http://pycon.blip.tv/file/3261277 Lightning Talk] (2.30)
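== Example: Separating Test Data ==
As mentioned above, test data is kept separate from test logic so that the same test can be re-run against different data sets. The file name, import path, field names and helper in this sketch are assumptions for illustration, not the actual layout used by the suite; check the existing tests under {{{eden/modules/tests}}} for the real pattern.

{{{
# modules/tests/org/org_data.py  (hypothetical file name)
# Keeping the records in their own module means the test logic never
# needs to change when the data set does.
ORGANISATIONS = [
    {"name": "Example Relief Org", "acronym": "ERO"},
    {"name": "Another Aid Agency", "acronym": "AAA"},
]
}}}

The test itself then only imports and loops over the records:

{{{
# Inside the test - the import path below is an assumption
from tests.org.org_data import ORGANISATIONS

for org in ORGANISATIONS:
    # fill in and submit the create form once per record, reusing the
    # form-filling steps from the sketch near the top of this page
    print("creating %(name)s (%(acronym)s)" % org)
}}}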