[[TOC]]

= Blueprint: QA (General Ideas) =

We have a [http://eden.sahanafoundation.org/wiki/DeveloperGuidelines/Testing Testing process] in current use.[[BR]]
This page is for looking at potential improvements.

Behaviour-Driven Development takes Test-Driven Development further, focusing on the Specification rather than the Verification, using something like [http://www.codeplex.com/pyspec pyspec] or [http://pypi.python.org/pypi/PyFIT/0.8a2 PyFIT] to provide testable specs:
 * http://behaviour-driven.org/BDDProcess
 * http://fitnesse.org/FitNesse.AcceptanceTests

There are a huge number of Testing Tools available to cover the various parts of the Testing process:
 * http://pycheesecake.org/wiki/PythonTestingToolsTaxonomy
 * http://vallista.idyll.org/~grig/articles/

A community is available for assistance:
 * http://lists.idyll.org/listinfo/testing-in-python

Testing procedure for checking in code and merging:
 * After a merge, run all tests.
 * If the tests don't pass, don't commit the merge.
In other words, all tests must pass before pushing to a stable branch. This does not stop buggy code without tests getting into a branch, so a possible future enhancement might be to ensure that all new code gets tested, e.g. by requiring 100% coverage. A script may be added to automate this.

----
Testing that Users/Customers should be doing:

== Acceptance or 'Customer' tests ==
These are the highest-level tests: they check that the right thing has been built. They can often be manual tests, but automating them early on helps avoid wasted effort and disappointment by highlighting things the customer does not need. It may be enough to let the customer test the system for a few days to ensure satisfaction.

----
Testing that Developers should be doing:

== Unit Tests (must do) ==
"Building the Code Right"[[BR]]
The current implementation of Unit Tests uses Python's unittest.
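A minimal sketch of what such a unittest-based test looks like. The helper function and the test case here are hypothetical illustrations, not part of Eden:

```python
import unittest

def format_person_name(first, last):
    """Hypothetical helper: join name parts, dropping empty values."""
    return " ".join(part for part in (first, last) if part)

class FormatPersonNameTest(unittest.TestCase):
    def test_full_name(self):
        self.assertEqual(format_person_name("Ada", "Lovelace"), "Ada Lovelace")

    def test_missing_last_name(self):
        # An empty component should be dropped, not leave a stray space
        self.assertEqual(format_person_name("Ada", ""), "Ada")

# Run the suite programmatically, so the same file can be driven
# from a test runner or from another script
suite = unittest.defaultTestLoader.loadTestsFromTestCase(FormatPersonNameTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Test-discovery tools such as Nose (listed below) can collect classes like this automatically, so in practice the explicit runner at the bottom is only needed for standalone scripts.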
[[BR]]
Its details can be found here: [http://eden.sahanafoundation.org/wiki/QA#UnitTests Unit Tests][[BR]]
 * [http://www.python.org/doc/2.6/library/doctest.html DocTest] - inline with code: Agile Documentation
   * Web2Py supports running doctests on Controllers from the admin UI, e.g.: http://127.0.0.1:8000/admin/default/test/sahana/default.py
   * http://agiletesting.blogspot.com/2005/01/python-unit-testing-part-2-doctest.html
   * !DocTests for HTML apps (good since in-process, hence can capture errors): http://agiletesting.blogspot.com/2006/04/in-process-web-app-testing-with-twill.html
     * Uses wsgi_intercept: http://code.google.com/p/wsgi-intercept/
   * [http://pypi.python.org/pypi/dutest dutest] - !DocTest/!UnitTest integration (includes HTML-aware output checkers such as [http://codespeak.net/lxml/api/lxml.doctestcompare-pysrc.html lxml.doctestcompare.LHTMLOutputChecker])
     * http://ojs.pythonpapers.org/index.php/tpp/article/viewArticle/56
 * [http://docs.python.org/library/unittest.html UnitTest] (formerly [http://pyunit.sourceforge.net PyUnit])
   * http://agiletesting.blogspot.com/2005/01/python-unit-testing-part-1-unittest.html
   * http://diveintopython.org/unit_testing/index.html
 * [http://somethingaboutorange.com/mrl/projects/nose/ Nose] - a discovery-based unittest extension
 * [http://mechanicalcat.net/tech/webunit Webunit] - adds support for HTTP GET/POST testing to unittest
 * [http://www.cherrypy.org/wiki/Testing#Usingthetesttoolswithyourownapplications WebTest] - !CherryPy's extensions to unittest
   * http://www.cherrypy.org/browser/trunk/cherrypy/test/webtest.py

== Continuous Integration ==
Whenever a commit is made, it should be checked to ensure that it doesn't break anything:
 * [http://bitten.edgewall.org Bitten] - integrates with Trac
 * [http://cruisecontrol.sourceforge.net CruiseControl] - integrates with Trac: https://oss.werkbold.de/trac-cc/
 * [https://launchpad.net/pqm Patch Queue Manager] - integrates with Bzr (allows branch merging)
'''Note: As of January 2012,
BZR/Launchpad info for Eden is deprecated. Please visit the GitHub page. Thanks.'''[[BR]]

Alternate options which could be investigated:
 * http://buildbot.net/trac
 * http://redsymbol.net/talks/auto-qa-python/
 * http://confluence.public.thoughtworks.org/display/CC/CI+Feature+Matrix
 * An instance of Eden can also be used, which would enable scheduling, allow subscribing to notifications of test results, and provide formatted results.

== Regression Testing ==
Run by developers after a certain number of changes, or whenever they like.
 * http://www.pycheesecake.org/
   * Case Study: http://pycheesecake.org/wiki/CleaningUpPyBlosxom
 * [http://www.logilab.org/857 PyLint]
   * Review: http://www.doughellmann.com/articles/CompletelyDifferent-2008-03-linters/index.html
 * http://docs.python.org/library/test.html

== Documentation ==
As well as writing !DocStrings in all functions, we can generate an overall API reference using:
 * http://epydoc.sourceforge.net
If writing a separate manual, then we can use:
 * http://docutils.sourceforge.net

----
Testing that Testers should be doing as part of [http://en.wikipedia.org/wiki/Acceptance_test Acceptance]:

== Boundary Testing (should do) ==
"Building the Right Code"[[BR]]
Checks the functionality of modules against [BluePrints specs]. This sees the application as a black box, so the same tests could be run against both the Python & PHP versions, for instance.

Sahana is a Web-based application, so testing should be done from the browser's perspective.

Functional tests can be written using [http://seleniumhq.org Selenium]:
 * Details of the current Selenium implementation can be found here: [http://eden.sahanafoundation.org/wiki/QA/Automated/Selenium SeleniumTests]
 * A lot of Selenium-related articles: http://vallista.idyll.org/~grig/articles/
 * Nice slides on Selenium: http://www.slideshare.net/alexchaffee/fullstack-webapp-testing-with-selenium-and-rails

A new alternative that we should look at is [http://www.getwindmill.com/features Windmill].
Alternate options which could be investigated:
 * [http://wwwsearch.sourceforge.net/mechanize/ Mechanize] - library for programmatic website browsing
 * [http://twill.idyll.org/testing.html Twill] - built on Mechanize
 * [http://pypi.python.org/pypi/zope.testbrowser/3.6.0a1 zope.testbrowser] - built on Mechanize (& not Zope-specific)
 * MaxQ: http://agiletesting.blogspot.com/2005/02/web-app-testing-with-python-part-1.html
 * [http://blog.jeffhaynie.us/introducing-testmonkey.html TestMonkey] - not ready for primetime, but worth keeping an eye on
 * [http://jakarta.apache.org/jmeter/ JMeter]
 * [http://www.badboy.com.au Badboy]
 * [http://pythonpaste.org/testing-applications.html Paste]

== Integration Testing (good thing) ==
We depend on various 3rd-party components, so we need to ensure that upgrading these components doesn't break any of our functionality:
 * Web2Py
 * !CherryPy
 * SimpleJSON
 * T2
 * !OpenLayers
 * jQuery
 * Ext

== Usability Tests ==
 * [http://www.it.rit.edu/~jxs/development/Sahana/00_UI_Comments.html UI Guidelines] - comments on UI issues in Sahana2

=== Accessibility ===
 * Are we XHTML 1.0 compliant?
 * Are we usable without !JavaScript?

== Performance Tests ==
Whilst the Web2Py framework is fast, we should check that we're not doing anything stupid to slow it down:
 * http://groups.google.com/group/web2py/browse_thread/thread/cf5c5bd53bc42d49

=== Load Tests ===
How many simultaneous users can the system support?
 * We recommend using [Testing/Load Tsung]
 * [http://www.joedog.org/index/siege-home Siege]

=== Stress Tests ===
If extreme load is applied to the application, does it recover gracefully?
 * Tools above, but using more extreme parameters

== Security Tests ==
Whilst the Web2Py framework is secure by design, we should validate this:
 * http://mdp.cti.depaul.edu/examples/default/features
Things developers can do to reduce risks:
 * http://www.sans.org/top25errors/#cat1

== Smoke Tests ==
The current implementation can be found here: [http://eden.sahanafoundation.org/wiki/QA#SmokeTests SmokeTests][[BR]]
"Smoke tests" provide a way to highlight the worst failures, and cover a large part of the system. Basically, these are like a generic 'can I view this page?' acceptance test. The idea is to write a small script that discovers all the views in the system and makes a request to each of them, highlighting exceptions. These tests can run quickly as they do not require a round-trip HTTP request. When exceptions occur, they can be turned into regression tests.

== Test Coverage ==
[http://nedbatchelder.com/code/coverage/ coverage] is a Python command/module that allows easy measurement of test coverage over a Python program. You can generate summary reports or pretty HTML reports of the code coverage.

BluePrints
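The smoke-test idea above can be sketched as follows. The tiny WSGI application and its view table are hypothetical stand-ins for Eden, used only to show the in-process mechanism (no HTTP round-trip):

```python
# Minimal smoke-test sketch: request each known view in-process by calling
# the WSGI application directly, and record any exceptions raised.

def demo_app(environ, start_response):
    """Hypothetical application: two working views and one broken one."""
    views = {
        "/": lambda: b"home page",
        "/about": lambda: b"about page",
        "/broken": lambda: 1 // 0,   # simulates a view that raises
    }
    body = views[environ["PATH_INFO"]]()
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

def smoke_test(app, paths):
    """Call each view through the WSGI interface; collect failures."""
    failures = {}
    for path in paths:
        environ = {"PATH_INFO": path, "REQUEST_METHOD": "GET"}
        try:
            list(app(environ, lambda status, headers: None))
        except Exception as exc:
            failures[path] = repr(exc)   # candidate for a new regression test
    return failures

failures = smoke_test(demo_app, ["/", "/about", "/broken"])
print(failures)
```

Running such a script under coverage (e.g. `coverage run` followed by `coverage report`) combines the two ideas above: the report shows which parts of the code the smoke tests actually exercised.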