Changes between Version 18 and Version 19 of BluePrint/Testing
Timestamp: 04/19/13 04:12:53
[[TOC]]
= Blueprint: QA (General Ideas) =
We have a [http://eden.sahanafoundation.org/wiki/DeveloperGuidelines/Testing Testing process] in current use.[[BR]]
This page is for looking at potential improvements.

Behaviour-Driven Development takes Test-Driven Development further, to focus on the Specification rather than the Verification, using something like [http://www.codeplex.com/pyspec pyspec] or [http://pypi.python.org/pypi/PyFIT/0.8a2 PyFIT] to provide testable specs:
 * http://behaviour-driven.org/BDDProcess
 * http://fitnesse.org/FitNesse.AcceptanceTests

There are a huge number of Testing Tools available to cover the various parts of the Testing process:
 * http://pycheesecake.org/wiki/PythonTestingToolsTaxonomy
 * http://vallista.idyll.org/~grig/articles/
A community is available for assistance:
 * http://lists.idyll.org/listinfo/testing-in-python

Testing procedure for checking in code and merging:
 * After a merge, run all tests.
 * If the tests don't pass, don't commit the merge.
In other words, all tests must pass before pushing to a stable branch. This does not stop buggy code without tests getting into a branch, so a possible future enhancement might be to ensure that all new code gets tested, e.g. by requiring 100% coverage.

There may be a script added to automate this.

----
Testing that Users/customers should be doing:
== Acceptance or 'Customer' tests ==
These are the highest-level tests, which check that the right thing has been built.
These can often be manual tests, but automating them early on helps avoid wasted effort and disappointment by highlighting things the customer does not need.
It may be enough to let the customer test the system for a few days to ensure satisfaction.

----
Testing that Developers should be doing:
== Unit Tests (must do) ==
"Building the Code Right"
[[BR]]
The current implementation of Unit Tests uses Python's unittest.
[[BR]]
Its details can be found here - [http://eden.sahanafoundation.org/wiki/QA#UnitTests Unit Tests]
[[BR]]
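As a reminder of the unittest pattern, a minimal self-contained sketch (the function under test is hypothetical, standing in for any Eden module code):

```python
import unittest

def format_person_name(first, last):
    """Hypothetical helper - stands in for real application code."""
    return ("%s %s" % (first.strip(), last.strip())).strip()

class FormatPersonNameTests(unittest.TestCase):
    def test_joins_first_and_last(self):
        self.assertEqual(format_person_name("Ada", "Lovelace"), "Ada Lovelace")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(format_person_name("  Ada  ", ""), "Ada")
```

Run it with `python -m unittest <module>`, or let a discovery tool such as Nose pick it up.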
 * [http://www.python.org/doc/2.6/library/doctest.html DocTest] - inline with code: Agile Documentation
 * Web2Py supports running doctests on Controllers from the admin UI, e.g.: http://127.0.0.1:8000/admin/default/test/sahana/default.py
 * http://agiletesting.blogspot.com/2005/01/python-unit-testing-part-2-doctest.html
 * !DocTests for HTML apps (good since in-process, hence can capture errors): http://agiletesting.blogspot.com/2006/04/in-process-web-app-testing-with-twill.html
 * Uses wsgi_intercept: http://code.google.com/p/wsgi-intercept/
 * [http://pypi.python.org/pypi/dutest dutest] - !DocTest !UnitTest integration (includes HTML-aware output checkers such as [http://codespeak.net/lxml/api/lxml.doctestcompare-pysrc.html lxml.doctestcompare.LHTMLOutputChecker])
 * http://ojs.pythonpapers.org/index.php/tpp/article/viewArticle/56
 * [http://docs.python.org/library/unittest.html UnitTest] (formerly [http://pyunit.sourceforge.net PyUnit])
 * http://agiletesting.blogspot.com/2005/01/python-unit-testing-part-1-unittest.html
 * http://diveintopython.org/unit_testing/index.html
 * [http://somethingaboutorange.com/mrl/projects/nose/ Nose] - a discovery-based unittest extension
 * [http://mechanicalcat.net/tech/webunit Webunit] - adds support for HTTP GET/POST testing to unittest
 * [http://www.cherrypy.org/wiki/Testing#Usingthetesttoolswithyourownapplications WebTest] - !CherryPy's extensions to unittest
 * http://www.cherrypy.org/browser/trunk/cherrypy/test/webtest.py
== Continuous Integration ==
Whenever a commit is made, it should be checked to see that it doesn't break anything:
 * [http://bitten.edgewall.org Bitten] - integrates with Trac
 * [http://cruisecontrol.sourceforge.net CruiseControl] - integrates with Trac: https://oss.werkbold.de/trac-cc/
 * [https://launchpad.net/pqm Patch Queue Manager] - integrates with Bzr (allows branch merging)
'''Note: As of January 2012, BZR/Launchpad info for eden is deprecated. Please visit the GitHub page. Thanks.'''[[BR]]

Alternate options which could be investigated:
 * http://buildbot.net/trac
 * http://redsymbol.net/talks/auto-qa-python/
 * http://confluence.public.thoughtworks.org/display/CC/CI+Feature+Matrix
 * An instance of Eden can also be used, which would enable scheduling, allow subscription to notifications of test results, and provide formatted results.

== Regression Testing ==
Run by developers after a certain number of changes, or whenever they like.
 * http://www.pycheesecake.org/
 * Case Study: http://pycheesecake.org/wiki/CleaningUpPyBlosxom
 * [http://www.logilab.org/857 PyLint]
 * Review: http://www.doughellmann.com/articles/CompletelyDifferent-2008-03-linters/index.html
 * http://docs.python.org/library/test.html

== Documentation ==
As well as writing !DocStrings in all functions, we can generate an overall API reference using:
 * http://epydoc.sourceforge.net
If writing a separate manual, then we can use:
 * http://docutils.sourceforge.net

----
Testing that Testers should be doing as part of [http://en.wikipedia.org/wiki/Acceptance_test Acceptance]:
== Boundary Testing (should do) ==
"Building the Right Code"
Checks the functionality of modules against [BluePrints specs].

This sees the application as a black box & so the same tests could be run here against both the Python & PHP versions, for instance.

Sahana is a Web-based application, so testing should be done from the browser's perspective.

Functional tests can be written using [http://seleniumhq.org Selenium]:
 * The details about the current implementation of Selenium can be found here - [http://eden.sahanafoundation.org/wiki/QA/Automated/Selenium SeleniumTests]
 * A lot of Selenium-related articles: http://vallista.idyll.org/~grig/articles/
 * Nice slides on Selenium: http://www.slideshare.net/alexchaffee/fullstack-webapp-testing-with-selenium-and-rails

A new alternative that we should look at is [http://www.getwindmill.com/features Windmill].
Alternate options which could be investigated:
 * [http://wwwsearch.sourceforge.net/mechanize/ Mechanize] - library for programming website browsing
 * [http://twill.idyll.org/testing.html Twill] is built on Mechanize
 * [http://pypi.python.org/pypi/zope.testbrowser/3.6.0a1 zope.testbrowser] is built on Mechanize (& not Zope-specific)
 * MaxQ: http://agiletesting.blogspot.com/2005/02/web-app-testing-with-python-part-1.html
 * [http://blog.jeffhaynie.us/introducing-testmonkey.html TestMonkey] - not ready for primetime, but worth keeping an eye on
 * [http://jakarta.apache.org/jmeter/ JMeter]
 * [http://www.badboy.com.au Badboy]
 * [http://pythonpaste.org/testing-applications.html Paste]

== Integration Testing (good thing) ==
We depend on various 3rd-party components, so we need to ensure that upgrades to these components don't break any of our functionality:
 * Web2Py
 * !CherryPy
 * SimpleJSON
 * T2
 * !OpenLayers
 * jQuery
 * Ext

== Usability Tests ==
 * [http://www.it.rit.edu/~jxs/development/Sahana/00_UI_Comments.html UI Guidelines] - comments on UI issues in Sahana2
=== Accessibility ===
 * Are we XHTML 1.0 compliant?
 * Are we usable without !JavaScript?
== Performance Tests ==
Whilst the Web2Py framework is fast, we should check that we're not doing anything stupid to slow it down:
 * http://groups.google.com/group/web2py/browse_thread/thread/cf5c5bd53bc42d49

=== Load Tests ===
How many simultaneous users can the system support?
 * We recommend using [Testing/Load Tsung]
 * [http://www.joedog.org/index/siege-home Siege]

=== Stress Tests ===
If extreme load is applied to the application, does it recover gracefully?
 * Tools as above, but using more extreme parameters

== Security Tests ==
Whilst the Web2Py framework is secure by design, we should validate this:
 * http://mdp.cti.depaul.edu/examples/default/features
Things developers can do to reduce risks:
 * http://www.sans.org/top25errors/#cat1
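One of the classic risks on that list is SQL injection. Web2Py's DAL parameterizes queries for us, but anywhere raw SQL is written the same discipline applies; a small illustration using sqlite3 (hypothetical table, not Eden code):

```python
import sqlite3

# In-memory database with one sample row
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT)")
conn.execute("INSERT INTO person VALUES (?)", ("Ada",))

def find_person(name):
    """Parameterized query: user input is treated as data, never as SQL."""
    cursor = conn.execute("SELECT name FROM person WHERE name = ?", (name,))
    return [row[0] for row in cursor]
```

Had the query been built by string concatenation (`"... WHERE name = '%s'" % name`), an input such as `' OR '1'='1` would match every row instead of none.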
== Smoke Tests ==
Currently there is a need for more tests in Sahana; however, writing tests takes time.
"Smoke tests" provide a way to highlight the worst failures while covering a large part of the system.
Basically, these are like a generic 'can I view this page?' acceptance test.
The idea is to write a small script that discovers all the views in the system, makes a request to each, and highlights any exceptions.
These tests can run quickly as they do not require a round-trip HTTP request.
When exceptions occur, they can be turned into regression tests.
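The discovery-and-request loop might look like this; the `fetch` callable is an assumption (run in-process, it could wrap the framework's request dispatch, e.g. via wsgi_intercept):

```python
def smoke_test(urls, fetch):
    """Call fetch(url) for every discovered view and collect the failures.

    fetch should return an HTTP status code, or raise on a server-side error.
    Returns a dict mapping each failing URL to a short description.
    """
    failures = {}
    for url in urls:
        try:
            status = fetch(url)
            if status != 200:
                failures[url] = "HTTP %d" % status
        except Exception as exc:  # any exception here is a smoke-test failure
            failures[url] = repr(exc)
    return failures
```

Each entry in the returned dict is a candidate for a new regression test.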
== Test Coverage ==
coverage is a Python command/module that allows easy measurement of test coverage over a Python program.
You can generate summary reports or pretty HTML reports of the code coverage.
 * [http://nedbatchelder.com/code/coverage/]

----
Sahana 2 Links: http://wiki.sahanafoundation.org/doku.php?id=dev:home#design_and_development_guides
----
BluePrints