Changes between Version 18 and Version 19 of BluePrint/Testing

04/19/13 04:12:53




= Blueprint: QA (General Ideas) =
We have a [ Testing process] in current use.[[BR]]
This page is for looking at potential improvements.
== Introduction ==
This blueprint outlines the development of an automatic testing framework, which will provide robust testing of the code-base and help ensure proper maintenance of the code.

Behaviour-Driven Development takes Test-Driven Development further, focusing on specification rather than verification, using tools such as [ pyspec] or [ PyFIT] to provide testable specs.

Whenever changes are made to the code, they have to be validated and tested for integration with the other components of the code-base; this framework will provide that support.
There are a huge number of testing tools available to cover the various parts of the testing process, and a community available for assistance.

With tests running on a scheduler, continuous testing can be done. This is important to Sahana, given the rapid movement of the code-base. Sahana already has an automatic test framework, whose details can be found [ here].
Testing procedure regarding checking in code and merging:
 * After a merge, run all tests.
 * If the tests don't pass, don't commit the merge.
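The procedure above could be scripted with a small wrapper along these lines. This is only a sketch: the temporary test directory and trivial test are stand-ins for the project's real suite, and the actual push command is omitted.

```shell
# Sketch of a pre-merge/pre-push test gate (hypothetical layout, not Eden's).
# Create a stand-in test suite in a temporary directory.
TESTDIR=$(mktemp -d)
cat > "$TESTDIR/test_example.py" <<'EOF'
import unittest

class TestExample(unittest.TestCase):
    def test_truth(self):
        self.assertEqual(1 + 1, 2)

if __name__ == "__main__":
    unittest.main()
EOF

# Run the whole suite; only proceed to the push step when everything passes.
if python3 -m unittest discover -s "$TESTDIR" -p "test_*.py"; then
    echo "all tests passed: safe to push to the stable branch"
    # the real push command (e.g. to the stable branch) would go here
else
    echo "tests failed: do not commit the merge"
fi
```

In a real setup the `discover` call would point at the project's test directory, and the gate would wrap the actual push.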
In other words, all tests must pass before pushing to a stable branch. There may be a script added to automate this. This does not stop buggy code without tests getting into a branch, so a possible future enhancement might be to ensure that all new code gets tested, e.g. by requiring 100% coverage.

== Stakeholders ==
 * Developers - With an automatic testing framework set up, it is relatively easy for developers to test their changes to the code.
 * People who want to deploy Sahana - They want to run the tests to ensure that the system is well integrated and ready for deployment.
 * Sahana as a service - Automated testing will provide quality assurance to its clients.
 * Bug Marshals

Testing that users/customers should be doing:
== Acceptance or 'Customer' Tests ==
These are the highest-level tests, which check that the right thing has been built.
These can often be manual tests, but automating them early on helps avoid wasted effort and disappointment by highlighting things the customer does not need.
It may be enough to let the customer test the system for a few days to ensure satisfaction.

== User Stories ==
 * Developers will run the test suite after making changes to the code, to check that the changes integrate with the rest of the system. Developers may also watch the test results mailed to the list, to spot bugs introduced into the system.
 * People who want to deploy will run the test suite to check the functionality of the system.
 * The "bug marshals" will review the test results mailed periodically by the CI server, and either fix the bugs or log them.

Testing that developers should be doing:
== Unit Tests (must do) ==
"Building the Code Right"

The current implementation of the unit tests uses Python's unittest.
Its details can be found here - [ Unit Tests]

 * [ DocTest] - inline with code: agile documentation
  * Web2Py supports running doctests on controllers from the admin UI
  * !DocTests for HTML apps (good since they run in-process and so can capture errors)
   * Uses wsgi_intercept
 * [ dutest] - !DocTest/!UnitTest integration (includes HTML-aware output checkers such as [ lxml.doctestcompare.LHTMLOutputChecker])
 * [ UnitTest] (formerly [ PyUnit])
  * [ Nose] - a discovery-based unittest extension
  * [ Webunit] - adds support for HTTP GET/POST testing to unittest
  * [ WebTest] - !CherryPy's extensions to unittest
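As an illustration of the unittest/doctest style listed above, here is a minimal sketch. The helper function and its behaviour are invented for the example; they are not taken from the Eden code-base.

```python
import doctest
import unittest


def format_phone(raw):
    """Normalise a phone number (illustrative helper, not Eden code).

    The doctest below runs inline with the code, acting as agile documentation:

    >>> format_phone(" 0094-11-111 ")
    '009411111'
    """
    return raw.strip().replace("-", "").replace(" ", "")


class TestFormatPhone(unittest.TestCase):
    """Plain unittest-style verification of the same helper."""

    def test_strips_separators(self):
        self.assertEqual(format_phone(" 0094-11-111 "), "009411111")

    def test_empty_input(self):
        self.assertEqual(format_phone(""), "")


if __name__ == "__main__":
    # Run the inline doctests alongside the unit tests.
    doctest.testmod()
    unittest.main(exit=False)
```

Tools like Nose (or unittest's own discovery) would pick up such `TestCase` classes automatically across the code-base.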
== Requirements ==
=== Functional ===
 * Maintain the CI server to run the tests periodically and send out the results.
 * Automatically create dummy data while testing.
 * Extend the role tests (currently limited to the IFRC roles for RMS).
 * Run Selenium and smoke tests in multiple templates with multiple user accounts. Ideally, these tests can be run against each and every template where the target functionality is available; for templates where the functionality is not available, the test should deactivate itself.
 * Clearer error messages, written so that anyone can reproduce the failure.
 * The CI server should catch failures in any template, so it should run tests across templates and include the template name in the aggregated report.
 * Simplify the Selenium tests and make them easier to read and more robust.
 * Adapt tests to meet the needs of the evolving CI server (SysAdmin/ContinuousIntegration).
 * Load tests.
=== Non-functional ===
=== Interoperability ===
=== Standards ===
=== System Constraints ===

== Continuous Integration ==
Whenever a commit is made, it should be checked to verify that it doesn't break anything.
 * [ Bitten] - integrates with Trac
 * [ CruiseControl] - integrates with Trac
 * [ Patch Queue Manager] - integrates with Bzr (allows branch merging)
'''Note: As of January 2012, BZR/Launchpad info for Eden is deprecated. Please visit the GitHub page. Thanks.'''[[BR]]
Alternate options which could be investigated:
 * An instance of Eden can also be used, which will enable scheduling, allow subscribing to notifications of test results, and can also provide formatted results.

== Regression Testing ==
Fired by developers after a certain number of changes, or whenever they like.
 * [ PyLint]

== Documentation ==
As well as writing !DocStrings in all functions, we can generate an overall API reference; if writing a separate manual, a documentation tool can be used.

Testing that testers should be doing as part of [ Acceptance]:
== Boundary Testing (should do) ==
"Building the Right Code"

Checks the functionality of modules against [BluePrints specs].
This sees the application as a black box, so the same tests could be run against both the Python and PHP versions, for instance.
Sahana is a web-based application, so testing should be done from the browser's perspective.

Functional tests can be written using [ Selenium]:
 * The details of the current implementation of Selenium can be found here - [ SeleniumTests]
 * A lot of Selenium-related articles:
 * Nice slides on Selenium:
A new alternative that we should look at is [ Windmill].

Alternate options which could be investigated:
 * [ Mechanize] - library for programming website browsing
  * [ Twill] is built on Mechanize
  * [ zope.testbrowser] is built on Mechanize (and not Zope-specific)
 * MaxQ
 * [ TestMonkey] - not ready for primetime but worth keeping an eye on
 * [ JMeter]
 * [ Badboy]
 * [ Paste]
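To illustrate the black-box idea, here is a minimal sketch using only the standard library. The stand-in WSGI application and the `/default/index` path are invented for the example; in practice Selenium or one of the tools above would drive a real browser against a running instance.

```python
import threading
import urllib.request
from wsgiref.simple_server import make_server


def demo_app(environ, start_response):
    """Stand-in application; the real target would be a deployed instance."""
    if environ["PATH_INFO"] == "/default/index":
        start_response("200 OK", [("Content-Type", "text/html")])
        return [b"<html><body>Welcome</body></html>"]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]


def check_page(base_url, path, expected_text):
    """Black-box check: fetch a page over HTTP and look for expected content."""
    with urllib.request.urlopen(base_url + path) as resp:
        status = resp.status
        body = resp.read().decode("utf-8")
    return status == 200 and expected_text in body


# Serve the stand-in app on an ephemeral port in a background thread.
server = make_server("127.0.0.1", 0, demo_app)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

ok = check_page(f"http://127.0.0.1:{port}", "/default/index", "Welcome")
server.shutdown()
print("page check passed" if ok else "page check failed")
```

Because the check only speaks HTTP, the same test would work unchanged against any implementation of the application.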
== Use-Cases ==
[[Image(Use case testing.png)]]

== Design ==
=== Workflows ===
 * The CI server runs the tests periodically and sends out the results. On a regular basis, the bug marshals review the test results and check whether the reported negatives are false negatives or true negatives. If they are false negatives, they fix the tests; if they are true negatives, they report the bug on Trac or fix it themselves.
 * The developers make changes to the code and run the tests. If their changes do not break the tests, then the code is viable for merging.
 * The clients run the tests to make sure that the functionality this software is intended to provide is fulfilled.
=== Technologies ===
 * For setting up the CI server, some of the technologies which can be used are given here - [ ContinuousIntegration]
 * For continuous integration, an instance of Eden can also be used, which will enable scheduling, allow subscribing to notifications of test results, and can also provide formatted results.
 * For functional tests, Selenium and smoke tests are currently used. The current Selenium tests should be made more robust, and they should work across browsers. Currently there is support for the Firefox WebDriver (up to version 16) and the Chrome WebDriver; we need to add support for the Safari, Opera and Internet Explorer WebDrivers.
 * The role tests (which currently run only on the IFRC template and use Selenium unit tests) are to be extended to run on multiple templates.

== Integration Testing (good thing) ==
We depend on various 3rd-party components, so we need to ensure that upgrades of these components don't break any of our functionality:
 * Web2Py
  * !CherryPy
  * SimpleJSON
 * T2
 * !OpenLayers
 * jQuery
 * Ext

== Usability Tests ==
 * [ UI Guidelines] - comments on UI issues in Sahana2
=== Accessibility ===
 * Are we XHTML 1.0 compliant?
 * Are we usable without !JavaScript?

== Performance Tests ==
Whilst the Web2Py framework is fast, we should check that we're not doing anything stupid to slow it down.

=== Load Tests ===
How many simultaneous users can the system support?
 * We recommend using [Testing/Load Tsung]
 * [ Siege]

=== Stress Tests ===
If extreme load is applied to the application, does it recover gracefully?
 * Tools above, but using more extreme parameters

== Security Tests ==
Whilst the Web2Py framework is secure by design, we should validate this.
There are also things developers can do to reduce risks.
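As a toy illustration of the load-test idea, the sketch below fires concurrent requests at a stand-in local server and counts successes. The server, request count and worker count are arbitrary stand-ins; real load testing would use Tsung, Siege or JMeter against a deployed instance.

```python
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from socketserver import ThreadingMixIn
from wsgiref.simple_server import make_server, WSGIRequestHandler, WSGIServer


class ThreadingWSGIServer(ThreadingMixIn, WSGIServer):
    """Handle each request in its own thread so concurrent load is realistic."""


class QuietHandler(WSGIRequestHandler):
    def log_message(self, *args):
        pass  # suppress per-request logging to keep the output readable


def app(environ, start_response):
    """Trivial stand-in for the application under load."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]


server = make_server("127.0.0.1", 0, app,
                     server_class=ThreadingWSGIServer,
                     handler_class=QuietHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()


def fetch(_):
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
        return resp.status


# 50 requests from 10 simultaneous workers (arbitrary demo numbers).
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    statuses = list(pool.map(fetch, range(50)))
elapsed = time.perf_counter() - start

server.shutdown()
success = statuses.count(200)
print(f"{success}/{len(statuses)} requests succeeded in {elapsed:.2f}s")
```

Pushing the worker and request counts up (the stress-test case) shows whether the system degrades gracefully or falls over.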
== Implementation ==
The Selenium tests can be run mainly on the IFRC template. With some changes, they can be run on the default template as well. However, they don't work across templates.

The unit tests expect particular modules to be enabled in the template. If those modules are not enabled and the unit tests are run in that template, then false negatives are reported.

The current implementation of the Selenium tests, smoke tests and role tests can be found here -

Unit tests and benchmark tests can be found here -

== Smoke Tests ==
Currently there is a need for more tests in Sahana; however, writing tests takes time.
"Smoke tests" provide a way to highlight the worst failures while covering a large part of the system.
Basically these are like a generic 'can I view this page?' acceptance test.
The idea is to write a small script that discovers all the views in the system, makes a request to each, and highlights exceptions.
These tests can run quickly as they do not require a round-trip HTTP request.
When exceptions occur, they can be turned into regression tests.

== Test Coverage ==
`coverage` is a Python command/module that allows easy measurement of test coverage over a Python program.
You can generate summary reports or pretty HTML reports of the code coverage.
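The smoke-test idea above can be sketched in-process as follows. The view functions here are hypothetical stand-ins; a real script would discover Eden's controllers automatically and call them with a suitable request environment.

```python
import traceback


# Hypothetical stand-ins for discovered views; a real script would
# enumerate the application's controllers/functions automatically.
def view_index():
    return "<html>home</html>"

def view_about():
    return "<html>about</html>"

def view_broken():
    raise RuntimeError("template missing")  # simulated failure


def smoke_test(views):
    """Call every view in-process (no HTTP round-trip), collecting exceptions."""
    failures = {}
    for name, view in views.items():
        try:
            view()
        except Exception:
            failures[name] = traceback.format_exc()
    return failures


views = {"index": view_index, "about": view_about, "broken": view_broken}
failures = smoke_test(views)
for name, trace in failures.items():
    print(f"FAIL {name}:\n{trace}")
print(f"{len(views) - len(failures)}/{len(views)} views rendered without exceptions")
```

Each collected traceback is a candidate regression test: once the exception is fixed, a test asserting the view renders cleanly keeps it fixed.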
== References ==
[ BluePrintTesting][[BR]]
[ QA][[BR]]
[ DeveloperGuidelinesTesting][[BR]]
[ SysAdmin/ContinuousIntegration][[BR]]
[[BR]]
BluePrint