
Version 35 (modified by somayjain, 8 years ago)


BluePrint: Test Suite

Introduction

This blueprint outlines the development of an automatic testing framework. The framework will provide robust testing of the code-base and help ensure that the code is properly maintained.

Whenever changes are made to the code, they have to be validated and tested together with the other components of the system. This framework will provide that support.

With tests running on a scheduler, testing can be continuous. This is important for Sahana, given how rapidly the code-base moves. Sahana already has an automatic test framework, whose details can be found here

Stakeholders

  • Developers - With an automatic testing framework set up, it becomes much easier for developers to test their changes to the code.
  • People who want to deploy Sahana - They will want to run the tests to ensure that the system is well integrated and ready for deployment.
  • Bug Marshals

User Stories

  • Developers will run the test suite after making changes to check that their code works when integrated with the rest of the system. If their changes do not break the tests, the code is viable for merging.
  • Developers may also review the test results mailed to the list to spot possible bugs introduced into the system.
  • On a regular basis, the bug marshals review the test results sent by the CI Server and check whether the reported failures are false negatives or true negatives. For false negatives, they either fix the tests or log a ticket on Trac for the bug in the testing code. For true negatives, they either fix the bug themselves or log a ticket on Trac for the bug in the code under test.
  • Clients who want to deploy will run the test suite to check the functionality of the system.

Use Case Diagram

Requirements

Functional

CI Server

  • Maintain the CI Server to-
    1. Run the tests daily
    2. Generate an aggregated report for all the templates
    3. Send the report via email

Test Error Messages

  • Error messages are only generated for real errors in Sahana Eden (no false positives)
  • Error messages clearly indicate how the error can be manually repeated and/or where in the code it is occurring

Selenium Tests

  • Run Selenium Tests in multiple templates with multiple user accounts.
  • Make the Selenium tests independent of the template and of the field types used in the template, so that tests are easier to read and write, and more robust.

Role Tests

  • Extend Role Tests to run on different templates (currently limited to the IFRC roles for RMS)

Smoke Tests

  • Run Smoke tests in multiple templates.

Load Tests

  • Load Tests : Measure the performance of Eden under both normal and anticipated peak load conditions.

Design

Workflows

CI Server

  • The CI Server -
    1. Fetches the latest code from Github Repository
    2. Runs the tests daily.
    3. Generates the report which includes -
      1. Template Name
      2. Test Name
      3. Location of test script in Eden
      4. Traceback, record details in case of failure of test
      5. Pass Status in case of passed test
    4. Saves the aggregate report of the tests on the server at http://82.71.213.53/RESULTS/.
    5. Mails it to the concerned people.

The workflow is depicted here -

  • People may also subscribe to receive the test results via email.
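The aggregation step in the workflow above can be sketched as follows. This is purely illustrative: the function and the field names are hypothetical placeholders, not the actual scripts running on the CI Server.

```python
# Hypothetical sketch: collect per-test results into the report fields
# listed above (Template Name, Test Name, script location, traceback on
# failure, pass status).

def build_report(results):
    """results: a list of dicts, one per test, with keys
    'template', 'test', 'script', 'passed' and (on failure) 'traceback'."""
    lines = []
    for r in results:
        lines.append("Template: %(template)s" % r)
        lines.append("Test: %(test)s (%(script)s)" % r)
        if r.get("passed"):
            lines.append("Status: PASS")
        else:
            lines.append("Status: FAIL")
            lines.append("Traceback: %(traceback)s" % r)
        lines.append("-" * 40)
    return "\n".join(lines)
```

A report built this way can then be saved to the RESULTS directory and handed to the mailer.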

Selenium Tests

  • The tests are specified in a tests.py file in each template's directory. This file contains the class names of the tests which are to be run on this template.
  • If no tests are specified in the tests.py file for a template, the tests specified for the 'default' template are run.
  • The test suite -
    1. Provides a set of functions (create, search, edit) which take the tablename, the labels of the fields and the record data as input.
    2. Automatically checks the type of each field (option, autocomplete, text, date, datetime, etc.) used in the current template and fills the form accordingly.
    3. Takes command line arguments to specify which browser the tests should run on (Firefox and Chrome are already implemented).
  • Once we have a function that automatically checks the template and the types of the fields used in the form, a code-template can be created which just needs to be copied, pasted and fed the record data when creating a new Selenium test. This makes writing a Selenium test as easy as specifying a test case. A sample code-template can be of the form -
# Create Method
def test_<testname>_create_<module_name>(self):
    self.login(account, nexturl)    # already implemented
    # The data which is to be added to the table
    data_list = [ ( column_name1, data1 ), ( column_name2, data2 ), .. so on ]
    self.create(tablename, data_list)

# Search Method
def test_<testname>_search_type_<module_name>(self):
    self.login(account, nexturl)    # already implemented
    search_fields = [ ( column_name1 , [label1, label2, ..] ),    # the list of labels are those which 
                                                                  # are to be searched for under column_name1 
                      ( column_name2 , [label1, label2, ..] ),
                      .. so on ]

    self.search(type, tablename, search_fields)    # type can be simple or advanced

# Edit Method
def test_<testname>_edit_<module_name>(self):
    self.login(account, nexturl)    # already implemented
    # The new data which is to be edited in the table
    edit_list = [ ( column_name1, data1 ), ( column_name2, data2 ), .. so on ]
    self.edit(tablename, edit_list)

Role Tests

  • The role tests are currently run only on the IFRC template. The test suite should automatically generate data for multiple templates so that the role tests can be run on them as well.
  • Automatic scripts should add the test users to the database and generate data which would perform create, read, update and delete operations to test the permitted roles of the test users.
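The checking logic for the generated data could look roughly like this. The role names, the expected-permission table and the attempt callback are all hypothetical placeholders, not the actual Eden roles or API:

```python
# Hypothetical sketch: for each role, attempt every CRUD operation on
# generated test data and compare the outcome with what the role should
# be permitted to do.

EXPECTED = {
    # role           : methods the role should be permitted to use
    "ADMIN"          : {"create", "read", "update", "delete"},
    "AUTHENTICATED"  : {"create", "read", "update"},
    "ANONYMOUS"      : {"read"},
}

def check_role(role, attempt):
    """attempt(method) -> True if the operation succeeded for this role.
    Returns the list of methods whose outcome disagreed with EXPECTED."""
    allowed = EXPECTED[role]
    failures = []
    for method in ("create", "read", "update", "delete"):
        succeeded = attempt(method)
        if succeeded != (method in allowed):
            failures.append(method)
    return failures
```

In a real run, the attempt callback would perform the operation against the server as the logged-in test user and report whether it was permitted.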

Load Test

  • The load tests will test the amount of load Eden can take under normal and peak conditions. So, the load tests should be divided into phases, with each phase incrementally increasing the number of users and requests. Features of Tsung can be used to do this effectively.
  • Client and Server should ideally be setup on separate machines.
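For illustration, a phased arrival configuration in Tsung might look like the fragment below; the durations and arrival rates are placeholder values that would need to be tuned for Eden:

```xml
<!-- Illustrative only: two phases, normal load then anticipated peak -->
<load>
  <arrivalphase phase="1" duration="10" unit="minute">
    <users arrivalrate="2" unit="second"/>   <!-- normal conditions -->
  </arrivalphase>
  <arrivalphase phase="2" duration="10" unit="minute">
    <users arrivalrate="10" unit="second"/>  <!-- anticipated peak -->
  </arrivalphase>
</load>
```

Each phase injects new simulated users at the given arrival rate, so load ramps up from phase to phase.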
Design of the Test Suite

See: BluePrint/Testing/Load

Running the tests
  • Setup your machine - Load/Setup
  • Change the path of tsung-1.0.dtd in the DOCTYPE to <tsung_directory>/share/tsung/tsung-1.0.dtd if you have changed the tsung installation directory
  • Run tsung -f <path_to_test_file> start
  • To generate the report, go to the log directory and run - /opt/tsung-1.4.2/lib/tsung/bin/tsung_stats.pl
    • Note : The installation directory for tsung is /opt/tsung-1.4.2 as per the Setup instructions on the wiki. If you have changed that, then run <tsung_directory>/lib/tsung/bin/tsung_stats.pl to generate the report.

Remote Tests

The design for Selenium Tests-

  • Prerequisite 1: Some more instances of Eden with varying configurations (template / Python version / database type) are running on other servers (preferably on the same local network as the CI Server, as this will speed up the handling of requests by the CI Server).
  • Prerequisite 2: There is a 'testing' instance of Eden running on the CI Server which contains the test suite.
  • The 'testing' instance of Eden will launch the test suite while changing the base URL used by the suite. The base URL is currently 127.0.0.1; if we change it to the IP address of one of the Eden instances running on other servers, the tests will run against that server.
  • The above task can be done as a background process on the CI Server, or by running the tests with Selenium Grid so that they run in parallel.
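Switching the base URL could be scripted roughly as below; the target addresses and the run_tests entry point are hypothetical stand-ins for the suite's actual configuration:

```python
# Hypothetical sketch: run the same test suite against several remote
# Eden instances by swapping the base URL before each run.

TARGETS = [
    "http://127.0.0.1:8000",      # local 'testing' instance
    "http://192.168.0.11:8000",   # remote instance, different template
    "http://192.168.0.12:8000",   # remote instance, different database
]

def run_suite_against(base_url, run_tests):
    """run_tests is the existing entry point of the test suite
    (hypothetical signature), pointed at one instance."""
    return run_tests(base_url=base_url)

def run_all(run_tests):
    # Collect one result set per target instance
    return dict((url, run_suite_against(url, run_tests)) for url in TARGETS)
```

On the CI Server this loop would run in the background, or be replaced by Selenium Grid dispatching the same tests in parallel.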

The last two points will be done on the CI Server using scripts (Python or bash), built on top of the current test suite so that minimal code in the current suite needs to change.

For this plan to succeed, we have to make sure that the Selenium tests run independently of the template. Investigation shows that some of the reasons why Selenium tests fail on different templates are -

  • The text in an option field does not match that given in the test data.
  • The field type assumed by the test is incorrect. Most common example - autocomplete v/s option.
  • The module is disabled.

So, the above issues must be handled prior to running the tests remotely. Since the tests are run remotely, the field type has to be determined by inspecting the HTML from within the test itself.

Once this is done, we have tests which can run on the class of templates that do not differ too much from the default and IFRC templates.

So, we run these tests on these templates for the enabled modules.
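A minimal sketch of detecting the field type from the rendered HTML follows. The markup cues used here are assumptions about common web2py/Eden widget markup (e.g. the "ac_input" autocomplete class) and would need to be verified against the actual forms:

```python
# Hypothetical sketch: guess the widget type of a single rendered form
# field from its HTML fragment, so the test can fill it appropriately.

def field_type(html_fragment):
    fragment = html_fragment.lower()
    if "<select" in fragment:
        return "option"            # dropdown of options
    if "ac_input" in fragment or "autocomplete" in fragment:
        return "autocomplete"      # autocomplete widget
    if 'class="date"' in fragment:
        return "date"              # date picker
    if "<textarea" in fragment:
        return "text"              # multi-line text
    return "input"                 # plain input field
```

With this in place, a single create/search/edit helper can dispatch on the detected type instead of hard-coding it per template.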

Technologies

CI Server

Possible options -

  • Jenkins
    • Advantages
      • Easy to build Eden
      • Easy integration with Selenium Grid, so that the Selenium tests can be run remotely and in parallel
      • Git plugin -
        • Can trigger tests on a commit easily.
        • Can automatically publish build status on the git commit.
        • Can create tag/push to master automatically on successful tests after the commit.
      • Run tests on new pull request (GitHub pull request builder plugin)
      • Email notifications about the test results.
      • IRC notifications about the test results. (IRC plugin)
      • Add more nodes on which Jenkins is run, so that the load is distributed
    • Setting Up
      • Jenkins is easy to install.
      • Installation of Selenium Grid needed.
      • Setting up some slaves for Selenium grid for cross browser, parallel selenium tests (Can be done on a single machine too)
      • Slight modifications in the Selenium test suite to incorporate Selenium Grid.
      • Configuring Jenkins to run tests on a git commit, etc, as required.
      • Configuration is done by an authenticated web interface.
    • Reference - https://wiki.jenkins-ci.org/display/JENKINS/Home
  • Selenium Grid
    • Advantages
      • Can run selenium tests across -
        • Different browsers
        • Different operating systems
        • Different machines in Parallel.
      • Works on the concept of hub and nodes.
      • The tests are run on a single machine - hub.
      • Execution will be done on different machines - nodes.
      • Will increase the speed of the Selenium tests since they will be distributed across machines.
        • Eg - Some nodes will run create, some will run the search tests.
      • We can then trigger the tests on a commit/pull request.
      • The time taken will roughly decrease depending on the number of nodes.
        • Eg - If there are 4 nodes, the tests will run roughly 4 times faster.
    • Setting Up
      • To install, the Selenium Server jar file needs to be downloaded - on the hub as well as on all nodes.
      • Will have to change some initialization code for the Selenium tests where the Webdriver is initialised.
    • Reference - http://code.google.com/p/selenium/wiki/Grid2
  • Solano
    • Comparison with Jenkins -
      • Jenkins has a lot more plugins, which are written by a much larger community. So, it becomes more extensible
        • Eg - git plugin, IRC plugin, github pull request plugin.
      • Jenkins is easier to integrate with Selenium Grid, which enables us to run the tests in parallel.
      • Jenkins has better documentation and is easier to use.
    • Disadvantages -
      • Our tests run using Python's unittest. We would have to change them to run using nosetests or py.test. Even after that change, the tests would not be able to run in parallel.
      • Reference - http://docs.tddium.com/python/
  • Sahana Eden to drive the CI
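The Webdriver initialisation change for Selenium Grid mentioned above can be sketched as follows, assuming the Selenium 2.x Python bindings (which the current suite uses); the hub URL is a placeholder for the actual Grid setup:

```python
# Hypothetical sketch: initialise either a local driver (current
# behaviour) or a remote driver that runs on a Selenium Grid node.

HUB_URL = "http://127.0.0.1:4444/wd/hub"    # address of the Grid hub

def grid_capabilities(browser):
    # Map the suite's existing browser argument to Grid capabilities
    return {"browserName": browser, "javascriptEnabled": True}

def make_driver(browser, remote=False):
    from selenium import webdriver    # requires the selenium package
    if remote:
        # The hub forwards the session to a node matching the capabilities
        return webdriver.Remote(
            command_executor=HUB_URL,
            desired_capabilities=grid_capabilities(browser))
    # Fall back to the current local behaviour (e.g. Firefox, Chrome)
    return getattr(webdriver, browser.capitalize())()
```

Only this factory would need to change in the suite; the test methods themselves keep using the driver as before.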

Selenium Tests

  • For functional tests, Selenium tests are used. The Selenium tests should work across browsers. Currently, there is support for the Firefox Webdriver (up to version 16) and the Chrome Webdriver. We need to add support for the Safari, Opera and Internet Explorer Webdrivers.

Smoke Tests

The Smoke tests visit every link in Eden to check for errors on a page and broken links. For this, twill and mechanize are used.
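The core of the crawl - extracting every link from a page so that each can be visited in turn and checked for errors - looks roughly like this. The real smoke tests use twill/mechanize, so this standard-library version is purely illustrative:

```python
# Illustrative sketch of the smoke-test walk: pull all hrefs out of a
# page so each can be fetched and checked for error tickets / broken links.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links
```

The smoke test then fetches each extracted link, records any HTTP errors or Eden error tickets, and repeats until every reachable page has been visited once.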

Role Tests

  • The Role tests (which currently run only on the IFRC template and use Selenium unit tests) are to be extended to run on multiple templates.

Load Tests

  • Tsung

Implementation

Current Implementation

The Selenium tests can be run mainly on the IFRC template. With some changes, they can be run on the default template as well. However, they don't work across templates.

The unit tests expect that some particular modules are enabled in the template. If they are not enabled and unit tests are run in that template, then false negatives are reported.

Current implementation can be found here -

https://github.com/flavour/eden/tree/master/modules/tests

Unit tests and benchmark tests can be found here -

https://github.com/flavour/eden/tree/master/modules/unit_tests

Documentation about the Load Tests can be found here - Load Tests

GSoC '13

As part of Google Summer of Code, I (Somay Jain) am working on the project Automatic Test Framework, details of which can be found here - https://docs.google.com/document/d/1cKApwBgJtKD85xObjMgCVCUazFQmG34SF1IDti5_7Ho/edit

The Selenium tests design for running them across different templates can be found here.

CI Server

The implementation information of CI Server can be found here - SysAdmin/ContinuousIntegration

The test stats run by the CI Server during GSoC'13 can be found here - Spreadsheet

Future Work

  • Adapt tests to meet the needs of the evolving CI Server (SysAdmin/ContinuousIntegration). The tests should run successfully on the CI Server both locally and remotely. The CI Server should distribute the tests across different machines and run them locally on each.
  • For Continuous Integration, an instance of Eden can also be used, which would enable scheduling, allow subscription to notifications of test results, and provide formatted results.

References

Blueprint on General Ideas - BluePrint/Testing

Current Implementation - DeveloperGuidelines/Testing

SysAdmin/ContinuousIntegration


BluePrint
