Developer Guidelines: Pre-Populate
We need to be able to import data into a Sahana Eden instance for testing, demos, and to help with getting up and running quickly. Typical datasets include:
- Item Catalogue
- Organisation List
- Warehouse Supplies
- Human Resources (Staff & Volunteers)
- Locations (Administrative Boundaries)
On First Run
On first run, the template selected in
models/000_config.py is examined to determine which data to load:
settings.base.prepopulate controls the prepopulate process, and can be overridden in models/000_config.py.
The setting is a list of directory names; data is imported from the CSV files in each directory. The directories must be in the modules/templates directory tree and must each include a file called tasks.cfg.
The prepopulate deployment setting can also be a number, which maps to a single directory defined in folders.cfg; a number is easier to manage in deployment shell scripts.
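For illustration, both forms of the setting in models/000_config.py might look like this (the numeric mapping shown here is an assumption; the actual mapping depends on your folders.cfg):

```python
# List form: names of directories under modules/templates
settings.base.prepopulate = ["Standard"]

# Numeric form: refers to a single directory defined in folders.cfg
# (assuming 1 maps to the same folder in this deployment)
settings.base.prepopulate = 1
```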
Deployment setting examples
Use the Standard folder as the starting point for the prepopulate.
settings.base.prepopulate = ["Standard"]
Uses the roles directory to import some special roles, and then the user directory: a special directory which is external to version control, so that private data (potentially sensitive data, such as volunteer details) can be held there and imported.
settings.base.prepopulate = ["roles", "user"]
The tasks.cfg file comprises a list of import jobs. They can be of two distinct types:
- Basic Importer Jobs
- Special Import Jobs
The tasks.cfg file supports the # character in the first column to mark a comment line.
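As a rough illustration of the two job types and the comment convention, here is a minimal, hypothetical parser for such lines. This is not Sahana Eden's actual parser (that lives in s3import's S3BulkImporter); the function name and tuple shapes are invented for the sketch:

```python
import csv
from io import StringIO

def parse_tasks_cfg(text):
    """Hypothetical sketch: split tasks.cfg lines into basic and special jobs."""
    jobs = []
    for row in csv.reader(StringIO(text)):
        if not row or row[0].startswith("#"):
            continue  # skip blank lines and comments (# in the first column)
        if row[0] == "*":
            # Special import job: *, function, CSV file
            jobs.append(("special", row[1], row[2]))
        else:
            # Basic Importer job: controller, function, CSV file, XSL file
            jobs.append(("basic", row[0], row[1], row[2], row[3]))
    return jobs

sample = """\
# A comment line - ignored by the importer
survey,question_list,questionnaire24H.csv,question_list.xsl
*,gis_set_default_location,default_location.csv
"""
for job in parse_tasks_cfg(sample):
    print(job)
```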
Within modules/templates there are a number of different folders:
- default - base data, such as lookup lists, which will be needed in the production deployment
- demo - user data which is useful for demo or training instances
- regression - data for testing
- roles - data for setting up roles for different types of permissions.
The modules/templates/default folder contains default data which is commonly used by all instances of Sahana Eden, so you could include
default in the prepopulate list.
Demo data would load by default; to explicitly stop demo data from being imported during prepopulate, include the following setting in
models/000_config.py:
settings.base.prepopulate_demo = None
Basic Importer Jobs
These will use the UI Importer to load the data and require the following information:
- The controller
- The function
- The CSV
- The XSL transform file
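A basic job line in tasks.cfg might therefore look like the following (reconstructed from the description below; check an existing template's tasks.cfg for the exact syntax used in your version):

```
# controller,function,CSV file,XSL transform file
survey,question_list,questionnaire24H.csv,question_list.xsl
```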
For example, a job using the questionnaire24H.csv file, which in this case is in the same directory as the tasks.cfg file, imports that data using the question_list.xsl transform file, which is in the static/formats/s3csv/survey directory. The importer will then use the survey/question_list function to "manage" the import, applying any onvalidation or onaccept callbacks that are set up by this controller.
The CSV file does not need to be in the same directory as the tasks.cfg file; if it is held elsewhere, the full path must be given relative to the modules/templates directory.
All transform files are stored in the
static/formats/s3csv directory tree.
Some special table mappings are maintained in s3import::S3BulkImporter.alternateTables so that, for instance, persons can be imported into hrm/person.
Special Import Jobs
For some import jobs it is easier to create a purpose-built import function rather than use the Importer. These are created by using an asterisk at the start of the line in the tasks.cfg file, as follows:
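A special job line might look like this (format reconstructed from the surrounding description; verify against an existing template's tasks.cfg):

```
# *,function,CSV file
*,gis_set_default_location,default_location.csv
```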
This uses the CSV file default_location.csv and the function gis_set_default_location to set, in this case, the default location. Another special function is import_role, which is used to import user permissions (or roles); examples of the roles, and of what the CSV looks like, can be found in the modules/templates/roles directory.
Data can be imported using URL calls:
http://127.0.0.1:8000/eden/supply/item/create.s3csv?filename=/home/web2py/applications/eden/static/formats/s3csv/eric.csv&transform=/home/web2py/applications/eden/static/formats/s3csv/eric.xsl
However, this method isn't easily automated.
Also see: UserGuidelines/Importer
Running the import as a script inside the Eden environment offers better options to automate the import of data:
auth.override = True
resource = s3db.resource("supply_item")
stylesheet = os.path.join(request.folder, "static", "formats", "s3csv", "supply", "item.xsl")
# CSV file held in the modules/templates tree
filename = os.path.join(request.folder, "modules", "templates", "default", "supply_item.csv")
with open(filename, "r") as source:
    resource.import_xml(source, format="csv", stylesheet=stylesheet)
db.commit()
Also see S3/DataImportCLI