Implementation can be found here: SynchronisationImplementation
Blueprint for Synchronization
We need to implement a system performing automatic synchronization between Sahana instances.
For synchronisation with other systems, we should be able to talk in open standards, such as EDXL
Currently the SahanaPy data export module exports data as CSV (web-based: not suitable for an autonomous process). We can add support for XML and JSON. JSON is a modern, lightweight alternative to XML, which suits our low-bandwidth operating environment. XML export should be done using XSLT stylesheets so that it is easy to export in different formats.
Each syncable record has a UUID field to uniquely identify it across instances.
Automatic synchronization is different from the manual data export/import module already present in Sahana: the automatic process should run continuously as a daemon.
Currently we are using a database dump for exporting, which is definitely not an optimal way to synchronize databases. A paper written by Leslie Klieb ( http://hasanatkazmi.googlepages.com/DistributedDisconnectedDatabases.pdf ) discusses various approaches to this. In the light of this research, we can implement synchronization as follows:
- We need to add a timestamp as an additional attribute to each table in the database (only tables which hold real data, like names of missing people etc.; we do not need to sync the internal tables which an instance of a Sahana installation uses for saving internal information).
Data deleted from Sahana should stay available, but with a deleted flag. Such records would then not be visible during normal DB queries, but remain accessible for audit purposes if required. We can make this a reusable field in
models/00_db.py & then add it to each table definition (well, all real, syncable data - no need for internal settings). The delete flag is a Boolean indicating whether the tuple has been deleted or not.
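As a plain-Python sketch of this bookkeeping (in Sahana these would be reusable web2py DAL fields defined in models/00_db.py; the field names `uuid`, `modified_on` and `deleted` here are illustrative, not a final schema):

```python
import uuid
from datetime import datetime, timezone

def make_record(**data):
    """Create a syncable record: a UUID to identify it across
    instances, a modification timestamp, and a deleted flag."""
    return dict(data,
                uuid=str(uuid.uuid4()),
                modified_on=datetime.now(timezone.utc),
                deleted=False)

def update_record(rec, **changes):
    """Any update refreshes the timestamp, so the sync process can
    find records changed since the last exchange."""
    rec.update(changes)
    rec["modified_on"] = datetime.now(timezone.utc)

def delete_record(rec):
    """Deletion only sets the flag; the row stays for audit/sync."""
    rec["deleted"] = True
    rec["modified_on"] = datetime.now(timezone.utc)
```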
When a new tuple is added, the current date is entered; when a tuple is updated, the date is set to the present one. If a tuple is deleted, we set the delete flag to true for that tuple (and do not delete it for real).

Now take two instances of Sahana, A & B. A calls JSON-RPC (or XML-RPC), passing its (A's) UUID. B looks into the synchronization table (in B's database) for the last time data was sent from B to A, then B creates JSON/XML of only those entries/tuples which were modified after that date and returns them to A. It also sends the tuples deleted after that date. B then immediately asks A, and the same process is repeated for A. Each machine then either updates or inserts tuples in the specific tables. It also deletes all tuples which the other machine has deleted, if and only if it has not itself updated that tuple in its own database after the deletion on the other machine.
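The selection and merge rules above can be sketched in plain Python (a minimal illustration, assuming the record dicts with `uuid`, `modified_on` and `deleted` fields described earlier, not the actual Sahana implementation):

```python
def changes_since(records, last_sync):
    """Select the tuples to send: everything modified (or
    flag-deleted) after the last successful sync with the peer."""
    return [r for r in records if r["modified_on"] > last_sync]

def apply_changes(local, incoming):
    """Merge the peer's changes into the local table (keyed by UUID).
    A remote deletion wins only if we have not updated the tuple
    locally after the peer deleted it."""
    by_uuid = {r["uuid"]: r for r in local}
    for remote in incoming:
        mine = by_uuid.get(remote["uuid"])
        if mine is None:
            local.append(dict(remote))          # new tuple from the peer
        elif remote["deleted"]:
            if mine["modified_on"] <= remote["modified_on"]:
                mine.update(remote)             # accept the deletion
        elif remote["modified_on"] > mine["modified_on"]:
            mine.update(remote)                 # peer's version is newer
    return local
```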
An important outcome of this implementation is that it can also be used in Sahana's manual data export modules. We can let the user select the age of the data which they want to export (i.e. export data from a start date to an end date). Moreover, we can easily have these modules call the instance's own exposed web service rather than communicating directly with the database layer.
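A sketch of such a date-range export, reusing the timestamps maintained for synchronization (the record-dict layout is the illustrative one from above):

```python
import json

def export_range(records, start, end):
    """Export as JSON only the tuples modified within [start, end],
    skipping flag-deleted rows."""
    selected = [r for r in records
                if start <= r["modified_on"] <= end and not r["deleted"]]
    return json.dumps(
        [dict(r, modified_on=r["modified_on"].isoformat())
         for r in selected])
```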
As is quite clear after reading the last paragraph, this cannot be accomplished over a standard web-site architecture, so we need to make a daemon (or service) which will continuously run in the background, basically doing two tasks:
- 1) It must find (polling in a loop) other Sahana servers on the network which have some data
- 2) It must expose a service to the network, telling servers as they enter the network that it has some new data
This process needs to be autonomous, and servers must be able to find each other without specifying IP addresses. This can be accomplished by using ZeroConf. So we need to step outside the domain of web2py for this task. We can definitely hook our software into the web2py execution sequence so that this service starts automatically as the server goes online.
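As one possible sketch of what a server announces during discovery (the message format here is an assumption for illustration, not an existing Sahana protocol; real discovery would use a ZeroConf library such as python-zeroconf to handle the broadcasting itself):

```python
import json

def build_announcement(instance_uuid, url):
    """Serialize the details a peer needs to start a sync exchange."""
    return json.dumps({"service": "sahana-sync",
                       "uuid": instance_uuid,
                       "url": url}).encode("utf-8")

def parse_announcement(payload):
    """Return (uuid, url) if the payload is a valid announcement,
    otherwise None, so unrelated broadcast traffic is ignored."""
    try:
        msg = json.loads(payload.decode("utf-8"))
    except (UnicodeDecodeError, ValueError):
        return None
    if msg.get("service") != "sahana-sync":
        return None
    return msg["uuid"], msg["url"]
```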
We can always ship this with PortablePython, eliminating the need to install Python on end machines (like what XAMPP does for PHP and MySQL).
References:
- Diagram of service as exposed to the network: http://hasanatkazmi.googlepages.com/sahana-rough.jpg
- Initial proposal: http://hasanatkazmi.blogspot.com/2009/04/sahana-proposal.html
- Reference Data should have fixed UUIDs, so that they don't sync (Currencies/Projections/Markers)
- UI available to decide which tables to sync
- Support Clusters rather than just 1 default + specific instances
- Use S3XML
- Sync via USB stick (or email attachment).
- Maybe do this by having the system try an online sync and, if this fails, giving the user the option to sync via a file copy instead. Obviously we need a matching import system to handle this at the other end
Old Blueprint for Synchronization
The module as present now:
All tables have UUID fields: DeveloperGuidelinesDatabaseSynchronization
We can Export the tables - CSV is best-supported within Web2Py currently
"complete database backup/restore with db.export_to_csv_file(..),db.import_from_csv_file(...),
reimporting optionally fixes references without need for uuid"
This can be done using appadmin, but we have started work on a user-friendly way of dumping all relevant tables:
This works well, but has some needed enhancements:
- Define how to deal with duplicates (currently, if a UUID is duplicated then the CSV file Updates the record, if a UUID isn't present then it is Created)
- Download all
- Download all for a Module
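The duplicate rule noted above (a matching UUID Updates the record, a missing UUID Creates it) is essentially an upsert; a minimal sketch of that import step (the dict-based table is illustrative, not the web2py DAL):

```python
def import_rows(table, rows):
    """Apply imported rows to a UUID-keyed table: a row whose UUID is
    already present updates the existing record, otherwise it is created."""
    created = updated = 0
    for row in rows:
        if row["uuid"] in table:
            table[row["uuid"]].update(row)
            updated += 1
        else:
            table[row["uuid"]] = dict(row)
            created += 1
    return created, updated
```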
Need clear list of which tables to include:
- not lookup lists which are the same across sites
- e.g. not OpenStreetMap/Google layers (but WMS/SOS layers yes, and Shapefile layers yes if the uploads are copied across as well)
- not site-specific stuff such as system_config, gis_keys, etc
Create an index manually to make the search by uuid faster.
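For illustration, creating such an index via Python's sqlite3 (in-memory here; the table and index names are made up - on a real deployment this would be run once against the instance's database):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, uuid TEXT)")
# The index lets lookups by uuid avoid a full table scan.
db.execute("CREATE INDEX person_uuid_idx ON person (uuid)")
```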
other related threads:
There is a simple 1-table example appliance which has the ability to do syncs via XML-RPC:
In Sahana2 the record ids are UUIDs built from each instance's 'base_uuid'
There is a sync_instance table:
```sql
CREATE TABLE sync_instance (
    base_uuid   VARCHAR(4) NOT NULL,     -- Instance id
    owner       VARCHAR(100),            -- Instance owner's name
    contact     TEXT,                    -- Contact details of the instance owner
    url         VARCHAR(100) DEFAULT NULL, -- Server url if exists
    last_update TIMESTAMP NOT NULL,      -- Last time synced with the instance
    sync_count  INT DEFAULT 0,           -- Number of times synchronized
    PRIMARY KEY (base_uuid)
);
```
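A quick check of how this schema behaves (using sqlite3 in memory purely for illustration; the sample values are made up):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE sync_instance (
        base_uuid   VARCHAR(4) NOT NULL,
        owner       VARCHAR(100),
        contact     TEXT,
        url         VARCHAR(100) DEFAULT NULL,
        last_update TIMESTAMP NOT NULL,
        sync_count  INT DEFAULT 0,
        PRIMARY KEY (base_uuid)
    )""")
# sync_count defaults to 0 for a peer we have never synced with
db.execute("INSERT INTO sync_instance (base_uuid, owner, last_update) "
           "VALUES ('ab12', 'Test Owner', '2009-04-01 00:00:00')")
row = db.execute("SELECT sync_count, owner FROM sync_instance "
                 "WHERE base_uuid = 'ab12'").fetchone()
```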