[[TOC]]

= Importer Blueprint =

Integrating/developing a framework to extract structured data from web sources with a simple query language. The SpreadsheetImporter will be a component of this, but it would also be good to be able to import from the following formats:
 * PDF
 * HTML (file/URL)
 * DOC
 * XML formats (not matching our data schema) via [wiki:S3XRC S3XRC], such as:
   * RSS
   * Ushahidi
 * CSV of various file layouts, representing complex resources
 * News feeds
 * HTML
 * Incoming SMS

Some of these formats can be parsed and imported; others may be unstructured and saved as a "News Feed". Some of the data may be tabular, other data just a single record. Having the data parsed into an !ElementTree allows [wiki:S3XRC S3XRC] to handle all the database integrity & framework rules.

Q: Is this correct? !ElementTree without pointers between separate trees does not seem to have a way to encode a directed acyclic graph. A general database schema is a DAG plus self-loops (references from a table to itself, so long as relations among elements are not cyclic). For instance, consider volunteers. They have components via pe_id. They also have references to zero or more elements of the volunteer skills table. Other volunteers point to those same skill records, so there are multiple roots -- the skills -- to the tree of volunteers. The same structure occurs in inventory, where catalog items are referenced by multiple order items, but order items are also components of orders. In these cases, there isn't a (clean) way to pick one root for a tree. If we decided to have a skill category table, then we would have diamond-shaped DAGs -- a volunteer could point to several skills, and those skills could point to a common category.

For output, this is not relevant because the records will have their primary keys and foreign keys available. It is only an issue when creating a collection of DAG-structured data, as no actual keys have been assigned yet. This is not hard to overcome -- it just means adding placeholder keys to represent the linkage between records in separate !ElementTrees (a minimal sketch appears after the project list below). There are examples of DAG representations and algorithms -- a search for "xml directed acyclic graph" will turn them up.

This also allows Eden's Importer tool to be used as a Mashup handler for other systems (such as Agasti) by posting the data back out.

A generic importing tool would allow data to be imported from various sources automatically. The data could be parsed and fitted into our data model, or it may just be added to a news feed aggregator. This project could include:
 * A user-friendly interface to match fields to parse the data
 * An intermediary step where the spreadsheet (as extracted) is displayed on the screen, allowing the user to remove blank/invalid rows, merge rows, deal with data from merged cells and match the columns with the Sahana data model
 * Importing from "flat" tables to linked tables -- the spreadsheet could contain data that needs to be imported into a number of different tables
 * Spreadsheets with multiple sheets
 * Methods of automatically (or with a user-friendly interface) cleaning data, i.e. removing duplicate values with variations due to typos -- for example (see the fuzzy-matching sketch after this list):
   * If there were a list of countries containing Indonesia, Spain, India, Indonesiasia, New Zealand, NZ, France, UK and Indonsia, the import may be able to identify which entries are duplicates, rather than adding two incorrect spellings for Indonesia.
   * This is also important for catching things like different spellings, punctuation or word order.
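As a rough illustration of the duplicate-cleaning idea above, Python's standard-library {{{difflib}}} can suggest which new value is probably a typo of one already seen. This is only a minimal sketch: the country list, the cutoff value and the {{{resolve()}}} helper are made-up examples, and a real importer would still need a synonym table (e.g. NZ -> New Zealand) and user confirmation.
{{{
# Minimal sketch of typo-tolerant de-duplication using only the standard library.
# The country list and the 0.8 cutoff are illustrative, not tuned values.
from difflib import get_close_matches

known = ["Indonesia", "Spain", "India", "New Zealand", "France", "UK"]

def resolve(value, known_values, cutoff=0.8):
    # Return the existing value that most closely matches `value`,
    # or `value` itself if nothing is similar enough
    matches = get_close_matches(value, known_values, n=1, cutoff=cutoff)
    return matches[0] if matches else value

for raw in ["Indonsia", "Indonesiasia", "NZ", "Germany"]:
    print raw, "->", resolve(raw, known)

# "Indonsia" and "Indonesiasia" map to "Indonesia"; "NZ" and "Germany" fall
# below the cutoff, so they would need a synonym table or manual review.
}}}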
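On the !ElementTree/DAG question above, the placeholder-key idea could look roughly like the sketch below: each not-yet-imported record gets a temporary identifier, and other records reference that identifier instead of a real primary key, which the importer assigns later. The element and attribute names ({{{resource}}}, {{{reference}}}, {{{tuid}}}) are modelled loosely on S3XML, and the table/field names are only examples -- check the actual schema before relying on them.
{{{
# Sketch: two volunteers sharing one skill record, linked by a placeholder
# key ("tuid") because no real primary keys exist yet. All names here are
# illustrative assumptions, not the exact S3XML schema.
import xml.etree.ElementTree as etree

root = etree.Element("s3xml")

# One shared skill record, identified only by its placeholder key
skill = etree.SubElement(root, "resource",
                         {"name": "hrm_skill", "tuid": "skill/first-aid"})
etree.SubElement(skill, "data", {"field": "name"}).text = "First Aid"

# Two volunteers referencing the same skill via the placeholder key --
# this shared reference is what makes the structure a DAG rather than a tree
for first_name in ("Alice", "Bob"):
    person = etree.SubElement(root, "resource", {"name": "pr_person"})
    etree.SubElement(person, "data", {"field": "first_name"}).text = first_name
    etree.SubElement(person, "reference",
                     {"field": "skill_id", "resource": "hrm_skill",
                      "tuid": "skill/first-aid"})

print etree.tostring(root)
}}}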
Ideally, different templates will be able to be designed (by users) for importing different types of data. Machine learning algorithms with (multiple?) human verification could try parsing new data formats based on previously used templates. If the templates can be saved out as XSLT, then the [wiki:BluePrintSynchronisation Sync scheduler] can be used to do regular imports.

This should link to BluePrintDeduplication in the workflow.

== Useful Links ==
 * Karma: a system for doing the Import/Clean/Integrate/Publish workflow through a UI paradigm of 'Programming by Demonstration' (instead of via widgets):
   * [ftp://ftp.umiacs.umd.edu/pub/louiqa/PUB2010/GeoNets_Shubham.pdf Presentation from ISCRAM 2010]
   * [http://isi.edu/integration/videos/mashup_building.mp4 Video]
   * [http://content.digitalwell.washington.edu/msr/external_release_talks_12_05_2005/16012/lecture.htm Presentation]
 * [http://mashmaker.intel.com Intel MashMaker]: Firefox extension to ease widget-based, HTML-based mashups
 * http://wiki.github.com/fizx/parsley/
 * http://developer.yahoo.com/yql/guide/
 * [http://www.unixuser.org/~euske/python/pdfminer/ PDFMiner] is an !OpenSource tool to convert PDF docs into text.
 * Web-scraping using !BeautifulSoup: http://ictd.asia/wiki/CWC_Flood_Forecast_-_India
 * [wiki:pyparsing] is an !OpenSource tool to parse textual content
   * included in Sahana Eden's {{{modules}}} folder

== Code snippets ==

Extract hyperlinks from HTML docs (Python 2 -- {{{sgmllib}}} was removed in Python 3):
{{{
import urllib
import sgmllib

class MyParser(sgmllib.SGMLParser):
    """Collect the href targets of all <a> tags in a document."""

    def __init__(self, verbose=0):
        sgmllib.SGMLParser.__init__(self, verbose)
        self.hyperlinks = []

    def parse(self, s):
        self.feed(s)
        self.close()

    def start_a(self, attributes):
        # Called by SGMLParser for each <a> tag
        for name, value in attributes:
            if name == "href":
                self.hyperlinks.append(value)

    def get_hyperlinks(self):
        return self.hyperlinks

f = urllib.urlopen("http://www.python.org")
s = f.read()
myparser = MyParser()
myparser.parse(s)
print myparser.get_hyperlinks()
}}}

 * Code to find an element by traversing the child nodes of a document (the local name, {{{business}}} in this case, must be known beforehand):
{{{
import xml.dom.minidom

def get_a_document(name="/tmp/doc.xml"):
    return xml.dom.minidom.parse(name)

def find_business_element(doc):
    # Look through the document's child nodes for an element
    # whose local name is "business"
    business_element = None
    for e in doc.childNodes:
        if e.nodeType == e.ELEMENT_NODE and e.localName == "business":
            business_element = e
            break
    return business_element
}}}

----
BluePrints