wiki:BluePrint/Importer

Version 15 (modified by Nitin Rastogi, 15 years ago) ( diff )

--

Integrating/Developing a framework to extract structured data from web sources with a simple query language.

The Spreadsheet Importer will be a component of this. But it would also be good to be able to import from the following formats:

  • PDF
  • HTML (File/URL)
  • DOC
  • XML (not matching our data schema)
  • RSS
  • News feeds
  • Ushahidi
  • Incoming SMS

Some of these formats can be parsed and imported; others may be unstructured and saved as a "News Feed".
Some of the data may be tabular; other data may be just a single record.
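As a sketch of the "parse and import" path, a tabular source such as a CSV export from a spreadsheet could be read into records like this (the column headers, `FIELD_MAP`, and record fields here are purely hypothetical):

```python
import csv
import io

# Hypothetical mapping from source column headers to our schema fields.
FIELD_MAP = {"Name": "name", "Country": "country", "Phone": "phone"}

def import_csv(text):
    """Parse CSV text and return a list of records keyed by schema fields."""
    records = []
    for row in csv.DictReader(io.StringIO(text)):
        records.append({FIELD_MAP[col]: value
                        for col, value in row.items() if col in FIELD_MAP})
    return records

sample = "Name,Country,Phone\nAlice,Indonesia,123\nBob,Spain,456\n"
print(import_csv(sample))
```

Columns missing from the mapping are simply dropped; a real importer would surface them to the user for matching instead.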

The goal is a generic importing tool that allows data to be imported from various sources automatically. The data could be parsed and fitted into our data model, or it may simply be added to a news feed aggregator. This project could include:

  • A user-friendly interface for matching source fields to our schema when parsing the data
  • Importing "flat" tables into linked tables
  • Methods of cleaning data automatically (or via a user-friendly interface), such as removing duplicate values that differ only through typos - for example:
    • If a list of countries contained Indonesia, Spain, India, Indonesiasia, New Zealand, NZ, France, UK, Indonsia - the importer might be able to identify which values were duplicates, rather than adding two incorrect spellings of Indonesia.
    • This is also important for catching things like different spellings, punctuation or word order.
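The typo detection described above could be sketched with the standard library's `difflib`; the `cutoff` value is an assumption that would need tuning, and abbreviations such as "NZ" would still require a separate synonym list:

```python
import difflib

def dedupe(values, cutoff=0.8):
    """Map each string onto a canonical value, collapsing near-duplicates."""
    canonical = []
    mapping = {}
    for v in values:
        # Find the closest already-seen value above the similarity cutoff.
        match = difflib.get_close_matches(v, canonical, n=1, cutoff=cutoff)
        if match:
            mapping[v] = match[0]   # treat v as a typo of an existing value
        else:
            canonical.append(v)
            mapping[v] = v
    return mapping

countries = ["Indonesia", "Spain", "India", "Indonesiasia",
             "New Zealand", "NZ", "France", "UK", "Indonsia"]
print(dedupe(countries))
```

With these inputs, "Indonesiasia" and "Indonsia" collapse onto "Indonesia", while "NZ" falls below the similarity cutoff and stays separate, which is why a synonym list would still be needed.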

Ideally, users will be able to design different templates for importing different types of data. Machine learning algorithms, with (multiple?) rounds of human verification, could try parsing new data formats based on previously used templates.
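A user-designed import template could be as simple as a saved field mapping plus per-field cleaning rules. The structure below is purely illustrative - the template keys, column names and cleaner functions are all assumptions:

```python
# Purely illustrative template: a saved mapping from source columns to
# schema fields, plus optional per-field cleaning functions.
template = {
    "name": "contacts_v1",
    "field_map": {"Full Name": "name", "Ctry": "country"},
    "cleaners": {"country": str.strip},
}

def apply_template(template, row):
    """Apply a template's field mapping and cleaners to one source row."""
    record = {}
    for src, dst in template["field_map"].items():
        value = row.get(src, "")
        cleaner = template["cleaners"].get(dst)
        record[dst] = cleaner(value) if cleaner else value
    return record

print(apply_template(template, {"Full Name": "Alice", "Ctry": " Indonesia "}))
```

Saved templates like this could then be the training examples a learning algorithm draws on when guessing a mapping for a new, unseen format.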

Some links that might be useful:

  • Code snippet to extract hyperlinks from HTML docs.
    import urllib.request
    from html.parser import HTMLParser
    
    class MyParser(HTMLParser):
    
        def __init__(self):
            super().__init__()
            self.hyperlinks = []
    
        def handle_starttag(self, tag, attributes):
            # Collect the href attribute of every <a> tag.
            if tag == "a":
                for name, value in attributes:
                    if name == "href":
                        self.hyperlinks.append(value)
    
        def get_hyperlinks(self):
            return self.hyperlinks
    
    with urllib.request.urlopen("http://www.python.org") as f:
        s = f.read().decode("utf-8", errors="replace")
    
    myparser = MyParser()
    myparser.feed(s)
    myparser.close()
    
    print(myparser.get_hyperlinks())
    
    
  • Code to extract an element by traversing the child nodes of a document (the local name, "business" in this case, must be known beforehand).
    import xml.dom.minidom
    
    def get_a_document(name="/tmp/doc.xml"):
        return xml.dom.minidom.parse(name)
    
    
    def find_business_element(doc):
        # Only element nodes carry a local name; text nodes never match.
        business_element = None
        for e in doc.childNodes:
            if e.nodeType == e.ELEMENT_NODE and e.localName == "business":
                business_element = e
                break
        return business_element
    