Changes between Version 3 and Version 4 of BluePrintDeduplication


Timestamp: 08/29/10 15:27:11 (14 years ago)
Author: Fran Boon
Comment: --
  • BluePrintDeduplication

v3  v4
1   1   = Data De-duplication Blue Print =
2   2   
3       We often get duplicate data in a system, especially if we do [BluePrintImporter Bulk Imports] from other data sources but also because many users have the tendancy to enter new records, rather than reusing existing records.
    3   We often get duplicate data in a system, especially if we do [BluePrintImporter Bulk Imports] from other data sources but also because many users have the tendency to enter new records, rather than reusing existing records.
4   4   
5   5   == Process ==
…
9   9     * In order to determine if the records are in fact duplicate, the user should have the option to open up the records and somehow see where they are referred to.
10  10   1. Merging Duplicate Records (see [http://wiki.sahanafoundation.org/lib/exe/fetch.php/foundation:gsoc_kohli:import:resolve_duplicates.jpg wireframe])
11       1. Replacing Duplicate Records (must be work with offline instances over sync too)
    11   1. Replacing Duplicate Records (must work with offline instances over sync too)
12  12  
13  13  A complete specification can be found at [http://wiki.sahanafoundation.org/doku.php/foundation:gsoc_kohli:import:duplicates]
…
18  18  
19  19  === Locations ===
20      Identifying duplicate locations really should involve significant use of maps for context of the two point being checked, perhaps fields showing great-circle distance from each other, and if we have hierarchy polygons available, then performing spatial analysis to see if it is the same town in the same region or two towns that share the same name, but are in different regions? Whereas peoples names may use Soundex, addresses, phone number etc. Document deduping could use SHA1 checksum analysis of the file to detect dupes (e.g. there is a very low probability of two files sharing the same SHA1 hash), and think that an SHA1 hash should be calculated for a document or image file at time of upload.
    20  Identifying duplicate locations really should involve significant use of maps for context of the two points being checked, perhaps fields showing great-circle distance from each other, and if we have hierarchy polygons available, then performing spatial analysis to see if it is the same town in the same region or two towns that share the same name, but are in different regions?
    21  
    22  === Documents ===
    23  Document deduping could use SHA1 checksum analysis of the file to detect dupes (e.g. there is a very low probability of two files sharing the same SHA1 hash), so an SHA1 hash should be calculated for a document or image file at time of upload.
    24  
    25  === People ===
    26  People's names may use Soundex, plus addresses, phone numbers etc. ...although be careful!
21  27  
22  28  == Current Progress ==
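
As a rough sketch of the great-circle check suggested under Locations (v4 line 20), the Python below computes the haversine distance between two candidate points and flags them when they fall within a small radius. The function names, the dict-shaped location records and the 1 km default threshold are assumptions for illustration; the blueprint itself only asks for the distance to be surfaced for context alongside the map.

{{{#!python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius; close enough for a proximity check

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between two (lat, lon) points, in km."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def possible_duplicate_location(loc_a, loc_b, threshold_km=1.0):
    """Flag two location records (plain dicts with 'lat'/'lon' here) as
    candidate duplicates when they lie within threshold_km of each other.
    The dict shape and the 1 km default are illustrative only."""
    d = great_circle_km(loc_a["lat"], loc_a["lon"], loc_b["lat"], loc_b["lon"])
    return d <= threshold_km
}}}

Where hierarchy polygons exist, a point-in-polygon test against the shared parent region would then help distinguish one town from two same-named towns in different regions.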
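The Documents note (v4 line 23) proposes hashing each file at upload time and comparing against previously recorded digests. A minimal sketch of that idea, assuming a plain set of hex digests stands in for whatever store the real system would use:

{{{#!python
import hashlib

def sha1_of_file(path, chunk_size=65536):
    """SHA1 hex digest of a file, read in chunks so large uploads stay cheap on memory."""
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def check_upload(path, seen_hashes):
    """Return (is_duplicate, digest); seen_hashes is a stand-in for wherever
    the real system records digests at upload time (e.g. a table column)."""
    digest = sha1_of_file(path)
    return digest in seen_hashes, digest
}}}

Recording the digest on the document or image record at upload time turns the later duplicate check into a simple equality lookup.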
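The People note (v4 line 26) mentions Soundex on names. The sketch below is the standard American Soundex encoding plus a trivial candidate test; as the note warns, treat it only as a first-pass filter, since name codes alone will pair unrelated people, and addresses, phone numbers etc. still need checking before anything is merged. The helper names are assumptions for illustration.

{{{#!python
# Standard American Soundex; e.g. soundex("Robert") == soundex("Rupert") == "R163".
SOUNDEX_CODES = {}
for _letters, _digit in (("BFPV", "1"), ("CGJKQSXZ", "2"), ("DT", "3"),
                         ("L", "4"), ("MN", "5"), ("R", "6")):
    for _ch in _letters:
        SOUNDEX_CODES[_ch] = _digit

def soundex(name):
    """Four-character Soundex code of a name (letter + three digits)."""
    name = "".join(c for c in name.upper() if c.isalpha())
    if not name:
        return ""
    prev = SOUNDEX_CODES.get(name[0], "")
    digits = []
    for c in name[1:]:
        if c in "HW":                      # H/W do not separate equal codes
            continue
        code = SOUNDEX_CODES.get(c, "")    # vowels map to "" and reset prev
        if code and code != prev:
            digits.append(code)
        prev = code
    return (name[0] + "".join(digits) + "000")[:4]

def names_may_match(surname_a, surname_b):
    """First-pass candidate test only; other fields must still be compared."""
    return soundex(surname_a) == soundex(surname_b)
}}}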