MIGRATE YOUR HISTORICAL DATA WITH A SELECTION OF TOOLS

The complexity of migrating bulk data from an existing system is daunting, and a common concern for large organisations looking to upgrade their tech stack. ShareDo has created an Onboarding Framework that makes this process as smooth and accurate as possible.

The three ways of bulk importing data into ShareDo

Bulk migrate your legacy data into ShareDo using our Data Load Tool, via API, or with a CSV spreadsheet. 

DATA LOAD TOOL

The extract, transform and load of complex and large volume data into ShareDo.

API

The ingress of (usually low volume) data into an online ShareDo via a REST interface.
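As an illustrative sketch only, a low-volume REST ingress might look like the following. The endpoint path, payload fields, and bearer-token authentication here are assumptions for illustration, not ShareDo's actual API; consult the ShareDo API documentation for the real resource paths and field names.

```python
import json
import urllib.request

# Hypothetical base URL -- replace with your online ShareDo instance.
BASE_URL = "https://example-sharedo-instance/api"

def build_create_request(record: dict, token: str) -> urllib.request.Request:
    """Build (but do not send) a POST request to create one record."""
    body = json.dumps(record).encode("utf-8")
    return urllib.request.Request(
        url=f"{BASE_URL}/work-items",  # assumed resource path
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # assumed auth scheme
        },
    )

req = build_create_request({"title": "Migrated matter 0001"}, token="...")
print(req.get_method(), req.full_url)
```

Building the request separately from sending it makes the payload easy to validate and log before anything touches the live system, which matters when the same code will run over thousands of records.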

CSV

The import of a ‘few rows’ of data into a single subset, sometimes performed by users.
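A "few rows" import typically starts life as a plain CSV file. The column headers below are invented for illustration; the real ones are defined by the ShareDo import template for the subset you are loading.

```python
import csv
import io

# Illustrative rows and headers -- the real column names come from
# the ShareDo import template for the target subset.
rows = [
    {"reference": "MAT-0001", "title": "Smith v Jones", "status": "Open"},
    {"reference": "MAT-0002", "title": "Re: Acme Ltd", "status": "Closed"},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["reference", "title", "status"])
writer.writeheader()
writer.writerows(rows)

csv_text = buffer.getvalue()
print(csv_text)
```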

NOT MUTUALLY EXCLUSIVE

It is common for new customers to utilise a combination of all three.

For example – loading data from a legacy system en masse, while simultaneously adopting an API integration to keep a single version of the truth for business units who remain on the legacy system longest during the implementation schedule.

How it works

OUR ONBOARDING FRAMEWORK GOES BEYOND AN ETL-BASED APPROACH

An Extract, Transform, Load (ETL) approach to complex schemas is prone to leaving data in an inconsistent state, and is not easily extendable to meet the differing and unique needs of each business.

That’s why we developed our own extensible data-onboarding framework, designed to accommodate different data domains.

The first step is to identify all of your data sources. These may include a combination of an existing legacy case management system, physical document stores, document templates, and more. We recommend extracting data from your data sources into an intermediary schema so that you can apply the transformation necessary to load into the ShareDo import schema.
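A minimal sketch of that first transformation, assuming a hypothetical legacy record layout and intermediary field names (neither is prescribed by ShareDo), might look like this:

```python
from datetime import datetime

# Hypothetical row exported from a legacy case management system.
legacy_row = {
    "CaseRef": "L-1042",
    "ClientSurname": "Patel",
    "OpenedOn": "03/11/2019",  # legacy dd/mm/yyyy format
}

def to_intermediary(row: dict) -> dict:
    """Normalise one legacy row into an intermediary-schema record."""
    return {
        "source_system": "legacy_cms",  # provenance, useful for auditing
        "case_reference": row["CaseRef"],
        "client_name": row["ClientSurname"],
        # Normalise dates to ISO 8601 before loading any further.
        "opened_date": datetime.strptime(row["OpenedOn"], "%d/%m/%Y")
                               .date().isoformat(),
    }

record = to_intermediary(legacy_row)
print(record)
```

Normalising formats (dates, codes, names) at this intermediary stage means every downstream step can assume one consistent shape, whichever source system a record came from.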

The second step is to transfer your data sources into a Staging ETL environment. From here, the records required for migration are loaded into the Sharedo_Import database.

This process is typically implemented by the client by mapping data from the source locations according to the mapping provided in the Master Data Dictionary.
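That mapping step can be sketched as a column-rename pass driven by a dictionary. The mapping entries and column names below are invented for illustration; the real ones come from the Master Data Dictionary supplied during onboarding.

```python
# Invented mapping: intermediary column -> Sharedo_Import column.
# The real entries are defined in the Master Data Dictionary.
MASTER_DATA_DICTIONARY = {
    "case_reference": "Reference",
    "client_name": "ClientName",
    "opened_date": "DateOpened",
}

def map_record(source: dict, mapping: dict) -> dict:
    """Rename source columns to import-schema columns, dropping unmapped ones."""
    return {target: source[src] for src, target in mapping.items() if src in source}

staged = map_record(
    {"case_reference": "L-1042", "client_name": "Patel", "opened_date": "2019-11-03"},
    MASTER_DATA_DICTIONARY,
)
print(staged)
```

Keeping the mapping as data rather than code means a change to the Master Data Dictionary requires no code change, only an updated mapping table.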

The canonical staging database provides a location and structure which is understood by the ShareDo data load framework. All data required in the migration will be uploaded to this schema.

The conceptual structure of this database is shown below:

The final stage involves uploading the data from the Canonical Staging database into the ShareDo platform and your chosen Document Management System (DMS) using one of the three available bulk upload methods (Data Load Tool, API, or CSV).


Ready to learn more?

let's talk