SAP Africa News Center

How to Choose the Best SAP Tool for Your SAP S/4HANA Data Migration Project

The toolset on offer by SAP for data migration is often overlooked, with consideration only given to it once the project has already kicked off. This may be partly because many customers are unaware, at a technical level, of the data migration tools on offer and which would best suit their project.

"Often the pre-migration and data analysis activities are seen as critical to the data migration project, and focus is placed on these," states Paul McCormick, executive and lead migration consultant at GlueData.

"Our experience has taught us that it's just as critical to consider which toolset to use. Choosing the right tool for the job will dramatically reduce the risk of the data migration, speed up the technical element of the data migration build, enable data errors and issues to be identified earlier, and provide the most effective way to automate data transformations."

Top reasons to focus on SAP migration toolsets:

  1. There are different toolsets available for on-premises versus cloud environments.
  2. The build and execution methodology is largely driven by what can be achieved with the specific toolset.
  3. Resources and their skill sets: skills in the various toolsets are very specialised, and you will rarely find consultants with expertise in all the toolsets on offer.
  4. The loading mechanism to be used will influence the choice of toolset.
  5. The volume of data will play a role in determining the toolset – will the volumes be small or large?
  6. The data source will play a role in deciding toolsets – is the data coming from an SAP system or a non-SAP system?
  7. The current status of the data quality must be considered – what is the quality of the current data, and will the transformation be light or heavy?

SAP's most common data migration toolsets

SAP Data Services (DS) & Information Steward (IS) are typically used when there are:

SAP Migration Cockpit (SMC) is a relatively new tool released by SAP and is embedded within S/4HANA – both the cloud and on-premises versions. It is positioned for the transformation and loading of data into S/4HANA. SMC takes data held in a predefined data template format, either in spreadsheets or in a staging database, and applies value mapping, technical validation and reporting of any technical validation issues to this data. It uses standard load APIs developed by SAP.
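As a rough illustration of the value-mapping and technical-validation pass described above, the sketch below applies a legacy-to-S/4HANA value map and two validation rules to a template row. The field names, code mappings and the 40-character material-number check are illustrative assumptions, not actual SMC configuration – in the real tool this is all defined in the migration project, not hand-written code.

```python
# Illustrative sketch of SMC-style value mapping and technical validation.
# Field names, mappings and rules below are invented for illustration.

# Value mapping: legacy source codes -> S/4HANA target codes
VALUE_MAP = {
    "uom": {"EA": "PC", "KGM": "KG"},
    "country": {"SA": "ZA", "UK": "GB"},
}

REQUIRED_FIELDS = ("material", "uom", "country")


def migrate_row(row):
    """Apply value mapping, then collect any technical validation issues."""
    mapped = dict(row)
    for field, mapping in VALUE_MAP.items():
        if mapped.get(field) in mapping:
            mapped[field] = mapping[mapped[field]]
    # Technical validation: required fields present, key lengths respected
    issues = [f"missing required field: {f}"
              for f in REQUIRED_FIELDS if not mapped.get(f)]
    if len(mapped.get("material", "")) > 40:
        issues.append("material number exceeds 40 characters")
    return mapped, issues


# Example: legacy codes are remapped; a clean row reports no issues
row, issues = migrate_row({"material": "M-100", "uom": "EA", "country": "SA"})
# row["uom"] -> "PC", row["country"] -> "ZA", issues -> []
```

Rows that fail validation are reported rather than silently dropped, mirroring SMC's approach of surfacing technical validation issues before load.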

Features:

"In our view, the use of SAP Migration Cockpit is entirely complementary to the use of SAP DS, as we would typically recommend a separation between the transformation and load steps of ETL. It works well in combination with SAP Data Services, particularly when loading data into a cloud-based S/4HANA system, but should only be used as a standalone solution when loading low-complexity data with minimal transformation into S/4HANA. It is also fully compatible with SAP's ADP, as this uses the same data templates to construct data for loading. Reporting on data loaded, or on errors during load, is weak, and troubleshooting issues using SMC is not yet easy," continues McCormick.

SAP Agile Data Preparation (ADP) has been developed as a Web tool that incorporates simplified versions of transforms from SAP DS to enable profiling, cleansing, de-duplication and data preparation. It provides an extract and transform tool to complement the load functionality of SAP MC and works best with limited volumes of data.
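The kind of profiling and de-duplication ADP offers through its Web UI can be sketched in plain Python. The matching key (case- and whitespace-insensitive name plus city) and the sample records below are illustrative assumptions, not ADP's actual matching rules.

```python
# Plain-Python sketch of ADP-style profiling and de-duplication.
# The match key and sample data are illustrative assumptions.
from collections import Counter

customers = [
    {"name": "Acme Ltd", "city": "Cape Town"},
    {"name": "ACME LTD ", "city": "Cape Town"},   # duplicate, messy casing
    {"name": "Glue Foods", "city": "Durban"},
]


def match_key(rec):
    # Case- and whitespace-insensitive duplicate key on name + city
    return (rec["name"].strip().lower(), rec["city"].strip().lower())


def deduplicate(records):
    survivors = {}
    for rec in records:
        survivors.setdefault(match_key(rec), rec)  # first occurrence wins
    return list(survivors.values())


def profile(records, field):
    # Simple column profile: frequency of each distinct value
    return Counter(r[field] for r in records)


unique = deduplicate(customers)        # two survivors remain
city_profile = profile(customers, "city")
```

A real tool would offer fuzzy matching and configurable survivorship rules; the first-occurrence-wins policy here is the simplest possible stand-in.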

"Some might think that SAP IS and SAP ADP compete, but I believe they satisfy different needs based on the specific customer requirement. SAP IS is a fully-fledged data quality monitoring and data prep tool installed on-premises. Where a customer needs a data quality assessment and monitoring tool before a data migration, during a data migration and after a go-live, they would invest in SAP IS. Where the data migration is less complex with smaller data volumes, but they still want a tool which can assist in enhancing data quality before migration, the customer could look at using something like SAP ADP," says McCormick.

SAP Smart Data Integration (SDI) and Smart Data Quality (SDQ) are SAP HANA-based tools that allow you to replicate and transform data from (and in some cases to) remote sources into SAP HANA. (Note that these apply to SAP HANA generally rather than to S/4HANA specifically.)
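Conceptually, SDI replicates changed rows from a remote source into HANA while SDQ cleanses them in flight. The sketch below mimics that flow in plain Python; real SDI/SDQ are configured through remote sources and flowgraphs rather than hand-written code, and the key field and phone-number cleanse here are illustrative assumptions.

```python
# Conceptual sketch of SDI-style replication with an SDQ-style cleanse
# applied in flight. The source rows, 'id' key and phone-number rule
# are illustrative assumptions, not actual SDI/SDQ configuration.

def cleanse(row):
    """SDQ-style standardisation: keep only digits in the phone field."""
    out = dict(row)
    out["phone"] = "".join(ch for ch in out.get("phone", "") if ch.isdigit())
    return out


def replicate(source_rows, target):
    """Upsert changed source rows into the target table, keyed on 'id'."""
    for row in source_rows:
        clean = cleanse(row)
        target[clean["id"]] = {k: v for k, v in clean.items() if k != "id"}
    return target


# Example: one changed row arrives from the remote source
target_table = {}
replicate([{"id": 1, "phone": "+27 (0)21-555 0100"}], target_table)
```

In the real tools the "target" would be a HANA table fed by a flowgraph, and change capture would come from the source adapter rather than an explicit list of rows.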

Features:

"SAP SDI and SDQ have much in common with SAP DS, both in terms of their user interface and the functionality available. They are often used where big data source systems are in scope and SAP DS is not available. However, the components available for transformations are lighter and not as mature as those in SAP DS. The interface uses a Web IDE, latency can be an issue, and it is not necessarily the easiest development environment," states McCormick.

Historically, Legacy Systems Migration Workbench (LSMW) has been one of the most widely used tools for data migration in SAP landscapes. Although it provides no extraction functions, it does have extensive mapping, transformation and loading capabilities and is limited only by what is possible within an ABAP programming environment. However, it must be noted that SAP has stated that LSMW is no longer supported for loading data into an S/4HANA system and is superseded by the SAP Migration Cockpit (SAP Note 2287723).

GlueData's top considerations when selecting an SAP tool for your data migration project. Most important: do not focus only on the data migration project itself; also look at future use.

To what extent do you need to combine data from multiple sources, remove duplicates and provide enrichment to the data to be loaded?

Will the data remain in multiple systems rather than be consolidated into the new environment after go-live and how do you need the data to remain synchronised?

What volumes of data need to be read from the sources, mapped, transformed and loaded into the new system?

How critical a consideration is data migration performance?

What downtime windows will the tool need to work within?

To what extent does the tool need to accommodate any complex interdependencies between different sets of data?

Is there any information that should not be visible to certain people?

To what extent is it necessary to control access to data, documents and reports?

Will any of the data need to be encrypted and if any specific algorithms are to be used, can the tool support these?

To what extent will the transformation rules, validations and connections to other systems used during the data migration need to be reused after go-live to ensure the continued quality of your data?

What other forthcoming projects requiring data migration are being considered?

Do you have an ETL tool to deal with after go-live data quality initiatives or data integration projects?

Does your organisation already have experience/skills in using one or more of the tools?

What additional skills will be needed to support data quality after go-live, or to support multiple ongoing interfaces?

Are there known deficiencies in the current data, or do you think analysis might reveal some?

Will data cleansing be a single once-off exercise, or will you continue to cleanse data throughout the migration?

At GlueData, we have developed a complete SAP data migration methodology, which includes a best-practice strategy and architecture as well as multiple accelerators to streamline the data migration and ensure ongoing data quality.

We recently completed a large-scale data migration using a three-tier Data Services landscape. Data was staged in MS SQL Server, and the load mechanisms included IDocs, custom programs and LSMW. The scope of the migration included:

We are currently busy with multiple SAP S/4HANA data migrations in various countries across the world, and across multiple industries including banking, pharmaceuticals, retail and mining.

See our Web site or contact us at info@gluedata.com to find out more.
