
How to Choose the Best SAP Tool for Your SAP S/4 HANA Data Migration Project

The toolset on offer from SAP for data migration is often overlooked, with consideration only given to it once the project has already kicked off. This is perhaps partly because many customers are not aware, at a technical level, of the data migration tools on offer and which would best suit their project.

“Often the pre-migration and data analysis activities are seen as critical to the data migration project and the focus is placed on these,” states Paul McCormick, executive and lead migration consultant at GlueData.

“Our experience has taught us that it’s just as critical to consider which toolset to use. Choosing the right tool for the job will dramatically reduce the risk of the data migration, speed up the technical element of the data migration build, enable data errors and issues to be identified earlier, and provide the most effective way to automate data transformations.”

Top reasons to focus on SAP migration toolsets:

  1. There are different toolsets available for on-premises versus cloud environments.
  2. The build and execution methodology is largely driven by what the specific toolset can achieve.
  3. Resources and their skill sets. Skills in the various toolsets are highly specialised, and you will rarely find consultants with expertise in all the toolsets on offer.
  4. The loading mechanism to be used will influence the choice of toolset.
  5. The volume of data will play a role in determining the toolset – will the volumes be small or large?
  6. The data source will play a role in deciding the toolset – is the data coming from an SAP system or a non-SAP system?
  7. The current status of the data quality must be considered – what is the quality of the current data like, and will the transformation be light or heavy?

SAP’s most common data migration toolsets

  • SAP Data Services (SDS) and SAP Information Steward (IS)
  • SAP Migration Cockpit (SMC)
  • SAP Agile Data Preparation (ADP)
  • SAP HANA EIM Smart Data Integration (SDI) / Cloud Platform SDI
  • SAP HANA EIM Smart Data Quality (SDQ)
  • Legacy Systems Migration Workbench (LSMW) – no longer supported by SAP on S/4HANA

SAP Data Services (DS) & Information Steward (IS) are typically used when there are:

  • Substantial data volumes.
  • Large number of data sources and targets.
  • Complex transformation requirements including de-duplication.
  • A requirement to perform analysis of the data ahead of the migration (we recommend this as best practice).
  • A need for data quality reporting, either as a one-off or as a continuous activity.
  • Data Services can load data directly to SAP using an IDoc or a BAPI. If an SAP ABAP load program is used as the loading mechanism, DS can output to a .txt file. DS can also load to a HANA staging schema for use by Migration Cockpit.
  • SAP Information Steward is a tool for data analysis, profiling and dashboard reporting. SAP IS is often used together with SAP DS when the project includes data assessments and quality-related tasks.
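
To make the transform-then-stage pattern above concrete, here is a minimal sketch in plain Python (not an actual Data Services job; all record fields and the file name are invented for illustration) of the kind of de-duplication and load-ready flat-file output a DS dataflow would typically perform:

```python
import csv

# Hypothetical legacy customer records extracted from multiple sources;
# field names are illustrative, not an actual SAP load template.
records = [
    {"customer_id": "C001", "name": "ACME Ltd",  "country": "ZA"},
    {"customer_id": "C001", "name": "ACME Ltd",  "country": "ZA"},   # duplicate
    {"customer_id": "C002", "name": "Beta GmbH", "country": "DE"},
]

def deduplicate(rows, key):
    """Keep only the first occurrence of each key value."""
    seen, unique = set(), []
    for row in rows:
        if row[key] not in seen:
            seen.add(row[key])
            unique.append(row)
    return unique

def write_load_file(rows, path):
    """Write a tab-delimited, load-ready file (the .txt handed to a load program)."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys(), delimiter="\t")
        writer.writeheader()
        writer.writerows(rows)

clean = deduplicate(records, "customer_id")
write_load_file(clean, "customers_load.txt")
print(len(clean))  # 2 unique customers
```

The point of the sketch is the separation of concerns: transformation (here, de-duplication) happens before anything touches the target, and the output is a staged artefact that any load mechanism can consume.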

SAP Migration Cockpit (SMC) is a relatively new tool released by SAP and is embedded within S/4 HANA – both the cloud and on-premises versions. It is positioned for use in the transformation and loading of data into S/4 HANA. SMC takes data held in a predefined data template format, either in spreadsheets or in a staging database, and applies value mapping, technical validation, and reporting of any technical validation issues to this data. It uses standard load APIs developed by SAP.

Features:

  • Replaces LSMW in an S/4 HANA target environment.
  • Does not require any separate infrastructure, as it runs in the same environment into which the data is to be loaded. SAP transaction LTMC starts Migration Cockpit.
  • Can load data using .xml templates (File Based Load) for small data volumes.
  • Can load data using a HANA schema (Staging Based Load) for larger data volumes. (If a customer does not have an S/4 HANA Enterprise licence, additional licensing will be required for the HANA DB where the SAP load-ready data can be staged.)
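
As an illustration of the value mapping and technical validation described above, the following Python sketch mimics in spirit what Migration Cockpit does to template data before calling the standard load APIs. The mapping table, field names and length limits are invented for the example; an actual SMC project maintains these in the tool itself:

```python
# Hypothetical legacy-to-target value mapping and field-length limits.
VALUE_MAP = {"payment_terms": {"NET30": "0001", "NET60": "0002"}}
MAX_LEN = {"name": 35, "payment_terms": 4}

def validate_and_map(row):
    """Apply value mapping, then technical validation; collect any issues."""
    errors = []
    out = dict(row)
    # Value mapping: translate legacy codes to target configuration values.
    for field, mapping in VALUE_MAP.items():
        if row.get(field) in mapping:
            out[field] = mapping[row[field]]
        else:
            errors.append(f"{field}: unmapped value {row.get(field)!r}")
    # Technical validation: enforce target field lengths.
    for field, limit in MAX_LEN.items():
        if len(str(out.get(field, ""))) > limit:
            errors.append(f"{field}: exceeds {limit} characters")
    return out, errors

row, errs = validate_and_map({"name": "ACME Ltd", "payment_terms": "NET30"})
print(row["payment_terms"], errs)  # 0001 []
```

Rows with a non-empty error list would be held back and reported, which is exactly the behaviour SMC surfaces when template data fails technical validation.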

“In our view, the use of SAP Migration Cockpit is entirely complementary to the use of SAP DS, as we would typically recommend a separation between the transformation and load steps of ETL. It works well in combination with SAP Data Services, particularly when loading data into a cloud-based S/4 HANA system, but should only be used as a standalone solution when loading low-complexity data with minimal transformation into S/4 HANA. It is also fully compatible with SAP’s ADP, as this uses the same data templates to construct data for loading. Reporting on data loaded, or on errors during the load, is weak, and troubleshooting issues using MC is not easy at this point,” continues McCormick.

SAP Agile Data Preparation has been developed as a web tool that incorporates simplified versions of the transforms from SAP DS to enable profiling, cleansing, de-duplication and data preparation. It provides an extract-and-transform tool to complement the load functionality of SAP MC and works best with limited volumes of data.

“Some might think that SAP IS and SAP ADP compete, but I believe they satisfy different needs based on the specific customer requirement. SAP IS is a fully fledged data quality monitoring and data prep tool installed on-premises. Where a customer needs a data quality assessment and monitoring tool before a data migration, during a data migration and after go-live, they would invest in SAP IS. Where the data migration is less complex with smaller data volumes, but they still want a tool that can assist in enhancing data quality before migration, the customer could look at using something like SAP ADP,” says McCormick.

SAP Smart Data Integration (SDI) and Smart Data Quality (SDQ) are SAP HANA-based tools that allow you to replicate and transform data from (and in some cases to) remote sources into SAP HANA. (Note that these are for SAP HANA generally rather than S/4 HANA specifically.)

Features:

  • Does not require any separate infrastructure as it runs in the same HANA environment.
  • Supports loading to HANA systems in a public cloud.
  • Being a native HANA product, SDQ is great at processing large volumes of data.
  • Can connect to multiple sources but can only load to HANA as a target.
  • SAP HANA SDI and SAP Cloud Platform SDI deliver transformation and migration/integration capabilities as part of the SAP HANA platform. SAP HANA SDI is primarily for HANA on-premises systems, and SAP Cloud Platform SDI is for use with SAP Cloud Platform.
  • Smart Data Quality is often used to report on the quality of data and to provide auto correction/enrichment of that data.
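
The auto correction/enrichment idea can be sketched as a simple rule. The Python below is illustrative only (it is not SDQ rule syntax, and the reference table is a made-up sample): free-text country values are standardised against reference data, and anything that cannot be corrected is flagged for review:

```python
# Made-up reference data mapping free-text country names to ISO codes.
COUNTRY_REF = {"SOUTH AFRICA": "ZA", "GERMANY": "DE", "UNITED KINGDOM": "GB"}

def cleanse(record):
    """Standardise a free-text country to an ISO code and flag failures."""
    raw = record.get("country", "").strip().upper()
    code = COUNTRY_REF.get(raw)
    return {**record,
            "country_code": code,
            "dq_status": "corrected" if code else "needs review"}

print(cleanse({"name": "ACME", "country": " south africa "}))
```

An SDQ rule set does the same job natively inside HANA, which is why it scales to large volumes, but the logical shape – lookup, enrich, flag – is the same.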

鈥溌槎乖 SDI and SDQ have much in common with 麻豆原创 DS both in terms of their user interface and the functionality available. They are often used where big data source systems are in scope and 麻豆原创 DS is not available. Also, in cases where components available for transformations are light and not as mature as 麻豆原创 DS. The interface uses a web IDE, and latency can be an issue, it is not necessarily the easiest development environment,鈥 states听McCormick.

Historically, Legacy Systems Migration Workbench (LSMW) has been one of the most widely used tools for data migration in SAP landscapes. Although it provides no extraction functions, it does have extensive mapping, transformation and loading capabilities and is limited only by what is possible within an ABAP programming environment. However, note that SAP has stated that LSMW is no longer supported for loading data into an S/4 HANA system and has been superseded by the SAP Migration Cockpit (SAP Note 2287723).

GlueData’s top considerations when selecting an SAP tool for your data migration project. Most important: do not focus only on the data migration project itself; also consider future use.

  • Complexity of the source and target landscapes

To what extent do you need to combine data from multiple sources, remove duplicates and provide enrichment to the data to be loaded?

Will the data remain in multiple systems rather than being consolidated into the new environment after go-live, and how will the data need to remain synchronised?

  • Volumes and complexity of data

What volumes of data need to be read from the sources, mapped, transformed and loaded into the new system?

How critical a consideration is data migration performance?

What downtime windows will the tool need to work within?

To what extent does the tool need to accommodate any complex interdependencies between different sets of data?

  • Security and sensitivity requirements

Is there any information that should not be visible to certain people?

To what extent is it necessary to control access to data, documents and reports?

Will any of the data need to be encrypted and if any specific algorithms are to be used, can the tool support these?
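
Where a tool cannot apply a required encryption algorithm itself, one option sometimes used during staging is to pseudonymise sensitive fields instead. The sketch below is a hedged illustration in plain Python rather than any SAP tool – the field names and salt are invented – showing a deterministic keyed hash, so that the same input always yields the same token and joins across staged tables still work:

```python
import hashlib

# Illustrative project-specific secret; in practice this would be managed
# securely, not hard-coded.
SALT = b"project-specific-secret"

def pseudonymise(value: str) -> str:
    """Deterministic, irreversible token for a sensitive value."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

# Hypothetical staged record with a sensitive national ID field.
record = {"employee_id": "E1001", "national_id": "8001015009087"}
record["national_id"] = pseudonymise(record["national_id"])
```

This is masking rather than reversible encryption; if the target system genuinely needs the original value back, proper encryption with the mandated algorithm is required, and that is exactly the tool-support question the bullet above is asking.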

  • Reuse of the developments/landscapes after go-live to support data quality governance

To what extent will the transformation rules, validations and connections to other systems used during the data migration need to be reused after go-live to ensure the continued quality of your data?

What other forthcoming projects requiring data migration are being considered?

Do you have an ETL tool to support post-go-live data quality initiatives or data integration projects?

  • Level of In-house expertise

Does your organisation already have experience/skills in using one or more of the tools?

What additional skills will be needed to support data quality after go-live, or to support multiple ongoing interfaces?

  • Quality of existing data (known or expected quality)

Are there known deficiencies in the current data, or do you expect analysis to reveal them?

Will data cleansing be a single once-off exercise, or will you continue to cleanse data throughout the migration?

At GlueData, we have developed a complete SAP data migration methodology which includes a best-practice strategy and architecture, as well as multiple accelerators to streamline the data migration and ensure ongoing data quality.

We recently completed a large-scale data migration using a three-tier Data Services landscape. Data was staged in MS SQL Server, and the load mechanisms included IDocs, custom programs and LSMW. The scope of the migration included:

  • 77 ERP Data Objects migrated to S/4HANA.
  • 38 Data Objects migrated from Oracle to HANA for BW consumption.
  • 8 terabytes and over 40 billion rows of data migrated.

We are currently busy with multiple SAP S/4HANA data migrations in various countries across the world and across multiple industries, including banking, pharmaceuticals, retail and mining.

See our website or contact us at info@gluedata.com to find out more.