
SAP BODS ONLINE TRAINING

SAP BUSINESS OBJECTS DATA SERVICES & RAPID MARTS COURSE CURRICULUM


  • Introduction

      What is a Data Warehouse
      Data Warehouse Functions & Implementation
      Data Warehouse Products & Vendors
      About SAP’s Data Warehouse – SAP BW
      SAP BW Functions, Benefits & Limitations
      About Business Objects & Their Products
      Why SAP Acquired Business Objects
      Data-Centric vs. Process-Centric DWH Systems
  • SAP BO Data Services Overview
      About Data Services – Introduction
      Functions – Data Integration & Data Management
      Data Services Product Evolution (ATL, DI & DQ)
      Architecture – by Components
      Data Services Tools & Their Functions
      Data Services Objects & Object Hierarchy
      BODS Object Naming Standards
      BODS Objects Compared with SAP BW Objects
  • Data Services – Basic Level
      Repository Manager – BODS Repository
      Repository Types – Local, Central, Profiler
      Repository Creation & Upgrade
      Server Manager – Job Server (JS) & Access Server (AS)
      JS & AS Creation, Job Server–Repository Assignment
      Management Console – Introduction & Components
      Data Services Designer – Introduction & GUI
      Getting Started with Designer to Develop Your First ETL Flow
  • Datastores & Formats
      Datastores – Overview & Types
      Datastore Creation – DB, SAP, Adapter, Web Service
      Formats – Flat Files, Excel, DTD, XSD, COBOL Copybooks
      Data Extraction from Database Tables
      Data Extraction from Excel Workbooks – Multiple Sheets
      Data Extraction from Flat Files (CSV, Notepad, SAP Transport)
      Data Extraction from XML Files (DTD, XSD)
      Data Extraction from COBOL Copybooks
      Data Distribution to Flat Files & XML
      Dynamic Extraction – File Selection & Sheet Selection
  • Data Services – Transforms
      Transforms & Categories (DI, DQ, PF)
      Data Integration – Data_Transfer, Date_Generation
      Data Integration – Effective_Date, Hierarchy_Flattening
      Data Integration – History_Preserving, Key_Generation
      Data Integration – Pivot, Reverse_Pivot, XML_Pipeline
      Platform – Case, Map_Operation, Merge, Query
      Platform – SQL, Validation, Custom ABAP Transform
  • Data Services – Advanced Level
      BODS Admin Console – Administration, Auto Reporting
      Real-Time Jobs, Embedded Data Flows
      Variables, Parameters, Substitution Parameters, System Configurations
      Debugging, Recovery Mechanism
      Data Assessment – Data Profiling
      BODS Performance Tuning Techniques
      Multi-User Development Environment – Intro & Advantages
      Multi-User Environment Implementation & Migration
      BODS Object Migration Techniques
  • Data Services – SAP Systems Handling
      SAP Systems – Intro, Supported SAP Systems, Terminology
      SAP BODS and SAP ERP / SAP BI Integration
      SAP BODS User & Role Creation in SAP
      SAP ERP – Data Flow, Interfaces, Objects Available to BODS
      Creating an SAP Application Datastore – Properties
      SAP ERP – Tables & Hierarchies Data Extraction
      SAP ERP – Calling RFC-Enabled Function Modules
      SAP ERP – IDoc Data Extraction & IDoc Data Loading
      SAP ERP – Extracting Flat-File Data from the SAP Application Server
      SAP BW – Data Flow, Interfaces, Objects Available to BODS
      Creating SAP BW Source and SAP BW Target Datastores
      SAP BW – Extracting Data from SAP BW InfoProviders (OHD)
      SAP BW – Extracting Data from SAP BW Base Tables (/BIC/)
      SAP BW – Loading Data to SAP BW InfoProviders (MD & TD)
      SAP BW – BW Job Execution
  • SAP BOBJ RAPID MARTS
      Rapid Marts – Introduction, Advantages, Limitations
      Rapid Marts SAP Offers, Rapid Marts for SAP Solutions
      Rapid Marts – Versions, Data Flow, Content
      Why and for Whom a Rapid Marts Implementation Makes Sense
      Rapid Marts Implementation Out of the Box:
      – Environment Checklist
      – Recommendations for Upgrade
      – Environment Setup (Pre-Installation Phase)
      – Installation of RM (Data Model, ETL, Visuals)
      – Configuration of RM (ETL, Visuals)
      – RM File and Custom ABAP Transform Maintenance
      – Rapid Mart Directory Maintenance
      Rapid Marts Testing – Sample Data Loads, Testing Reports
      Rapid Marts Recovery Mechanism – For Delta Loads
      Rapid Marts Implementation – Customization (per Requirements):
      – Data Model Customization
      – ETL Customization
      – Visuals Customization
      Rapid Marts Performance Tuning Techniques (Standard)

SAP BUSINESSOBJECTS DATA SERVICES

The SAP BusinessObjects solution portfolio delivers extreme insight through specialized end-user tools on a single, trusted business intelligence platform. This entire platform is supported by SAP BusinessObjects Data Services. On top of SAP BusinessObjects Data Services, the SAP BusinessObjects solution portfolio layers the most reliable, scalable, flexible, and manageable business intelligence (BI) platform which supports the industry’s best integrated end-user interfaces: reporting, query and analysis, and performance management dashboards, scorecards, and applications.

True data integration blends batch extraction, transformation, and loading (ETL) technology with real-time bi-directional data flow across multiple applications for the extended enterprise.

By building a relational datastore and intelligently blending direct real-time and batch data-access methods to access data from enterprise resource planning (ERP) systems and other sources, SAP has created a powerful, high-performance data integration product that allows you to fully leverage your ERP and enterprise application infrastructure for multiple uses.

SAP provides a batch and real-time data integration system to drive today’s new generation of analytic and supply-chain management applications. Using the highly scalable data integration solution provided by SAP, your enterprise can maintain a real-time, online dialogue with customers and suppliers.

Software benefits

Use SAP BusinessObjects Data Services to develop enterprise data integration for batch and real-time uses. With the software:

  • You can create a single infrastructure for batch and real-time data movement to enable faster and lower cost implementation.
  • Your enterprise can manage data as a corporate asset independent of any single system. Integrate data across many systems and reuse that data for many purposes.
  • You have the option of using pre-packaged data solutions for fast deployment and quick ROI. These solutions extract historical and daily data from operational systems and cache this data in open relational databases.

The software customizes and manages data access and uniquely combines industry-leading, patent-pending technologies for delivering data to analytic, supply-chain management, customer relationship management, and Web applications.

Unification with the platform

SAP BusinessObjects Data Services provides several points of platform unification:

  • Get end-to-end data lineage and impact analysis
  • Create the semantic layer (universe) and manage change within the ETL design environment

SAP deeply integrates the entire ETL process with the business intelligence platform so you benefit from:

  • Easy metadata management
  • Simplified and unified administration
  • Life cycle management
  • Ease of use and high productivity

SAP BusinessObjects Data Services combines both batch and real-time data movement and management to provide a single data integration platform for information management from any information source, for any information use.

Using the software, you can:

  • Stage data in an operational datastore, data warehouse, or data mart.
  • Update staged data in batch or real-time modes.
  • Create a single graphical development environment for developing, testing, and deploying the entire data integration platform.
  • Manage a single metadata repository to capture the relationships between different extraction and access methods and provide integrated lineage and impact analysis.

High availability and performance

The high-performance engine and proven data movement and management capabilities of SAP BusinessObjects Data Services include:

  • Scalable, multi-instance data-movement for fast execution
  • Load balancing
  • Changed-data capture
  • Parallel processing

SAP BusinessObjects Metadata Management

SAP BusinessObjects Metadata Management provides an integrated view of metadata and its multiple relationships for a complete Business Intelligence project spanning some or all of the SAP BusinessObjects solution portfolio. Use the software to:

  • View metadata about reports, documents, and data sources from a single repository.
  • Analyze lineage to determine data sources of documents and reports.
  • Analyze the impact of changing a source table, column, element, or field on existing documents and reports.
  • Track different versions (changes) to each object over time.
  • View operational metadata (such as the number of rows processed and CPU utilization) as historical data with a timestamp.
  • View metadata in different languages.

Impact and Lineage Analysis reports

Impact and Lineage Analysis reports include:

  • Datastore analysis – For each datastore connection, view overview, table, function, and hierarchy reports. SAP BusinessObjects Data Services users can determine:
    What data sources populate their tables
    What target tables their tables populate
    Whether one or more of the following SAP BusinessObjects solution portfolio reports uses data from their tables:
    – Business Views
    – Crystal Reports
    – SAP BusinessObjects Universe Builder universes
    – SAP BusinessObjects Web Intelligence documents
    – SAP BusinessObjects Desktop Intelligence documents
  • Universe analysis – View Universe, class, and object lineage. Universe users can determine what data sources populate their Universes and what reports use their Universes.
  • Business View analysis – View the data sources for Business Views in the Central Management Server (CMS). You can view business element and business field lineage reports for each Business View. Crystal Business View users can determine what data sources populate their Business Views and what reports use their views.
  • Report analysis – View data sources for reports in the Central Management Server (CMS). You can view table and column lineage reports for each Crystal Report and Web Intelligence document managed by the CMS. Report writers can determine what data sources populate their reports.
  • Dependency analysis – Search for specific objects in your repository and understand how those objects impact or are impacted by other SAP BusinessObjects Data Services or SAP BusinessObjects Universe Builder objects and reports. Metadata search results provide links back into associated reports.

To view impact and lineage analysis for SAP BusinessObjects solution portfolio applications, you must configure the Metadata Integrator.

SNMP Agent

SAP BusinessObjects Data Services error events can be communicated using applications supported by the Simple Network Management Protocol (SNMP) for better error monitoring. Install an SAP BusinessObjects Data Services SNMP agent on any computer running a Job Server. The SNMP agent monitors and records information about the Job Servers and jobs running on the computer where the agent is installed. You can configure network management software (NMS) applications to communicate with the SNMP agent. Thus, you can use your NMS application to monitor the status of jobs.

Multi-user

SAP BusinessObjects Data Services Multi-user is an advanced optional component that enables your development team to work together on interdependent parts of an application through all phases of development. While each user works on applications in a unique local repository, the team uses a central repository to store the master copy of the entire project. The central repository preserves all versions of an application’s objects, so you can revert to a previous version if needed.

Distributed architecture

SAP BusinessObjects Data Services has a distributed architecture. An Access Server can serve multiple Job Servers and repositories. The multi-user licensed extension allows multiple Designers to work from a central repository.


SAP BUSINESS OBJECTS DATA SERVICES (SAP BODS) INTERVIEW QUESTIONS AND ANSWERS

  • What are the steps included in the data integration process?
  • Stage data in an operational datastore, data warehouse, or data mart.
    Update staged data in batch or real-time modes.
    Create a single environment for developing, testing, and deploying the entire data integration platform.
    Manage a single metadata repository to capture the relationships between different extraction and access methods and provide integrated lineage and impact analysis.

  • Define the terms Job, Workflow, and Dataflow
  • A job is the smallest unit of work that you can schedule independently for execution.
    A work flow defines the decision-making process for executing data flows.
    Data flows extract, transform, and load data. Everything having to do with data, including reading sources, transforming data, and loading targets, occurs inside a data flow.

  • Arrange these objects in order by their hierarchy: Dataflow, Job, Project, and Workflow.
  • Project, Job, Workflow, Dataflow.

  • What are reusable objects in Data Services?
  • Job, Workflow, Dataflow.

  • What is a transform?
  • A transform enables you to control how datasets change in a dataflow.

  • What is a Script?
  • A script is a single-use object that is used to call functions and assign values in a workflow.

  • What is a real-time job?
  • Real-time jobs “extract” data from the body of the real-time message received and from any secondary sources used in the job.

  • What is an Embedded Dataflow?
  • An Embedded Dataflow is a dataflow that is called from inside another dataflow.

  • What is the difference between a datastore and a database?
  • A datastore is a connection to a database, not the database itself: the database holds the data, while the datastore holds the connection and metadata information that Data Services uses to access it.

  • How many types of datastores are present in Data services?
  • Three.
    Database Datastores: provide a simple way to import metadata directly from an RDBMS.
    Application Datastores: let users easily import metadata from most Enterprise Resource Planning (ERP) systems.
    Adapter Datastores: can provide access to an application’s data and metadata or just metadata.

  • What is the use of Compact repository?
  • Remove redundant and obsolete objects from the repository tables.

  • What are Memory Datastores?
  • Data Services also allows you to create a database datastore using Memory as the Database type. Memory Datastores are designed to enhance processing performance of data flows executing in real-time jobs.

  • What are file formats?
  • A file format is a set of properties describing the structure of a flat file (ASCII). File formats describe the metadata structure. File format objects can describe files in:
    Delimited format – Characters such as commas or tabs separate each field.
    Fixed width format – The column width is specified by the user.
    SAP ERP and R/3 format.
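
To make the delimited and fixed-width layouts concrete, here is a minimal Python sketch. It is illustrative only: this is not BODS code, and the field names and column widths are invented.

    # Conceptual sketch of the two main flat-file layouts (illustrative only).
    delimited_line = "1001,Jane,Smith"                    # comma-separated fields
    fixed_width_line = "1001      Jane      Smith     "   # fixed column widths

    # Delimited format: split on the separator character.
    emp_id, first, last = delimited_line.split(",")

    # Fixed-width format: slice by user-specified column widths (10/10/10 here).
    widths = [(0, 10), (10, 20), (20, 30)]
    emp_id_fw, first_fw, last_fw = (fixed_width_line[a:b].strip()
                                    for a, b in widths)

    print(emp_id, first, last)            # 1001 Jane Smith
    print(emp_id_fw, first_fw, last_fw)   # 1001 Jane Smith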

  • Which is NOT a datastore type?
  • File Format

  • What is repository? List the types of repositories.
  • The Data Services repository is a set of tables that holds user-created and predefined system objects, source and target metadata, and transformation rules. There are three types of repositories:
    A local repository
    A central repository
    A profiler repository

  • What is the difference between a Repository and a Datastore?
  • A Repository is a set of tables that hold system objects, source and target metadata, and transformation rules. A Datastore is an actual connection to a database that holds data.

  • What is the difference between a Parameter and a Variable?
  • A Parameter is an expression that passes a piece of information to a work flow, data flow, or custom function when it is called in a job. A Variable is a symbolic placeholder for values.

  • When would you use a global variable instead of a local variable?
  • When the variable will need to be used multiple times within a job.
    When you want to reduce the development time required for passing values between job components.
    When you need to create a dependency between a job-level global variable and job components.

  • What is Substitution Parameter?
  • A value that is constant in one environment but may change when the job is migrated to another environment.

  • List some reasons why a job might fail to execute.
  • Incorrect syntax, Job Server not running, port numbers for Designer and Job Server not matching.

  • List factors you would consider when determining whether to run work flows or data flows serially or in parallel.
  • Consider the following:
    Whether or not the flows are independent of each other
    Whether or not the server can handle the processing requirements of flows running at the same time (in parallel)

  • What does a lookup function do? How do the different variations of the lookup function differ?
  • All lookup functions return one row for each row in the source. They differ in how they choose which of several matching rows to return. (A rough sketch follows after this question block.)

  • List the three types of input formats accepted by the Address Cleanse transform.
  • Discrete, multiline, and hybrid.

  • Name the transform that you would use to combine incoming data sets to produce a single output data set with the same schema as the input data sets.
  • The Merge transform.
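
Returning to the lookup question above: a rough Python analogy, illustrative only (the table and column names are invented), for why every source row yields exactly one result, with the variants differing only in which matching row they pick.

    # Each source row gets exactly one looked-up value; ties are resolved
    # by a policy, which is what distinguishes the lookup variants.
    lookup_table = [
        {"cust_id": 1, "city": "Pune",   "seq": 1},
        {"cust_id": 1, "city": "Mumbai", "seq": 2},
        {"cust_id": 2, "city": "Delhi",  "seq": 1},
    ]

    def lookup_first(cust_id, default="UNKNOWN"):
        """Return a single value for a single condition (lookup()-style)."""
        matches = [r for r in lookup_table if r["cust_id"] == cust_id]
        return matches[0]["city"] if matches else default

    def lookup_max_seq(cust_id, default="UNKNOWN"):
        """Pick the match with the highest sequence number (lookup_seq()-style)."""
        matches = [r for r in lookup_table if r["cust_id"] == cust_id]
        return max(matches, key=lambda r: r["seq"])["city"] if matches else default

    source_rows = [1, 2, 3]
    print([lookup_first(c) for c in source_rows])    # ['Pune', 'Delhi', 'UNKNOWN']
    print([lookup_max_seq(c) for c in source_rows])  # ['Mumbai', 'Delhi', 'UNKNOWN']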

  • What are Adapters?
  • Adapters are additional Java-based programs that can be installed on the Job Server to provide connectivity to other systems such as Salesforce.com or the Java Messaging Queue. There is also a Software Development Kit (SDK) to allow customers to create adapters for custom applications.

  • List the Data Integrator transforms.
  • Data_Transfer
    Date_Generation
    Effective_Date
    Hierarchy_Flattening
    History_Preserving
    Key_Generation
    Map_CDC_Operation
    Pivot, Reverse_Pivot
    Table_Comparison
    XML_Pipeline

  • List the Data Quality transforms.
  • Global_Address_Cleanse
    Data_Cleanse
    Match
    Associate
    Country_ID
    USA_Regulatory_Address_Cleanse

  • What are Cleansing Packages?
  • These are packages that enhance the ability of Data Cleanse to accurately process various forms of global data by including language-specific reference data and parsing rules.

  • What is Data Cleanse?
  • The Data Cleanse transform identifies and isolates specific parts of mixed data, and standardizes your data based on information stored in the parsing dictionary, business rules defined in the rule file, and expressions defined in the pattern file.

  • What is the difference between Dictionary and Directory?
  • Directories provide information on addresses from postal authorities. Dictionary files are used to identify, parse, and standardize data such as names, titles, and firm data.

  • Give some examples of how data can be enhanced through the Data Cleanse transform, and describe the benefit of those enhancements.
  • Gender Codes – determine gender distributions and target marketing campaigns.
    Match Standards – provide fields for improving matching results.

  • A project requires the parsing of names into given and family, validating address information, and finding duplicates across several systems. Name the transforms needed and the task each will perform.
  • Data Cleanse: parse names into given and family.
    Address Cleanse: validate address information.
    Match: find duplicates.

  • Describe when to use the USA Regulatory and Global Address Cleanse transforms.
  • Use the USA Regulatory Address Cleanse transform if USPS certification and/or additional options such as DPV and Geocode are required. Global Address Cleanse should be used when processing multi-country data.

  • Give two examples of how the Data Cleanse transform can enhance (append) data. (A sketch of this parsing-and-enhancement idea follows below.)
  • The Data Cleanse transform can generate name match standards and greetings. It can also assign gender codes and prenames such as Mr. and Mrs.
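
The parsing-and-enhancement behavior described above can be pictured with a small Python sketch. This is illustrative only: the real Data Cleanse transform drives its parsing from the dictionary and rule files, whereas the lookup tables here are invented stand-ins.

    # Toy illustration of parsing mixed name data and appending enhancements.
    # Real Data Cleanse uses dictionary/rule files; these tables are invented.
    PRENAMES = {"M": "Mr.", "F": "Ms."}
    GENDER_BY_GIVEN_NAME = {"John": "M", "Mary": "F"}  # stand-in dictionary data

    def cleanse_name(raw):
        given, family = raw.strip().split(" ", 1)      # naive given/family split
        gender = GENDER_BY_GIVEN_NAME.get(given, "U")  # 'U' = unknown
        prename = PRENAMES.get(gender, "")
        greeting = " ".join(p for p in ("Dear", prename, family) if p)
        return {"given": given, "family": family, "gender_code": gender,
                "prename": prename, "greeting": greeting}

    print(cleanse_name("John Smith"))
    # {'given': 'John', 'family': 'Smith', 'gender_code': 'M',
    #  'prename': 'Mr.', 'greeting': 'Dear Mr. Smith'}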

  • What are name match standards and how are they used?
  • Name match standards illustrate the multiple ways a name can be represented. They are used in the match process to greatly increase match results.

  • What are the different strategies you can use to avoid duplicate rows of data when re-loading a job?
  • Using the auto-correct load option on the target table.
    Including the Table_Comparison transform in the data flow.
    Designing the data flow to completely replace the target table during each execution.
    Including a preload SQL statement to execute before the table loads.

  • What is the use of Auto Correct Load?
  • It prevents duplicate data from entering the target table. It works like a Type 1 insert-else-update: non-matching rows are inserted and matching rows are updated. (A sketch of this logic follows below.)
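
A minimal Python sketch of that insert-else-update logic, illustrative only: a dict keyed by primary key stands in for the target table, and the column names are invented.

    # Insert-else-update keyed on the primary key: matching rows are updated
    # (Type 1 overwrite), non-matching rows are inserted; nothing is duplicated.
    target = {101: {"name": "Asha", "city": "Pune"}}   # keyed by primary key

    incoming = [
        {"id": 101, "name": "Asha", "city": "Mumbai"},  # match -> update
        {"id": 102, "name": "Ravi", "city": "Delhi"},   # no match -> insert
    ]

    for row in incoming:
        target[row["id"]] = {"name": row["name"], "city": row["city"]}

    print(target)
    # {101: {'name': 'Asha', 'city': 'Mumbai'},
    #  102: {'name': 'Ravi', 'city': 'Delhi'}}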

  • What is the use of Array fetch size?
  • Array fetch size indicates the number of rows retrieved in a single request to a source database. The default value is 1000. Higher numbers reduce requests, lowering network traffic and possibly improving performance. The maximum value is 5000.
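
The same idea exists in Python’s DB-API, where cursor.arraysize controls how many rows fetchmany() retrieves per round trip. A runnable sketch using SQLite purely as a stand-in source:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE src (id INTEGER)")
    conn.executemany("INSERT INTO src VALUES (?)", [(i,) for i in range(3500)])

    cur = conn.cursor()
    cur.arraysize = 1000          # analogous to the BODS array fetch size
    cur.execute("SELECT id FROM src")

    total = 0
    while True:
        batch = cur.fetchmany()   # retrieves up to arraysize rows per request
        if not batch:
            break
        total += len(batch)       # batches of 1000, 1000, 1000, 500
    print(total)                  # 3500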

  • What is the difference between Row-by-row select, Cached comparison table, and Sorted input in the Table_Comparison transform?
  • Row-by-row select – looks up the target table using SQL every time it receives an input row. This option is best if the target table is large.
    Cached comparison table – loads the comparison table into memory. This option is best when the table fits into memory and you are comparing the entire target table.
    Sorted input – reads the comparison table in the order of the primary key column(s) using a sequential read. This option improves performance because Data Integrator reads the comparison table only once. Add a query between the source and the Table_Comparison transform, then, from the query’s input schema, drag the primary key columns into the Order By box of the query. (A sketch of the cached-comparison idea follows below.)
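
A Python sketch of the cached-comparison idea, illustrative only (the key and column names are invented): the comparison table is read into memory once, then each input row is tagged with an opcode.

    # Cached comparison: the target is read once into memory, then every
    # input row is classified as INSERT (new key) or UPDATE (changed data).
    comparison_table = {1: "Pune", 2: "Delhi"}   # primary key -> compare column

    input_rows = [(1, "Pune"), (2, "Mumbai"), (3, "Chennai")]

    opcodes = []
    for key, city in input_rows:
        if key not in comparison_table:
            opcodes.append(("INSERT", key, city))
        elif comparison_table[key] != city:
            opcodes.append(("UPDATE", key, city))
        # unchanged rows produce no opcode

    print(opcodes)
    # [('UPDATE', 2, 'Mumbai'), ('INSERT', 3, 'Chennai')]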

  • What is the use of Number of loaders in the target table?
  • Loading with one loader is known as single-loader loading; loading with more than one loader is known as parallel loading. The default number of loaders is 1. The maximum number of loaders is 5.

  • What is the use of Rows per commit?
  • Specifies the transaction size in number of rows. If set to 1000, Data Integrator sends a commit to the underlying database every 1000 rows. (A sketch of this batching follows below.)

  • What is the difference between lookup(), lookup_ext(), and lookup_seq()?
  • lookup(): returns a single value based on a single condition.
    lookup_ext(): returns multiple values based on single or multiple conditions.
    lookup_seq(): returns multiple values based on a sequence number.
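
Returning to Rows per commit above: the setting is ordinary transaction batching. A runnable Python sketch, using SQLite purely as a stand-in target, that commits every 1000 rows:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE tgt (id INTEGER)")

    ROWS_PER_COMMIT = 1000   # analogous to the BODS target-table option
    pending = 0
    for i in range(2500):
        conn.execute("INSERT INTO tgt VALUES (?)", (i,))
        pending += 1
        if pending == ROWS_PER_COMMIT:
            conn.commit()    # one commit per 1000 rows keeps transactions small
            pending = 0
    conn.commit()            # flush the final partial batch (500 rows)

    print(conn.execute("SELECT COUNT(*) FROM tgt").fetchone()[0])  # 2500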

  • What is the use of History preserving transform?
  • The History_Preserving transform allows you to produce a new row in your target rather than updating an existing row. You can indicate in which columns the transform identifies changes to be preserved. If the value of one of those columns changes, the transform creates a new row for each row flagged as UPDATE in the input data set. (This is the slowly-changing-dimension Type 2 pattern; a sketch follows below.)
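
A compact Python sketch of that Type 2 pattern, illustrative only: the real transform also offers valid-from/valid-to date and flag options, and the column names here are invented.

    import datetime

    # Target dimension rows: one current row per customer, SCD Type 2 style.
    target = [
        {"cust_id": 1, "city": "Pune", "current": True,
         "valid_from": datetime.date(2020, 1, 1), "valid_to": None},
    ]

    def preserve_history(update_row):
        """On a change to a preserved column, retire the old row, add a new one."""
        today = datetime.date.today()
        for row in target:
            if row["cust_id"] == update_row["cust_id"] and row["current"]:
                if row["city"] != update_row["city"]:   # preserved column changed
                    row["current"] = False              # retire the old version
                    row["valid_to"] = today
                    target.append({"cust_id": update_row["cust_id"],
                                   "city": update_row["city"], "current": True,
                                   "valid_from": today, "valid_to": None})
                return

    preserve_history({"cust_id": 1, "city": "Mumbai"})
    for row in target:
        print(row)   # old Pune row closed; new current Mumbai row appended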

  • What is the use of the Map_Operation transform?
  • The Map_Operation transform allows you to change operation codes on data sets to produce the desired output. Operation codes: INSERT, UPDATE, DELETE, NORMAL, or DISCARD. (A sketch follows below.)
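
A Python sketch of the opcode-mapping idea, illustrative only; the mapping shown (UPDATE to INSERT, DELETE to DISCARD) is just one possible configuration.

    # Map_Operation conceptually: rewrite each row's opcode per a mapping table.
    opcode_map = {"NORMAL": "NORMAL", "INSERT": "INSERT",
                  "UPDATE": "INSERT", "DELETE": "DISCARD"}

    rows = [("UPDATE", 101), ("DELETE", 102), ("NORMAL", 103)]

    output = [(opcode_map[op], rec) for op, rec in rows
              if opcode_map[op] != "DISCARD"]       # discarded rows are dropped

    print(output)   # [('INSERT', 101), ('NORMAL', 103)]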

  • What is Hierarchy Flattening?
  • It constructs a complete hierarchy from parent/child relationships, and then produces a description of the hierarchy in vertically or horizontally flattened format. Inputs include the parent column and child column, plus parent attributes and child attributes. (A sketch of vertical flattening follows below.)
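
A minimal Python sketch of vertical flattening, illustrative only (node names invented): parent/child edges are expanded into every ancestor/descendant pair with its depth.

    # Vertical flattening: expand parent/child edges into every
    # ancestor -> descendant pair, with its depth in the hierarchy.
    edges = [("World", "India"), ("India", "Maharashtra"), ("India", "Karnataka")]

    children = {}
    for parent, child in edges:
        children.setdefault(parent, []).append(child)

    def descendants(node, depth=0):
        """All (descendant, depth) pairs below node."""
        out = []
        for child in children.get(node, []):
            out.append((child, depth + 1))
            out.extend(descendants(child, depth + 1))
        return out

    flattened = [(ancestor, desc, depth)
                 for ancestor in children
                 for desc, depth in descendants(ancestor)]
    for row in flattened:
        print(row)
    # ('World', 'India', 1), ('World', 'Maharashtra', 2),
    # ('World', 'Karnataka', 2), ('India', 'Maharashtra', 1),
    # ('India', 'Karnataka', 1)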

  • What is the use of the Case transform?
  • Use the Case transform to simplify branch logic in data flows by consolidating case or decision-making logic into one transform. The transform allows you to split a data set into smaller sets based on logical branches. (A sketch follows below.)
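
A Python sketch of that routing logic, illustrative only (branch labels and conditions are invented): each row goes to the first branch whose condition it satisfies, otherwise to a default branch.

    # Case transform conceptually: route each row to the first branch
    # whose condition it satisfies; unmatched rows go to the default branch.
    branches = [
        ("north", lambda r: r["region"] == "N"),
        ("south", lambda r: r["region"] == "S"),
    ]

    rows = [{"id": 1, "region": "N"},
            {"id": 2, "region": "S"},
            {"id": 3, "region": "E"}]

    outputs = {label: [] for label, _ in branches}
    outputs["default"] = []

    for row in rows:
        for label, cond in branches:
            if cond(row):
                outputs[label].append(row)
                break
        else:
            outputs["default"].append(row)

    print({k: [r["id"] for r in v] for k, v in outputs.items()})
    # {'north': [1], 'south': [2], 'default': [3]}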

  • What must you define in order to audit a data flow?
  • You must define audit points and audit rules when you want to audit a data flow.

  • List some factors for performance tuning in Data Services.
  • The following are ways you can adjust Data Integrator performance:
    Source-based performance options:
    – Using array fetch size
    – Caching data
    – Join ordering
    – Minimizing extracted data
    Target-based performance options:
    – Loading method and rows per commit
    – Staging tables to speed up auto-correct loads
    Job design performance options:
    – Improving throughput
    – Maximizing the number of pushed-down operations
    – Minimizing data type conversion
    – Minimizing locale conversion
    – Improving Informix repository performance