SAP HANA Interview Questions

Here is a list of the most important SAP HANA interview questions and answers, prepared by experts to cover fresher-level through the most experienced-level technical interviews.


1.What are the two developer roles in HANA SPS05?
The two developer roles are Modeler and Application Programmer.

Modeler: The modeler is concerned with the definition of models and schemas used in SAP HANA; the specification and definition of tables, views, primary keys, indexes, partitions, and the inter-relationships of the data; and designing and defining authorization and access control through the specification of privileges, roles, and users. The modeler generally uses the “Administration Console” and “Modeler” perspectives.

Application Programmer: The programmer is concerned with building SAP HANA applications, which are designed based on the MVC (model-view-controller) architecture, and generally uses the “SAP HANA Development” perspective.

2.Explain HANA database Architecture (SP05)?
Clients connect to the database system and open a session; within a session, clients communicate with the database in the form of SQL statements. In the HANA database, each SQL statement is processed in the context of a transaction, and new sessions are assigned to a new transaction.

Traditional database applications use the JDBC and ODBC interfaces to communicate with the database management system over a network connection, and they use SQL to manage and query the data stored in the database. In the HANA database, the index server is the main component of database management; it contains the actual data stores and the engines for processing the data. The index server processes incoming SQL or MDX statements in the context of transactions.

The Transaction manager coordinates database transactions, and keeps track of running and closed transactions. When a transaction is committed or rolled back, the transaction manager informs the involved storage engines about this event so they can execute necessary actions.

The HANA database has its own scripting language called SQLScript, which is designed to enable optimization and parallelization. HANA also provides the Business Function Library (BFL) and the Predictive Analysis Library (PAL), both of which can be called directly from within SQLScript, and it supports the development of programs written in the R language.
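
As a simple illustration, a read-only SQLScript procedure might look like the following sketch (schema and table names are hypothetical; depending on the HANA revision, the output table type may need to be declared separately with CREATE TYPE):

CREATE PROCEDURE get_product_totals (OUT result TABLE ("PRODUCT" NVARCHAR(20), "TOTAL" DECIMAL(15,2)))
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
-- read-only aggregation that the engine can optimize and parallelize
result = SELECT "PRODUCT", SUM("AMOUNT") AS "TOTAL"
FROM "MYSCHEMA"."SALES"
GROUP BY "PRODUCT";
END;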

SQL and SQLScript are implemented using a common infrastructure of built-in functions that have access to various metadata, such as the definitions of relational tables, columns, views, and indexes, and the definitions of SQLScript procedures. This metadata is stored in one common catalog (row store or column store).

The persistence layer ensures that the database is restored to the most recent committed state after a restart. It uses a combination of write-ahead logs, shadow paging, and savepoints. The persistence layer also contains the logger, which manages the transaction log.
The index server uses the preprocessor server for analyzing text data and extracting the information on which text search capabilities are based. The name server knows where the components are running and which data is located on which server. The statistics server collects information about status, performance, and resource consumption from the other servers in the system.

3.What is SAP HANA XS (Extended Application Services)?
SAP HANA XS provides end-to-end support for web-based applications.

4.What are Development objects?
The building blocks of SAP HANA applications are called development objects.

5.What is Repository?
The HANA repository is the storage system for development objects and is built into SAP HANA.
The repository supports version control, transport, and sharing of objects among multiple developers. We can add objects to the repository, update the objects, publish the objects, and compile these objects into runtime objects.

6.What are the different perspectives available in HANA?
Modeler: used for creating various types of views and analytical privileges.
SAP HANA Development: Used for programming applications, i.e. for creating development objects that access or update data models, such as server-side JavaScript or HTML files.
Administration: Used to monitor the system and change settings.
Debug: Used to debug code such as SQLScript (.procedure files) or server-side JavaScript (.xsjs files).

To open a perspective, go to Window → Open Perspective.

7.Before starting development work in SAP HANA studio, what are the roles a user should have on the SAP HANA server?
MODELING and CONTENT_ADMIN.

8.What is a Delivery Unit?
A delivery unit (DU) is a container used by the Life Cycle Manager (LCM) to transport repository objects between SAP HANA systems. The name of a DU must contain only capital letters (A-Z), digits (0-9), and underscores (_).

9.What is a workspace?
The place where you work on project-related objects is called a repository workspace.

10.What is a package and its types?
A package is used to group together related content objects in SAP HANA studio. By default, a package is created as Non-Structural. The two types are:
Structural: The package only contains sub-packages; it cannot contain repository objects.
Non-Structural: The package contains both repository objects and sub-packages.

11.What are the default packages delivered with the repository?
sap
System-local
System-local.generated
System-local.private

12.What can be the maximum length of a package name?
190 characters including dots. Example: RajKumar.pkg123

13.What are package privileges?
REPO.READ: Read access to the selected package and design-time objects (both native and imported).
REPO.EDIT_NATIVE_OBJECTS: Authorization to modify design-time objects in packages originating in the system the user is working in.
REPO.ACTIVATE_NATIVE_OBJECTS: Authorization to activate/reactivate design-time objects in packages originating in the system the user is working in.
REPO.MAINTAIN_NATIVE_PACKAGES: Authorization to update or delete native packages or create sub-packages of packages originating in the system in which the user is working.

14.How each object is uniquely identified in the repository?
Each object is uniquely identified by the combination of package name, object name and object type.

15.Can multiple objects of the same type have the same object name?
Yes, only when they belong to different packages.

16.What are the different tasks you can perform in setting up the basis persistence model for SAP HANA XS?
Creating Schema, Creating Table, Creating View, Creating Sequence and Importing table content.
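
For illustration, the equivalent catalog objects can be created directly with SQL, as in the following sketch (all names are hypothetical; in XS these tasks are normally performed through design-time artifacts such as .hdbschema, .hdbtable, .hdbview, and .hdbsequence files):

-- schema, table, view, and sequence for a hypothetical shop application
CREATE SCHEMA "MYSHOP";
CREATE COLUMN TABLE "MYSHOP"."PRODUCT" ("ID" INTEGER PRIMARY KEY, "NAME" NVARCHAR(40));
CREATE VIEW "MYSHOP"."V_PRODUCT" AS SELECT "ID", "NAME" FROM "MYSHOP"."PRODUCT";
CREATE SEQUENCE "MYSHOP"."PRODUCT_ID_SEQ" START WITH 1;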

17.What are the different tasks you can perform in modeler perspective?
Import metadata, Load data, Create packages, Create information views, Create Procedures, Create Analytical privileges, Import SAP NetWeaver BW objects, Create Decision Tables, Import and Export objects.

18.What are the supported object types in modeler perspective?
Attribute views, Analytical views, Calculation views, Analytical privileges, Procedures, Decision tables, Process Visibility Scenario.

19.What are the different modeler preferences and how do you set them?
You can set the modeler preferences by choosing the menu Window → Preferences → Modeler (or) Quick Launch → Manage Preferences.

20.Why do we configure the Import Server?
In order to load data from external sources into SAP HANA, we need to establish a connection with the server. To connect, we need to provide the details of the Business Objects Data Services repository and the ODBC drivers. Once the connection is established, we can import the table definitions and then load data into them.

Quick Launch → Configure Import Server
Enter the IP address of the server from which you want to import data
Enter the repository name
Enter the ODBC data source, and choose OK.

21.How to Import table definitions?
If you want to import all table definitions, Go to
File menu → Choose Import
Expand the SAP HANA Content node
Choose “Mass Import of Metadata” and choose next
Select the target system where you want to import all the table definitions, and choose next
In the Connection Details dialog, enter the user name and password of the target system
Select the required source system and choose Finish.
Note: If you want to import selective table definitions, use “Selective Import of Metadata”.

22.How to load data into tables?
Quick Launch → Data Provisioning
Choose Source
Choose Load (for Initial load) or Replicate (for data replication)
Select the required tables to load or replicate
Click Finish.

23.How to upload data from Flat files?
File menu → Import
In ‘Select an Import Source’ section, expand the ‘SAP HANA Content’ node
Select ‘Data from Local file’ and choose Next
Select the Target system to which you want to import the data using Flat file, choose Next
In ‘Define Properties Import Page’ browse the file containing the data
Select ‘New’ option (If you want to load the data into a new table) or
Select the ‘Existing’ option (If you want to append the data to an existing table)
Click Finish.

24.How to copy standard content delivered by SAP?
Quick Launch → Mass Copy
Create a mapping between source package and target package
Choose Next to view the summary
Click Finish to confirm content copy.

25.What is Schema mapping? How do you do Schema mapping?
Schema mapping is done when the physical schema in the target system is not the same as the physical schema in the source system.

Quick Launch → Schema Mapping
Choose Add
Create a mapping in the Target system between the Authoring schema and Physical schema
Click OK.

Note: Schema mapping only applies to references from repository objects to catalog objects. It is not intended to be used for repository-to-repository references.

26.In which configuration table is the mapping between the authoring and physical schema stored?
_SYS_BI.M_SCHEMA_MAPPING

27.What’s the purpose of Generating Time Data?
If you model a time attribute view without generating time data, an empty view will be shown when you use data preview. To generate time data, go to
Quick Launch → Generate Time Data
If your financial year is from January to December, choose ‘Calendar Type’ as Gregorian; otherwise choose Fiscal
Click Generate.

28.In which configuration tables will the generated time data be stored?
For the Gregorian calendar type (in schema _SYS_BI):
M_TIME_DIMENSION_YEAR
M_TIME_DIMENSION_MONTH
M_TIME_DIMENSION_WEEK
M_TIME_DIMENSION
For the Fiscal calendar type:
M_FISCAL_CALENDAR

29.What is an Attribute?
Attribute represents the descriptive data used in modeling. Example: City, Country, etc.

30.What is a Simple Attribute?
Simple attributes are individual analytical elements that are derived from the data foundation. For example, Product_ID and Product_Name are attributes of a Product subject area.

31.What is a Calculated Attribute?
Calculated attributes are derived from one or more existing attributes or constants. Examples include deriving the full name of a customer (first name and last name), or assigning a constant value to an attribute so that it can be used in arithmetic calculations.
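
For instance, assuming hypothetical columns FIRST_NAME and LAST_NAME, a calculated attribute FULL_NAME could be defined with the expression:

"FIRST_NAME" + ' ' + "LAST_NAME"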

32.What is a Private Attribute?
Private attributes used in an analytical view allow you to customize the behavior of an attribute for only that view. For example if you create an analytical view and you want a particular attribute to behave differently than it does in the attribute view to which it belongs, you can define it as a private attribute.

33.What is a Measure?
Measures are simple measurable analytical elements and are derived from Analytic and Calculation views.

34.What is a Simple Measure?
Simple Measure is a measurable analytical element that is derived from the data foundation.

35.What is a Calculated Measure?
Calculated Measures are defined based on a combination of data from OLAP cubes, arithmetic operators, constants, and functions.

36.What is a Restricted Measure?
Restricted measures are used to filter the value of an output field based on user-defined rules. For example, you can restrict the revenue column to only Region = APJ, Year = 2013.

37.What are Counters?
Counters add a new measure to the calculation view definition to count the recurrence of an attribute. For example, to count how many times a product appears.

38.What is an Attribute View?
Attribute views are used to model an entity based on the relationships between attribute data contained in multiple source tables. You can model Columns, Calculated columns, and Hierarchies.

Also you can fine-tune the attributes of an Attribute view:
Can apply filter to restrict values
Can be defined as Hidden so that they can be processed but not visible to end users
Can be defined as key attributes and used when joining multiple tables
Can be enabled for further drill-down via the ‘Drill Down Enable’ property.

39.What are the tables to be imported for creating attribute view of type Time?
T009 and T009B.

40.What is Label Mapping?
We can choose to associate an attribute with another attribute's description. Label mapping is also called description mapping. For example, if A1 has a label column B1, then you can rename B1 as A1.description. The related columns appear side by side during data preview.

41.What happens when a column's data type is modified in one of the tables used in an attribute view?
The view still reflects the previous state of the column, even if you remove the column and add it again, because the editor refers to its cache. To resolve this issue, close the editor and reopen it.

42.What happens when you open an attribute view with a missing column in the required object?
An error will be shown: “column is not found in table schemaname.tablename”, and the editor does not open. To make it consistent:

Open the required object and add the missing column/attribute/measure temporarily
Now open the object which was previously giving error
Find all references to this column, Save the object
Now go ahead and delete the column from the required object.

43.What is an Analytic view?
Analytic views are used to model data that includes measures. In case of multiple tables, measures must originate from only one of these tables (central table). You can model Columns, Calculated columns, Restricted columns, Variables, and Input parameters.

Also you can fine-tune the attributes of an Analytic view:
Can apply filter to restrict values
Can be defined as Hidden so that they can be processed but not visible to end users
Can be defined as key attributes and used when joining multiple tables
Can be enabled for further drill-down via the ‘Drill Down Enable’ property
You can model Aggregation type on measures
You can model Currency and Unit of Measure.

44.Can we include Attribute views in Analytic view definition?
Yes

45.What does the Scenario panel of the Analytic view editor contain?
Data Foundation: represents the tables used for defining the fact table of the view. You can specify the central table by selecting a value in the ‘Central Entity’ property.
Logical Join: represents the relation between the fact table and attribute views to create a star schema.
Semantics: represents the output structure of the view.

46.What does the aggregation type ‘Calculate Before Aggregation’ mean?
If you select ‘Calculate Before Aggregation’, the calculation happens as per the specified expression and then the results are aggregated as SUM, MAX, MIN, or COUNT. If it is NOT selected, the calculation happens as per the specified expression, but the data is not aggregated; it is shown as FORMULA.
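
As a small worked example: for two rows (PRICE=10, QTY=2) and (PRICE=20, QTY=3), a calculated measure PRICE * QTY with ‘Calculate Before Aggregation’ yields 10*2 + 20*3 = 80, whereas calculating on the already-aggregated values would yield SUM(PRICE) * SUM(QTY) = 30 * 5 = 150.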

47.How to activate the other objects (required or impacted objects) along with the current object?
By using ‘Save and Activate All’ option in the toolbar.

48.Can you add column views to Analytic view and Calculation view?
We can add column views in a Calculation view but not in the Analytic view.

49.Consider a table that contains product IDs with no product description, and a text table for products that has a language-specific description for each product. How can you get the language-specific data?
Create a text join between these two tables. The right table should be the text table, and it is mandatory to specify the “Language Column” in the ‘Properties’ view.

50.What are the restrictions while creating the join between the views and fact table?
A table should not appear twice in any join path, i.e. self-joins are not supported.
While creating a join between an Analytic view and an Attribute view, the same table cannot be used in both views.

51.What is Calculation view?
A calculation view is a more advanced slice of the data: it can include measures from multiple source tables and can include advanced SQL logic. The data foundation of the calculation view can include any combination of tables, column views, attribute views, and analytic views. We can create joins, unions, projections, and aggregation levels on the sources. You can model Attributes, Measures, Calculated measures, Counters, Hierarchies (created outside of the attribute view), Variables, and Input parameters.

52.Calculation views are modeled based on what?
Graphical views or scripted views, but not free-form SQLScript. However, there are exceptions to this rule; SQLScript with the following properties can be used in a calculation view (see the sketch below):
No input parameters
Always read-only (does not make changes to the database)
Side-effect free.
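
A minimal read-only script body for such a view might look like the following sketch (schema and table names are hypothetical; var_out is the output node of the scripted calculation view):

-- read-only, side-effect-free aggregation over a hypothetical sales table
var_out = SELECT "REGION", SUM("REVENUE") AS "REVENUE"
FROM "MYSCHEMA"."SALES"
GROUP BY "REGION";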

53.What are the options available in ‘Run With’ while creating a calculation view?
Definer's Right, Invoker's Right.
Definer's right: The system uses the rights of the definer while executing the view or procedure for any user.
Invoker's right: The system uses the rights of the current user while executing the view or procedure.

54.While creating a Graphical Calculation view, what are the options available in the Tools palette?
Union, Join, Projection, and Aggregation.
Note: You can have only one source of input for Projection and Aggregation views.
You can create filters on Projection and Aggregation view attributes.

55.How to create Counters in Graphical Calculation view?
For example to get the number of distinct values of an attribute:
Go to the Output pane and right-click Counters
From the context menu, choose New
Choose the attribute
Click OK.

56.Is it mandatory to include measures for Calculation view?
No. A calculation view containing no measures works like an attribute view and is not available for reporting purposes.

57.How do you debug a Calculation view with a lot of complexity at each level?
By previewing the data of an intermediate node.

58.What is Mapping input parameter in Calculation view?
It is used for mapping the input parameters in the underlying data sources of the calculation view with the calculation view parameters.

59.In a calculation view, what is the option ‘Auto Map by Name’ used for?
It automatically creates the input parameters corresponding to the source and performs a 1:1 mapping.

60.What are the options available in Source input parameter?
Create New Map 1:1
Map by Name
Remove Mapping

61.Consider there are two tables (actual sales and planned sales) with similar structures. I want to see the combined data in a single view, but at the same time, how can I differentiate the data between these two tables?

Create a union view (graphical) between the two tables and add a ‘Constant column’ with constant values such as ‘A’ for actual sales and ‘P’ for planned sales. The default value for the constant column is NULL.
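
In plain SQL, the result of such a union view corresponds to the following sketch (table names are hypothetical):

SELECT "PRODUCT", "AMOUNT", 'A' AS "SOURCE" FROM "MYSCHEMA"."ACTUAL_SALES"
UNION ALL
SELECT "PRODUCT", "AMOUNT", 'P' AS "SOURCE" FROM "MYSCHEMA"."PLANNED_SALES";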

62.What is a Constant column and how to create it?
In a Union view, a Constant column is created for the output attributes for which there is no mapping to the source attributes. To create a Constant column:
Right click the attribute in the target list
Choose Manage Mappings
To map the source to the target column, select the required source from the dropdown list
To assign a default value to the constant column, enter a value in the Constant value field
Select the required data type, length and scale as required
Click OK.

63.What is the difference between HANA Variable and Input parameter?
HANA variables do not impact the execution; they are used to filter attributes (for example, we can filter a result to a specific country or product) and are applied in the WHERE clause of the SQL query.
HANA input parameters are used to manipulate the execution of the information model (for example, currency codes or dates when exchange rates have to be calculated) and are passed as a PLACEHOLDER in the FROM clause of the SQL query.
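
For example, a query against an analytic view (package, view, parameter, and column names here are hypothetical) passes an input parameter as a PLACEHOLDER and applies a variable as a plain WHERE filter:

SELECT "REGION", SUM("REVENUE")
FROM "_SYS_BIC"."mypackage/AN_SALES"
('PLACEHOLDER' = ('$$P_TARGET_CURRENCY$$', 'USD'))
WHERE "COUNTRY" = 'IN'
GROUP BY "REGION";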

64.In which configuration tables can you find the variables information?
You can find them in the _SYS_BI schema:
BIMC_VARIABLE
BIMC_VARIABLE_ASSIGNMENT
BIMC_VARIABLE_VIEW
BIMC_VARIABLE_VALUE

65.What are the different types of Input parameters supported?
Attribute value/Column
Currency (Available in Calculation view only)
Date (Available in Calculation view only)
Static list
Derived from Table (Available in Analytic and Graphical Calculation view)
Empty
Direct Type (Available in Analytic view)

66.How can you check whether an input parameter is mandatory or not?
From the properties of Input parameter in the Properties pane.

67.What is Hierarchy?
We create hierarchies between attributes to improve analysis by displaying attributes according to their defined relationships. There are two types of hierarchies:

Level Hierarchy: The root and child nodes are accessed only in the defined order. It consists of one or more levels of aggregation.
Example: We can drill down from Country to State and to City etc.

Parent/Child Hierarchy: This hierarchy is constructed from a single parent attribute.
Example: Employee master (employee and manager).

68.How to create a hierarchy for an Analytic view?
Hierarchies are not supported in Analytic views; they can be used only in Attribute views and Calculation views.

69.While creating hierarchy, what does the option ‘Aggregate All Nodes’ mean?
For example, there is a member A with a posted value of 100, and children A1 with value 10 and A2 with value 20. By default the option ‘Aggregate All Nodes’ is set to false, and you will see a value of 30 for A. When this option is set to true, the posted value of 100 for A is counted as well, and you will see a result of 130.

70.How can you generate a Sales report for a region in a particular currency where you have the sales data in a database table in a different currency?
Create an Analytic view by selecting the table columns containing the sales data and the currency, and perform currency conversion. Once the view is activated, we can use it to generate reports.

71.What are the factors that affect currency conversion?
Currency conversion is performed based on source currency, target currency, exchange rate, and date of conversion. You can select currency from the attribute data used in the view. Currency conversion is enabled for Analytic view and Calculation views.

72.What is the prerequisite for doing the currency conversion?
You need to import the tables TCURC, TCURF, TCURN, TCURR, TCURT, TCURV, TCURW, and TCURX.

73.What is the prerequisite for Unit of Measure?
You need to import the tables T006 and T006A.

74.What happens when you activate an object?
The object is exposed to the repository and made available for analysis.

75.What is the difference between Activate and Redeploy?
Activate: deploys the inactive objects.
Redeploy: deploys the active objects. You do this when the run-time object is corrupted or deleted and you want to create it again, or when the object passes client-level and server-level activation but fails at MDX while the object status is still active.

76.What are the supported activation modes?
Activate and Ignore the inconsistencies in impacted objects
Stop activation in case of inconsistencies in impacted objects.

Irrespective of the activation mode, if even one of the selected objects fails (either during validation or during activation), the complete activation job fails and none of the selected objects are activated.

77.Can you explain the behavior of the activation job?
The status of the activation job indicates whether the activation of the objects succeeded or failed.
In case of failure (the status is ‘completed with errors’), the process is rolled back and none of the objects are activated.
In the summary part, the job log can show success for individual objects even in the case of overall failure; this helps the user identify the objects that were themselves free of issues.
When you open the job log, the summary list only shows the objects that were submitted for activation. It does not list all the affected objects; they are listed in the detail section.

78.What is a Decision table?
A decision table models related business rules in a tabular format for automating decisions. It helps in managing business rules, data validation, and data quality rules without requiring any programming-language knowledge. The active version of a decision table can be used in applications.

You create a decision table in a package just like any attribute view. You can create it from scratch or from an existing decision table.

79.Where to see the detailed report of the decision table?
In the ‘Job Log’ section you can see the validation status and detailed report of the decision table.

80.How to execute the decision table?
The decision table is executed by calling the procedure.

CALL “<schema name>”.”<procedure name>”;

CALL “<schema name>”.”<procedure name>”(<IN parameter>, …… , <IN parameter>, ?);
The second form is used when conditions and actions are exposed as parameters.

On execution of the procedure, if no parameters are used, the physical table is updated based on the data you entered in the form of condition values and action values.
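
For example, for a hypothetical decision table DT_DISCOUNT with two condition parameters and one action parameter, the call might look like:

CALL "MYSCHEMA"."DT_DISCOUNT"(1000, 'GOLD', ?);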

81.Are there any restrictions on previewing the data of a decision table?
Data preview is supported only if:
The decision table is based on a physical table and has at least one parameter as an action
The decision table is based on an information view and has parameter(s) as actions.

82.How can you change the layout of a decision table?
You can change the layout by arranging the condition and action columns. By default, all conditions appear as vertical columns in the decision table; you can mark a condition as a horizontal condition by choosing ‘Change Layout’ in the decision table editor.

83.Can you switch ownership of objects?
We can take ownership of objects from another user's workspace only for the inactive version of an object; the authorization required is “Work in Foreign Workspace”. The active version is owned by the user who created and activated the object.

84.What is the difference between Switch Ownership and Take Over?
Switch Ownership: To take multiple inactive objects from other users' workspaces.
Take Over: To take a single inactive object from another workspace.

85.You are working on an inactive version of an object. How can you view the changes made to the active version?
Select the required object in a package you are working
From the context menu, choose ‘open’
In the editor pane, choose ‘Show Active Version’
Compare the active and inactive versions of the object.

86.How can you view the version history of content objects?
Select the required object from the package
From the context menu choose ‘History’.

87.What is Refactoring Object?
Restructuring content objects without changing their behavior is called refactoring.

88.What are the objects eligible for Refactoring?
Packages, Attribute views, Analytic views, Graphical Calculation views, and Analytical Privileges.

89.How do you validate models?
Quick launch menu → Validate
From the ‘Available’ list, select the required models that the system must validate.
Choose Add
Click Validate.

90.How do you generate the documentation for the objects you created?
By using ‘Auto Documentation’, which captures the details of an information model or a package in a single document. The process is:
Quick Launch → Auto Documentation
In ‘Select Content Type’ choose ‘Model Details’ OR ‘Model List’
Add the required objects to the Target list
Browse the location where you want to save the file
Click Finish.

91.How to identify whether an information model is referenced by any other information model?
We can check the model references by using ‘Where Used’. The process is:
Go to the package
Select the required object
From the context menu, choose ‘Where Used’.

92.What is the difference among Raw Data, Distinct values and Analysis while doing the Data Preview?
Raw Data: displays all attributes along with data in tabular format.
Distinct Values: displays all attributes along with data in graphical format.
Analysis: displays all attributes and measures in graphical format.

93.What are the different types of functions that can be used in expressions?
Conversion, String, Mathematical, Date, and Misc functions. For example:

if("SCORE" > 7, 'SELECTED', if("SCORE" > 4, 'ONHOLD', 'REJECTED'))
returns REJECTED if the SCORE is <= 4.

case("CODE", 1, 'NEW', 2, 'VENDOR REBUILT', 3, 'SHOP REBUILT', 'INVALID')
if the value of CODE is other than 1/2/3, then the default value 'INVALID' will be selected.

94.How to search Tables, Models, and Column views?
In the Modeler search field, enter the object you want
Select the system in dropdown
Click search.

The matching objects are listed in the results pane with three tab pages: Tables, Models, and Column views.

95.Is it possible to Import SAP Netweaver BW objects?
Yes, it is possible to import SAP BW objects.

96.How to Import BW models?
The process to Import BW models:
File menu → Import
Expand SAP HANA Content node, choose ‘Import SAP NetWeaver BW Models’
In ‘Source System’ enter BW credentials
Select the target system
Select BW InfoProviders
If you want to import the selected models along with display attributes for IMO Cube and IMO DSO, select ‘Include Display Attributes’
We can select the analysis authorizations associated with the InfoProviders or with roles
Click Finish.

97.What is the reason for going In-memory?
One reason is that the number of CPU cycles per second is increasing while the cost of processors is decreasing. For managing data in memory there is the five-minute rule, which is based on the observation that it costs more to wait for data to be fetched from disk than it costs to keep the data in memory, so the decision depends on how often you fetch the data.
For example, if a table, no matter how large, is touched by a query at least once every 55 minutes, it is less expensive (in hardware costs) to keep it in memory than to read it from disk; the more frequently it is accessed, the more economical keeping it in memory becomes.

98.What is a Five-minute rule?
It is a rule of thumb for deciding whether a data item should be kept in memory or stored on disk and read back into memory when required. The rule states that randomly accessed disk pages are worth keeping in the cache if they are re-used at least once every 5 minutes.

99.What is multi-core CPU?
Multiple CPUs on one chip or in one package are called a multi-core CPU.

Traditional databases for online transaction processing (OLTP) do not use this current hardware efficiently.

100.What is Stall?
Waiting for data to be loaded from main memory into the CPU cache is called a stall.

101.What is SAP In-Memory Appliance (SAP HANA)?
HANA is an in-memory technique for storing data that is particularly suited to handling very large amounts of tabular, or relational, data with extraordinary performance. Common databases store tabular data row-wise. Reorganizing the data in memory column-wise brings a tremendous speed increase when accessing a subset of the columns in each table row.

102.What are the components or products of HANA?
SAP HANA contains the following components:

SAP HANA Database
SAP HANA Studio
SAP HANA Client
SAP Host Agent 7.2
SAP HANA Information Composer
Diagnostic Agent 7.3
SAP HANA client package for MS Excel
SAP HANA UI for Information Access (INA)
SAP HANA AFL 1.0
Software Update Manager for SAP HANA
SAP LT Replication Add-On
SAP LT Replication Server
SAP HANA Direct Extractor Connection (DXC)
SAP Data Services 4.0

103.What are the different editions available in the HANA appliance software?
Platform and Enterprise edition.

Platform edition is intended for customers who want to use ETL-based replication and already have a license for SAP BO Data Services.
Enterprise edition is intended for customers who want to use either trigger-based replication or ETL-based replication and do not already have all of the necessary licenses for SAP BO Data Services.

104.What is columnar and row-based data storage?

[Figure: Row-based and column-based storage]

A database table contains data in the form of rows and columns. However, computer memory is organized as a linear structure. To store a table in linear memory, there are two options: row-based storage stores the table as a sequence of records, each of which contains the fields of one row; in columnar storage, the entries of a column are stored in contiguous memory locations.

The SAP HANA database allows you to specify whether a table is to be stored column-wise or row-wise. It is also possible to alter an existing table from columnar to row-based and vice versa.
Search operations on tabular data can be accelerated by organizing the data in columns instead of rows.
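
As a simple illustration, a table with the rows (A, 1), (B, 2), (C, 3) would be laid out in linear memory as follows:

Row store: A, 1, B, 2, C, 3
Column store: A, B, C, 1, 2, 3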

105.What are the advantages of column-based tables?
Calculations are typically executed on single or a few columns only.
The table is searched based on values of a few columns.
The table has a large number of columns.
The table has a large number of rows and columnar operations are required (aggregate, scan, etc.).
High compression rates can be achieved because the majority of the columns contain only a few distinct values (compared to the number of rows).

106.What are the advantages of row-based tables?
The application needs to only process a single record at one time (many selects and/or updates of single records).
The application typically needs to access a complete record (or row).
The columns contain mainly distinct values so that the compression rate would be low.
Neither aggregations nor fast searching are required.
The table has a small number of rows (e.g. configuration tables).

107.In which case should the data be stored in columnar storage?
To enable fast on-the-fly aggregations and ad-hoc reporting, and to benefit from compression mechanisms, it is recommended that transaction data be stored in a column-based table.

108.Is it possible to join row-based tables with column-based tables?
Yes

109.What are the advantages of Columnar tables?

Higher Data Compression Rates
Higher Performance for Column Operations
Elimination of Additional Indexes
Parallelization
Elimination of Materialized Aggregates

110.What are the different compression techniques you know?
Run-length encoding
Cluster encoding
Dictionary encoding
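
As a simplified illustration of run-length and dictionary encoding:

Run-length encoding: A A A A B B B C C is stored as (A,4) (B,3) (C,2)
Dictionary encoding: Red, Green, Red, Blue is stored with the dictionary {0=Red, 1=Green, 2=Blue} as 0, 1, 0, 2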

111.Why are materialized aggregates not required?
With a scanning speed of several gigabytes per millisecond, in-memory column stores make it possible to calculate aggregates on large amounts of data on the fly with high performance. This is expected to eliminate the need for materialized aggregates in many cases.

112.What are the advantages of Eliminating materialized aggregates?
Having no additional tables for storing aggregate results means:
A simplified data model
Simplified application logic
A higher level of concurrency
With on-the-fly aggregation, the aggregated values are always up to date.

113.What is parallelization?
Column-based storage makes it easy to execute operations in parallel using multiple processor cores. In a column store, data is already vertically partitioned, which means that operations on different columns can easily be processed in parallel. If multiple columns need to be searched or aggregated, each of these operations can be assigned to a different processor core. In addition, operations on one column can be parallelized by partitioning the column into multiple sections that are processed by different processor cores.

SAP Landscape Transformation

114.What are the different types of replication techniques?
1.ETL-based replication using BODS
2.Trigger-based replication using SLT
3.Extractor-based data acquisition using DXC

115.What is SLT?
SLT stands for SAP Landscape Transformation, which is trigger-based replication. The SLT replication server is the replication technology used to pass data from a source system to a target system. The source can be either SAP or non-SAP; the target system is an SAP HANA system containing the HANA database.

116.Is it possible to load and replicate data from one source system to multiple target database schemas of a HANA system?
Yes, it is possible for up to 4 target schemas.

117.Is it possible to specify the type of data load and replication?
Yes: either in real time, scheduled by time, or scheduled by interval.

118.What is Configuration in SLT?
The information needed to create the connection between the source system, the SLT system, and the SAP HANA system is specified within the SLT system as a Configuration. You can define a new configuration in the Configuration & Monitoring Dashboard (transaction LTR).

119.Is there any pre-requisite before creating the configuration and replication?
For SAP source systems, the DMIS add-on is installed in the SLT replication server. The user for the RFC connection must have the role IUUC_REPL_REMOTE assigned, but not DDIC.
For non-SAP source systems, the DMIS add-on is not required; grant a database user sufficient authorization for data replication.

120.What is Configuration and Monitoring Dashboard?
It is an application that runs on the SLT replication server to specify configuration information (such as the source system, the target system, and the relevant connections) so that data can be replicated. You can also use it to monitor the replication status (transaction LTR).
Status Yellow: may occur due to triggers that have not yet been created successfully.
Status Red: may occur if the master job is aborted (manually, in transaction SM37).

121.What are advanced replication settings?
A transaction that runs on the SLT replication server to specify advanced replication settings, such as:
Modifying target table structures
Specifying performance optimization settings
Defining transformation rules

122.What is Latency?
It is the length of time taken to replicate data (a table entry) from the source system to the target system.

123.What is a logging table?
A table in the source system that records any changes to a table that is being replicated. This ensures that the SLT replication server can replicate these changes to the target system.

124.What are Transformation rules?
A rule specified in the Advanced Replication Settings transaction for source tables such that the data is transformed during the replication process. For example, you can specify a rule to:
Convert fields
Fill empty fields
Skip records

125.What happens when you set-up a new configuration?
The database connection is automatically created, along with a GUID and a mass transfer ID (MT_ID).

A schema GUID ensures that configurations with the same schema name can be created.
The mass transfer ID is used in the naming of SLT jobs, and the system can use it to uniquely identify a schema.

126.What factors influence the change/increase the number of jobs?
Number of configurations managed by the SLT replication server
Number of tables to be loaded/replicated for each configuration
Expected speed of initial load
Expected replication latency time. As a rule of thumb, one BGD job should be used for every 10 tables in replication to achieve acceptable latency times.

127.When to change the number of Data Transfer jobs?
If the speed of the initial load/replication latency time is not satisfactory
If the SLT replication server has more resources than initially available, we can increase the number of data transfer and/or initial load jobs
After the completion of the initial load, we may want to reduce the number of initial load jobs

128.What are the jobs involved in replication process?
1. Master Job (IUUC_MONITOR_<MT_ID>)
2. Master Controlling Job (IUUC_REPLIC_CNTR_<MT_ID>)
3. Data Load Job (DTL_MT_DATA_LOAD_<MT_ID>_<2digits>)
4. Migration Object Definition Job (IUUC_DEF_MIG_OBJ_<2digits>)
5. Access Plan Calculation Job (ACC_PLAN_CALC_<MT_ID>_<2digits>)

129.What is the relation between the number of data transfer jobs in the configuration settings and the available BGD work processes?
Each job occupies 1 BGD work process in the SLT replication server. For each configuration, the parameter Data Transfer Jobs restricts the maximum number of data load jobs for each mass transfer ID (MT_ID).

A mass transfer ID requires at least 4 background jobs to be available:
One master job
One master controller job
At least one data load job
One additional job either for migration/access plan calculation/to change configuration settings in “Configuration and Monitoring Dashboard”.

130.If you set the parameter “data transfer jobs” to 04 in a configuration “SCHEMA1”, a mass transfer ID 001 is assigned. Then what jobs should be in the system?
1 Master job (IUUC_MONITOR_SCHEMA1)
1 Master Controller job (IUUC_REPL_CNTR_001_0001)
At most 4 parallel jobs for MT_ID 001 (DTL_MT_DATA_LOAD_001_01/~02/~03/~04)

Performance: If many tables are selected for load/replication at the same time, there may not be enough background jobs available to start the load procedure for all tables immediately. In this case you can increase the number of initial load jobs; otherwise, the tables will be handled sequentially.

For tables with a large volume of data, you can use the “Advanced Replication Settings” transaction (IUUC_REPL_CONT) to further optimize the load and replication procedure for dedicated tables.