
LSMW (Legacy System Migration Workbench)

Frequently Asked Questions

What is the LSM Workbench? The LSMW (Legacy System Migration Workbench) is a tool based on SAP software that supports one-time or periodic data transfer from non-SAP systems to SAP systems (and, with restrictions, from SAP to SAP systems).

Its core functions are:
- Importing legacy data from PC spreadsheet tables or sequential files
- Converting data from its original (legacy-system) format to the target (SAP) format
- Importing the data using the standard interfaces of SAP (IDoc inbound processing, Batch Input, Direct Input)

Which data can be migrated using the LSMW? By means of standard transfer programs: a wide range of master data (e.g.

Is the imported data consistent? Yes.
Are conversions carried out identically across the applications? Yes.
Is extensive knowledge of ABAP necessary to use the LSMW? No.
Do I have to migrate table by table? No.
Can I transfer data that is on my PC? Yes.
Is the LSMW part of the standard SAP system? Yes.

DataStage Transformer Stage Looping Concept

You can use the Transformer stage to add aggregated information to output rows.

Aggregation operations make use of a cache that stores input rows. You can monitor the number of entries in the cache by setting a threshold level in the Loop Variable tab of the Stage Properties window. If the threshold is reached while the job runs, a warning is written to the log and the job continues to run.

Input row group aggregation included with input row data: you can save input rows to a cache area so that you can process this data in a loop. For example, suppose your input data has a column holding a price value, and you want to add a column to the output rows showing each price as a percentage of the total for its group.

In the example, the data is sorted and grouped on the value in Col1. Once a whole group has been cached, the percentage for each row in the group where Col1 = 1000, and likewise for the group where Col1 = 2000, is calculated from that row's price and the group total. A stage variable, NumSavedRows, tracks how many rows have been saved to the cache.

SQL Interview Questions

SQL Interview Questions (a few questions are repeated, with small differences in their answers). What Is SQL?
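The original article's expressions and output table were lost, but the save-and-loop pattern it describes can be sketched outside DataStage. The following Python sketch (with made-up column names and prices) mirrors the logic: cache every row of a group, compute the group total, then loop over the cached rows to emit each price as a percentage of that total.

```python
from itertools import groupby
from operator import itemgetter

def add_group_percentage(rows):
    """For each (Col1, Price) row, append Price as a percentage of its
    Col1 group's total -- mirroring the Transformer save/loop pattern."""
    out = []
    # As in the example, the input must already be sorted on the grouping column.
    for _, group in groupby(rows, key=itemgetter(0)):
        cached = list(group)            # cache the group's rows (SaveInputRecord)
        total = sum(price for _, price in cached)
        for col1, price in cached:      # loop over the cached rows
            out.append((col1, price, round(100.0 * price / total, 2)))
    return out

rows = [(1000, 10.0), (1000, 30.0), (2000, 25.0), (2000, 75.0)]
print(add_group_percentage(rows))
# [(1000, 10.0, 25.0), (1000, 30.0, 75.0), (2000, 25.0, 25.0), (2000, 75.0, 75.0)]
```

Note that, as in the Transformer stage, nothing can be emitted for a group until its last row has been read, which is why the rows must be cached.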

SQL (pronounced as the letters S-Q-L, or as "sequel") is an abbreviation for Structured Query Language. SQL is a language designed specifically for communicating with databases. SQL is designed to do one thing and do it well: provide you with a simple and efficient way to read and write data from a database.

Which command displays the SQL command in the SQL buffer, and then executes it? In SQL*Plus, the RUN command (abbreviated R) does both; the slash (/) executes the buffer without displaying it.

What is the difference between TRUNCATE and DROP?
Explain SQL HAVING.
What is the difference between SQL HAVING and WHERE?
What is the difference between SQL IN and SQL EXISTS?
What is the difference between SQL NOT IN and SQL NOT EXISTS?
What is the difference between SQL UNION and SQL UNION ALL?
Explain SQL TOP.
How do you delete duplicate records in a table?

How do you find duplicate records, with the number of times they are duplicated?

SELECT id, COUNT(*) AS num_records
FROM table
GROUP BY id
HAVING COUNT(*) > 1;

What is a PRIMARY KEY?

DEV'S DATASTAGE TUTORIAL, GUIDES, TRAINING AND ONLINE HELP 4 U. UNIX, ETL, DATABASE RELATED SOLUTIONS: Frequently Used Unix Commands

50 Most Frequently Used UNIX / Linux Commands (With Examples)

This article provides practical examples for the 50 most frequently used commands in Linux / UNIX. This is not a comprehensive list by any means, but it should give you a jumpstart on some of the common Linux commands. Bookmark this article for future reference. Did I miss any frequently used Linux commands? Leave a comment and let me know.

1. tar command examples

Create a new tar archive:
$ tar cvf archive_name.tar dirname/

Extract from an existing tar archive:
$ tar xvf archive_name.tar

View the contents of an existing tar archive:
$ tar tvf archive_name.tar

More tar examples: The Ultimate Tar Command Tutorial with 10 Practical Examples

2. grep command examples

Search for a given string in a file (case-insensitive search):
$ grep -i "the" demo_file

Print the matched line, along with the 3 lines after it:
$ grep -A 3 -i "example" demo_text

Search for a given string in all files recursively:
$ grep -r "ramesh" *

More grep examples: Get a Grip on the Grep!
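The duplicate-finding query above, and the companion interview question of deleting duplicates while keeping one copy, can be exercised end to end with an in-memory SQLite database. The table name and sample data below are made up for illustration; the delete relies on SQLite's implicit rowid, so other databases would use their own row identifier (e.g. Oracle's ROWID).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Hypothetical table and data, just to exercise the interview queries.
cur.execute("CREATE TABLE t (id INTEGER, name TEXT)")
cur.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, "a"), (2, "b"), (2, "b"), (3, "c"), (3, "c"), (3, "c")])

# Find duplicate ids with their counts (the GROUP BY / HAVING query above).
dups = cur.execute(
    "SELECT id, COUNT(*) AS num_records FROM t "
    "GROUP BY id HAVING COUNT(*) > 1").fetchall()
print(dups)  # [(2, 2), (3, 3)]

# Delete duplicates, keeping one row per id (uses SQLite's implicit rowid).
cur.execute(
    "DELETE FROM t WHERE rowid NOT IN "
    "(SELECT MIN(rowid) FROM t GROUP BY id)")
remaining = cur.execute("SELECT id, name FROM t ORDER BY id").fetchall()
print(remaining)  # [(1, 'a'), (2, 'b'), (3, 'c')]
```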

3. find command examples

Find files by name (case-insensitive):
# find -iname "MyCProgram.c"

14 Good design tips in Datastage

1) When you need to run the same sequence of jobs again and again, create a sequencer containing all the jobs you need to run. Running this sequencer runs all the jobs, and you can set the sequence as per your requirement.
2) If you use a Copy or a Filter stage immediately before or after a Transformer stage, you are reducing efficiency by using extra stages, because a Transformer does the job of both a Copy stage and a Filter stage.
3) Use Sort stages instead of Remove Duplicates stages. The Sort stage has more grouping options and sort-indicator options.
4) Turn off Runtime Column Propagation wherever it is not required.
5) Make use of Modify, Filter, and Aggregation, Col.
6) Avoid propagating unnecessary metadata between the stages.
7) Add reject files wherever you need to reprocess rejected records or where considerable data loss may happen.
8) Use an ORDER BY clause when a database stage is used in a join.
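Tip 3's sort-based de-duplication can be sketched outside DataStage. The Python sketch below (with made-up data and a hypothetical helper name) shows the same effect as a Sort stage followed by keeping only the first row of each key group, relying on a stable sort so the retained row is the first one seen.

```python
from itertools import groupby
from operator import itemgetter

def dedupe_keep_first(rows, key_cols):
    """Sort on the key columns, then keep the first row of each key group --
    the same effect as a Sort stage followed by key-change filtering."""
    keyf = itemgetter(*key_cols)
    rows = sorted(rows, key=keyf)       # Python's sort is stable
    return [next(group) for _, group in groupby(rows, key=keyf)]

rows = [("A", 2), ("B", 1), ("A", 1), ("B", 3)]
print(dedupe_keep_first(rows, [0]))  # [('A', 2), ('B', 1)]
```

Because the sort is stable, each key's surviving row is the one that arrived first, which is the usual Remove Duplicates default.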

PLSQL

Oracle is a relational database technology developed by Oracle Corporation. PL/SQL stands for "Procedural Language extensions to SQL" and is an extension of SQL used in Oracle. PL/SQL is closely integrated with the SQL language, yet it adds programming constructs that are not native to SQL. Our tutorial starts with the basics of Oracle, such as how to retrieve and manipulate data, and then moves on to more advanced topics such as how to create tables, functions, procedures, triggers, tablespaces, and schemas. With this tutorial, you should be on your way to becoming proficient in Oracle and PL/SQL. There are no prerequisites for this Oracle tutorial.

Now, let's get started! Osh and score and logic behind the processes. Datastage Stored Procedure Stage Guide. Interview-questions-datawarehouse-part-2.

1NF, 2NF, 3NF and BCNF in Database Normalization

Normalization is a systematic approach to decomposing tables in order to eliminate data redundancy and undesirable characteristics such as insertion, update, and deletion anomalies. It is a multi-step process that puts data into tabular form by removing duplicated data from the relation tables. Normalization is used for mainly two purposes: eliminating redundant (useless) data, and ensuring that data dependencies make sense, i.e. that data is logically stored.

Problem Without Normalization: without normalization, it becomes difficult to handle and update the database without risking data loss.

Normalization rules are divided into the following normal forms:
First Normal Form (1NF)
Second Normal Form (2NF)
Third Normal Form (3NF)
BCNF

First Normal Form (1NF): a row of data cannot contain a repeating group of data; each column must hold a single, atomic value. In the example Student table, the student name Adam appears twice and the subject Maths is also repeated; to remove the repetition, the table is decomposed into a new Student table and a separate Subject table.
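The example tables were lost from the article, but the 1NF decomposition it describes can be sketched with hypothetical data (only Adam and Maths are named in the text; the other names and the ages are invented for illustration): the repeated student attributes move into one table, and the student-subject pairs into another.

```python
# Hypothetical un-normalized rows: one row per (student, age, subject), with
# the student's age repeated on every row -- Adam and Maths recur, as in the text.
unnormalized = [
    ("Adam",   15, "Biology"),
    ("Adam",   15, "Maths"),
    ("Alex",   14, "Maths"),
    ("Stuart", 17, "Maths"),
]

# Decompose into two relations: one holding each student once,
# and one holding the student-subject pairs.
students = {name: age for name, age, _ in unnormalized}
subjects = [(name, subject) for name, _, subject in unnormalized]

print(students)  # {'Adam': 15, 'Alex': 14, 'Stuart': 17}
print(subjects)  # [('Adam', 'Biology'), ('Adam', 'Maths'), ('Alex', 'Maths'), ('Stuart', 'Maths')]
```

After the split, Adam's age is stored once, so an update to it can no longer produce inconsistent copies.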

SQL Full Outer Join.