Our client is a leading insurance company in Japan developing a system that will provide the payroll system with a new interface for the data it requires. To integrate different source systems such as Mainframe and SQL Server, we will use the Informatica PowerCenter (PC) and PowerExchange (PWX) tools.
The project is to read Mainframe and RDBMS data and write to RDBMS, web application, and Mainframe systems.
A PowerCenter mapping can read from an EBCDIC flat file using either a VSAM source or a non-relational PowerExchange source. When working with VSAM sources in Informatica, the source file can contain multiple occurrences (OCCURS clauses) for different fields; for example, a field defined with 9 occurrences.
Informatica ETL developer should have:
- In-depth knowledge of Informatica 9.x/10.x PowerCenter, Informatica PowerExchange, UNIX, SQL.
- Strong understanding of ETL, data structures and data flow.
- Experience with different data sources such as Oracle, Flat files, Mainframe VSAM files, SQL Server etc.
- Experience in data ingestion using web services.
- Business-level proficiency in both Japanese and English.
Roles and Responsibilities:
- Understand user requirements and translate them into well-designed solutions.
- Understand Mainframe data and be able to read and write complex mainframe data using Informatica.
- Create data maps in PowerExchange for reading and writing Mainframe data.
- Create complex Informatica mappings in PowerCenter using UNIX and SQL.
- Participate in unit and integration testing, validate test cases, and apply performance tuning techniques.
- Provide SIT and UAT support.
- Identify and resolve major production issues and provide innovative solutions to meet the needs of the business.
Location: Tokyo, Japan
Ref: 2019-JP-148
Posted on: December 12, 2019
Experience level: Experienced (non-manager)
Education level: Bachelor's degree or equivalent
Contract type: Permanent
Location: Tokyo
Department: IT Solutions
What is a Normalizer Transformation?
The Normalizer is an active transformation used to convert a single row into multiple rows, and vice versa. It presents repeating data in a more organized, relational form.
If a single row contains the same kind of data repeated across multiple columns, it can be split into multiple rows. For example, consider student scores stored in one column per class:
| Student Name | Class 9 Score | Class 10 Score | Class 11 Score | Class 12 Score |
|--------------|---------------|----------------|----------------|----------------|
| Student 1    | 50            | 60             | 65             | 80             |
| Student 2    | 70            | 64             | 83             | 77             |
Here the class score repeats across four columns. Using the Normalizer, we can split it into the following data set:
| Student Name | Class | Score |
|--------------|-------|-------|
| Student 1    | 9     | 50    |
| Student 1    | 10    | 60    |
| Student 1    | 11    | 65    |
| Student 1    | 12    | 80    |
| Student 2    | 9     | 70    |
| Student 2    | 10    | 64    |
| Student 2    | 11    | 83    |
| Student 2    | 12    | 77    |
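The split shown above is a classic "unpivot" operation. In PowerCenter the Normalizer transformation does this work; as a rough sketch, the same logic looks like this in plain Python (the function name and dict layout are illustrative, not part of any Informatica API):

```python
# Sketch of what the Normalizer transformation does: turn repeating
# columns (one score column per class) into one row per (student, class).
def normalize(rows, repeat_cols):
    """rows: list of source dicts; repeat_cols: {column_name: class_label}."""
    out = []
    for row in rows:
        for col, label in repeat_cols.items():
            out.append({"Student Name": row["Student Name"],
                        "Class": label,
                        "Score": row[col]})
    return out

source = [
    {"Student Name": "Student 1", "Class 9 Score": 50, "Class 10 Score": 60,
     "Class 11 Score": 65, "Class 12 Score": 80},
    {"Student Name": "Student 2", "Class 9 Score": 70, "Class 10 Score": 64,
     "Class 11 Score": 83, "Class 12 Score": 77},
]
repeat = {"Class 9 Score": 9, "Class 10 Score": 10,
          "Class 11 Score": 11, "Class 12 Score": 12}

result = normalize(source, repeat)  # 8 rows, one per student/class pair
```

Each of the two source rows expands into four output rows, matching the normalized table above.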
Step 1 – Create the source table 'sales_source' and target table 'sales_target' using the script and import them into Informatica
Step 2 – Create a mapping having source 'sales_source' and target table 'sales_target'
Step 3 – From the Transformation menu, create a new Normalizer transformation
- Enter the name 'nrm_sales'
Step 4 – The transformation will be created; click Done
Step 5 – Double-click the Normalizer transformation, then
- Select the Normalizer tab
- Enter the column names
- Set the number of occurrences to 4 for sales and 0 for store name
Columns will be generated in the transformation. You will see four sales columns because we set the number of occurrences to 4.
Step 6 – Then in the mapping
- Link the four quarter columns of the source qualifier to the four normalizer sales columns respectively
- Link the store_name and sales columns from the normalizer to the target table
- Link the GK_sales column from the normalizer to the target table
Save the mapping, then create a session and workflow and execute it. The Normalizer transformation will create a separate row for each quarter's sales of each store.
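Steps 1–6 above are GUI actions in the Designer. As a minimal sketch of what the resulting session does, assuming a source layout of store_name plus QUARTER1–QUARTER4 columns, the row expansion, including a generated key like the GK_sales port, can be emulated as follows (the key semantics are simplified for illustration):

```python
# Hypothetical emulation of the nrm_sales mapping: one source row with
# four quarterly sales columns becomes four target rows, each carrying
# a generated key (GK_sales), simplified here as one key per source row.
def run_nrm_sales(source_rows):
    target, gk = [], 0
    for row in source_rows:
        gk += 1                      # generated key for this source row
        for quarter in range(1, 5):  # number of occurrences = 4
            target.append({"GK_sales": gk,
                           "store_name": row["store_name"],
                           "quarter": quarter,
                           "sales": row[f"QUARTER{quarter}"]})
    return target

sales_source = [
    {"store_name": "DELHI",  "QUARTER1": 150, "QUARTER2": 240,
     "QUARTER3": 455, "QUARTER4": 100},
    {"store_name": "MUMBAI", "QUARTER1": 100, "QUARTER2": 500,
     "QUARTER3": 350, "QUARTER4": 340},
]
sales_target = run_nrm_sales(sales_source)  # 8 rows, 4 per store
```

The two source rows produce eight target rows, one per store/quarter pair, which matches the mapping output shown next.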
The output of our mapping will look like this:
| Store Name | Quarter | Sales |
|------------|---------|-------|
| DELHI      | 1       | 150   |
| DELHI      | 2       | 240   |
| DELHI      | 3       | 455   |
| DELHI      | 4       | 100   |
| MUMBAI     | 1       | 100   |
| MUMBAI     | 2       | 500   |
| MUMBAI     | 3       | 350   |
| MUMBAI     | 4       | 340   |
The source data had repeating columns named QUARTER1, QUARTER2, QUARTER3, and QUARTER4. With the help of the Normalizer, we rearranged that data into a single QUARTER column, so each source record produces four records in the target.
In this way, you can normalize data and create multiple records from a single row of source data.