Friday, November 15, 2013

Running Scenarios asynchronously/parallel in ODI


In this post, I will demonstrate how to run interfaces, procedures or packages asynchronously inside an ODI package.

By default, ODI doesn't provide an option to run interfaces, procedures or packages asynchronously (in parallel with the initiating session). To run them asynchronously, we need to convert the interface, procedure or package to a scenario; scenarios can be run asynchronously inside a package or through a command.

In this post, I will demonstrate how to run the scenarios asynchronously through a package as well as through a command.

1. In my tutorial, I am going to use five interfaces that do table-to-table loading for regions, countries, locations, departments and employees from the hr schema to the data_target schema.
2. I will convert these five interfaces to scenarios and run them asynchronously inside a package and through a procedure command.

Following are the five interfaces I will use in this tutorial. I will also use the Jython execution report procedure (created in my previous post, Jython execution report) to write the execution report to a file after the parallel run.


Let's convert all the interfaces to scenarios so that we can run them asynchronously. Right-click on each interface and select Generate scenario.




1. Running the scenarios asynchronously inside a package 

Let's create a package and drag and drop the generated scenarios into the package, connecting them with ok in the regions, countries, locations, departments, employees order. After employees, connect it to the wait for child session step and then to the execution report procedure.

The wait for child session event detection step will wait for all child sessions (the asynchronous sessions) to finish before passing control to the next step, which here is the execution report procedure.

I have presented them vertically in the diagram since we are going to make them run in parallel, but we still connect them together with ok sequentially.



Let's configure each scenario in the package diagram to run asynchronously. Select each scenario and set the Synchronous/Asynchronous option to Asynchronous for each scenario in the package. By default this mode is set to Synchronous.



We can also set options on the odi wait for child sessions event detection step. If we don't specify a session id, it takes the current session id (that of the package in which it is running).

The Keywords property can be used to wait only for child sessions that match the selected keyword. A keyword can be set for each scenario in its properties, so we can group scenarios with keywords and the wait for child session step will wait only for the scenario child sessions that match the keyword.
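For reference, a hedged sketch of the equivalent tool call behind this package step (parameter names as documented in the ODI tools reference; the HR_LOAD keyword value is purely illustrative):

OdiWaitForChildSession "-PARENT_SESS_NO=<%=odiRef.getSession("SESS_NO")%>" "-POLL_INT=1" "-SESSION_KEYWORDS=HR_LOAD"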


Let's run the package and see the results. The run is successful.
We can see that a child session is created for each asynchronous scenario run, but no child session is created for the procedure; it is part of the parent session. Each parallel run is assigned a new session id which is a child of the parent session id 29001.


Let's check the execution report to see the times at which each interface was triggered. We can see that all the scenarios started at the same time and the wait for child session event detector waited until the last parallel scenario run had completed.




2. Running the scenarios asynchronously using a command in a procedure 

In this case, I want to solve a use case where we should be able to run any number of scenarios in parallel, but the set of scenarios is known only at run time.

In this use case, a table is populated at run time with the names and versions of all the scenarios that need to be run in parallel. We can simply read this table in the procedure's source section and run the scenarios asynchronously in the target section through procedure binding (the command in the target section is executed for each row returned by the source query).

This is how the dynamic table which stores the scenario name and version looks.
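A minimal sketch of such a table; the table and column names here are hypothetical:

CREATE TABLE SCEN_PARALLEL_RUN
  (
    SCEN_NAME    VARCHAR2(400) NOT NULL,
    SCEN_VERSION VARCHAR2(10)  NOT NULL
  );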


Let's create the procedure with source and target sections.

The source section runs a query on the dynamic table to get the scenario names and versions.
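A minimal sketch of the source command, run against an Oracle logical schema and reusing the hypothetical SCEN_PARALLEL_RUN table from above:

select scen_name, scen_version
from   scen_parallel_run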


For each row from the source query, the target's start scenario command is executed in asynchronous mode for that scenario.
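A hedged sketch of the target command, on the ODI Tools technology. SYNC_MODE=2 requests asynchronous execution (1 would be synchronous); :scen_name and :scen_version are intended as binding references to the source query's columns, so check the exact binding syntax against the ODI Procedures documentation for your version:

OdiStartScen "-SCEN_NAME=:scen_name" "-SCEN_VERSION=:scen_version" "-SYNC_MODE=2"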


Let's run the procedure and verify the results. We can see that a child session is created for each scenario and they run in parallel.



Thursday, November 14, 2013

Writing execution results to File using Jython in ODI




In this post, I am going to show how to write files in a customized format by leveraging the Jython technology in ODI. Out of the box, ODI provides support for delimited, fixed-length, XML and complex files. But if there is a need to produce PDF or custom-formatted output, we can use Jython to achieve it.

To demonstrate this capability, I am taking the example of writing ODI execution details to a custom-formatted file. The file format will look like the following.
/*------------------------------------------------------------------------------------------------------------------------------------------------*/

                    EXECUTION REPORT FOR SCENARIO : WRITE_EXECUTION_REPORT  ( THU NOV 14 10:34:33 2013 )                    

/*------------------------------------------------------------------------------------------------------------------------------------------------*/

StepNo StepName                                StepType            BeginningTime         EndTime               Duration   Status         Error_Msg

------ --------------------------------------- ------------------- --------------------- --------------------- ---------- -------------- ----------

0      int_file_table                          Interface           14-NOV-2013 10:34:11  14-NOV-2013 10:34:12  1          Success        None

1      write_execution_report                  Procedure           14-NOV-2013 10:34:12  None                  0          UNKNOWN        None

/*------------------------------------------------------------------------------------------------------------------------------------------------*/

Following are the steps I am going to follow to demonstrate this use case.
1. I will create a procedure which connects to the ODI internal tables and gets the execution details for the current session_id.
2. I will create a file-to-table interface just to have a success scenario, and a procedure which raises a PL/SQL exception to demonstrate the error scenario.
3. We will create a package, place the interface and procedure for the success/error scenarios, and add the execution report procedure we developed in the first step as the last step of both the success and the error path, to extract the execution details to a formatted file.

Before we write the procedure, let's understand a bit about how the ODI internal tables store the execution details.
1. SNP_SESSION - stores the session_id and details related to the session.
2. SNP_SESS_STEP - stores all interface, package, scenario and procedure step details.
3. SNP_STEP_LOG - holds the log of the steps within each ODI object.

Just to put the table relationships into perspective, I am showing sample data for a sample session here. This data is for the package specified below in ODI.

SNP_SESSION - has one row per execution, at the package level here.
SNP_SESS_STEP - represents the objects inside a package, with design-time details.
SNP_STEP_LOG - represents the objects inside a package, with run-time details (including error_message).
SNP_SESS_TASK - represents the KM and task details related to interfaces, procedures and tools in a package, with design-time details.
SNP_SESS_TASK_LOG - represents the KM and task details related to interfaces, procedures and tools in a package, with run-time details (including error message).

In the execution report I developed, I am not showing task-level details; instead it stops at the step level. But I wanted to show here that task-level details can also be captured in the log.

Execution Report Procedure

Create a procedure and add one command in the details tab with the following Jython code. Make sure that the Jython technology is selected in the target tab. This procedure should be added as the last step in the package so that it picks up all the execution details and puts them into the formatted file.

It can be used to produce the log for successful as well as error conditions.


Please find below the complete code for use in your implementation.

1. It takes the connection for the current work repository.
2. It uses the current session_no to retrieve the execution details with the query.
3. It spools the query data, with formatting, to a file.
import string
import java.sql as sql
import java.lang as lang
import re
import time
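# Get a JDBC connection to the work repository, open the output file
# (named with the current session number) and capture the current time for the report header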
sourceConnection = odiRef.getJDBCConnection("WORKREP")
output_write=open('c:/temp/Execution_Report_<%=odiRef.getSession("SESS_NO")%>.out','w')
sqlstring = sourceConnection.createStatement()
localtime = time.asctime( time.localtime(time.time()) )
print >> output_write, "/*------------------------------------------------------------------------------------------------------------------------------------------------*/\n"
print >> output_write,'%s\n' % (str('  Execution Report for scenario : '+'<%=odiRef.getSession("SESS_NAME")%>'+'  ( '+localtime+' ) ').upper().center(124))
print >> output_write, "/*------------------------------------------------------------------------------------------------------------------------------------------------*/\n"

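# Query the step log for the current session, decoding the step type and status codes
# and flattening newlines out of the (truncated) error message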
sqlstmt="select rownum, \
       sess.sess_no, \
       sess.sess_name, \
       step_log.nno, \
       step.step_name, \
       case when step.step_type='F' THEN 'Interface' \
            when step.step_type='VD' THEN 'Variable declaration' \
            when step.step_type='VS' THEN 'Set/Increment variable' \
            when step.step_type='VE' THEN 'Evaluate variable' \
            when step.step_type='V' THEN 'Refresh variable '\
            when step.step_type='T' THEN 'Procedure' \
            when step.step_type='OE' THEN 'OS command' \
            when step.step_type='SE' THEN 'ODI Tool' \
            when step.step_type='RM' THEN 'Reverse-engineer model' \
            when step.step_type='CM' THEN 'Check model' \
            when step.step_type='CS' THEN 'Check sub-model' \
            when step.step_type='CD' THEN 'Check datastore' \
            when step.step_type='JM' THEN 'Journalize model' \
            when step.step_type='JD' THEN 'Journalize datastore' \
            ELSE 'UNKNOWN' END as step_type, \
      to_char(step_log.step_beg,'DD-MON-YYYY HH24:MI:SS') as step_beg, \
      to_char(step_log.step_end,'DD-MON-YYYY HH24:MI:SS') as step_end, \
      step_log.step_dur, \
      case when step_log.step_status='D' THEN 'Success' \
            when step_log.step_status='E' THEN 'Error' \
            when step_log.step_status='Q' THEN 'Queued' \
            when step_log.step_status='W' THEN 'Waiting' \
            when step_log.step_status='M' THEN 'Warning' \
            ELSE 'UNKNOWN' END as step_status, \
      step_log.nb_row, \
      step_log.nb_ins, \
      step_log.nb_upd, \
      step_log.nb_del, \
      step_log.nb_err, \
     dbms_lob.substr( TRANSLATE ( step_log.error_message, 'x'||CHR(10)||CHR(13), 'x'), 600, 1 )  as error_message \
from snp_session sess right outer join snp_sess_step step on sess.sess_no = step.sess_no \
                      inner join snp_step_log step_log on step.sess_no = step_log.sess_no and step.nno = step_log.nno \
where step_log.sess_no  = <%=odiRef.getSession("SESS_NO")%> \
      order by step_log.nno"
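# Print the column headers, execute the query and write one formatted line per step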
print >> output_write, "StepNo StepName                                StepType            BeginningTime         EndTime               Duration   Status         Error_Msg\n"
print >> output_write, "------ --------------------------------------- ------------------- --------------------- --------------------- ---------- -------------- ----------\n"
result=sqlstring.executeQuery(sqlstmt)
while (result.next()):
    rownum=result.getInt("rownum")
    nno=result.getInt("nno")
    step_name=result.getString("step_name")
    step_type=result.getString("step_type")
    step_beg=result.getString("step_beg")
    step_end=result.getString("step_end")
    step_dur=result.getInt("step_dur")
    step_status=result.getString("step_status")
    error_message=result.getString("error_message")
    print >> output_write,'%7s%40s%20s%22s%22s%11s%15s%s\n' % (str(nno).ljust(7),step_name.ljust(40),step_type.ljust(20),str(step_beg).ljust(22),str(step_end).ljust(22),str(step_dur).ljust(11),step_status.ljust(15),str(error_message))
print >> output_write, "/*------------------------------------------------------------------------------------------------------------------------------------------------*/\n"
sourceConnection.close()
output_write.close() 

The only restriction in this method is that we will not be able to get the final status of the execution report step itself into the file, since the query is executed while that step is still running (hence the UNKNOWN status in the sample above).

If we don't need this procedure's details, we can restrict it in the where clause so that it doesn't appear in the report.
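For example, a minimal sketch (assuming the report step is named write_execution_report, as in this demo), adding one predicate to the query's where clause:

where step_log.sess_no  = <%=odiRef.getSession("SESS_NO")%> \
      and step.step_name <> 'write_execution_report' \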

Sample Examples

I will show a successful run sample file as well as an error run sample file here.

1. In this package, I am running an interface, and on a successful run it produces the following log.

/*------------------------------------------------------------------------------------------------------------------------------------------------*/

                    EXECUTION REPORT FOR SCENARIO : WRITE_EXECUTION_REPORT  ( THU NOV 14 10:34:33 2013 )                    

/*------------------------------------------------------------------------------------------------------------------------------------------------*/

StepNo StepName                                StepType            BeginningTime         EndTime               Duration   Status         Error_Msg

------ --------------------------------------- ------------------- --------------------- --------------------- ---------- -------------- ----------

0      int_file_table                          Interface           14-NOV-2013 10:34:11  14-NOV-2013 10:34:12  1          Success        None

1      write_execution_report                  Procedure           14-NOV-2013 10:34:12  None                  0          UNKNOWN        None

/*------------------------------------------------------------------------------------------------------------------------------------------------*/


2. For an error run, a sample report is shown here. In this case, I run the interface, then run a PL/SQL block which raises an exception to produce an error, and on error I run the execution report.


Let's look at the log for the error.


/*------------------------------------------------------------------------------------------------------------------------------------------------*/

                    EXECUTION REPORT FOR SCENARIO : WRITE_EXECUTION_REPORT  ( THU NOV 14 14:50:17 2013 )                    

/*------------------------------------------------------------------------------------------------------------------------------------------------*/

StepNo StepName                                StepType            BeginningTime         EndTime               Duration   Status         Error_Msg

------ --------------------------------------- ------------------- --------------------- --------------------- ---------- -------------- ----------

0      int_file_table                          Interface           14-NOV-2013 14:50:08  14-NOV-2013 14:50:15  7          Success        None

1      proc_call_stored_procedure              Procedure           14-NOV-2013 14:50:15  14-NOV-2013 14:50:16  1          Error          ODI-1226: Step proc_call_stored_procedure fails after 1 attempt(s).ODI-1232: Procedure proc_call_stored_procedure execution fails.ODI-1228: Task proc_call_stored_procedure (Procedure) fails on the target ORACLE connection oracle_xe_hr.Caused By: java.sql.SQLException: ORA-20001: This is a custom errorORA-06512: at "HR.RAISE_EXCEPTION", line 15ORA-06512: at line 2 at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:457) at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:405) at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:889) at oracle.jdbc.driver.T4CTTIfun.receive(T

2      write_execution_report                  Procedure           14-NOV-2013 14:50:16  None                  0          UNKNOWN        None

/*------------------------------------------------------------------------------------------------------------------------------------------------*/ 

Sunday, November 10, 2013

Reading from XML File in Oracle Data Integrator - Part2



In part 1 of this series, Write to XML, I showed how to reverse engineer an XML schema into ODI and write table data to an XML file. In this post, I will continue with part 2 of the series and show how to read data from an XML file and load it into database tables.

If you are just looking for how to read from an XML file, the prerequisite is to complete the topology and data model configuration in part 1 of this series, Write to XML, before continuing with this tutorial.

So let's get started...

Since we have the topology and model already configured in part 1, we just need to load the XML file into the XML_STAGING schema and then use table-to-table mappings for the tables from xml_staging to the target tables.

I am going to use the data_target schema (a newly created schema) as the target schema, and I will be creating the regions, countries, locations, departments and employees tables in this schema without data. We will load data into these tables from XML. Here are the DDLs for these tables. Please note that I am continuing with the new country_num column created for the countries and locations tables, as I showed in part 1.

CREATE TABLE "DATA_TARGET"."REGIONS"
  (
    "REGION_ID"   NUMBER NOT NULL ENABLE,
    "REGION_NAME" VARCHAR2(25 BYTE)
  );

CREATE TABLE "DATA_TARGET"."COUNTRIES"
  (
    "COUNTRY_ID"   CHAR(2 BYTE) NOT NULL ENABLE,
    "COUNTRY_NAME" VARCHAR2(40 BYTE),
    "REGION_ID"    NUMBER,
    "COUNTRY_NUM"  NUMBER(16,0)
  );

CREATE TABLE "DATA_TARGET"."LOCATIONS"
  (
    "LOCATION_ID"    NUMBER(4,0),
    "STREET_ADDRESS" VARCHAR2(40 BYTE),
    "POSTAL_CODE"    VARCHAR2(12 BYTE),
    "CITY"           VARCHAR2(30 BYTE) NOT NULL ENABLE,
    "STATE_PROVINCE" VARCHAR2(25 BYTE),
    "COUNTRY_ID"     CHAR(2 BYTE),
    "COUNTRY_NUM"    NUMBER(16,0)
  );

CREATE TABLE "DATA_TARGET"."DEPARTMENTS"
  (
    "DEPARTMENT_ID"   NUMBER(4,0),
    "DEPARTMENT_NAME" VARCHAR2(30 BYTE) NOT NULL ENABLE,
    "MANAGER_ID"      NUMBER(6,0),
    "LOCATION_ID"     NUMBER(4,0)
  );

CREATE TABLE "DATA_TARGET"."EMPLOYEES"
  (
    "EMPLOYEE_ID"  NUMBER(6,0),
    "FIRST_NAME"   VARCHAR2(20 BYTE),
    "LAST_NAME"    VARCHAR2(25 BYTE) NOT NULL ENABLE,
    "EMAIL"        VARCHAR2(25 BYTE) NOT NULL ENABLE,
    "PHONE_NUMBER" VARCHAR2(20 BYTE),
    "HIRE_DATE" DATE NOT NULL ENABLE,
    "JOB_ID"         VARCHAR2(10 BYTE) NOT NULL ENABLE,
    "SALARY"         NUMBER(8,2),
    "COMMISSION_PCT" NUMBER(2,2),
    "MANAGER_ID"     NUMBER(6,0),
    "DEPARTMENT_ID"  NUMBER(4,0)
  );

The process of loading the XML data to data_target involves 3 steps.

1. Create a procedure in ODI and issue a command to synchronize data from the XML file to the xml_staging schema (yes, all it takes is a single command to load the XML into the xml_staging schema).
2. Create interfaces for table-to-table loading of regions, countries, locations, departments and employees from xml_staging to data_target.
3. Create a package to sequence these objects and run the package.

I have created a new folder XML_TO_TABLE under the PRJ_TEST project. All objects we create in this post go inside this folder.

1. Loading XML to XML_STAGING schema

Under the xml_to_table folder, right-click on procedures and select new procedure. Provide a name and select XML as the technology.
Go to the details tab and add a new step line. Provide a name, select the XML technology and select the xml logical schema. In the command section, enter SYNCHRONIZE FROM FILE and save.
SYNCHRONIZE FROM FILE - this command instructs ODI to load the data from the XML file (the directory and file name defined in the topology) and synchronize the data to the xml_staging schema.
If we need to load new XML files every time, place the XML file, with the same name we defined in the topology, in the same directory we defined in the topology, and then run the procedure with this command against the xml logical schema. Boom... the XML data will be synchronized to xml_staging.
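The entire command text of the step is the single line:

SYNCHRONIZE FROM FILE

The driver also accepts an explicit file path (a hedged variant; check the XML driver reference for your version), which is handy when the file to load is not the one declared in the topology:

SYNCHRONIZE FROM FILE "C:\temp\odi\input\all_employees.xml"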

If we need to load big XML files, an external database schema should be specified in the XML JDBC URL for good performance, and so that the data can be examined for debugging if needed.

2. Loading from XML_STAGING to Target tables

We need to create interfaces for regions, countries, locations, departments and employees. The procedure is the same for all interfaces. I will show the mapping and flow tabs for regions, and just the mapping for the other interfaces here.

Create the regions interface and provide a name.

For regions, go to the mapping tab, put the xml_staging regions table as the source and the data_target regions table as the target, and do the mapping.

In the flow tab, select IKM SQL to Oracle


Let's create the interface for countries. In the mapping tab, put xml_staging countries as the source and data_target countries as the target. I will introduce a lookup table (in green) since I need to populate the varchar country code (because of the country_num column I added before). The lookup table is the countries table from the HR schema, joined on the country name. Lookup tables can be added by selecting the search-like icon in the mapping tab, above the source tables area.


This is the lookup table join condition used in countries.

Let's create the locations interface. Drop the xml_staging locations table in the source and the data_target locations table in the target and do the mapping. Here also I used the data_target countries table we populated in the previous step, just to populate the varchar country_id. The join condition is based on country_num.


Let's create the departments interface. Drop the xml_staging departments table into the source and the data_target departments table into the target, and do the mapping in the mapping tab.


Let's create the employees interface. Drop xml_staging employees into the source and data_target employees into the target, and do the mapping in the mapping tab. For the hire_date column, I have used the TO_DATE and SUBSTR functions to convert the incoming varchar data into a date, as sketched below.
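A hedged sketch of such a target mapping expression, assuming the hire date arrives from the XML staging as an ISO-style string such as 1987-06-17T00:00:00 (EMP here stands for the xml_staging employees source alias):

TO_DATE(SUBSTR(EMP.HIRE_DATE, 1, 10), 'YYYY-MM-DD')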


Let's create a package to put everything together. Create a package and put the procedure and the regions, countries, locations, departments and employees interfaces in sequence.


Let's run the package and see the results. The package run is successful.


Let's check the xml_staging tables and the target tables to see the results. I am just showing the employees table from xml_staging and data_target.




With this, we have completed loading an XML file with ODI. Please post if you have any questions or suggestions.

Saturday, November 9, 2013

Reading and Writing Complex XML files in ODI - Part1


In my last post, Read write Files, I showed how to read and write delimited files and fixed-length files in ODI. In this post, I will show how to read and write complex XML files in ODI.

For this post, I would like to use the HR schema tables for the XML schema. The following database diagram shows the relationships between the tables in the HR schema. We will represent this Region->Country->Location->Departments->Employees relationship as XML.

I have added an additional column COUNTRY_NUM (numeric) to the countries and locations tables of the HR schema, to use for element ordering in the XML. This is needed because the existing COUNTRY_ID column in the countries table is a varchar. I have also provided numeric values for this column in countries and locations.

This data hierarchy will be converted into the following XML hierarchy using an xsd.

I have created the xsd for this representation and it can be downloaded here: xsd
I have also created a sample xml using the xsd, and it can be downloaded here: xml

Let's get started with writing XML first with ODI, and then use the same output to load a different set of tables to demonstrate reading.

Creating Topology and XML Model configuration

As usual, we need to create a physical schema in ODI for this schema, similar to the table and file examples.
The XML schema is represented as tables in ODI. Each complex type in the XML is considered a table, and each element and attribute inside a complex element is represented as a column of that table. So ODI will internally represent the hierarchy with regions, countries, locations, departments and employees tables when we create a physical schema for the XSD.
By default the table representation is created in memory, but it can also be created in an external database; an external database schema is the suggested approach for production environments, so that the tables are accessible at any time for debugging, as opposed to the in-memory approach.
To use the external database schema approach, we need a properties file to tell ODI the schema and connection details for storing the table representation of the xsd.
For this purpose, I have created a new schema XML_STAGING in my XE database and created a properties file, which I have named all_employees.properties. Use the encode tool, available in the agent/bin directory, to generate the encoded value for the password property.
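For example, a hedged sketch of encoding a password (the script is encode.bat on Windows and encode.sh on Unix; the path varies by installation):

C:\oracledi\agent\bin> encode MyTopSecretPassword

The command prints the encoded string, which is what goes into the password property. The all_employees.properties file looks as follows.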


driver=oracle.jdbc.driver.OracleDriver
url=jdbc:oracle:thin:@localhost:1521:xe
user=XML_STAGING
password=g0yXfhlSfzhz4bro5P7HCBQ
schema=XML_STAGING
drop_on_connect=Y
create_tables=AUTO
create_indexes=Y
truncate_before_load=Y
ids_in_db=Y
drop_tables_on_drop_schema=Y
use_prepared_statements=Y
use_batch_update=Y
batch_update_size=30
commit_periodically=Y
num_inserts_before_commit=1000
reserve_chars_for_column=3
reserve_chars_for_table=3

This all_employees.properties file needs to be placed in the classpath of the ODI client or of the agent, wherever the XML schema is used, so that the driver can use it to locate xml_staging.
For the Windows ODI client, the classpath directory is: <UserHome>/AppData/Roaming/odi/oracledi/userlib
For a standalone agent, the classpath directory is: <standalone agent dir>/oracledi/agent/drivers

Since I am using the Windows ODI client, I have placed it in the appdata odi userlib directory.
Let's create the topology physical schema.
Under the Topology tab, in the physical architecture section, right-click on the XML technology and select create data server. Provide a name for the data server.
Go to the jdbc tab and provide the JDBC details: select the JDBC driver for XML and provide the JDBC URL.
The JDBC URL provided is
jdbc:snps:xml?f=C:\temp\odi\input\all_employees.xml&d=C:\temp\odi\input\all_employees.xsd&s=EMPLOYEES&re=regions&dp=all_employees.properties&lf=C:\temp\odi\input\all_employees.log&ll=31

Here
f - the sample xml file
d - the xsd file
s - the schema name for this xml
re - the root element name in the xsd (regions in our case)
dp - the properties file for the external database schema that stores the xsd representation
lf - the log file to redirect logs to
ll - the log level (31 is the highest level)
Check the connection by testing it.

Let's create a physical schema now. Right-click on the data server and select create physical schema.

Let's create a logical schema and map the physical schema to it.
Let's go ahead and create the model for the XML now. Go to the designer tab and, under the data model section, create a new model. Provide a name, select XML as the technology and select the logical schema we created earlier.

Click on the reverse engineer button at the top. ODI converts all complex types to tables and puts them in the model; all elements and attributes become columns of those tables. An additional thing to note here is that these tables have some special columns:
the <element>ORDER column and the FK column.
1. The <element>ORDER columns decide the order in which elements appear in the XML, since the xsd makes the sequence mandatory through xs:sequence for complex elements.
2. The FK column stores the id of the parent row so that each child appears correctly under its parent. In our snapshot it is REGIONFK for country.

Actually, the tables shown in the model are physically present in the XML_STAGING schema. When we reverse engineer, those tables are created in the schema and their metadata is represented as datastores in the model. EMPLOYEES is the schema name we provided in the s attribute of the JDBC URL of the data server.

Please also increase the column sizes in the datastores and the xml_staging schema tables if needed, in case the input XML or the tables in the HR schema have bigger column sizes.

Writing XML File

Let's create a series of interfaces, one for loading each complex type in the XML from the HR tables. First let's create the region interface. I am creating another folder, xml, under the PRJ_TEST project and placing all the interfaces and the package there.

Import the LKM SQL to SQL and IKM XML Control Append knowledge modules before creating the interface.
1. LKM SQL to SQL - loads data from the source to the staging area.
2. IKM XML Control Append - writes data from the staging area to the target.

Create the interface for region. Provide a name and set the staging area different from the target; select the xml logical schema for staging.


In the mapping tab, provide the mapping for the region interface with the regions table of hr as the source and the region table of the xml model as the target.

1. Note that here the parent of region (REGIONSFK) is only one element, regions, and so it is mapped to 0.
2. name is the first element in region, and so its order is set to 0.
3. REGIONORDER is set to the source region id; since it is a sequence, it will automatically maintain the order of region under regions based on the source region id. The resulting expressions are sketched below.
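A sketch of the resulting target expressions based on the notes above (the exact column names come from the reverse-engineered model; REG stands for the HR regions source alias):

NAME         <- REG.REGION_NAME
REGIONSFK    <- 0                (regions is the single parent element)
NAMEORDER    <- 0                (name is the first element under region)
REGIONORDER  <- REG.REGION_ID    (keeps the order of region under regions)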


In the flow tab, select the source and staging tables and select LKM SQL to SQL as the knowledge module.
Select the target and select IKM XML Control Append as the knowledge module.
Set the following options; these need to be set only for the top element (region) under the root (regions):
1. Set TRUNCATE_SCHEMA to Yes (this truncates all XML data left over from the previous run).
2. Set ROOT_TABLE to regions (to show it is the root node in the XML).
We need to create similar interfaces for country, location, department and employees.
For country, location and department we should NOT set any options for IKM XML Control Append; leave all the options at their defaults. So I am just showing the mappings for country, location and department here.

Mapping for country. Note here that the newly added numeric column country_num is used for countrypk and countryorder, as country_id is a varchar which I can't use for an order column. In the flow tab, make sure all options of IKM XML Control Append are left at their defaults.

Mapping for location, similar to country:
1. Staging is different from target; set the xml logical schema.
2. For the mapping, note the sequence is provided from 0 to 3 for the elements under location.
3. locationpk and locationorder are set to the source location id since it is numeric and a sequence.
4. Leave the default options for IKM XML Control Append for the target in the flow tab.
Mapping for department, similar to country:
1. Staging is different from target; set the xml logical schema.
2. For the mapping, note the sequence is provided from 0 to 1 for the elements under department.
3. departmentpk and departmentorder are set to the source department id since it is numeric and a sequence.
4. Leave the default options for IKM XML Control Append for the target in the flow tab.
Mapping for employee:
In the flow tab for employee, make sure that the CREATE_XML option is set to true here, as it is the last interface to be populated and it needs to write the XML file to disk. Optionally, we can set the XML_PATH option to the full path of the XML file so that the file is written there. I have set it to C:\temp\odi\input\employees_<%=odiRef.getSession("SESS_NO")%>.xml in this example.
<%=odiRef.getSession("SESS_NO")%> gets the current session number of the ODI run and appends it to the file name; so if the session number is 300, the file will be created as C:\temp\odi\input\employees_300.xml.

Let's put everything together in a package so that the interfaces run in sequence and produce the file.
Right-click on packages and select new package. Provide a name.
Select the diagram tab, drag and drop the region, country, location, department and employees interfaces in sequence, and connect them with ok for the success flow. Save the package.

Let's run the package and verify the results. Right-click on the package and execute. Select the global context and no agent and click ok. The package execution is successful.

Let's verify the file in the directory; it is created successfully. The file name also has the session number appended to it.

Let's open the file and verify the data. All the data is successfully produced in the file and it is a well-formed XML file.


With this we have completed the XML writing part. Since this is a big topic, I am separating reading from an XML file into a separate post. You can continue with the reading post here: Part2 - Read from xml file