Posts

Showing posts from May 17, 2009

How to clean up orphaned datapump jobs from DBA_DATAPUMP_JOBS

In many cases the master table remains in the database: when you stop a Data Pump job, when the job ends abnormally (an orphaned job), or when the undocumented parameter KEEP_MASTER=Y is used. Although this topic is about cleaning up orphaned Data Pump jobs, it is good to know several things before doing the cleanup. 1) You can check for orphaned Data Pump jobs from the state column of the view dba_datapump_jobs; DBA_DATAPUMP_JOBS is based on gv$datapump_job, obj$, com$, and user$. Orphaned Data Pump jobs do not have an impact on new Data Pump jobs. If a new Data Pump job is started, a new entry is created, which has no relation to the old Data Pump jobs. 2) A new Data Pump job started without a job name gets a system-generated name: existing jobs are checked in dba_datapump_jobs and a unique new system-generated job name is obtained. 3) Data Pump jobs are different from DBMS_JOBS and they are maintained differently. Jobs created with DBMS_JOB...
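As a minimal sketch of the first check above (assuming a DBA connection; the formatting commands are only for readability), the cleanup candidates can be listed from dba_datapump_jobs like this:

sqlplus -s / as sysdba <<'EOF'
SET LINESIZE 150
COL owner_name FORMAT A15
COL job_name   FORMAT A25
COL state      FORMAT A15
-- jobs in NOT RUNNING state with no attached sessions are the usual cleanup candidates
SELECT owner_name, job_name, operation, job_mode, state, attached_sessions
FROM   dba_datapump_jobs
ORDER  BY owner_name, job_name;
EOF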

ORA-39000, ORA-39143 dump file may be an original export dump file

Problem Description E:\>impdp directory=test dumpfile=testexp_07_03_09.dmp userid=shaik/a Import: Release 10.2.0.1.0 - Production on Thursday, 07 May, 2009 10:07:00 Copyright (c) 2003, 2005, Oracle. All rights reserved. Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production With the Partitioning, OLAP and Data Mining options ORA-39001: invalid argument value ORA-39000: bad dump file specification ORA-39143: dump file "E:\oracle\Test\testexp_07_03_09.dmp" may be an original export dump file Cause of the problem The above error occurs whenever you try to use the Import Data Pump client (impdp) to import a dump file that was created with the original Export client (exp). A similar, though unrelated, error occurs whenever you try to import from an empty file: E:\>impdp directory=test dumpfile=testexp_07_03_09.dmp userid=shaik/a Import: Release 10.2.0.1.0 - Production on Thursday, 07 May, 2009 10:16:52 Copyright (c) 2003, 2005, Orac...
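Since the dump was produced by the original exp client, it must be loaded with the original imp client rather than impdp. A minimal sketch (the dump file path is the one from the error message; the credentials and import options are illustrative and depend on how the export was taken):

imp shaik/a file=E:\oracle\Test\testexp_07_03_09.dmp full=y log=imp_testexp.log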

Expdp fails with ORA-39001, ORA-39169, ORA-39006, ORA-39022

Problem Description Connecting to a local 10.2.0.1 database, whenever I try to take a Data Pump export of a remote 11g database using the NETWORK_LINK parameter, the export fails with ORA-39001 and ORA-39169 as shown below. [oracle@localhost bin]$ ./expdp system/a NETWORK_LINK=maestro.net schemas=maximsg VERSION=10.2 Export: Release 10.2.0.1.0 - Production on Wednesday, 04 March, 2009 5:27:39 Copyright (c) 2003, 2005, Oracle. All rights reserved. Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production With the Partitioning, OLAP and Data Mining options ORA-39001: invalid argument value ORA-39169: Local version of 10.2.0.1.0 cannot work with remote version of 11.1.0.6.0. Similarly, connecting to a local 10.1.0.* database and taking a Data Pump export of the remote 11g database over the NETWORK_LINK parameter fails with ORA-39006: internal error ORA-39022: Database version 11.1.0.6.0 is not supported. Doing expdp/impdp connected to a ...
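One commonly used workaround, sketched below under the assumption that an 11g client can connect directly to the 11g database (the connect string, directory, and file names are illustrative), is to run the export on the higher-version side and set VERSION so the resulting dump can later be imported into the 10.2 database:

expdp system/a@remote11g schemas=maximsg version=10.2 \
      directory=DATA_PUMP_DIR dumpfile=maximsg_v102.dmp logfile=maximsg_v102.log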

How to get timing details on data pump processed objects

We can get the number of objects processed and timing information needed to process object types in data pump jobs. We can achieve this goal by using the undocumented parameter METRICS. By setting parameter METRICS to y we can get timing details. An example is given below. E:\Documents and Settings\ shaik >expdp schemas=shaik userid= shaik /a dumpfile= shaik _30_04.dmp logfile= shaik _20_04.log metrics=y Export: Release 10.2.0.1.0 - Production on Friday, 01 May, 2009 7:24:07 Copyright (c) 2003, 2005, Oracle. All rights reserved. Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production With the Partitioning, OLAP and Data Mining options Starting "SHAIK"."SYS_EXPORT_SCHEMA_03": schemas= shaik userid= shaik /******** dumpfile= shaik _30_04.dmp logfile= shaik _20_04.lo g metrics=y Estimate in progress using BLOCKS method... Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA Total estimation using BLOCKS method: 1024 KB Processing obj...
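A cleaned-up version of the same command (credentials and file names are just the ones from the excerpt, so the default directory is used) is simply the ordinary expdp call with metrics=y appended:

expdp shaik/a schemas=shaik dumpfile=shaik_30_04.dmp logfile=shaik_20_04.log metrics=y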

Data Pump Process Architecture - Master Table, Worker Process

Data Pump jobs use a master table, a master process, and worker processes to perform the work and keep track of progress. For every Data Pump Export and Data Pump Import job, a master process is created. The master process controls the entire job, including communicating with the clients, creating and controlling a pool of worker processes, and performing logging operations. Let's see an example. 1) Start a Data Pump job in one session: expdp schemas=shaik userid=shaik/a dumpfile=shaik_30_04_2009.dmp logfile=arju_20_04_09.log 2) Query dba_datapump_sessions to see the Data Pump job status: set lines 150 pages 100 col program for a20 col username for a5 col spid for a7 col job_name for a25 select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') "DATE", s.program, s.sid, s.status, s.username, d.job_name, p.spid, s.serial#, p.pid from v$session s, v$process p, dba_datapump_sessions d where p.addr=s.paddr and s.saddr=d.saddr; DATE PROGRAM ...
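For readability, here is the same monitoring query as a small sqlplus sketch (assuming a DBA connection; it is the query from the excerpt, only reformatted):

sqlplus -s / as sysdba <<'EOF'
SET LINES 150 PAGES 100
COL program  FORMAT A20
COL username FORMAT A5
COL spid     FORMAT A7
COL job_name FORMAT A25
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS') "DATE",
       s.program, s.sid, s.status, s.username,
       d.job_name, p.spid, s.serial#, p.pid
FROM   v$session s, v$process p, dba_datapump_sessions d
WHERE  p.addr  = s.paddr
AND    s.saddr = d.saddr;
EOF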

How to trace/diagnose Oracle data pump jobs

Whenever you issue impdp help=y or expdp help=y you see a list of parameters that can be used for Oracle Data Pump export/import jobs. You will not find any parameter there with which you can trace Data Pump jobs. But tracing a Data Pump job is important when diagnosing incorrect behavior and/or troubleshooting Data Pump errors. The undocumented parameter TRACE is really useful for troubleshooting Data Pump jobs. The parameter takes a 7-digit hexadecimal number as its value, and specifying the value follows some rules. Of the 7 hexadecimal digits, the first 3 enable tracing for a specific Data Pump component, and the remaining 4 are usually 0300. Specifying more than 7 hexadecimal digits is not allowed; doing so results in UDE-00014: invalid value for parameter, 'trace'. Specifying the leading 0x (hexadecimal specification characters) is not allowed. The value is to be specified in hexade...
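As a minimal sketch (credentials and file names are illustrative), a commonly used value is 480300, which enables tracing of the Master Control Process and the Worker processes:

expdp shaik/a schemas=shaik dumpfile=shaik_trace.dmp logfile=shaik_trace.log trace=480300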

Remove duplicate successive lines using uniq utility

With the uniq utility you can remove duplicate lines if they are successive. If the file contains successive identical lines, uniq discards all but one of them. Consider that you have the following lines in a file. # vi student_data.txt Roll number is 12345 His Name is SHAIK His Name is SHAIK He is 24 years old. Roll number is 12345 Then using the uniq utility as below yields the following result. # uniq student_data.txt Roll number is 12345 His Name is SHAIK He is 24 years old. Roll number is 12345 Note that within the file there were two duplicated lines: one is "Roll number is 12345" and the other is "His Name is SHAIK". In the uniq output only the duplicate "His Name is SHAIK" line is omitted, because those are successive identical lines. The "Roll number is 12345" line is not removed because its two occurrences are identical but not successive. So the uniq utility is used to remove adjacent identic...
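Laid out as a minimal sketch of the same session (file contents as described above), the before and after look like this:

# cat student_data.txt
Roll number is 12345
His Name is SHAIK
His Name is SHAIK
He is 24 years old.
Roll number is 12345
# uniq student_data.txt
Roll number is 12345
His Name is SHAIK
He is 24 years old.
Roll number is 12345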

Edit file on Linux using sed utility

With the sed utility you can edit text inside a file directly. There is no need to open the file with an editor and then do the editing. sed is a stream editor for filtering and transforming text. Below is my student_grade.txt # cat student_grade.txt 024401 4.98 024402 4.95 024403 4.95 024404 4.50 024405 4.95 Now using the sed utility we will replace the first few digits of the student id, that is 0244, with "Roll:". The syntax of the sed utility is, sed {expression} {file} So to replace "0244" with "Roll:" we run # sed '/0244/s//Roll:/g' student_grade.txt Roll:01 4.98 Roll:02 4.95 Roll:03 4.95 Roll:04 4.50 Roll:05 4.95 Let's now look at the command. Within the single quotes, /0244/ means search for the string "0244"; s means substitute or replace; //Roll:/ means replace the matched "0244" with "Roll:"; g means make the changes global.
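For reference, the same substitution written in the more common s/pattern/replacement/ form produces identical output (a minimal sketch using the file shown above):

# sed 's/0244/Roll:/g' student_grade.txt
Roll:01 4.98
Roll:02 4.95
Roll:03 4.95
Roll:04 4.50
Roll:05 4.95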

Data manipulation using awk utility in Linux

With the awk utility you can scan for a pattern within each record of a file and then process the record as you want. Suppose you have the student_grade.txt shown below. The first field is the student id and the second field is the grade; the fields are separated by a tab delimiter. # cat student_grade.txt 024401 4.98 024402 4.95 024403 4.95 024404 4.50 024405 4.95 Now we want to find the student ids whose grade is 4.95. With the awk utility we can do this: we search each record for the grade 4.95 and then print the first field. The syntax of the awk utility is, awk 'pattern {action}' {file_name} We can extract the student ids whose grade is 4.95 by, # awk '/4.95/{print $1}' student_grade.txt 024402 024403 024405 It searches for 4.95 within each record of the file student_grade.txt and then, via the command "print $1", prints the first field. Meta characters used in awk To search for a pattern in awk you can use various meta characters. The list of meta characters along with their meaning is...
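A minimal sketch of the same search, using the student_grade.txt shown above (the dot in /4.95/ is a regular-expression metacharacter, so escaping it makes the match literal):

# awk '/4\.95/{print $1}' student_grade.txt
024402
024403
024405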

Translate or replace characters using tr utility

With the tr command you can translate letters from uppercase to lowercase and vice versa. In other words, with tr you can replace one character with another. The syntax of the tr command is, tr {source_pattern} {destination_pattern} Each letter in source_pattern is replaced by the corresponding letter in destination_pattern. For example, if you write tr "a6" "7y", then in the input every "a" is replaced by "7" and every "6" is replaced by "y". Note that tr reads from standard input, so a file is supplied with input redirection. Let's see an example. My names.txt looks like below. # cat names.txt momin arju bony tany azmeri queen tasreen Now we want the letter "o" to be replaced by the digit "0", the letter "i" to be replaced by the digit "1", and the small letter "e" to be replaced by the capital letter "E". # tr "oie" "01E" < names.txt m0m1n arju b0ny tany azmEr1 quEEn tasrEEn We can also capitalize all letters inside names.txt with singl...
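Laid out as a minimal sketch of the same run (names.txt redirected into tr, one name per line as in the file):

# tr "oie" "01E" < names.txt
m0m1n
arju
b0ny
tany
azmEr1
quEEn
tasrEEn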

Join utility in Linux

The paste utility merges two files line by line (line 1 of file1 is merged with line 1 of file2, and so on). The join utility, by contrast, is used to merge lines only when both files have a common field with identical values. join does not work purely line by line; it compares lines across the two files looking for identical join-field values. An example will make this clearer. We have the files id_age.txt and id_dept.txt, whose data are shown below, followed by both the paste and join output. # cat id_age.txt 024401 28 024402 26 024434 23 # cat id_dept.txt 024401 CIT 024434 CSE 024438 EEE # paste id_age.txt id_dept.txt 024401 28 024401 CIT 024402 26 024434 CSE 024434 23 024438 EEE # join id_age.txt id_dept.txt 024401 28 CIT 024434 23 CSE Note that the id 024434 is on the 2nd line of id_dept.txt while it is on the 3rd line of id_age.txt, yet the merge is still done correctly.
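The same session as a minimal sketch (both files are already sorted on the join field, which join requires):

# paste id_age.txt id_dept.txt
024401 28    024401 CIT
024402 26    024434 CSE
024434 23    024438 EEE
# join id_age.txt id_dept.txt
024401 28 CIT
024434 23 CSE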

getopts command in Linux shell script

The getopts command in a shell script is used to check the valid command line arguments passed to the script. The syntax of using getopts inside a shell script is, getopts {optstring} {variable_name} From the manual page: "optstring contains the option letters to be recognized; if a letter is followed by a colon, the option is expected to have an argument, which should be separated from it by white space. Each time it is invoked, getopts places the next option in the shell variable variable_name. When an option requires an argument, getopts places that argument into the variable OPTARG. On errors, diagnostic messages are printed when illegal options or missing option arguments are encountered. If an illegal option is seen, getopts places ? into variable_name." For example, you have a shell script named student_info that would be run by ./student_info -i 024434 -a 23 -d CIT -s male where student_info is the shell script name. -i is used for the student id. -a is used for age. -...
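A minimal sketch of such a script, using the option letters from the example (the variable names and the final echo are illustrative):

#!/bin/bash
# student_info: parse -i id, -a age, -d dept, -s sex with getopts
while getopts "i:a:d:s:" opt; do
  case "$opt" in
    i) id="$OPTARG" ;;       # student id
    a) age="$OPTARG" ;;      # age
    d) dept="$OPTARG" ;;     # department
    s) sex="$OPTARG" ;;      # sex
    \?) echo "Usage: $0 -i id -a age -d dept -s sex" >&2; exit 1 ;;
  esac
done
echo "id=$id age=$age dept=$dept sex=$sex"

Running ./student_info -i 024434 -a 23 -d CIT -s male would then print id=024434 age=23 dept=CIT sex=male.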