Posts

Showing posts from 2009

ORA-00939: too many arguments - when CASE arguments exceed 255

Problem Description: In Oracle 10.1 and Oracle 10.2.0.1, 10.2.0.2 and 10.2.0.3, whenever there are more than 255 arguments inside a CASE expression the following error is returned:
ORA-00939: too many arguments for function
Surprisingly, the same code does not return any error on an Oracle 9.2 database.
Cause of the Problem: The same code raises an error in 10g but runs fine on 9.2. In fact this is a problem of Oracle 9.2: it should raise an error when a CASE expression has more than 255 arguments, but it does not. In Oracle 9.2 this is bug number 2909927, and the bug is fixed in 10.1 and 10.2, which is why those releases return the error when the arguments exceed 255. Up to Oracle 10.2.0.3, the number of arguments in a CASE expression is limited to 255, and exceeding that limit raises the error. In Oracle 10.2.0.4 and later the argument limit of a CASE expression has been increased to 32767.
Solution of the Problem: Opt…
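As a rough illustration (a minimal sketch, with hypothetical table and column names), every WHEN/THEN pair counts toward the argument limit, so one very large CASE can be split into smaller nested CASE expressions that each stay under the limit:
-- Instead of one huge CASE with hundreds of WHEN/THEN pairs, nest smaller ones:
SELECT CASE
         WHEN grade_code <= 100 THEN
           CASE grade_code WHEN 1 THEN 'A' WHEN 2 THEN 'B' /* ... up to 100 ... */ END
         ELSE
           CASE grade_code WHEN 101 THEN 'C' WHEN 102 THEN 'D' /* ... */ END
       END AS grade_letter
FROM   student_grades;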

How to determine the name of the trace file to be generated

In many cases we need to find out the name of the latest trace file generated in the USER_DUMP_DEST directory. What we usually do is go to USER_DUMP_DEST with the operating system browser, sort all the files by date and look for the latest one. We can remove this hassle easily if we know in advance what the trace file name will be. For example, issuing:
D:\>sqlplus / as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Wed Jun 10 21:58:15 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> alter database backup controlfile to trace;
Database altered.
The statement ALTER DATABASE BACKUP CONTROLFILE TO TRACE; will generate a trace file inside USER_DUMP_DEST. Let's see the location of USER_DUMP_DEST. If you use SQL*Plus then issue,
SQL> show parameter user_dump_dest
NAME TYPE …
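A commonly used sketch for predicting the trace file name of the current session is the query below; the file-name format it assumes ({instance_name}_ora_{spid}.trc under user_dump_dest) is the usual 10g default and can differ by platform and settings:
SQL> SELECT pa.value || '\' || i.instance_name || '_ora_' || p.spid || '.trc' AS trace_file_name
     FROM   v$parameter pa, v$instance i, v$process p, v$session s
     WHERE  pa.name = 'user_dump_dest'
     AND    p.addr  = s.paddr
     AND    s.sid   = (SELECT sid FROM v$mystat WHERE ROWNUM = 1);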

ORA-27100: shared memory realm already exists

However, if you merely see the ORA-27100 error upon startup then proceed with this post.
Problem Description: When you try to start up your database instance, even with startup nomount, ORA-27100 is reported.
SQL> startup nomount
ORA-27100: shared memory realm already exists
Similarly, if you try to shut down your Oracle instance, errors are reported.
SQL> shutdown immediate
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Restarting the service does not help either, and in some cases even rebooting the server does not help.
Current Changes in the Server: We had sga_max_size set to 600M and then changed it to 1G. After that, whenever we restart the Oracle database the above message appears.
Cause of the Problem: Some changes were made to sga_max_size or other SGA parameters, and afterwards the database was shut down abnormally. Whenever you then issue startup, ORA-27100 is received as a result of the changes made to parameters affecting the SGA size.
Solution of the Problem: In mo…
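As a hedged sketch (not necessarily the resolution the post goes on to describe), on Unix/Linux one common cleanup is to locate and remove the leftover shared memory segment before retrying the startup; on Windows the rough equivalent is restarting the OracleService<SID> service. SID and IDs below are hypothetical:
$ export ORACLE_SID=orcl                 # hypothetical instance name
$ $ORACLE_HOME/bin/sysresv               # lists shared memory / semaphore IDs held for this SID
$ ipcs -m                                # cross-check the orphaned shared memory segment
$ ipcrm -m <shmid>                       # remove it, then retry: sqlplus / as sysdba and startup nomount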

Oracle 10g New Features

The G stands for Grid Computing. A common misconception seems to be that grid is just the new name for RAC (an improved RAC). This is not the case: 10g comes with both RAC and grid, and one will be able to install 10g with RAC only, with grid only, with both, or with neither. There is a profound difference between grid and RAC. 10g is said to have 149 new features and provides a wealth of features that can be used to automate almost every aspect of its database administration. It is important to note that these automation features are optional and are not intended to replace standard DBA activities; rather, the Oracle 10g automation features are aimed at shops that do not have the manpower or expertise to perform the tasks manually.
Oracle Enhancements by Oracle Release
New utilities in Oracle 10g release 10.1.0:
• Oracle 10g Grid – RAC enhanced for Oracle 10g dynamic scalability with server blades (extra-cost option)
• Completely reworked 10g Enterprise Manager (OEM)
• …

RAC Design & Best Practices

RAC 10g R2 on Red Hat Linux detailed design and best practices. Details are in the link below: RAC 10g

How to cleanup orphaned datapump jobs from DBA_DATAPUMP_JOBS

In many cases you stop a data pump job, or the Data Pump job ends abnormally (an orphaned job), or the undocumented parameter KEEP_MASTER=Y is used, and the master table remains in the database. Although this topic is about cleaning up orphaned data pump jobs, it is good to know several things before doing the cleanup.
1) You can check for orphaned data pump jobs from the state column of the view dba_datapump_jobs; DBA_DATAPUMP_JOBS is based on gv$datapump_job, obj$, com$ and user$. Orphaned Data Pump jobs do not have an impact on new Data Pump jobs: if a new Data Pump job is started, a new entry is created which has no relation to the old jobs.
2) For a new data pump job started without a job name, a system-generated name is used. Existing data pump jobs are checked in dba_datapump_jobs to obtain a unique new system-generated job name.
3) Data pump jobs are different from DBMS_JOBS and they are maintained differently. Jobs created with DBMS_JOB…
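A minimal sketch of the usual cleanup, assuming the orphaned job is owned by SHAIK and named SYS_EXPORT_SCHEMA_01 (hypothetical names); for a job that still has sessions attached it is safer to first attach with expdp ATTACH= and issue KILL_JOB:
-- List data pump jobs that are not running (candidates for cleanup)
SQL> SELECT owner_name, job_name, state
     FROM   dba_datapump_jobs
     WHERE  state = 'NOT RUNNING';
-- The master table has the same name as the job; dropping it removes the orphaned entry
SQL> DROP TABLE shaik.SYS_EXPORT_SCHEMA_01 PURGE;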

ORA-39000, ORA-39143 dump file may be an original export dump file

Problem Description
E:\>impdp directory=test dumpfile=testexp_07_03_09.dmp userid=shaik/a
Import: Release 10.2.0.1.0 - Production on Thursday, 07 May, 2009 10:07:00
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-39143: dump file "E:\oracle\Test\testexp_07_03_09.dmp" may be an original export dump file
Cause of the Problem
The above problem happens whenever you try to use the Import Data Pump client (impdp) to import a dump file that was created with the original Export client (exp). A similar, though unrelated, error occurs whenever you try to import from an empty file:
E:\>impdp directory=test dumpfile=testexp_07_03_09.dmp userid=shaik/a
Import: Release 10.2.0.1.0 - Production on Thursday, 07 May, 2009 10:16:52
Copyright (c) 2003, 2005, Orac…
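A brief sketch of the standard resolution (the excerpt is cut off before it): a dump file written by the original exp client has to be imported with the original imp client rather than impdp. The log file name below is hypothetical:
E:\>imp shaik/a file=E:\oracle\Test\testexp_07_03_09.dmp full=y log=testimp.log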

Expdp fails with ORA-39001, ORA-39169, ORA-39006, ORA-39022

Problem Description
Connected to a local 10.2.0.1 database, whenever I want to take a data pump export from a remote 11g database machine using the NETWORK_LINK parameter, the data pump export fails with ORA-39001, ORA-39169 as below.
[oracle@localhost bin]$ ./expdp system/a NETWORK_LINK=maestro.net schemas=maximsg VERSION=10.2
Export: Release 10.2.0.1.0 - Production on Wednesday, 04 March, 2009 5:27:39
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39001: invalid argument value
ORA-39169: Local version of 10.2.0.1.0 cannot work with remote version of 11.1.0.6.0.
Similarly, connected to a local 10.1.0.* database, whenever I want to take a data pump export from a remote 11g database machine using the NETWORK_LINK parameter, the data pump export fails with
ORA-39006: internal error
ORA-39022: Database version 11.1.0.6.0 is not supported.
Doing expdp/impdp connected to a…
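A common workaround (an assumption here, since the excerpt is cut off before the resolution) is to run the Data Pump job on the 11g side itself, keeping VERSION=10.2 so the resulting dump file can still be imported into the 10.2 database. Host, credentials and file names below are hypothetical:
# Run the export directly against the 11g database instead of over the network link
[oracle@remote11g ~]$ expdp system/password schemas=maximsg VERSION=10.2 directory=DATA_PUMP_DIR dumpfile=maximsg_10_2.dmp logfile=maximsg_10_2.log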

How to get timing details on data pump processed objects

We can get the number of objects processed and the timing information needed to process object types in data pump jobs. We can achieve this by using the undocumented parameter METRICS: setting METRICS=y gives the timing details. An example is given below.
E:\Documents and Settings\shaik>expdp schemas=shaik userid=shaik/a dumpfile=shaik_30_04.dmp logfile=shaik_20_04.log metrics=y
Export: Release 10.2.0.1.0 - Production on Friday, 01 May, 2009 7:24:07
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "SHAIK"."SYS_EXPORT_SCHEMA_03": schemas=shaik userid=shaik/******** dumpfile=shaik_30_04.dmp logfile=shaik_20_04.log metrics=y
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 1024 KB
Processing obj…

Data Pump Process Architecture - Master Table, Worker Process

Data Pump jobs use a master table, a master process and worker processes to perform the work and keep track of the progress. For every Data Pump Export and Data Pump Import job a master process is created. The master process controls the entire job, including communicating with the clients, creating and controlling a pool of worker processes, and performing logging operations. Let's see an example.
1) Start a data pump job in one session.
expdp schemas=shaik userid=shaik/a dumpfile=shaik_30_04_2009.dmp logfile=arju_20_04_09.log
2) Query dba_datapump_sessions to know the data pump job status.
set lines 150 pages 100
col program for a20
col username for a5
col spid for a7
col job_name for a25
select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') "DATE", s.program, s.sid, s.status, s.username, d.job_name, p.spid, s.serial#, p.pid
from v$session s, v$process p, dba_datapump_sessions d
where p.addr=s.paddr and s.saddr=d.saddr;
DATE PROGRAM …
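As a small additional sketch: the master table is created in the schema of the user running the job and carries the same name as the job, so it can be looked at directly while the job runs. The job name below (SYS_EXPORT_SCHEMA_03) is only an example:
-- See the job, its mode and its current state
SQL> SELECT owner_name, job_name, operation, job_mode, state
     FROM   dba_datapump_jobs;
-- The master table itself can be queried like any other table
SQL> SELECT count(*) FROM shaik.SYS_EXPORT_SCHEMA_03;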

How to trace/diagnose Oracle data pump jobs

Whenever you issue impdp help=y or expdp help=y you see a list of parameters that can be used for Oracle data pump export/import jobs. You will not find any parameter there for tracing data pump jobs. But tracing a data pump job is an important issue when diagnosing incorrect behavior and/or troubleshooting Data Pump errors. The undocumented parameter TRACE is really useful for troubleshooting data pump jobs. Tracing of data pump is done with the TRACE parameter, which takes its value as a 7-digit hexadecimal number. Specifying the parameter value follows some rules. Out of the 7 hexadecimal digits,
- the first 3 digits enable tracing for a specific data pump component;
- the remaining 4 digits are usually 0300;
- specifying more than 7 hexadecimal digits is not allowed; doing so results in UDE-00014: invalid value for parameter, 'trace';
- specifying a leading 0x (hexadecimal specification characters) is not allowed;
- the value is to be specified in hexade…
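The excerpt is cut off before any concrete value; a commonly cited combination (an assumption here, not taken from the post) is TRACE=480300, which turns on tracing for the master control process and the worker processes. The resulting trace files for the DM/DW background processes are typically written to BACKGROUND_DUMP_DEST:
expdp shaik/a schemas=shaik dumpfile=shaik_trc.dmp logfile=shaik_trc.log TRACE=480300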

Remove duplicate successive line using uniq utility

With the uniq utility you can remove duplicate lines if they are successive. If there are successive identical lines in a file, uniq discards all but one of them. Consider a file with the following lines.
# vi student_data.txt
Roll number is 12345
His Name is SHAIK
His Name is SHAIK
He is 24 years old.
Roll number is 12345
Then using the uniq utility as below yields the following result.
# uniq student_data.txt
Roll number is 12345
His Name is SHAIK
He is 24 years old.
Roll number is 12345
Note that within the file there were two duplicated lines: one is "Roll number is 12345" and the other is "His Name is SHAIK". In the uniq output only the repeated "His Name is SHAIK" line is omitted, because those are successive identical lines. The "Roll number is 12345" line is not removed because its two occurrences are identical but not successive. So the uniq utility is used to remove adjacent identic…
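A short sketch worth adding here: if the duplicate lines are not adjacent, the usual trick is to sort the file first so identical lines become successive, then apply uniq (note that sorting changes the line order):
# sort student_data.txt | uniq        # removes all duplicate lines, adjacent or not
# sort -u student_data.txt            # equivalent shorthand
# uniq -c student_data.txt            # prefix each surviving line with its repeat count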

Edit file on linux using sed utility

With the sed utility you can edit text inside a file directly. There is no need to open the file in an editor and do the editing there. sed is a stream editor for filtering and transforming text. Below is my student_grade.txt.
# cat student_grade.txt
024401 4.98
024402 4.95
024403 4.95
024404 4.50
024405 4.95
Now using sed we will replace the first few digits of the student id, that is 0244, with "Roll:". The syntax for using sed is,
sed {expression} {file}
Now using sed we want to replace "0244" with "Roll:".
# sed '/0244/s//Roll:/g' student_grade.txt
Roll:01 4.98
Roll:02 4.95
Roll:03 4.95
Roll:04 4.50
Roll:05 4.95
Let's now understand the command. Within the single quotes, /0244/ indicates a search for the string 0244, s means substitute (replace), //Roll:/ means replace the matched word "0244" with "Roll:", and g means make the changes global.
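The command above only writes the changed text to standard output. As a minimal sketch, assuming GNU sed (which provides the -i flag), the file itself can be changed in place:
# sed -i 's/0244/Roll:/g' student_grade.txt       # edit the file in place
# sed -i.bak 's/0244/Roll:/g' student_grade.txt   # same, but keep a backup as student_grade.txt.bak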

Data manipulation using awk utility in linux

With the awk utility you can scan for a pattern within each record of a file and then process the record as you want. Suppose you have student_grade.txt as below. The first field is the student id and the second field is the grade; the fields are separated by a tab delimiter.
# cat student_grade.txt
024401 4.98
024402 4.95
024403 4.95
024404 4.50
024405 4.95
Now we want to find the student ids whose grade is 4.95. With awk we can do this: we search each record for the grade 4.95 and then print the first field. The syntax for using the awk utility is,
awk 'pattern {action}' file_name
Now we can extract the student ids whose grade is 4.95 with,
# awk '/4.95/{print $1}' student_grade.txt
024402
024403
024405
It searches for 4.95 within each record of the file student_grade.txt and then, with the action "print $1", prints the first field.
Meta characters used in awk: to search for a pattern in awk you can use various meta characters. The list of meta characters along with their meaning is…
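One caution worth a small sketch: /4.95/ is a regular expression, so it matches 4.95 anywhere in the record and the dot matches any character. Comparing the second field exactly is more precise and prints the same three ids (024402, 024403, 024405) for the data above:
# awk '$2 == "4.95" {print $1}' student_grade.txt     # exact comparison on the grade field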

Translate or replace characters using tr utility

With the tr command you can translate letters from uppercase to lowercase and vice versa. In other words, with tr you can replace one letter with another. The syntax for using tr is,
tr {source_pattern} {destination_pattern}
Each letter in source_pattern is replaced by the corresponding letter in destination_pattern. For example, if you write tr "a6" "7y" then, in the input, every "a" will be replaced by "7" and every "6" will be replaced by "y". Let's see an example. My names.txt looks like below.
# cat names.txt
momin
arju
bony
tany
azmeri
queen
tasreen
Now we want the letter "o" to be replaced by the number "0", the letter "i" to be replaced by the number "1", and the small letter "e" to be replaced by the capital letter "E".
# tr "oie" "01E" < names.txt
m0m1n
arju
b0ny
tany
azmEr1
quEEn
tasrEEn
We can also capitalize all letters inside names.txt with singl…
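The excerpt is cut off at that last point; as a sketch of the standard way to do it (not necessarily the exact command the post used), a single tr command can uppercase the whole file:
# tr "a-z" "A-Z" < names.txt          # or: tr '[:lower:]' '[:upper:]' < names.txt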

Join utility in linux

The paste utility merges two files line by line (line 1 of file1 is merged with line 1 of file2, and so on). The join utility, on the other hand, merges lines only when there is a common field in both files and the values of that field are identical. join does not work strictly line by line: it compares lines across the files looking for identical values. An example will make this clearer. We have the files id_age.txt and id_dept.txt with the data shown below; both join and paste are demonstrated.
# cat id_age.txt
024401 28
024402 26
024434 23
# cat id_dept.txt
024401 CIT
024434 CSE
024438 EEE
# paste id_age.txt id_dept.txt
024401 28 024401 CIT
024402 26 024434 CSE
024434 23 024438 EEE
# join id_age.txt id_dept.txt
024401 28 CIT
024434 23 CSE
Note that the id 024434 is on the 2nd line of id_dept.txt while it is on the 3rd line of id_age.txt, yet the merge is done successfully.
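One caveat to add as a sketch: join expects both files to be sorted on the join field (the files above already are). For unsorted files, assuming a bash shell is available:
# join <(sort id_age.txt) <(sort id_dept.txt)
# or sort the files in place first:
# sort -o id_age.txt id_age.txt && sort -o id_dept.txt id_dept.txt && join id_age.txt id_dept.txt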

getopts command in linux shell script

The getopts command in a shell script is used to check the valid command line arguments passed into the script. The syntax for using getopts inside a shell script is,
getopts {optstring} {variable_name}
From the manual page: "optstring contains the option letters to be recognized; if a letter is followed by a colon, the option is expected to have an argument, which should be separated from it by white space." Each time it is invoked, getopts places the next option in the shell variable variable_name. When an option requires an argument, getopts places that argument into the variable OPTARG. On errors, getopts prints diagnostic messages when illegal options or missing option arguments are encountered. If an illegal option is seen, getopts places ? into variable_name.
For example, you have a shell script named student_info that would be run by,
./student_info -i 024434 -a 23 -d CIT -s male
where student_info is the shell script name,
-i is used for the student id,
-a is used for age,
-…
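A minimal sketch of what such a student_info script could look like; the meanings of -i and -a follow the excerpt, while -d (department) and -s (sex) are assumptions since the excerpt is cut off:
#!/bin/bash
# Sketch of student_info using getopts (option meanings for -d and -s are assumed)
while getopts "i:a:d:s:" opt; do
  case "$opt" in
    i) id="$OPTARG" ;;        # student id
    a) age="$OPTARG" ;;       # age
    d) dept="$OPTARG" ;;      # assumed: department
    s) sex="$OPTARG" ;;       # assumed: sex
    \?) echo "Usage: $0 -i id -a age -d dept -s sex"; exit 1 ;;
  esac
done
echo "id=$id age=$age dept=$dept sex=$sex"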

All about Statistics

Introduction to Statistics: The CBO makes its explain-plan decisions based on statistics. Statistics provide critical input for the CBO to work properly; these include information such as the number of rows in a table, distinct values in a column, the number of leaf blocks in an index, etc. The more accurate the statistics, the more efficient the results provided by the optimizer. Statistics may be exact or estimated: statistics generated with a COMPUTE clause analyze all of the data in the object, which gives the optimizer accurate information to work with and arrive at a good execution plan; statistics generated with an ESTIMATE clause analyze data in the object only to the extent of the sample size mentioned. The sample size may be specified as a number of rows or a percentage of rows that should be randomly analyzed to generate the statistics. Optionally, block sampling may also be specified. Statistics are stored in data dictionary tables owned by the SYS user. The following views d…
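A brief sketch of the COMPUTE/ESTIMATE distinction described above, with a hypothetical table name (on recent releases DBMS_STATS is generally preferred over ANALYZE for optimizer statistics):
-- Exact statistics: read every row of the object
SQL> ANALYZE TABLE employees COMPUTE STATISTICS;
-- Estimated statistics: sample a subset of the rows
SQL> ANALYZE TABLE employees ESTIMATE STATISTICS SAMPLE 20 PERCENT;
-- DBMS_STATS equivalent, with block sampling enabled
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'EMPLOYEES', estimate_percent => 20, block_sample => TRUE);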