World of Oracle DBA (khaleel shaik, http://www.blogger.com/profile/07051406652176384225)<br /><br /><span style="font-weight: bold;"><u>ORA-00939: too many arguments - when CASE arguments exceed 255</u></span> (2009-08-31)<br /><br /><span style="font-weight: bold;"><u>Problem Description</u></span><br />In Oracle 10.1 and in Oracle 10.2.0.1, 10.2.0.2 and 10.2.0.3, whenever there are more than 255 arguments inside a CASE statement, the following error is returned:<br /><br />ORA-00939: too many arguments for function<br /><br />Surprisingly, on an Oracle 9.2 database the same code does not return any error.<br /><br /><span style="font-weight: bold;"><u>Cause of the Problem</u></span><br />The same code returns an error in 10g but runs fine on 9.2. In fact the problem is in Oracle 9.2: it should return an error when there are more than 255 arguments inside a CASE statement, but it does not. In Oracle 9.2 this is bug number 2909927, and the bug is fixed in 10.1 and 10.2. That is why 10.1 and 10.2 return an error when the arguments exceed 255.<br /><br />Up to Oracle 10.2.0.3, the number of arguments in a CASE statement is limited to 255, and an error is returned if the limit is exceeded. In Oracle 10.2.0.4 and later, the argument limit of a CASE statement has been increased to 32767.<br /><br /><span style="font-weight: bold;"><u>Solution of the Problem</u></span><br /><span style="font-weight: bold;">Option 01:</span><br />-If you are on Oracle 10.2.0.3 or earlier, enforce a restriction so that the arguments of a CASE statement do not exceed 255.<br /><br />-Another thing to keep in mind is that CASE statements can be nested.
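For example, a long chain of WHEN branches can be split so that no single CASE expression carries more than 255 arguments. A minimal sketch, with hypothetical table and column names:

```sql
-- A single flat CASE with hundreds of WHEN branches would exceed the
-- 255-argument limit on 10.2.0.3 and earlier:
--   CASE region_code WHEN 1 THEN 'North' WHEN 2 THEN 'South' ... END
--
-- Pushing the later branches into a nested CASE inside the ELSE part
-- keeps each individual CASE under the limit:
SELECT CASE
         WHEN region_code <= 100 THEN
           CASE region_code
             WHEN 1 THEN 'North'
             WHEN 2 THEN 'South'
             -- ... up to the limit of this inner CASE
           END
         ELSE
           CASE region_code
             WHEN 101 THEN 'East'
             WHEN 102 THEN 'West'
             -- ... remaining branches
           END
       END AS region_name
  FROM sales;
```

Each nested CASE is counted separately, so the outer expression can fan out over as many inner CASE expressions as needed.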
So if possible you can nest CASE statements and keep your application working.<br /><br /><span style="font-weight: bold;">Option 02:</span><br />Upgrade to Oracle Database 10.2.0.4 or higher, where the argument limit of a CASE statement has been increased to 32767.<br /><br /><span style="font-weight: bold;"><u>How to determine the name of the trace file to be generated</u></span> (2009-08-31)<br /><br />In many cases we need to find out the name of the latest trace file generated in the USER_DUMP_DEST directory. What we usually do is go to USER_DUMP_DEST with the operating system browser, sort all the files by date, and look for the latest files. We can remove this hassle easily if we know the trace file name in advance.<br /><br />For example, issuing,<br /><span style="font-weight: bold;">D:\&gt;sqlplus / as sysdba<br /></span><br />SQL*Plus: Release 10.2.0.1.0 - Production on Wed Jun 10 21:58:15 2009<br /><br />Copyright (c) 1982, 2005, Oracle. All rights reserved.<br /><br /><br />Connected to:<br />Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production<br />With the Partitioning, OLAP and Data Mining options<br /><br /><span style="font-weight: bold;">SQL> alter database backup controlfile to trace;<br /></span><br />Database altered.<br /><br />The ALTER DATABASE BACKUP CONTROLFILE TO TRACE command will generate a trace file inside USER_DUMP_DEST.
Let's see the location of USER_DUMP_EST.<br /><br />If you use SQL*Plus then issue,<br /><span style="font-weight: bold;">SQL> show parameter user_dump_dest<br /></span><br />NAME TYPE VALUE<br />------------------------------------ ----------- ------------------------------<br />user_dump_dest string E:\ORACLE\PRODUCT\10.2.0\ADMIN<br /> \ORCL\UDUMP<br /><br />Or query v$parameter,<br /><span style="font-weight: bold;">SQL> select value from v$parameter where name='user_dump_dest';<br /></span><br />VALUE<br />-------------------------------------------------------------------<br />E:\ORACLE\PRODUCT\10.2.0\ADMIN\ORCL\UDUMP<br /><br />On Windows, change to the directory E:\ORACLE\PRODUCT\10.2.0\ADMIN\ORCL\UDUMP and issue,<br />E:\oracle\product\10.2.0\admin\orcl\udump>dir /OD<br /><br />The files listed last are the latest traces.<br /><br />This task becomes quite easy if we know in advance the name of the trace file that the ALTER DATABASE command will generate.<br /><br />We can get the trace file name in advance with,<br /><pre style="font-size: 12px; color: black;"><br /><span style="font-weight: bold;">SQL> set linesize 130<br />SQL> COL trace_file FOR A60<br />SQL> SELECT s.sid,<br /> s.serial#,<br /> pa.value || '\' || LOWER(SYS_CONTEXT('userenv','instance_name')) ||<br /> '_ora_' || p.spid || '.trc' AS trace_file<br /> FROM v$session s,<br /> v$process p,<br /> v$parameter pa<br /> WHERE pa.name = 'user_dump_dest'<br /> AND s.paddr = p.addr<br /> AND s.audsid = SYS_CONTEXT('USERENV', 'SESSIONID');</span><br /><br /> SID SERIAL# TRACE_FILE<br />---------- ---------- ------------------------------------------------------------<br /> 146 11 E:\ORACLE\PRODUCT\10.2.0\ADMIN\ORCL\UDUMP\orcl_ora_4968.trc<br /></pre><br />So the trace file to be generated now will be named orcl_ora_4968.trc.<br /><br />Issuing "alter database backup controlfile to trace" will therefore generate the file E:\ORACLE\PRODUCT\10.2.0\ADMIN\ORCL\UDUMP\orcl_ora_4968.trc.<br /><br />On Windows you can
issue like this,<br /><span style="font-weight: bold;">SQL> alter database backup controlfile to trace;</span><br /><br />Database altered.<br /><br /><span style="font-weight: bold;">SQL> host notepad E:\ORACLE\PRODUCT\10.2.0\ADMIN\ORCL\UDUMP\orcl_ora_4968.trc</span><br /><br /><span style="font-weight: bold;"><u>ORA-27100: shared memory realm already exists</u></span> (2009-08-31)<br /><br />If you see only the ORA-27100 error upon startup, then proceed with this post.<br /><br /><span style="font-weight: bold;"><u>Problem Description</u></span><br />When you try to start your database instance, ORA-27100 is reported even if you issue only STARTUP NOMOUNT.<br /><br /><span style="font-weight: bold;">SQL> startup nomount</span><br />ORA-27100: shared memory realm already exists<br /><br />Similarly, if you try to shut down your Oracle instance, ORA-27101 is reported.<br /><br /><span style="font-weight: bold;">SQL> shutdown immediate</span><br />ORA-01034: ORACLE not available<br />ORA-27101: shared memory realm does not exist<br /><br />Restarting the service does not help either, and in some cases even rebooting the server does not help.<br /><br /><span style="font-weight: bold;"><u>Current Changes in the Server</u></span><br />We had sga_max_size set to 600M and then changed it to 1G. After that, the above message appeared whenever we restarted the database.<br /><br /><span style="font-weight: bold;"><u>Cause of the Problem</u></span><br />Some changes were made to sga_max_size or other SGA parameters, and after that the database was shut down abnormally. Whenever you then issue STARTUP, ORA-27100 is received as a result of the changes made to the parameters affecting the SGA size.<br /><br /><span style="font-weight: bold;"><u>Solution of the Problem</u></span><br />In most cases, ORA-27100 can be resolved by simply restarting Windows.
However, if availability is critical and you cannot afford the time to restart Windows, then:<br />- Move/rename the database spfile so that the next time the database service starts it can fall back to the old pfile.<br /><br />- Start the database service. Make sure the pfile exists with the old SGA parameter settings.<br /><br />- The service will now start the database using the pfile, with the old SGA settings, and you have started your database successfully.<br /><br />- Create a new spfile from the pfile.<br /><br /><span style="font-weight: bold;"><u>Oracle 10g New Features</u></span> (2009-06-18)<br /><br />The "g" stands for Grid Computing. A common misconception seems to be that grid is just the new name for RAC (an improved RAC). This is not the case. 10g comes with both RAC and grid: you can install 10g with RAC only, with grid only, with both, or with neither. There is a profound difference between grid and RAC. 10g is said to have 149 new features, and it provides a wealth of features that can be used to automate almost every aspect of database administration. It is important to note that these automation features are optional; they are not intended to replace standard DBA activities.
Rather, the Oracle10g automation features are aimed at shops that do not have the manpower or expertise to perform the tasks manually.<br /><br />Oracle Enhancements by Oracle Release<br />New Utilities in Oracle10g release 10.1.0:<br />• Oracle10g Grid – RAC enhanced for Oracle10g dynamic scalability with server blades<b> (extra-cost option)</b><br />• Completely reworked 10g Enterprise Manager (OEM)<br />• AWR and ASH tables incorporated into OEM Performance Pack and Diagnostic Pack options<b> (extra-cost option)</b><br />• Active Session History (ASH) materializes the Oracle Wait Interface over time <b>(extra-cost option)</b><br />• Data Pump replaces the imp utility with impdp<br />• Automatic Database Diagnostic Monitor (ADDM) <b>(extra-cost option)</b><br />• Automatic Storage Management (ASM) introduced the Stripe And Mirror Everywhere (SAME) standard<br />• Automatic Workload Repository (AWR) replaces STATSPACK <b>(extra-cost option)</b><br />• SQL Tuning Advisor<br />• SQL Access Advisor<br />• Rolling database upgrades (using Oracle10g RAC)<br />• dbms_scheduler package replaces dbms_job for scheduling<br />• Set Database Default Tablespace syntax<br />• Rename Tablespace command<br />• Introduced the RECYCLEBIN for storing objects before they are dropped, with the new PURGE command for maintenance<br />• sqlplus / as sysdba accessibility without quote marks<br />• SYSAUX tablespace<br />• Multiple temporary tablespaces supported to reduce stress on sorting in TEMP<br />• RMAN introduces compression for backups<br />• New drop database syntax<br />• New alter database begin backup and alter database end backup syntax.
You don't need to specify the tablespaces one by one!<br />• Oracle10g Data Guard Broker introduced<br />• Oracle10g RAC supports secure redo log transport<br />• Flashback enhancements for flashback database and flashback table syntax<br />• SQL Apply feature<br />• Cross-platform transportable tablespaces<br />• External Table unload utility<br />• SQL regular expression support with the evaluate syntax<br />• New ROW TIMESTAMP column<br />• Automatic database tuning of checkpoints, undo segments and shared memory<br />• Automated invoking of dbms_stats for CBO statistics collection<br />• Oracle Application Builder supports HTML DB<br />• Browser-based Data Workshop and SQL Workshop<br />• PL/SQL compiler enhanced for compile-time warnings in utl_mail and utl_compress<br /><br />Click the following link for full information: <span style="font-size:130%;"><a href="http://www.mediafire.com/?yny4mtzzgnm">Oracle 10g </a></span><br /><br /><span style="font-weight: bold;"><u>RAC Design & Best Practices</u></span> (2009-06-18)<br /><br />RAC 10g R2 on Red Hat Linux detailed design and best practices.<br /><br />Details in the link below:<br /><br /><span style="font-size:180%;"><a href="http://www.mediafire.com/?m4zyzn2yoyw">RAC 10g</a></span><br /><br /><span style="font-weight: bold;"><u>How to cleanup orphaned datapump jobs from DBA_DATAPUMP_JOBS</u></span> (2009-05-19)<br /><br />In many cases, when you stop a Data Pump job, when the job ends abnormally (an orphaned job), or when you use the undocumented parameter KEEP_MASTER=Y, the master table remains in the database.<br /><br />This topic is about cleaning up orphaned Data Pump jobs.
But it is good to know several things before doing the cleanup.<br /><br />1) You can check for orphaned Data Pump jobs from the state column of the view dba_datapump_jobs; DBA_DATAPUMP_JOBS is based on gv$datapump_job, obj$, com$, and user$. Orphaned Data Pump jobs do not have an impact on new Data Pump jobs. If a new Data Pump job is started, a new entry will be created, which has no relation to the old Data Pump jobs.<br /><br />2) A new Data Pump job started without an explicit job name gets a system-generated name. The existing jobs in dba_datapump_jobs are checked in order to obtain a unique new system-generated job name.<br /><br />3) Data Pump jobs are different from DBMS_JOBS and they are maintained differently. Jobs created with DBMS_JOBS use their own processes. Data Pump jobs use a master process and worker process(es).<br /><br />4) What happens if you drop the master table while a Data Pump export or import operation is running is discussed below.<br /><br />In case of export, if you drop the master table then the export process will abort.<br /><br />In case of import, if you drop the master table then the import process will abort, which leads to an incomplete import.<br /><br />If the Data Pump job has completed and the master table still exists (common if you run an export with KEEP_MASTER=y), then it is safe to drop the master table.<br /><br />Step by step cleanup of orphaned Data Pump jobs is discussed below.<br /><br /><span style="font-weight: bold;">Step 01:</span> Check the orphaned Data Pump jobs.<br /><pre style="font-size: 12px; color: black;"><br />sqlplus / as sysdba<br />SET lines 140<br />COL owner_name FORMAT a10;<br />COL job_name FORMAT a20<br />COL state FORMAT a12<br />COL operation LIKE owner_name<br />COL job_mode LIKE owner_name<br />SELECT owner_name, job_name, operation, job_mode,<br />state, attached_sessions<br />FROM dba_datapump_jobs;<br /><br />OWNER_NAME JOB_NAME OPERATION JOB_MODE STATE 
ATTACHED_SESSIONS<br />---------- -------------------- ---------- ---------- ------------ -----------------<br />SHAIK SYS_EXPORT_SCHEMA_01 EXPORT SCHEMA NOT RUNNING 0<br />SHAIK SYS_EXPORT_SCHEMA_02 EXPORT SCHEMA NOT RUNNING 0<br /></pre><br /><span style="font-weight: bold;">Step 02:</span> Check the state field. For orphaned jobs the state will be NOT RUNNING. So from the output we can say both are orphaned jobs.<br /><br /><span style="font-weight: bold;">Step 03:</span> Drop the master tables.<br />DROP TABLE SHAIK.SYS_EXPORT_SCHEMA_01;<br />DROP TABLE SHAIK.SYS_EXPORT_SCHEMA_02;<br /><br /><span style="font-weight: bold;">Step 04:</span> Check for existing Data Pump jobs with the query issued in step 01. If the objects are in the recycle bin, then purge them from the recycle bin.<br /><pre style="font-size: 12px; color: black;"><br />SQL> SELECT owner_name, job_name, operation, job_mode,<br /> state, attached_sessions<br /> FROM dba_datapump_jobs;<br /><br />OWNER_NAME JOB_NAME OPERATION JOB_MODE STATE ATTACHED_SESSIONS<br />---------- -------------------- ---------- ---------- ------------ -----------------<br />SHAIK BIN$xMNQdACzQ6yl22kj EXPORT SCHEMA NOT RUNNING 0<br /> 9U0B8A==$0<br />SHAIK BIN$BmUy4r5MSX+ojxFk EXPORT SCHEMA NOT RUNNING 0<br /> sw8ocg==$0<br /><br />SQL> PURGE TABLE SHAIK.SYS_EXPORT_SCHEMA_01;<br /><br />Table purged.<br /><br />SQL> PURGE TABLE SHAIK.SYS_EXPORT_SCHEMA_02;<br /><br />Table purged.<br /></pre><br />Check again whether there are any orphaned jobs.<br />SQL> SELECT owner_name, job_name, operation, job_mode,<br /> state, attached_sessions<br /> FROM dba_datapump_jobs;<br /><br />no rows selected<br /><br /><span style="font-weight: bold;">Step 05:</span> At this stage there should be no orphaned jobs left, provided the jobs had a master table.
If there are still jobs listed in dba_datapump_jobs, then run a cleanup like the one below.<br /><pre style="font-size: 12px; color: black;"><br />SET serveroutput on<br />SET lines 100<br />DECLARE<br /> job1 NUMBER;<br />BEGIN<br /> job1 := DBMS_DATAPUMP.ATTACH('SYS_EXPORT_SCHEMA_01','SHAIK');<br /> DBMS_DATAPUMP.STOP_JOB (job1);<br />END;<br />/<br /><br />DECLARE<br /> job2 NUMBER;<br />BEGIN<br /> job2 := DBMS_DATAPUMP.ATTACH('SYS_EXPORT_SCHEMA_02','SHAIK');<br /> DBMS_DATAPUMP.STOP_JOB (job2);<br />END;<br />/</pre><br /><br /><span style="font-weight: bold;"><u>ORA-39000, ORA-39143 dump file may be an original export dump file</u></span> (2009-05-19)<br /><br /><span style="font-weight: bold;"><u>Problem Description</u></span><br />E:\>impdp directory=test dumpfile=testexp_07_03_09.dmp userid=shaik/a<br /><br />Import: Release 10.2.0.1.0 - Production on Thursday, 07 May, 2009 10:07:00<br /><br />Copyright (c) 2003, 2005, Oracle. 
All rights reserved.<br /><br />Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production<br />With the Partitioning, OLAP and Data Mining options<br />ORA-39001: invalid argument value<br />ORA-39000: bad dump file specification<br />ORA-39143: dump file "E:\oracle\Test\testexp_07_03_09.dmp" may be an original export dump file<br /><br /><span style="font-weight: bold;"><u>Cause of the problem</u></span><br />The above problem happens whenever you try to use the Data Pump import client (impdp) to import a dump file that was created with the original export client (exp).<br /><br />Though not directly related, similar errors can occur in other situations.<br />For example, whenever you try to import from an empty file,<br /><span style="font-weight: bold;">E:\>impdp directory=test dumpfile=testexp_07_03_09.dmp userid=shaik/a</span><br /><br />Import: Release 10.2.0.1.0 - Production on Thursday, 07 May, 2009 10:16:52<br /><br />Copyright (c) 2003, 2005, Oracle. All rights reserved.<br /><br />Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production<br />With the Partitioning, OLAP and Data Mining options<br />ORA-39001: invalid argument value<br />ORA-39000: bad dump file specification<br />ORA-31619: invalid dump file "E:\oracle\Test\testexp_07_03_09.dmp"<br />ORA-27091: unable to queue I/O<br />ORA-27070: async read/write failed<br />OSD-04006: ReadFile() failure, unable to read from file<br />O/S-Error: (OS 38) Reached the end of the file.<br /><br />Whenever you try to import from an invalid or corrupted dump file, you may get the following error,<br /><br /><span style="font-weight: bold;">E:\>impdp directory=test dumpfile=testexp_07_03_09.dmp userid=shaik/a</span><br /><br />Import: Release 10.2.0.1.0 - Production on Thursday, 07 May, 2009 10:17:23<br /><br />Copyright (c) 2003, 2005, Oracle. 
All rights reserved.<br /><br />Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production<br />With the Partitioning, OLAP and Data Mining options<br />ORA-39001: invalid argument value<br />ORA-39000: bad dump file specification<br />ORA-31619: invalid dump file "E:\oracle\Test\testexp_07_03_09.dmp"<br /><br /><span style="font-weight: bold;"><u>Solution of the Problem</u></span><br />It is obvious that,<br /><span style="font-weight: bold;">-The Data Pump import client (impdp) can't read a file created by the original export client (exp).</span><br /><br />and,<br /><br /><span style="font-weight: bold;">-Data exported by the Data Pump export client (expdp) can't be read by the original import client (imp).</span><br /><br />So to import a dump file that was created with the original export client (exp), you must use imp.<br /><br />Like,<br /><span style="font-weight: bold;">imp userid=shaik/a file=e:\oracle\testexp_07_03_09.dmp</span><br /><br />Otherwise, re-export with expdp and then use impdp for the import.<br /><br /><span style="font-weight: bold;"><u>Expdp fails with ORA-39001, ORA-39169, ORA-39006, ORA-39022</u></span> (2009-05-19)<br /><br /><span style="font-weight: bold;"><u>Problem Description</u></span><br />Connecting to a local 10.2.0.1 database, whenever I try to take a Data Pump export from a remote 11g database using the NETWORK_LINK parameter, the export fails with ORA-39001 and ORA-39169 as below.<br />[oracle@localhost bin]$ ./expdp system/a NETWORK_LINK=maestro.net schemas=maximsg VERSION=10.2<br /><br />Export: Release 10.2.0.1.0 - Production on Wednesday, 04 March, 2009 5:27:39<br /><br />Copyright (c) 2003, 2005, Oracle. 
All rights reserved.<br /><br />Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production<br />With the Partitioning, OLAP and Data Mining options<br />ORA-39001: invalid argument value<br />ORA-39169: Local version of 10.2.0.1.0 cannot work with remote version of 11.1.0.6.0.<br /><br />Similarly, connecting to a local 10.1.0.* database, a Data Pump export from a remote 11g database using the NETWORK_LINK parameter fails with<br />ORA-39006: internal error<br />ORA-39022: Database version 11.1.0.6.0 is not supported.<br /><br />Doing expdp/impdp connected to a local 10.2.0.3 database fails with,<br /><br />ORA-39127: unexpected error from call to export_string <br />:=SYS.DBMS_CUBE_EXP.schema_info_exp('SHAIK',1,1,'11.01.00.00.00',newblock)<br />ORA-37118: The OLAP API library was not preloaded.<br />ORA-06512: at "SYS.DBMS_CUBE_EXP", line 205<br />ORA-06512: at "SYS.DBMS_CUBE_EXP", line 280<br />ORA-06512: at line 1<br />ORA-06512: at "SYS.DBMS_METADATA", line 5980<br />ORA-39125: Worker unexpected fatal error in KUPW$WORKER.UNLOAD_METADATA <br />while calling DBMS_METADATA.FETCH_XML_CLOB [TABLE_STATISTICS]<br />ORA-06502: PL/SQL: numeric or value error: invalid LOB locator specified: <br />ORA-22275 <br />ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105<br />ORA-06512: at "SYS.KUPW$WORKER", line 6234<br /><br />Doing expdp/impdp connected to a local 10.2.0.4.0 database fails with,<br />...<br />ORA-39126: Worker unexpected fatal error in KUPW$WORKER.MAIN [INSERT<br />INTO sys.ku$_list_filter_temp@tiger SELECT process_order, duplicate,<br />object_name, base_process_order FROM "SYSTEM"."SYS_EXPORT_SCHEMA_01" <br />WHERE process_order = :1]<br />ORA-00947: not enough values<br />ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95<br />ORA-06512: at "SYS.KUPW$WORKER", line 6345<br /><br /><span style="font-weight: bold;"><u>Cause of the Problem</u></span><br />The errors occur due to several Oracle bugs.
Whenever an export over a database link is initiated from a database at a lower version against a source at a higher version, it fails due to these bugs. That is, if a Data Pump export operation is started on a local 10.x database with the NETWORK_LINK parameter and the database link connects to a remote 11.x database, the Data Pump job may fail due to several defects.<br /><br />As you can see above, the error messages displayed vary from version to version.<br /><br />The errors ORA-39006 and ORA-39022 are displayed if a Data Pump job with NETWORK_LINK is started on a local 10.1.x database and the database link connects to a remote 10.2.x or 11.x source database.<br /><br />The errors ORA-39001 and ORA-39169 are displayed if a Data Pump job with NETWORK_LINK is started on a local 10.2.0.1 or 10.2.0.2 database and the database link connects to a remote 11.x source database.<br /><br />The errors ORA-39127 and ORA-37118 are displayed if a Data Pump job with NETWORK_LINK is started on a local 10.2.0.3 database and the database link connects to a remote 11.x source database.<br /><br />The errors ORA-39126 and ORA-00947 are displayed if a Data Pump job with NETWORK_LINK is started on a local 10.2.0.4 database and the database link connects to a remote 11.x source database.<br /><br />Note that the compatibility setting of the remote database is not an issue here.<br /><br /><span style="font-weight: bold;"><u>Solution of the Problem</u></span><br /><span style="font-weight: bold;"><u>Solution 01:</u></span><br />- Do the Data Pump export operation without the NETWORK_LINK parameter.<br />- Transfer the Data Pump export file to the other server.<br /><span style="font-weight: bold;"><u>Solution 02:</u></span><br />With the NETWORK_LINK parameter,<br />- Make sure that the local connecting database is a 10.2.0.3.0 or higher release.<br />- Explicitly use VERSION=10.2 while doing the expdp operation.
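The same VERSION-limited export can be driven through the DBMS_DATAPUMP API, which is what the expdp client calls underneath. A minimal sketch mirroring the command shown in the problem description; the link name, schema, directory object and file name here are illustrative assumptions:

```sql
-- Network-mode schema export pinned to the 10.2 dump file format,
-- run from the local database that owns the link MAESTRO.NET.
DECLARE
  h     NUMBER;
  state VARCHAR2(30);
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation   => 'EXPORT',
                          job_mode    => 'SCHEMA',
                          remote_link => 'MAESTRO.NET',
                          version     => '10.2');   -- equivalent of VERSION=10.2
  -- Write the dump into the standard DATA_PUMP_DIR directory object.
  DBMS_DATAPUMP.ADD_FILE(h, 'maximsg.dmp', 'DATA_PUMP_DIR');
  -- Restrict the job to the MAXIMSG schema.
  DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''MAXIMSG'')');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, state);
  DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || state);
END;
/
```

This is only a sketch of the API usage; in practice the command-line form with VERSION=10.2 shown above is the simpler route.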
<span style="font-weight: bold;"><u>How to get timing details on data pump processed objects</u></span> (2009-05-19)<br /><br />We can get the number of objects processed and the timing information needed to process each object type in Data Pump jobs.<br /><br />We can achieve this by using the undocumented parameter METRICS. By setting the parameter METRICS to y we get timing details.<br /><br />An example is given below.<br /><br /><span style="font-weight: bold;">E:\Documents and Settings\shaik>expdp schemas=shaik userid=shaik/a dumpfile=shaik_30_04.dmp logfile=shaik_20_04.log metrics=y</span><br /><pre style="font-size: 12px; color: black;"><br />Export: Release 10.2.0.1.0 - Production on Friday, 01 May, 2009 7:24:07<br /><br />Copyright (c) 2003, 2005, Oracle. 
All rights reserved.<br /><br />Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production<br />With the Partitioning, OLAP and Data Mining options<br />Starting "SHAIK"."SYS_EXPORT_SCHEMA_03": schemas=<span style="font-weight: bold;">shaik</span> userid=<span style="font-weight: bold;">shaik</span>/******** dumpfile=<span style="font-weight: bold;">shaik</span>_30_04.dmp logfile=<span style="font-weight: bold;">shaik</span>_20_04.lo<br />g metrics=y<br />Estimate in progress using BLOCKS method...<br />Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA<br />Total estimation using BLOCKS method: 1024 KB<br />Processing object type SCHEMA_EXPORT/USER<br /> Completed 1 USER objects in 0 seconds<br />Processing object type SCHEMA_EXPORT/SYSTEM_GRANT<br /> Completed 1 SYSTEM_GRANT objects in 1 seconds<br />Processing object type SCHEMA_EXPORT/ROLE_GRANT<br /> Completed 1 ROLE_GRANT objects in 0 seconds<br />Processing object type SCHEMA_EXPORT/DEFAULT_ROLE<br /> Completed 1 DEFAULT_ROLE objects in 0 seconds<br />Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA<br /> Completed 1 PROCACT_SCHEMA objects in 0 seconds<br />Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE<br /> Completed 1 SEQUENCE objects in 3 seconds<br />Processing object type SCHEMA_EXPORT/TABLE/TABLE<br /> Completed 11 TABLE objects in 4 seconds<br />Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX<br /> Completed 9 INDEX objects in 0 seconds<br />Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT<br /> Completed 8 CONSTRAINT objects in 0 seconds<br />Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS<br /> Completed 10 INDEX_STATISTICS objects in 0 seconds<br />Processing object type SCHEMA_EXPORT/TABLE/COMMENT<br /> Completed 2 COMMENT objects in 1 seconds<br />Processing object type SCHEMA_EXPORT/VIEW/VIEW<br /> Completed 3 VIEW objects in 2 seconds<br />Processing object type 
SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS<br /> Completed 11 TABLE_STATISTICS objects in 0 seconds<br />. . exported "SHAIK"."SYS_EXPORT_SCHEMA_02" 214.3 KB 1125 rows<br />. . exported "SHAIK"."SYS_EXPORT_SCHEMA_01" 31.55 KB 12 rows<br />. . exported "SHAIK"."AUTHOR" 5.835 KB 14 rows<br />. . exported "SHAIK"."BOOKAUTHOR" 5.609 KB 20 rows<br />. . exported "SHAIK"."BOOKS" 7.781 KB 14 rows<br />. . exported "SHAIK"."BOOK_CUSTOMER" 8.234 KB 21 rows<br />. . exported "SHAIK"."BOOK_ORDER" 8.398 KB 21 rows<br />. . exported "SHAIK"."ORDERITEMS" 6.742 KB 32 rows<br />. . exported "SHAIK"."PROMOTION" 5.710 KB 4 rows<br />. . exported "SHAIK"."PUBLISHER" 6.265 KB 8 rows<br />. . exported "SHAIK"."T" 4.914 KB 1 rows<br />Master table "SHAIK"."SYS_EXPORT_SCHEMA_03" successfully loaded/unloaded<br />******************************************************************************<br />Dump file set for SHAIK.SYS_EXPORT_SCHEMA_03 is:<br />E:\ORACLE\PRODUCT\10.2.0\ADMIN\ORCL\DPDUMP\SHAIK_30_04.DMP<br />Job "SHAIK"."SYS_EXPORT_SCHEMA_03" successfully completed at 07:24:41<br /></pre><br />Note that with the additional undocumented parameter METRICS=Y, lines of the following form are displayed in the logfile as well as on the screen:<br />Completed %n %s objects in %n seconds<br /><br /><span style="font-weight: bold;"><u>Data Pump Process Architecture - Master Table, Worker Process</u></span> (2009-05-19)<br /><br /><span class="Apple-style-span" style="font-weight: bold;">Data Pump jobs use a master table, a master process, and worker processes to perform the work and keep track of the progress. </span>For every Data Pump export and import job, a master process is created.
The master process controls the entire job, including communicating with the clients, creating and controlling a pool of worker processes, and performing logging operations.<br /><br />Let's see an example.<br />1) Start a data pump job in one session.<br /><span style="font-weight: bold;">expdp schemas=shaik userid=shaik/a dumpfile=shaik_30_04_2009.dmp logfile=arju_20_04_09.log</span><br /><br />2) Query dba_datapump_sessions to see the data pump job status.<br /><pre style="font-size: 12px; color: black;"><br /><span style="font-weight: bold;">set lines 150 pages 100<br />col program for a20<br />col username for a5<br />col spid for a7<br />col job_name for a25<br />select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') "DATE", s.program, s.sid,<br /> s.status, s.username, d.job_name, p.spid, s.serial#, p.pid<br />from v$session s, v$process p, dba_datapump_sessions d<br />where p.addr=s.paddr and s.saddr=d.saddr;</span><br />DATE PROGRAM SID STATUS USERN JOB_NAME SPID SERIAL# PID<br />------------------- -------------------- ------- -------- ----- ------------------------- ------- ------- -------<br />2009-04-30 16:49:46 expdp.exe 148 ACTIVE SHAIK SYS_EXPORT_SCHEMA_01 3164 14 16<br />2009-04-30 16:49:46 ORACLE.EXE (DM00) 144 ACTIVE SHAIK SYS_EXPORT_SCHEMA_01 5376 20 17<br /><br />SQL> /<br /><br />DATE PROGRAM SID STATUS USERN JOB_NAME SPID SERIAL# PID<br />------------------- -------------------- ------- -------- ----- ------------------------- ------- ------- -------<br />2009-04-30 16:49:50 expdp.exe 148 ACTIVE SHAIK SYS_EXPORT_SCHEMA_01 3164 14 16<br />2009-04-30 16:49:50 ORACLE.EXE (DM00) 144 ACTIVE SHAIK SYS_EXPORT_SCHEMA_01 5376 20 17<br />2009-04-30 16:49:50 ORACLE.EXE (DW01) 141 ACTIVE SHAIK SYS_EXPORT_SCHEMA_01 7352 7 20<br /><br />SQL> /<br /><br />no rows selected<br /></pre><br />You can see in the above output that I ran the same query three times. The first run was made just after submitting the data pump job.
The second run was a few seconds after the job was submitted, and the third was after the job had completed.<br /><br /><span style="font-weight: bold;">In the first output of the query, just the master table and the master process (DM00) have been created.</span> This master process controls the entire job, including communicating with the clients, creating and controlling a pool of worker processes, and performing logging operations.<br /><br /><span style="font-weight: bold;">In the second output, a worker process (DW01) has been created.</span> Since we did not set any parallelism, PARALLEL defaults to 1 (one) and there is only one worker process. If we set PARALLEL to more than 1, we would see multiple worker processes such as DW01, DW02 etc. The worker processes are responsible for updating the master table with information on the status (pending, completed, failed) of each object being processed. This information is used to provide the detail required to restart stopped Data Pump jobs.<br /><br /><span style="font-weight: bold;">In the third output we see no rows,</span> because the data pump processes disappear when the data pump job completes or stops.<br /><br />As for the architecture of a data pump job: the job starts by creating the master table. This table is created at the beginning of a Data Pump operation and is dropped at the end of its successful completion. The master table is also dropped if you kill the job with the KILL_JOB interactive command.
If a job is stopped using the stop_job interactive command or if the job is terminated unexpectedly, the Master Table will be retained.<br /><br />Note that we can set the keep_master parameter to Y, which ensures the Master Table is retained at the end of a successful job.<br /><br />Suppose my data pump command is,<br /><span style="font-weight: bold;">C:>expdp schemas=shaik userid=shaik/a dumpfile=shaik_30_04_.dmp logfile=shaik_20_04_09.log keep_master=y</span><br />Export: Release 10.2.0.1.0 - Production on Thursday, 30 April, 2009 17:11:09<br /><br />Copyright (c) 2003, 2005, Oracle. All rights reserved.<br /><br />Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production<br />With the Partitioning, OLAP and Data Mining options<br />Starting "SHAIK"."SYS_EXPORT_SCHEMA_02": schemas=shaik userid=shaik/******** dumpfile=shaik_30_04_.dmp logfile=shaik_20_04_0<br />9.log keep_master=y<br /><br />Because we set keep_master=y, we can query the master table "SHAIK"."SYS_EXPORT_SCHEMA_02" at any time after the job completes.<br /><br />The structure of a master table is shown below.<br /><pre style="font-size: 12px; color: black;"><br /><span style="font-weight: bold;">SQL> desc "SHAIK"."SYS_EXPORT_SCHEMA_02";</span><br />Name Null?
Type<br />----------------------------------------------------- -------- ------------------------------------<br />PROCESS_ORDER NUMBER<br />DUPLICATE NUMBER<br />DUMP_FILEID NUMBER<br />DUMP_POSITION NUMBER<br />DUMP_LENGTH NUMBER<br />DUMP_ALLOCATION NUMBER<br />COMPLETED_ROWS NUMBER<br />ERROR_COUNT NUMBER<br />ELAPSED_TIME NUMBER<br />OBJECT_TYPE_PATH VARCHAR2(200)<br />OBJECT_PATH_SEQNO NUMBER<br />OBJECT_TYPE VARCHAR2(30)<br />IN_PROGRESS CHAR(1)<br />OBJECT_NAME VARCHAR2(500)<br />OBJECT_LONG_NAME VARCHAR2(4000)<br />OBJECT_SCHEMA VARCHAR2(30)<br />ORIGINAL_OBJECT_SCHEMA VARCHAR2(30)<br />PARTITION_NAME VARCHAR2(30)<br />SUBPARTITION_NAME VARCHAR2(30)<br />FLAGS NUMBER<br />PROPERTY NUMBER<br />COMPLETION_TIME DATE<br />OBJECT_TABLESPACE VARCHAR2(30)<br />SIZE_ESTIMATE NUMBER<br />OBJECT_ROW NUMBER<br />PROCESSING_STATE CHAR(1)<br />PROCESSING_STATUS CHAR(1)<br />BASE_PROCESS_ORDER NUMBER<br />BASE_OBJECT_TYPE VARCHAR2(30)<br />BASE_OBJECT_NAME VARCHAR2(30)<br />BASE_OBJECT_SCHEMA VARCHAR2(30)<br />ANCESTOR_PROCESS_ORDER NUMBER<br />DOMAIN_PROCESS_ORDER NUMBER<br />PARALLELIZATION NUMBER<br />UNLOAD_METHOD NUMBER<br />GRANULES NUMBER<br />SCN NUMBER<br />GRANTOR VARCHAR2(30)<br />XML_CLOB CLOB<br />NAME VARCHAR2(30)<br />VALUE_T VARCHAR2(4000)<br />VALUE_N NUMBER<br />IS_DEFAULT NUMBER<br />FILE_TYPE NUMBER<br />USER_DIRECTORY VARCHAR2(4000)<br />USER_FILE_NAME VARCHAR2(4000)<br />FILE_NAME VARCHAR2(4000)<br />EXTEND_SIZE NUMBER<br />FILE_MAX_SIZE NUMBER<br />PROCESS_NAME VARCHAR2(30)<br />LAST_UPDATE DATE<br />WORK_ITEM VARCHAR2(30)<br />OBJECT_NUMBER NUMBER<br />COMPLETED_BYTES NUMBER<br />TOTAL_BYTES NUMBER<br />METADATA_IO NUMBER<br />DATA_IO NUMBER<br />CUMULATIVE_TIME NUMBER<br />PACKET_NUMBER NUMBER<br />OLD_VALUE VARCHAR2(4000)<br />SEED NUMBER<br />LAST_FILE NUMBER<br />USER_NAME VARCHAR2(30)<br />OPERATION VARCHAR2(30)<br />JOB_MODE VARCHAR2(30)<br />CONTROL_QUEUE VARCHAR2(30)<br />STATUS_QUEUE VARCHAR2(30)<br />REMOTE_LINK VARCHAR2(4000)<br 
/>VERSION NUMBER<br />DB_VERSION VARCHAR2(30)<br />TIMEZONE VARCHAR2(64)<br />STATE VARCHAR2(30)<br />PHASE NUMBER<br />GUID RAW(16)<br />START_TIME DATE<br />BLOCK_SIZE NUMBER<br />METADATA_BUFFER_SIZE NUMBER<br />DATA_BUFFER_SIZE NUMBER<br />DEGREE NUMBER<br />PLATFORM VARCHAR2(101)<br />ABORT_STEP NUMBER<br />INSTANCE VARCHAR2(60)<br /></pre><br />Let's see the number of rows populated in the master table.<br /><span style="font-weight: bold;">SQL> select count(*) from "SHAIK"."SYS_EXPORT_SCHEMA_02";</span><br /><br />COUNT(*)<br />--------<br />1125<br /><br />We see that the master table holds 1125 rows in total. In fact the master table contains all the data pump log messages, and it is used to track the detailed progress information of a Data Pump job - which is more than the log messages. It contains the following information.<br /><br />- Completed rows of each table.<br /><br />- Total number of errors during the data pump operation.<br /><br />- Elapsed time for each table in the data pump export/import operation.<br /><br />- The current set of dump files.<br /><br />- The current state of every object exported or imported and their locations in the dump file set.<br /><br />- The job's user-supplied parameters.<br /><br />- The status of every worker process.<br /><br />- The current job state and restart information.<br /><br />- The dump file location and directory name information.<br /><br />And many other useful pieces of information.khaleel shaikhttp://www.blogger.com/profile/07051406652176384225noreply@blogger.com0tag:blogger.com,1999:blog-2435017743722636286.post-28541470680321382742009-05-19T11:18:00.000+05:302009-05-19T11:30:58.169+05:30How to trace/diagnosis oracle data pump jobsWhenever you issue impdp help=y or expdp help=y, you see a list of parameters that can be used with Oracle data pump export/import jobs. None of them, however, lets you trace data pump jobs.
But tracing a data pump job is important when diagnosing incorrect behavior and/or troubleshooting Data Pump errors. The undocumented parameter TRACE is very useful for troubleshooting data pump jobs.<br /><br /><br />Tracing is enabled with the TRACE parameter, which takes a 7-digit hexadecimal number as its value. Specifying the parameter value follows some rules.<br />Of the 7 hexadecimal digits,<br />- the first 3 digits enable tracing for a specific data pump component.<br /><br />- The remaining 4 digits are usually 0300.<br /><br />- Specifying more than 7 hexadecimal digits is not allowed. Doing so results in:<br />UDE-00014: invalid value for parameter, 'trace'.<br /><br />- A leading 0x (hexadecimal specification characters) is not allowed.<br /><br />- The value must be specified in hexadecimal; you can't specify it in decimal.<br /><br />- Leading zeros can be omitted, so the value may be shorter than 7 hexadecimal digits.<br /><br />- Values are not case sensitive.<br /><br />Before you start tracing, make sure the MAX_DUMP_FILE_SIZE initialization parameter is set large enough, because this size limits the trace information that can be captured.
The default value is UNLIMITED, which is fine.<br /><br /><span style="font-weight: bold;">SQL> show parameter max_dump_file</span><br />NAME TYPE VALUE<br />------------------------------------ ----------- -------------------<br />max_dump_file_size string UNLIMITED<br /><br /><span style="font-weight: bold;">SQL> select value from v$parameter where name='max_dump_file_size';</span><br /><br />VALUE<br />--------------------------------------------------------------------<br />UNLIMITED<br /><br />If it is not UNLIMITED, you can set it with,<br /><span style="font-weight: bold;">SQL> alter system set max_dump_file_size=UNLIMITED;</span><br /><br />System altered.<br /><br />The majority of errors that occur during a Data Pump job can be diagnosed by creating trace files for the Master Control Process (MCP) and the Worker Process(es) only.<br /><br />With standard tracing, trace files are generated in BACKGROUND_DUMP_DEST:<br /><br />- the Master Process trace file is named<br /><span style="font-weight: bold;"><sid>_dm<number>_<process_id>.trc</span><br /><br />- the Worker Process trace file is named<br /><span style="font-weight: bold;"><sid>_dw<number>_<process_id>.trc</span><br /><br />With full tracing, the same two trace files are generated in BACKGROUND_DUMP_DEST as with standard tracing.
And one trace file is generated in USER_DUMP_DEST.<br /><br />Shadow Process trace file: <sid>_ora_<process_id>.trc <br /><br />The list of trace levels in data pump is shown below.<br /><pre style="font-size: 12px; color: black;"><br /> Trace DM DW ORA Lines<br /> level trc trc trc in<br /> (hex) file file file trace Purpose<br />------- ---- ---- ---- ------ -----------------------------------------------<br /> 10300 x x x SHDW: To trace the Shadow process (API) (expdp/impdp)<br /> 20300 x x x KUPV: To trace Fixed table<br /> 40300 x x x 'div' To trace Process services<br /> 80300 x KUPM: To trace Master Control Process (MCP) (DM)<br /> 100300 x x KUPF: To trace File Manager<br /> 200300 x x x KUPC: To trace Queue services<br /> 400300 x KUPW: To trace Worker process(es) (DW)<br /> 800300 x KUPD: To trace Data Package<br />1000300 x META: To trace Metadata Package<br />--- +<br />1FF0300 x x x 'all' To trace all components (full tracing)<br /></pre><br />Individual tracing level values are shown in hexadecimal, except the last entry in the list. You can use an individual value or a combination of values; the components act as bit flags, and combining all of them (a bitwise OR) gives 1FF0300, which is full tracing.<br /><br />To use full tracing, issue the data pump export as,<br /><span style="font-weight: bold;">expdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=1FF0300</span><br /><br />To use full tracing for a data pump import operation, issue the import as,<br /><span style="font-weight: bold;">impdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=1FF0300</span><br /><br />However, for most cases full tracing is not required. Trace 400300 traces the Worker process(es) and trace 80300 traces the Master Control Process (MCP); combining them gives trace 480300, with which you can trace both the Master Control Process (MCP) and the Worker process(es).
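Since the component values in the table set distinct bits above the common 0300 suffix, the combined values can be checked with ordinary shell arithmetic. A quick sanity check (a sketch; any POSIX shell):

```shell
#!/bin/sh
# Data Pump TRACE components set distinct bits above the shared 0x0300
# suffix, so they are combined with a bitwise OR.

# MCP tracing (80300) combined with Worker tracing (400300):
printf '%X\n' $(( 0x80300 | 0x400300 ))    # 480300

# OR-ing every component gives the full-tracing value:
printf '%X\n' $(( 0x10300 | 0x20300 | 0x40300 | 0x80300 | 0x100300 \
               | 0x200300 | 0x400300 | 0x800300 | 0x1000300 ))   # 1FF0300
```

Note that naively adding the raw values would double-count the shared 0300 suffix (80300 + 400300 = 480600, not 480300); combining is a bitwise OR, or equivalently adding only the leading component digits.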
Trace 480300 would serve the purpose.<br /><br />So to diagnose a data pump export problem, issue,<br /><span style="font-weight: bold;">expdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=480300</span><br /><br />To diagnose a data pump import problem, issue,<br /><span style="font-weight: bold;">impdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=480300</span>khaleel shaikhttp://www.blogger.com/profile/07051406652176384225noreply@blogger.com0tag:blogger.com,1999:blog-2435017743722636286.post-265857360462049432009-05-19T11:17:00.002+05:302009-05-19T11:31:33.209+05:30Remove duplicate successive line using uniq utilityWith the uniq utility you can remove duplicate lines when they are successive: uniq discards all but one of each run of successive identical lines in a file. Consider a file with the following lines.<br /><br /><span style="font-weight: bold;"># vi student_data.txt</span><br />Roll number is 12345<br />His Name is SHAIK<br />His Name is SHAIK<br />He is 24 years old.<br />Roll number is 12345<br /><br />Then running uniq as below yields the following result.<br /><br /><span style="font-weight: bold;"># uniq student_data.txt</span><br />Roll number is 12345<br />His Name is SHAIK<br />He is 24 years old.<br />Roll number is 12345<br /><br />Note that the file contains two duplicated lines: "Roll number is 12345" and "His Name is SHAIK". In the uniq output only the repeated "His Name is SHAIK" line is omitted, because those lines are successive identical lines. The duplicated "Roll number is 12345" lines are not removed because, although identical, they are not successive. So the uniq utility removes adjacent identical lines only.<br /><br />With the help of the "sort" command, uniq can be used to remove all duplicate lines in a file regardless of whether they are successive.
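The adjacency rule, and the sort-then-uniq idiom, can be sketched in a few shell lines (using a hypothetical copy of the sample file under /tmp):

```shell
#!/bin/sh
# Recreate the sample file from the uniq discussion (temporary copy).
cat > /tmp/student_data.txt <<'EOF'
Roll number is 12345
His Name is SHAIK
His Name is SHAIK
He is 24 years old.
Roll number is 12345
EOF

# Plain uniq collapses only ADJACENT duplicates: the two successive
# "His Name is SHAIK" lines merge, but both non-adjacent
# "Roll number is 12345" lines survive.
uniq /tmp/student_data.txt

# Sorting first makes every duplicate adjacent, so uniq then removes
# all duplicates regardless of their original positions.
sort /tmp/student_data.txt | uniq
```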
Following is an example which removes all duplicate lines from the file student_data.txt and saves the result as sort_student.txt.<br /><br /><span style="font-weight: bold;"># sort student_data.txt | uniq > sort_student.txt</span><br /><br /><span style="font-weight: bold;"># cat sort_student.txt</span><br />He is 24 years old.<br />His Name is SHAIK<br />Roll number is 12345khaleel shaikhttp://www.blogger.com/profile/07051406652176384225noreply@blogger.com0tag:blogger.com,1999:blog-2435017743722636286.post-79710627084518720962009-05-19T11:17:00.001+05:302009-05-19T11:31:33.209+05:30Edit file on linux using sed utilityWith the sed utility you can edit text in a file directly; there is no need to open the file in an editor first. sed is a stream editor for filtering and transforming text.<br /><br />Below is my student_grade.txt<br /><span style="font-weight: bold;"># cat student_grade.txt</span><br />024401 4.98<br />024402 4.95<br />024403 4.95<br />024404 4.50<br />024405 4.95<br /><br />Now, using sed, we will replace the first few digits of each student id, that is 0244, with "Roll:".<br /><br />The syntax for using the sed utility is,<br /><span style="font-weight: bold;">sed {expression} {file}</span><br /><br />Using sed we replace "0244" with "Roll:"<br /><span style="font-weight: bold;"># sed '/0244/s//Roll:/g' student_grade.txt</span><br />Roll:01 4.98<br />Roll:02 4.95<br />Roll:03 4.95<br />Roll:04 4.50<br />Roll:05 4.95<br /><br />Let's now break the command down.<br />Within the single quotes,<br />/0244/ searches for the string 0244.<br />s means substitute (replace).<br />//Roll:/ replaces the matched "0244" with "Roll:" (the empty // reuses the previous search pattern).<br />g makes the change global, i.e. every occurrence on each linekhaleel shaikhttp://www.blogger.com/profile/07051406652176384225noreply@blogger.com0tag:blogger.com,1999:blog-2435017743722636286.post-54360959673022365522009-05-19T11:16:00.002+05:302009-05-19T11:31:33.209+05:30Data manipulation using awk utility in linuxWith
awk utility you can scan the records of a file for a pattern and then process the matching records as you want. Suppose you have student_grade.txt as below. The first field is<br />the student id and the second field is the grade. The fields are separated by a tab delimiter.<br /><pre style="font-size: 12px; color: black;"><br /><span class="Apple-style-span" style="font-weight: bold;"># cat student_grade.txt</span><br />024401 4.98<br />024402 4.95<br />024403 4.95<br />024404 4.50<br />024405 4.95<br /></pre><br />Now we want to find the student ids whose grade is 4.95. We can do this with awk: we search each record for the grade 4.95 and then print the 1st field.<br /><br />The syntax for using the awk utility is,<br /><span style="font-weight: bold;">awk '/pattern/ {action}' {file_name}</span><br /><br />We can extract the student ids whose grade is 4.95 with,<br /><span style="font-weight: bold;"># awk '/4.95/{print $1}' student_grade.txt</span><br />024402<br />024403<br />024405<br /><br />It searches each record of student_grade.txt for 4.95 and, for matching records, the action "print $1" prints the first field. (Strictly speaking the dot matches any character, so /4\.95/ is the more precise pattern.)<br /><br /><span style="font-weight: bold;"><u>Meta characters used in awk</u></span><br />To search for a pattern in awk you can use various meta characters. The list of meta characters along with their meanings is given below.<br /><br />1). (Dot): Match any character<br />2)* : Match zero or more of the preceding<br />3)^ : Match beginning of line<br />4)$ : Match end of line<br />5)\ : Escape the character following<br />6)[ ] : Match any of the listed characters<br />7){ } : Match a range of instances<br />8)+ : Match one or more of the preceding<br />9)?
: Match zero or one of the preceding<br />10)| : Separate choices to match<br /><br /><span style="font-weight: bold;"><u>Predefined variables in awk</u></span><br />1)FILENAME : Name of current input file<br />2)RS : Input record separator character (Default is new line)<br />3)OFS : Output field separator string (Blank is default)<br />4)ORS : Output record separator string (Default is new line)<br />5)NF : Number of fields in the input record<br />6)NR : Number of input records read so far (current record number)<br />7)OFMT : Output format of numbers<br />8)FS : Field separator character (Blank & tab is default)khaleel shaikhttp://www.blogger.com/profile/07051406652176384225noreply@blogger.com0tag:blogger.com,1999:blog-2435017743722636286.post-5806002414534463422009-05-19T11:16:00.001+05:302009-05-19T11:31:33.210+05:30Translate or replace characters using tr utilityWith the tr command you can translate letters from uppercase to lowercase and vice versa. In other words, with tr you can replace one set of characters with another.<br /><br />The syntax for using the tr command is,<br /><span style="font-weight: bold;">tr {source_pattern} {destination_pattern}</span><br /><br />Each letter in source_pattern is replaced by the corresponding letter in destination_pattern. For example if you write<br /><br /><span style="font-weight: bold;">tr "a6" "7y"</span><br /><br />then in the string or file every "a" will be replaced by "7",<br />and every "6" will be replaced by "y".<br /><br />Let's see an example.
My names.txt looks like below.<br /><span style="font-weight: bold;"># cat names.txt</span><br />momin<br />arju<br />bony<br />tany<br />azmeri<br />queen<br />tasreen<br /><br />Now we want the letter "o" replaced by the digit "0",<br />the letter "i" replaced by the digit "1",<br />and the small letter "e" replaced by the capital letter "E".<br /><br /><span style="font-weight: bold;"># tr "oie" "01E" < names.txt</span><br />m0m1n<br />arju<br />b0ny<br />tany<br />azmEr1<br />quEEn<br />tasrEEn<br /><br />We can also capitalize all the letters in names.txt with a single statement, converting lowercase to uppercase (or vice versa).<br /><br />In the following example, the letters in names.txt are converted to all capitals.<br /><br /><span style="font-weight: bold;"># tr "a-z" "A-Z" < names.txt > names_capital.txt</span><br /><br /><span style="font-weight: bold;"># cat names_capital.txt</span><br />MOMIN<br />ARJU<br />BONY<br />TANY<br />AZMERI<br />QUEEN<br />TASREENkhaleel shaikhttp://www.blogger.com/profile/07051406652176384225noreply@blogger.com0tag:blogger.com,1999:blog-2435017743722636286.post-16482123368999924882009-05-19T11:14:00.002+05:302009-05-19T11:31:33.210+05:30Join utility in linuxThe paste utility merges two files line by line (line 1 of file1 is merged with line 1 of file2, and so on). The join utility, by contrast, merges lines when a common field in both files holds identical values; it does not work line by line but compares lines across the files looking for identical key values.<br /><br />An example will make this clearer. We have the files id_age.txt and id_dept.txt, whose data are shown below.
Both join and paste are shown below.<br /><br /><span style="font-weight: bold;"># cat id_age.txt</span><br />024401 28<br />024402 26<br />024434 23<br /><br /><span style="font-weight: bold;"># cat id_dept.txt</span><br />024401 CIT<br />024434 CSE<br />024438 EEE<br /><br /><span style="font-weight: bold;"># paste id_age.txt id_dept.txt</span><br />024401 28 024401 CIT<br />024402 26 024434 CSE<br />024434 23 024438 EEE<br /><br /><span style="font-weight: bold;"># join id_age.txt id_dept.txt</span><br />024401 28 CIT<br />024434 23 CSE<br /><br />Note that id 024434 is on the 2nd line of id_dept.txt but on the 3rd line of id_age.txt, and the merge still succeeds. (join expects both input files to be sorted on the join field, as they are here.)khaleel shaikhttp://www.blogger.com/profile/07051406652176384225noreply@blogger.com0tag:blogger.com,1999:blog-2435017743722636286.post-45946864329941460312009-05-19T11:14:00.001+05:302009-05-19T11:31:33.210+05:30getopts command in linux shell scriptThe getopts command is used in shell scripts to parse and validate the command-line options passed to the script. The syntax for using getopts inside a shell script is,<br /><br /><span style="font-weight: bold;">getopts {optstring} {variable_name}</span><br /><br />From the manual page,<br />"optstring contains the option letters to be recognized; if a letter is followed by a colon, the option is expected to have an argument, which should be separated from it by white space. Each time it is invoked, getopts places the next option in the shell variable variable_name. When an option requires an argument, getopts places that argument into the variable OPTARG. On errors, getopts diagnostic messages are printed when illegal options or missing option arguments are encountered. If an illegal option is seen, getopts places ?
into variable_name."<br /><br />For example, suppose you have a shell script named student_info that is run as,<br /><span style="font-weight: bold;">./student_info -i 024434 -a 23 -d CIT -s male</span><br />where student_info is the shell script name,<br />-i is used for the student id,<br />-a is used for age,<br />-d is used for department,<br />-s is used for sex.<br /><br />If the user supplies arguments other than these, you can show the script's usage information.<br /><br />Let's look at the code.<br /><pre style="font-size: 12px; color: black;"><br /><span style="font-weight: bold;"># vi getopts.sh</span><br />help_menu()<br />{<br />echo "Usage Syntax: $0 -i -a -d -s"<br />echo "-i Student ID"<br />echo "-a Age"<br />echo "-d Department"<br />echo "-s Sex"<br />exit 1<br />}<br />if [ $# -lt 1 ]<br />then<br />help_menu<br />fi<br />while getopts i:a:d:s: option<br />do<br />case "$option" in<br /> i) id="$OPTARG";;<br /> a) age="$OPTARG";;<br /> s) sex="$OPTARG";;<br /> d) dept="$OPTARG";;<br /> \?)
help_menu<br />esac<br />done<br />echo "Student ID: $id ,Age: $age ,Sex: $sex ,Department: $dept "<br /></pre><br /><span style="font-weight: bold;"># chmod +x getopts.sh</span><br /><br />If you run getopts.sh without any arguments, it displays the usage syntax.<br /><span style="font-weight: bold;"># ./getopts.sh</span><br />Usage Syntax: ./getopts.sh -i -a -d -s<br />-i Student ID<br />-a Age<br />-d Department<br />-s Sex<br /><br />If you give correct arguments, it displays the following.<br /><span style="font-weight: bold;"># ./getopts.sh -i 024434 -a 23 -s male -d CIT</span><br />Student ID: 024434 ,Age: 23 ,Sex: male ,Department: CITkhaleel shaikhttp://www.blogger.com/profile/07051406652176384225noreply@blogger.com0tag:blogger.com,1999:blog-2435017743722636286.post-80946724005823997182009-04-29T14:08:00.000+05:302009-05-19T11:31:51.285+05:30All about Statistics<div style="text-align: center;"><span style="color:#990000;"><big><big><big><big><u><b><span style="font-size:180%;">All about Statistics</span><br /></b></u></big></big></big></big></span><div style="text-align: left;"><br /><big><big><span style="font-weight: bold; text-decoration: underline;">Introduction to Statistics</span></big></big><br />The CBO makes its explain-plan decisions based on statistics.<br />Statistics provide <span style="font-weight: bold; text-decoration: underline;">critical </span>input for the CBO to work properly; these include information such as the number of rows in a table, distinct values in a column, number of leaf blocks in an index, etc.<br />The more accurate the statistics, the more efficient the results produced by the optimizer.<br /><br />Statistics may be exact or estimated:<br /><ul><li>Statistics generated with a COMPUTE clause analyze all of the data in the object.
This gives the optimizer accurate information to work on and arrive at a good execution plan.</li><li>Statistics generated with an ESTIMATE clause analyze data in the object to the extent of the sample size mentioned. The sample size may be specified as the number of rows or the percentage of rows that should be randomly analyzed to generate the statistics. Optionally block sampling may also be specified. </li></ul> Statistics are stored in data dictionary tables owned by the SYS user. The following views display the statistics collected for tables, columns and indexes.<br /><br /><span style="text-decoration: underline;">For Tables: DBA_TABLES</span><br />NUM_ROWS - Number of rows<br />BLOCKS - Number of blocks below the high-water mark (used or not)<br />EMPTY_BLOCKS - Number of empty blocks<br />AVG_SPACE - Average free space (in bytes) in blocks allocated to the table. All empty and free blocks are considered for this<br />CHAIN_CNT - Number of chained rows (used by ANALYZE command only)<br />AVG_ROW_LEN - Average row length in bytes<br />LAST_ANALYZED - Date when the table was last analyzed<br />SAMPLE_SIZE - Sample size provided for ESTIMATE statistics. Equal to NUM_ROWS if COMPUTE<br />GLOBAL_STATS - For partitioned tables, YES - statistics collected for the table as a whole, NO - statistics estimated from partition-level statistics<br />USER_STATS - Set to YES if the user has explicitly set the statistics for the table<br />Statistics for individual partitions of a table can be seen from DBA_TAB_PARTITIONS.
Cluster statistics are available from DBA_CLUSTERS.<br /><br /><span style="text-decoration: underline;">For Indexes: DBA_INDEXES</span><br />BLEVEL - Depth of the index, from root to leaf<br />LEAF_BLOCKS - Number of leaf blocks<br />DISTINCT KEYS - Number of distinct index values<br />AVG_LEAF_BLOCKS_PER_KEY - Average number of leaf blocks in which each distinct key appears, should be 1 for unique indexes<br />AVG_DATA_BLOCKS_PER_KEY - Average number of blocks in the table that are pointed to by a distinct key<br />CLUSTERING_FACTOR - A count that indicates how well ordered the index is. The index is well ordered if the count is close to the number of blocks, i.e., entries in a single leaf tend to point to rows in the same blocks of the table. The index is randomly ordered if the count is close to the number of rows, i.e., entries in a single leaf point to rows spread across multiple blocks<br />NUM_ROWS - Number of rows indexed<br />SAMPLE_SIZE - Sample size provided for ESTIMATE statistics. Equal to NUM_ROWS if COMPUTE<br />LAST_ANALYZED - Date when the index was last analyzed<br />GLOBAL_STATS - For partitioned indexes, YES - statistics collected for the index as a whole, NO - statistics estimated from partition-level statistics<br />USER_STATS - Set to YES if the user has explicitly set the statistics for the index<br />PCT_DIRECT_ACCESS - For secondary indexes on IOTs, percentage of rows with valid guess<br />Statistics for individual partitions of indexes can be seen from DBA_IND_PARTITIONS and DBA_IND_SUBPARTITIONS.<br /><br /><span style="text-decoration: underline;">For Columns: DBA_TAB_COLUMNS</span><br />NUM_DISTINCT - Number of distinct values<br />LOW_VALUE - Lowest value<br />HIGH_VALUE - Highest value<br />DENSITY - Density of the column<br />NUM_NULLS - Number of records with a null value in the concerned column<br />NUM_BUCKETS - Number of buckets in histograms. Refer to the Histograms section<br />SAMPLE_SIZE - Sample size provided for ESTIMATE statistics.
Equal to total rows if COMPUTE<br />LAST_ANALYZED - Date when the column was last analyzed<br />DBA_TAB_COL_STATISTICS shows similar data. Partitioned table column statistics can be viewed from DBA_PART_COL_STATISTICS and DBA_SUBPART_COL_STATISTICS<br /><br /><span style="text-decoration: underline; font-weight: bold;"></span><br /><span style="text-decoration: underline;font-family:verdana,helvetica,arial;font-size:85%;" ><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><strong><a name="Important_Initialization_parameters"></a><big>Important Initialization parameters that affect the CBO</big></strong></span></span></span><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><br />Apart from generating statistics, the setup of the parameters mentioned below plays a very crucial role in dictating how your system will work. Setting them depends a lot on what kind of environment you want to create: online, batch processing, data warehousing, or a combination of more than one.</span></span></span> <p><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><strong>1) OPTIMIZER_MODE</strong><br />This decides the mode in which the optimizer engine runs. Valid values are RULE, CHOOSE, ALL_ROWS, FIRST_ROWS (<em>_n</em>).<br />Optionally, OPTIMIZER_MODE can be set to CHOOSE, a kind of intermediate option between the RBO and the CBO. In fact, it runs the query under either the CBO or the RBO depending on whether statistics are available.
Therefore, if the tables present in the query have statistics generated on them, the CBO (ALL_ROWS only) is preferred; otherwise the RBO is used.<br />e.g.: optimizer_mode = first_rows<br /></span></span></span>For an OLTP system, FIRST_ROWS would be the ideal option for fast response time. ALL_ROWS is meant for batch-oriented applications. Note that a plan producing the first n rows with the fastest response time might not be optimal if the requirement is to obtain the entire result, so decide according to the needs of the application<br /></p> <p style="font-style: italic;"><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><strong style="color: rgb(204, 0, 0);">2) OPTIMIZER_INDEX_COST_ADJ</strong><br /></span></span></span><span style="font-size:85%;"><span style="font-size:100%;"><span style="font-size:100%;">OPTIMIZER_INDEX_COST_ADJ takes a value between 1 and 10000 (default 100). Setting this value makes indexes more or less attractive to the optimizer. For example, setting it to 50 tells the optimizer that an execution plan using index(es) is half as expensive as other execution plans not using them. The lower the value (below 100), the fewer full table scan executions will take place in the system.<br />Use of indexes in OLTP is a crucial factor for response time, but sometimes full table scans provide better throughput. Nested loop joins are influenced by this parameter value. Set this parameter to a value that is optimal for your application.
<span style="font-weight: bold;">I recommend setting this parameter between 10 and 20 for OLTP and around 50 for DSS systems.</span><br />If you are having slow performance because the CBO first_rows optimizer mode is favoring too many full-table scans, you can reset the optimizer_index_cost_adj parameter to immediately tune all of the SQL in your database to favor index scans over full-table scans. This is a "silver bullet" that can improve the performance of an entire database in cases where the database is OLTP and you have verified that the full-table scan costing is too low.<br /></span></span></span></p> <span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"> </span></span></span> <p><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><strong style="color: rgb(204, 0, 0);">3) OPTIMIZER_INDEX_CACHING</strong><br />This tells the optimizer to favor nested loops and IN-list iterators over sort-merge and hash joins. It takes a value between 0 and 100 (default 0); higher values make nested loops and the IN-list iterator look less expensive and more appealing to the optimizer. The value indicates the percentage of index blocks that the optimizer should assume are present in the cache.
</span></span></span><span style="font-size:85%;"><span style="font-size:100%;"><span style="font-size:100%;"> <span style="font-weight: bold;">I recommend setting this parameter to 85.</span></span></span></span><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><br /></span></span></span></p> <p><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><strong>4) DB_FILE_MULTIBLOCK_READ_COUNT</strong><br />This parameter determines the number of database blocks read in one input/output operation during a full table scan. The value set is used in computing the cost of full table scans. OLTP systems will not benefit from a high value for this parameter, as the computed cost of full table scan execution plans would be reduced (making them more likely to be chosen). The maximum size is operating system dependent.<br />e.g.: db_file_multiblock_read_count = 1 (may be set to a higher value)</span></span></span></p> <p><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><strong>5) SORT_AREA_SIZE</strong><br />This parameter defines the maximum memory allocated per user process for sorting data and for inserts and updates to bitmap indexes. The optimizer uses the set value in determining the cost of sorting in a query. It defaults to 64K. Normally a size of 64K to 1M is appropriate for OLTP systems.<br />Sort-merge joins are influenced by this parameter value. The bigger the size, the more appealing sort-merge joins become over nested-loop and hash joins. The smaller the size, the more sorting will take place in temporary segments.
So assign a value that has been evaluated for your setup.<br />Oracle recommends the use of PGA_AGGREGATE_TARGET instead of this parameter from Oracle 9<em>i</em>.<br /></span></span></span></p> <p><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><strong>6) SORT_MULTIBLOCK_READ_COUNT</strong><br />This specifies the number of database blocks to be read each time a sort performs a read from a temporary segment. The default value is 2. General recommendation is to keep the default value. However, if you intend to hike up the SORT_AREA_SIZE considerably, you may evaluate the improvement by changing this parameter.<br />e.g.: sort_multiblock_read_count = 2<br /></span></span></span></p> <p><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><strong>7) HASH_JOIN_ENABLED</strong><br />Hash joins are available only in CBO. In hash joins, a hash table is created on the join key of the smallest sized table. It then joins the other tables to find the match. Hash joins may prove to be faster than other type of joins in some conditions, especially when the index is missing or search criteria is not very selective. 
Hash joins require a large amount of memory as the hash tables are retained; this may sometimes result in memory swapping.<br />Nested-loop joins return the first row faster than sort-merge and hash joins and are preferred for OLTP, but other types of joins cannot be ignored for running other aspects of the applications.<br />e.g.: hash_join_enabled = true</span></span></span></p> <p><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><strong>8) HASH_AREA_SIZE</strong><br />This specifies the maximum amount of memory in bytes to be used for a hash join per process. It is defaulted to 2 times SORT_AREA_SIZE.<br />Oracle recommends the use of PGA_AGGREGATE_TARGET instead of this parameter from Oracle 9<em>i</em>.<br />e.g.: hash_area_size = 2097152<br />Setting this to a very low number may sometimes result in the following error.<br />ORA-6580: Hash Join ran out of memory while keeping large rows in memory.</span></span></span></p> <span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"> </span></span></span> <p><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><strong>9) HASH_MULTIBLOCK_IO_COUNT</strong><br />This specifies how many sequential blocks a hash join reads and writes in one Input/Output activity. Oracle recommends not changing or assigning a value to this parameter; this will let oracle decide on the appropriate value for each individual query. 
In such cases, the value of the parameter will appear as 0 in the V$PARAMETER view.<br />This parameter is renamed to _HASH_MULTIBLOCK_IO_COUNT as of Oracle 9<em>i</em>.<br />e.g.: hash_multiblock_io_count = 0<br /></span></span></span></p> <p><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><strong>10) BITMAP_MERGE_AREA_SIZE</strong><br />This parameter is relevant for systems using bitmap indexes. It specifies the amount of memory Oracle uses to merge bitmaps retrieved from a range scan of a bitmap index. The default value is 1 MB, which is considered sufficient for most setups.<br />Oracle recommends use of PGA_AGGREGATE_TARGET instead of this parameter from Oracle 9<em>i</em>.<br />e.g.: bitmap_merge_area_size = 1048576</span></span></span></p> <p style="font-style: italic;"><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><strong>11) QUERY_REWRITE_ENABLED</strong><br />This parameter is relevant for systems using Materialized views, Function based indexes and stored outlines. Setting this parameter enables query rewrite for the database. The materialized views should also be query rewrite enabled to allow the optimizer to redirect a query to them rather than to the actual tables listed in the FROM clause. 
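For query rewrite to happen, both ends must agree: the parameter must be on and the materialized view itself must be rewrite-enabled. A sketch assuming a simple sales summary (the object names here are hypothetical):

```sql
-- Both the session/instance parameter and the MV must allow rewrite.
ALTER SESSION SET query_rewrite_enabled = TRUE;

CREATE MATERIALIZED VIEW sales_by_region_mv   -- hypothetical name
  ENABLE QUERY REWRITE
AS
  SELECT region, SUM(amount) AS total_amount
  FROM   sales
  GROUP  BY region;

-- A query of the form below can now be transparently redirected to the MV:
--   SELECT region, SUM(amount) FROM sales GROUP BY region;
```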
Query rewriting is internally done by the optimizer based on what results are expected and whether these are already present in materialized view form.<br />e.g.: query_rewrite_enabled = true</span></span></span></p> <p style="font-style: italic;"><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><strong>12) QUERY_REWRITE_INTEGRITY</strong><br />This parameter determines the extent to which the optimizer must enforce query rewriting to Materialized views, and hence the accuracy of the rewrite. It can be set to ENFORCED, TRUSTED or STALE_TOLERATED. The ENFORCED option uses Materialized views only if they contain fresh data, thus guaranteeing the integrity and consistency of data. TRUSTED uses relationships that have been defined in the system for rewriting the query. STALE_TOLERATED allows the use of relationships that are not enforced and may use stale data. For OLTP systems, ENFORCED would be the ideal setup, as reports need to be up-to-date.<br />e.g.: query_rewrite_integrity = enforced</span></span></span></p> <span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"> </span></span></span> <p><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><strong>13) ALWAYS_ANTI_JOIN</strong><br />This parameter specifies the join method for anti-joins, for example when a NOT IN operator is present in your query. It can be set to NESTED_LOOPS, MERGE or HASH. 
It is defaulted to NESTED_LOOPS in Oracle 8<em>i</em> and CHOOSE in Oracle 9<em>i</em>.<br />This parameter is renamed to _ALWAYS_ANTI_JOIN as of Oracle 9<em>i</em>.<br />e.g.: always_anti_join = nested_loops</span></span></span></p> <span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"> </span></span></span> <p><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><strong>14) ALWAYS_SEMI_JOIN</strong><br />This parameter specifies the join method for semi-joins. These types of joins are carried out by Optimizer after transforming a query. In such joins, duplicate values from the inner table are removed and then the type of join specified in the parameter is used to perform a semi-join. It can be set to NESTED_LOOPS, MERGE or HASH. In Oracle 8<em>i,</em> it is defaulted to STANDARD and in Oracle 9<em>i</em> it is defaulted to CHOOSE, to pick up an appropriate join.<br />This parameter is renamed to _ALWAYS_SEMI_JOIN as of Oracle 9<em>i</em>.<br />e.g.: always_semi_join = nested_loops</span></span></span></p> <p><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><strong>15) STAR_TRANSFORMATION_ENABLED</strong><br />This specifies whether query transformation will be applied to star queries. It can be set to TRUE, FALSE or TEMP_DISABLE (transformation will take place but will not use temporary tables). I presently set it to FALSE due to some known issues of sub-optimal queries being generated. 
If you intend to use this, please upgrade your version to 8.1.7.4 and above.<br />e.g.: star_transformation_enabled = false</span></span></span></p> <p><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><strong>16) PARALLEL_BROADCAST_ENABLED</strong><br />This parameter refers to parallel executions in cluster databases. It is meant for improving hash and sort-merge join operations where a very large result set is joined with a very small result set. When this option is enabled, the optimizer broadcasts a copy of all rows in the smaller result set to all cluster databases that are processing some rows of the larger result set.<br />It is obsolete in release 9.2.0.<br />e.g.: parallel_broadcast_enabled = false</span></span></span></p> <p style="font-style: italic;"><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><strong>17) OPTIMIZER_DYNAMIC_SAMPLING</strong><br />This parameter is introduced in release 9<em>i</em>. It is meant for situations where tables are not analyzed. As CBO depends heavily on statistics, the parameter tells the optimizer to sample the unanalyzed tables that are being used in a query. A level of 0 to 10 can be specified, the higher the value the more time optimizer spends in sampling.<br />e.g.: optimizer_dynamic_sampling = 1</span></span></span></p> <p><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><strong>18) PARTITION_VIEW_ENABLED</strong><br />This parameter is meant for backward compatibility to support partition views. Oracle recommends use of partition tables rather than partition views. 
If you are migrating to CBO, chances are that you may not be using partition views.<br />e.g.: partition_view_enabled = false</span></span></span></p> <p style="font-style: italic;"><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><strong>19) CURSOR_SHARING</strong><br />This parameter determines what kind of SQL statements can share the same cursor. It can be set to FORCE, SIMILAR or EXACT. FORCE will try to squeeze statements that may differ in some literals to share the same cursor. SIMILAR is somewhat the same but will try to maintain the plan optimization for identical statements. EXACT allows statements with exact identical text to share a cursor.<br />Using FORCE may sometimes result in unexpected results.<br />e.g.: cursor_sharing = exact</span></span></span></p> <span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><strong style="font-style: italic;">20) PGA_AGGREGATE_TARGET</strong><br /> <span style="font-style: italic;">Introduced in Oracle 9</span><em style="font-style: italic;">i</em><span style="font-style: italic;">, this parameter specifies the aggregate PGA memory available to all server processes attached to an instance. This parameter can be set for automatic sizing of SQL working areas. 
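Setting it is straightforward; a minimal sketch (the 500M figure is purely illustrative, and WORKAREA_SIZE_POLICY must be AUTO for the target to take effect):

```sql
-- Illustrative values only; derive the target from physical memory.
ALTER SYSTEM SET workarea_size_policy = AUTO;
ALTER SYSTEM SET pga_aggregate_target = 500M;
```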
PGA_AGGREGATE_TARGET replaces other existing parameters like SORT_AREA_SIZE, BITMAP_MERGE_AREA_SIZE and HASH_AREA_SIZE.</span><br /> <span style="font-style: italic;">It can be set to a value from 10 MB to 4000 GB, depending on the setup requirement.<br /></span></span></span></span><span style="text-decoration: underline;">1- Make a first estimate for PGA_AGGREGATE_TARGET</span> based on the following rule:<br /><span style="color: rgb(153, 0, 0);"> PGA_AGGREGATE_TARGET = (Total Physical Memory * 0.16) - For OLTP systems (16% of Server Physical Mem)</span><br /> <span style="color: rgb(51, 102, 255);"><span style="color: rgb(153, 0, 0);"> PGA_AGGREGATE_TARGET = (Total Physical Memory * 0.4) - For DSS systems</span> </span><span style="color: rgb(51, 102, 255);"><span style="color: rgb(153, 0, 0);"> (40% of Server Physical Mem)</span></span><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-style: italic;"><br /></span>More Information <a href="http://www.pafumi.net/New_Features_Oracle9i.htm#AutomaticSQLExecmanagement_and_Tune_PGA">HERE</a><br /><br /></span></span></span><br /><br /><big><big><span style="font-weight: bold; text-decoration: underline;"><a name="Gatering_Statistics_Tables"></a>Gathering Statistics for Tables</span></big></big><br />Oracle strongly recommends that you use the DBMS_STATS package rather than ANALYZE to collect optimizer statistics. That package lets you collect statistics in parallel, collect global statistics for partitioned objects, and fine-tune your statistics collection in other ways. 
Further, the cost-based optimizer will eventually use only statistics that have been collected by DBMS_STATS.<br /><br />However, you must use the ANALYZE statement rather than DBMS_STATS for statistics collection not related to the cost-based optimizer, such as:<br /> * To use the VALIDATE or LIST CHAINED ROWS clauses<br /> * To collect information on freelist blocks<br /> In order to use the DBMS_STATS package, the user needs the EXECUTE privilege. Run the following as SYS:<span style="font-weight: bold;"><br /> grant execute on DBMS_STATS to user_xyz;<br /></span><br />DBMS_STATS.GATHER_TABLE_STATS gathers statistics for a table and its columns, and optionally the associated indexes. Syntax:<br /><span style="font-family: monospace;">dbms_stats.gather_table_stats(ownname, tabname, partname, </span><span style="font-family: monospace;">estimate_percent, block_sample,</span><br /> <span style="font-family: monospace;"> method_opt, degree, granularity, </span><span style="font-family: monospace;">cascade, stattab, statid, statown);</span><br /><br />The first two parameters are mandatory; the rest default to a value.<br /> * ownname - owner<br /> * tabname - table name<br /> * partname - partition name<br /> * estimate_percent - sample percent ratio<br /> * block_sample - use random block sampling rather than row sampling. TRUE/FALSE<br /> * method_opt - method options. FOR ALL COLUMNS/FOR ALL INDEXED COLUMNS. Append the phrase SIZE n to control the number of histogram buckets (SIZE 1 collects no histograms).<br /> * degree - degree of parallelism.<br /> * granularity - for partitioned tables. DEFAULT/SUBPARTITION/PARTITION/GLOBAL/ALL.<br /> * cascade - gather statistics for indexes also. 
TRUE/FALSE<br /> * stattab, statid, statown - required for user statistics.<br /><br /><span style="text-decoration: underline;">Collect statistics for a table and its indexes:</span><br /><span style="font-family: monospace;">EXEC DBMS_STATS.gather_table_stats(USER, 'LOOKUP', cascade => TRUE);</span><span style="font-family: monospace;"><br /><br /></span><span style="text-decoration: underline;">Collect stats for a table, estimating 20%, with parallelism</span><br /><span style="font-family: monospace;">execute dbms_stats.gather_table_stats (ownname => 'SCOTT', tabname => 'DEPT', partname=> null, estimate_percent => 20, degree => 5, cascade => true);</span><br /><br /><span style="text-decoration: underline;">Estimate statistics for columns in a table; this also generates table statistics.</span><br /><span style="font-family: monospace;">exec dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => '</span><span style="font-family: monospace;">DEPT</span><span style="font-family: monospace;">', estimate_percent => 5, method_opt => 'FOR ALL COLUMNS');</span><br /><br /><span style="text-decoration: underline;">SQL Source - Dynamic Method for TABLES</span><br /><span style="font-family: monospace;">DECLARE</span><br /> <span style="font-family: monospace;">sql_stmt VARCHAR2(1024);</span><br /> <span style="font-family: monospace;">BEGIN</span><br /> <span style="font-family: monospace;"> FOR tab_rec IN (SELECT owner,table_name</span><br /> <span style="font-family: monospace;"> FROM all_tables WHERE owner like UPPER('&1') ) LOOP</span><br /> <span style="font-family: monospace;"> sql_stmt := 'BEGIN dbms_stats.gather_table_stats</span><span style="font-family: monospace;">(ownname => :1, </span><span style="font-family: monospace;">tabname => :2, </span><span style="font-family: monospace;">partname => null, </span><span style="font-family: monospace;">estimate_percent => 10, </span><span style="font-family: monospace;">degree 
=> 3 ,</span><span style="font-family: monospace;">cascade => true); END;' ;</span><br /> <span style="font-family: monospace;"> EXECUTE IMMEDIATE sql_stmt USING tab_rec.owner, tab_rec.table_name ;</span><br /> <span style="font-family: monospace;"> END LOOP;</span><br /> <span style="font-family: monospace;">END;</span><br /> <span style="font-family: monospace;">/</span><br /><br /><span style="text-decoration: underline;">Collecting Stats in a Procedure</span><br /><span style="font-family: monospace;">CREATE OR REPLACE PROCEDURE analyze_any_schema ( p_inOwner IN all_users.username%TYPE)</span><br /> <span style="font-family: monospace;">IS</span><br /> <span style="font-family: monospace;">BEGIN</span><br /> <span style="font-family: monospace;"> FOR v_tabs IN (SELECT owner, table_name</span><br /> <span style="font-family: monospace;"> FROM all_tables</span><br /> <span style="font-family: monospace;"> WHERE owner = p_inOwner</span><br /> <span style="font-family: monospace;"> AND temporary <> 'Y')</span><br /> <span style="font-family: monospace;"> LOOP</span><br /> <span style="font-family: monospace;"> DBMS_OUTPUT.put_line ('EXEC DBMS_STATS.gather_table_stats('''||v_tabs.owner||</span><br /> <span style="font-family: monospace;"> ''','''||v_tabs.table_name||''',NULL,1);' );</span><br /> <span style="font-family: monospace;"> BEGIN</span><br /> <span style="font-family: monospace;"> DBMS_STATS.gather_table_stats(v_tabs.owner,v_tabs.table_name,NULL,1);</span><br /> <span style="font-family: monospace;"> DBMS_OUTPUT.put_line ('Analyzed '||v_tabs.owner||'.'||v_tabs.table_name||'... 
');</span><br /> <span style="font-family: monospace;"> EXCEPTION</span><br /> <span style="font-family: monospace;"> WHEN OTHERS THEN</span><br /> <span style="font-family: monospace;"> DBMS_OUTPUT.put_line ('Exception on analysis of '||v_tabs.table_name||'!');</span><br /> <span style="font-family: monospace;"> DBMS_OUTPUT.put_line (SUBSTR(SQLERRM,1,255));</span><br /> <span style="font-family: monospace;"> END;</span><br /> <span style="font-family: monospace;"> END LOOP;</span><br /> <span style="font-family: monospace;">END;</span><br /> <span style="font-family: monospace;">/</span><br /> <span style="font-family: monospace;">CREATE OR REPLACE Procedure DB_Maintenance_Weekly is</span><br /> <span style="font-family: monospace;"> sql_stmt varchar2(1024);</span><br /> <span style="font-family: monospace;"> v_sess_user varchar2(30);</span><br /> <span style="font-family: monospace;">BEGIN</span><br /> <span style="font-family: monospace;"> select sys_context('USERENV','SESSION_USER') into v_sess_user</span><br /> <span style="font-family: monospace;"> from dual ;</span><br /> <span style="font-family: monospace;"> --Analyze all Tables</span><br /> <span style="font-family: monospace;"> FOR tab_rec IN (SELECT table_name</span><br /> <span style="font-family: monospace;"> FROM all_tables</span><br /> <span style="font-family: monospace;"> WHERE owner = v_sess_user</span><br /> <span style="font-family: monospace;"> and table_name not like 'TEMP_%') LOOP</span><br /> <span style="font-family: monospace;"> sql_stmt := 'BEGIN dbms_stats.gather_table_stats</span><br /> <span style="font-family: monospace;"> (ownname => :1,</span><br /> <span style="font-family: monospace;"> tabname => :2,</span><br /> <span style="font-family: monospace;"> partname => null,</span><br /> <span style="font-family: monospace;"> estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,</span><br /> <span style="font-family: monospace;"> degree => 3 ,</span><br /> <span style="font-family: 
monospace;"> cascade => true); END;' ;</span><br /> <span style="font-family: monospace;"> EXECUTE IMMEDIATE sql_stmt USING v_sess_user, tab_rec.table_name ;</span><br /> <span style="font-family: monospace;"> END LOOP;</span><br /> <span style="font-family: monospace;"> EXCEPTION</span><br /> <span style="font-family: monospace;"> WHEN NO_DATA_FOUND THEN</span><br /> <span style="font-family: monospace;"> NULL ;</span><br /> <span style="font-family: monospace;">end;</span><br /> <span style="font-family: monospace;">/</span><br /><br /><span style="text-decoration: underline;">Force Statistics to a Table</span><br />You can use the following statement to force statistics on a table:<br /><span style="font-family: monospace;"> exec dbms_stats.set_table_stats( user, 'EMP', numrows => 1000000, numblks => 300000 );</span><br /><br />No statistics are collected for Global Temporary Tables; handle these with care. Make use of hints to drive queries on these. Provide explicit statistics by using DBMS_STATS.SET_TABLE_STATS if your temporary table is used for processing huge amounts of data.<br /><br /><span style="text-decoration: underline;">How often to collect stats?</span><br /><span style="font-size:85%;"><span style="font-size:100%;"><span style="font-size:100%;">When there is a 10-20% change in data, the general convention is to generate fresh statistics. You can start off with a general rule of estimating statistics on a weekly basis. If heavy activity is producing bad plans on some tables, you may consider using the DML Monitoring option to update their statistics every few hours. Statistics are not incremental and are regenerated every time. 
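One way to judge whether the 10-20% threshold has been crossed is to look at the modification counters Oracle keeps for monitored tables. A sketch (the owner name is illustrative, and the tables must be in MONITORING status):

```sql
-- Inspect accumulated DML against monitored tables (owner is illustrative).
SELECT table_owner, table_name, inserts, updates, deletes, timestamp
FROM   dba_tab_modifications
WHERE  table_owner = 'SCOTT';
```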
If there is no considerable change in data, there is no advantage in generating statistics too frequently.<br /><br /><span style="text-decoration: underline;">DML Monitoring</span><br />Used by dbms_stats to identify objects with "stale" statistics<br />- On by default in 10g, not in 9i<br /></span></span></span><span style="font-size:85%;"><span style="font-size:100%;"><span style="font-size:100%;"><span style="font-family: monospace;"> alter table <table_name> monitoring;</span></span></span></span><span style="font-size:85%;"><span style="font-size:100%;"><span style="font-size:100%;"><br />- For the Full Database:<br /></span></span></span><span style="font-size:85%;"><span style="font-size:100%;"><span style="font-size:100%;"><span style="font-family: monospace;"> dbms_stats.alter_database_tab_monitoring(monitoring=>true, sysobjs=>false); </span></span></span></span><span style="font-size:85%;"><span style="font-size:100%;"><span style="font-size:100%;"><br /><br />- Tracked in [DBA|ALL|USER]_TAB_MODIFICATIONS<br />- 9i and 10g use 10% change as the threshold to gather stats<br />In Oracle 10g, Oracle automatically gathers index statistics whenever the index is created or rebuilt.<br /></span></span></span><br /><br /><big><big><span style="font-weight: bold; text-decoration: underline;"><a name="Gathering_statistics_Schema"></a>Gathering statistics for a Schema, a Database and System Stats</span></big></big><br /><span style="text-decoration: underline;">Collect statistics for my Schema (All the tables and Indexes), estimating 10%</span><br /><span style="font-family: monospace;">execute dbms_stats.gather_schema_stats(ownname => 'SCOTT', estimate_percent => 10, degree => 5, cascade => true);</span><br /><br /><span style="text-decoration: underline;">Collect statistics for the Database:</span><br /><span style="font-family: monospace;">execute dbms_stats.gather_database_stats(estimate_percent => 20, degree => 5, cascade => true);</span><br /><br /><span 
style="text-decoration: underline;">Generating System Statistics</span><br /> Gather statistics during the day. Gathering ends after 720 minutes and is stored in the mystats table:<br /><span style="font-family: monospace;">exec DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'interval', interval => 720, stattab => 'mystats', statid => 'OLTP');</span><br /><br />Gather statistics during the night. Gathering ends after 720 minutes and is stored in the mystats table:<br /><span style="font-family: monospace;">DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'interval', interval => 720, stattab => 'mystats', statid => 'OLAP');</span><br /><br />If appropriate, you can switch between the statistics gathered. It is possible to automate this process by submitting a job to update the dictionary with appropriate statistics. During the day, the following jobs import the OLTP statistics for the daytime run:<br /><span style="font-family: monospace;">VARIABLE jobno number;</span><br /> <span style="font-family: monospace;">BEGIN</span><br /> <span style="font-family: monospace;"> DBMS_JOB.SUBMIT(:jobno, </span><span style="font-family: monospace;">'DBMS_STATS.IMPORT_SYSTEM_STATS(''mystats'',''OLTP'');', </span><span style="font-family: monospace;">SYSDATE, 'SYSDATE + 1');</span><br /> <span style="font-family: monospace;"> COMMIT;</span><br /> <span style="font-family: monospace;">END;</span><br /> <span style="font-family: monospace;">/</span><br /><br />During the night, the following jobs import the OLAP statistics for the night time run:<br /><span style="font-family: monospace;">BEGIN</span><br /> <span style="font-family: monospace;"> DBMS_JOB.SUBMIT(:jobno, </span><span style="font-family: monospace;">'DBMS_STATS.IMPORT_SYSTEM_STATS(''mystats'',''OLAP'');', </span><span style="font-family: monospace;">SYSDATE + 0.5, 'SYSDATE + 1');</span><br /> <span style="font-family: monospace;"> COMMIT;</span><br /> <span style="font-family: monospace;">END;</span><br /> <span 
style="font-family: monospace;">/</span><br /><br /><span style="text-decoration: underline;">STATISTICS FOR THE DATA DICTIONARY</span><br />New in Oracle Database 10g is the ability to gather statistics for the data dictionary. The objective is to enhance the performance of queries. There are two basic types of dictionary base tables.<br />The statistics for normal base tables are gathered using GATHER_DICTIONARY_STATS. They may also be gathered using GATHER_SCHEMA_STATS for the SYS schema. Oracle recommends gathering these statistics at a similar frequency as your other database objects.<br />Statistics for fixed objects (the V$ views on the X$ tables) are gathered using the GATHER_FIXED_OBJECTS_STATS procedure. The initial collection of these statistics is normally sufficient. Repeat only if workload characteristics have changed dramatically. The SYSDBA privilege or ANALYZE ANY DICTIONARY and ANALYZE ANY privileges are required to execute the procedures for gathering data dictionary statistics.<br /> <br /><br /><span style="text-decoration: underline;">The Option parameter</span><br />There are several values for the <span style="text-decoration: underline;">options </span>parameter that we need to know about:<br /> - gather - re-analyzes the whole schema.<br />- gather empty - Only analyze tables that have no existing statistics.<br />- gather stale - Only re-analyze tables with more than 10% modifications (inserts, updates, deletes). The table should be in monitor status first.<br />- gather auto - This will re-analyze objects which currently have no statistics and objects with stale statistics. The table should be in monitor status first.<br /> Using gather auto is like combining gather stale and gather empty.<br />Note that both gather stale and gather auto require monitoring. 
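Putting this together, a hedged sketch of a stale-only gather for one schema (the schema name is illustrative):

```sql
-- Re-analyze only tables whose monitored change exceeds the staleness
-- threshold; tables must already be in MONITORING status.
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT', options => 'GATHER STALE', cascade => TRUE);
```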
If you issue the "alter table xxx monitoring" command, Oracle tracks changed tables with the dba_tab_modifications view.<br />The most interesting of these options is the gather stale option. Because all statistics will become stale quickly in a robust OLTP database, we must remember the rule for gather stale is > 10% row change (based on num_rows at statistics collection time).<br />In a busy OLTP database, almost every table except read-only tables will be re-analyzed with the gather stale option, so the option saves the most work on systems that are largely read-only. For example, if only 5% of the database tables get significant updates, then only 5% of the tables will be re-analyzed with the "gather stale" option.<br />The CASCADE => TRUE option causes all indexes for the tables to also be analyzed. In Oracle 10g, set CASCADE to AUTO_CASCADE to let Oracle decide whether or not new index statistics are needed.<br /><br /><br /><big><big><span style="font-weight: bold; text-decoration: underline;"><a name="Important_Notes"></a>Important Notes</span></big></big><br /><ul><li>You should analyze the table after creating a function-based index, to allow Oracle to collect column statistics equivalent information for the expression. Optionally, you can collect histograms for the index expressions by specifying <code>for</code> <code>all</code> <code>hidden</code> <code>columns</code> <code>size</code> <em><code>number_of_buckets</code></em> in the <code>METHOD_OPT</code> argument to the <code>DBMS_STATS</code> procedures.</li><li>Oracle can gather some statistics automatically while creating or rebuilding a B-tree or bitmap index. The <code>COMPUTE</code> <code>STATISTICS</code> option of <code>CREATE</code> <code>INDEX</code> or <code>ALTER</code> <code>INDEX</code> ... 
<code>REBUILD</code> enables this gathering of statistics.</li><li>Oracle recommends setting the <code>ESTIMATE_PERCENT</code> parameter of the <code>DBMS_STATS</code> gathering procedures to <code>DBMS_STATS</code>.<code>AUTO_SAMPLE_SIZE</code> to maximize performance gains while achieving necessary statistical accuracy. <code>AUTO_SAMPLE_SIZE</code> lets Oracle determine the best sample size for good statistics.</li><li>By default DBMS_STATS will generate statistics for the table and not its indexes (by default CASCADE => FALSE).</li><li>If you have a big table (around 2M rows), try to ESTIMATE statistics. In general, a sample size of 5% to 10% generates adequate statistics!</li><li>A table that is in the process of being analyzed cannot undergo DDL changes but DML activities can be carried on. Analyzing an index puts a <span style="font-weight: bold; text-decoration: underline;">shared lock</span> on the related table; hence, neither DML nor DDL can be performed. Preferably avoid all activities during the statistics generation phase.<br /> </li></ul> <br /><br /><br /><big><big><span style="font-weight: bold; text-decoration: underline;"><a name="Verifying_Table_Statistics"></a>Verifying Table Statistics</span></big></big><br />Syntax:<br />dbms_stats.get_table_stats(ownname, tabname, partname, stattab, statid, numrows, numblks, avgrlen, statown);<br /><br /><span style="font-family: monospace;">declare</span><br /> <span style="font-family: monospace;"> l_numrows number;</span><br /> <span style="font-family: monospace;"> l_numblks number;</span><br /> <span style="font-family: monospace;"> l_avgrlen number;</span><br /> <span style="font-family: monospace;">begin</span><br /> <span style="font-family: monospace;"> dbms_stats.get_table_stats(ownname => 'SYS', tabname => 'AM01', numrows => l_numrows, numblks => l_numblks, avgrlen => 
l_avgrlen);</span><br /> <span style="font-family: monospace;"> dbms_output.put_line('No. of rows: ' || l_numrows);</span><br /> <span style="font-family: monospace;"> dbms_output.put_line('No. of blks: ' || l_numblks);</span><br /> <span style="font-family: monospace;"> dbms_output.put_line('Avg row length: ' || l_avgrlen);</span><br /> <span style="font-family: monospace;">end;</span><br /> <span style="font-family: monospace;">/</span><br />No. of rows: 4106860<br />No. of blks: 6219<br />Avg row length: 3<br /><br /><br /><big><big><span style="font-weight: bold; text-decoration: underline;"><a name="Verifying_Index_Statistics"></a>Verifying Index Statistics</span></big></big><br /><br /><pre class="CE">SELECT INDEX_NAME "NAME", NUM_ROWS, DISTINCT_KEYS "DISTINCT",<br /> LEAF_BLOCKS, CLUSTERING_FACTOR "CF", BLEVEL "LEVEL",<br /> AVG_LEAF_BLOCKS_PER_KEY "ALFBPKEY"<br />FROM DBA_INDEXES<br />WHERE OWNER = 'SH'<br />ORDER BY INDEX_NAME;<br /></pre> <p class="BP">Typical output is:</p> <pre class="CEW">NAME NUM_ROWS DISTINCT LEAF_BLOCKS CF LEVEL ALFBPKEY<br />-------------------------- -------- -------- ----------- ------- ------- ----------<br />CUSTOMERS_PK 50000 50000 454 4405 2 1<br />PRODUCTS_PK 10000 10000 90 1552 1 1<br />PRODUCTS_PROD_CAT_IX 10000 4 99 4422 1 24<br />PRODUCTS_PROD_SUBCAT_IX 10000 37 170 6148 2 4<br />SALES_PROD_BIX 6287 909 1480 6287 1 1<br />SALES_PROMO_BIX 4727 459 570 4727 1 1<br /><br />declare<br /> l_numrows number;<br /> l_numlblks number;<br /> l_numdist number;<br /> l_avglblk number;<br /> l_avgdblk number;<br /> l_clstfct number;<br /> l_indlevel number;<br />begin<br /> dbms_stats.get_index_stats(ownname => 'SYS', indname => 'AM01_N1',<br /> numrows => l_numrows, numlblks => l_numlblks,<br /> numdist => l_numdist, avglblk => l_avglblk,<br /> avgdblk => l_avgdblk, clstfct => l_clstfct,<br /> indlevel => l_indlevel);<br /> dbms_output.put_line('No. of rows: ' || l_numrows);<br /> dbms_output.put_line('No. 
of blks: ' || l_numlblks);<br /> dbms_output.put_line('No. of distinct values: ' || l_numdist);<br /> dbms_output.put_line('Avg leaf blocks for distinct keys: ' || l_avglblk);<br /> dbms_output.put_line('Avg data blocks pointed to in the table: ' || l_avgdblk);<br /> dbms_output.put_line('Clustering factor: ' || l_clstfct);<br /> dbms_output.put_line('Index height: ' || l_indlevel);<br />end;<br />/<br />No. of rows: 3819711<br />No. of blks: 11092<br />No. of distinct values: 1<br />Avg leaf blocks for distinct keys: 11092<br />Avg data blocks pointed to in the table: 14616<br />Clustering factor: 14616<br />Index height: 2<br /><br /><br /></pre> <big><big><span style="font-weight: bold; text-decoration: underline;"><a name="Verifying_Column_Statistics"></a>Verifying Column Statistics</span></big></big> <pre class="CE">SELECT COLUMN_NAME, NUM_DISTINCT, NUM_NULLS, NUM_BUCKETS, DENSITY<br />FROM DBA_TAB_COL_STATISTICS<br />WHERE TABLE_NAME = 'PA_EXPENDITURE_ITEMS_ALL'<br />ORDER BY COLUMN_NAME;<br /></pre> <p class="BP">This returns the following data:</p> <pre class="CE">COLUMN_NAME NUM_DISTINCT NUM_NULLS NUM_BUCKETS DENSITY<br />------------------------------ ------------ ---------- ----------- ----------<br />BURDEN_COST 4300 71957 1 .000232558<br />BURDEN_COST_RATE 675 7376401 1 .001481481<br />CONVERTED_FLAG 1 16793903 1 1 poor<br />COST_BURDEN_DISTRIBUTED_FLAG 2 15796 1 .5 poor<br />COST_DISTRIBUTED_FLAG 2 0 1 .5 poor<br />COST_IND_COMPILED_SET_ID 87 6153143 1 .011494253<br />EXPENDITURE_ID 1171831 0 1 8.5337E-07 GOOD<br />TASK_ID 8648 0 1 .000115634<br />TRANSFERRED_FROM_EXP_ITEM_ID 1233787 15568891 1 8.1051E-07 GOOD<br /><br /><br />declare<br /> l_distcnt number;<br /> l_density number;<br /> l_nullcnt number;<br /> l_srec dbms_stats.statrec;<br /> l_avgclen number;<br />begin<br /> dbms_stats.get_column_stats(ownname => 'SYS', tabname => 'AM01',<br /> colname => 'COL1', distcnt => l_distcnt, density => l_density,<br /> nullcnt => l_nullcnt, srec => l_srec, 
avgclen => l_avgclen);<br /> dbms_output.put_line('No. of distinct values: ' || l_distcnt);<br /> dbms_output.put_line('Density: ' || l_density);<br /> dbms_output.put_line('Count of nulls: ' || l_nullcnt);<br /> dbms_output.put_line('Avg. column length: ' || l_avgclen);<br />end;<br />/<br />No. of distinct values: 2<br />Density: .5<br />Count of nulls: 0<br />Avg. column length: 3<br /><br /><br /></pre> <span style="font-weight: bold; text-decoration: underline;font-family:verdana,helvetica,arial;font-size:85%;" ><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;"><a name="Exporting_and_importing_statistics"></a>Exporting and importing statistics with DBMS_STATS</span></span></span> <p><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;">DBMS_STATS also includes routines for gathering statistics and storing them outside the dictionary. This does not influence the optimizer. Most of the procedures in this package as mentioned above have three common parameters - STATID, STATTAB and STATOWN that are related to user processing of statistics.</span></span></span></p> <p><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;">Advantages of this feature:</span></span></span></p> <p><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;">1. 
Estimated statistics at different percentages could be stored and used for testing.</span></span></span></p> <p><span style="font-family:verdana,helvetica,arial;font-size:85%;"><span style="font-family:arial, helvetica;font-size:100%;"><span style="font-family:arial, helvetica;font-size:100%;">2. Statistics generated on one database could be transferred to another database.<br /></span></span></span></p><br /><h2 style="text-decoration: underline;" class="H1"><span style="font-family:Arial, Helvetica, sans-serif;color:#330099;"><a name="Using_Histograms"></a>Using Histograms</span></h2> <!--/TOC=h1--> <p class="BP">The cost-based optimizer can use data value histograms to get accurate estimates of the distribution of column data. A <strong class="GlossaryTerm">histogram</strong> partitions the values in the column into bands, so that all column values in a band fall within the same range. Histograms provide improved selectivity estimates in the presence of data skew, resulting in optimal execution plans with nonuniform data distributions.</p> <p class="BP">One of the fundamental tasks of the cost-based optimizer is determining the selectivity of predicates that appear in queries. Selectivity estimates are used to decide when to use an index and the order in which to join tables. Some attribute domains (a table's columns) are not uniformly distributed.</p> <p class="BP">The cost-based optimizer uses height-based histograms on specified attributes to describe the distributions of nonuniform domains. In a height-based histogram, the column values are divided into bands so that each band contains approximately the same number of values. 
The useful information that the histogram provides, then, is where in the range of values the endpoints fall.<br /></p> <h3 class="H2"><span style="font-family:Arial, Helvetica, sans-serif;color:#330099;">When to Use Histograms</span></h3> <!--/TOC=h2--> <p class="BP">Histograms can affect performance and should be used only when they substantially improve query plans. Histogram statistics data is persistent, so the space required to save the data depends on the sample size. In general, create histograms on columns that are used frequently in <code>WHERE</code> clauses of queries and have a highly skewed data distribution. For uniformly distributed data, the cost-based optimizer can make fairly accurate guesses about the cost of executing a particular statement without the use of histograms.</p> <p class="BP">Histograms, like all other optimizer statistics, are static. They are useful only when they reflect the current data distribution of a given column. (The data in the column can change as long as the <em class="Italic">distribution</em> remains constant.) If the data distribution of a column changes frequently, you must recompute its histogram frequently.</p> <p class="BP">Histograms are not useful for columns with the following characteristics:</p> <ul class="LB1"><li class="LB1" type="disc">All predicates on the column use bind variables.</li><li class="LB1" type="disc">The column data is uniformly distributed.</li><li class="LB1" type="disc">The column is unique and is used only with equality predicates.</li></ul> <!--TOC=h2-"35016"--> <h3 class="H2"><span style="font-family:Arial, Helvetica, sans-serif;color:#330099;">Creating Histograms</span></h3> <!--/TOC=h2--> <p class="BP">You generate histograms by using the <code>DBMS_STATS</code> package. You can generate histograms for columns of a table or partition. 
For example, to create a 10-bucket histogram on the <code>SAL</code> column of the <code>emp</code> table, issue the following statement:</p> EXECUTE DBMS_STATS.GATHER_TABLE_STATS ('scott','emp', METHOD_OPT => 'FOR COLUMNS SIZE 10 sal');<br /><br /><p class="BP">The <code>SIZE</code> keyword declares the maximum number of buckets for the histogram. You would create a histogram on the <code>SAL</code> column if there were an unusually high number of employees with the same salary and few employees with other salaries. You can also collect histograms for a single partition of a table.</p> <p class="BP">Oracle Corporation recommends using the <code>DBMS_STATS</code> package to have the database automatically decide which columns need histograms. This is done by specifying <code>SIZE</code> <code>AUTO</code>.<br /></p> <h4 class="ET"><span style="font-family:Arial, Helvetica, sans-serif;"><em>Example Using a Histogram to Improve an Execution Plan</em></span></h4> <span style="font-family: monospace;">UPDATE so_lines l<br />SET open_flag = null, s6 = 10, s6_date = sysdate<br />WHERE l.line_type_code in ('REGULAR','DETAIL','RETURN')<br /> AND ((l.open_flag = 'Y' AND NVL(l.shipped_quantity, 0) = 0)<br /> OR (NVL(l.shipped_quantity, 0) != 0<br /> AND l.shipped_quantity + NVL(l.cancelled_quantity, 0) = l.ordered_quantity))<br /> AND l.s6 = 18;</span><br /><p class="BP">This query shows the skewed distribution of data values for <code>s6</code>. In this case, there are two distinct non-null values: 10 and 18. 
The majority of the rows consist of <code>s6</code> = 10 (1,589,464), while a small number of rows consist of <code>s6</code> = 18 (13,091).</p> <span style="font-family: monospace;">S6: COUNT(*)</span><br /> <span style="font-family: monospace;">======================</span><br /> <span style="font-family: monospace;">10 1,589,464</span><br /> <span style="font-family: monospace;">18 13,091</span><br /> <span style="font-family: monospace;">NULL 21,889 </span><br /><p class="BP">The selectivity of column <code>s6</code>, where <code>s6</code> <code>=</code> <code>18</code>:</p> S = 13,091 / (13,091 + 1,589,464) = 0.008<br /><p class="BP">If No Histogram is Used: The selectivity of column <code>s6</code> is assumed to be 50%, uniformly distributed across 10 and 18. This is not selective; therefore, <code>s6</code> is not an ideal choice for use as an index.</p> <p class="BP">If a Histogram is Used: The data distribution information is stored in the dictionary. This allows the optimizer to use this information and compute the correct selectivity based on the data distribution. In this example, the selectivity, based on the histogram data, is 0.008. This is a relatively high, or good, selectivity, which leads the optimizer to use an index on column <code>s6</code> in the execution plan.<br /></p> <p class="BP"><br /></p> <big><big><span style="font-weight: bold; text-decoration: underline;"><a name="Tuning_by_Statistics_"></a>Tuning by Statistics</span></big></big><br /><br />As you know, it is not good to resolve tuning issues using the old method of "ratios" only. 
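As a quick sanity check of the histogram arithmetic from the previous section, here is a short sketch. The helper names are illustrative, and the "1/NDV for an equality predicate" rule is a simplification of what the optimizer actually does without a histogram:

```python
# Row counts for s6 from the histogram example above.
counts = {10: 1_589_464, 18: 13_091}  # NULLs are excluded from selectivity

def uniform_selectivity(counts):
    # Without a histogram, assume 1/NDV for an equality predicate.
    return 1 / len(counts)

def histogram_selectivity(counts, value):
    # With a histogram, the actual distribution is known.
    return counts[value] / sum(counts.values())

print(round(uniform_selectivity(counts), 3))        # 0.5   -> index looks unattractive
print(round(histogram_selectivity(counts, 18), 3))  # 0.008 -> index on s6 is attractive
```

The two printed values reproduce the 50% uniform assumption and the 0.008 histogram-based selectivity computed above.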
So we will start understanding several views from Oracle that will help you to tune your database based on its use.<br /><p><span style="text-decoration: underline;font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;" ><b>Parsing and Read Consistency</b></span><span style="font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;"><br />The view v$sysstat is the accumulated statistics for the complete system since the instance has started <i>except</i> for what is active now and shown in v$sesstat. The numbers in this view are held in "tens of milliseconds," (sounds like hundredths of seconds to me). So to convert to seconds, we divide the value by 100. Note that TIMED_STATISTICS must be set to true in the init.ora for these views to have timing numbers. So to see overall parsing and read consistency waits, we can use the following:</span></p> <pre><span style="font-family:Courier New, Courier, mono;font-size:85%;">column parsing heading 'Parsing|(seconds)'<br />column total_cpu heading 'Total CPU|(seconds)'<br />column waiting heading 'Read Consistency|Wait (seconds)'<br />column pct_parsing heading 'Percent|Parsing'<br /><br />select total_CPU, parse_CPU parsing,<br /> parse_elapsed-parse_CPU waiting,<br /> trunc(100*parse_elapsed/total_CPU,2) pct_parsing<br /> from (select value/100 total_CPU<br /> from v$sysstat where name = 'CPU used by this session'),<br /> (select value/100 parse_CPU<br /> from v$sysstat where name = 'parse time cpu'),<br /> (select value/100 parse_elapsed<br /> from v$sysstat where name = 'parse time elapsed');<br /><br /><br /><br />Total CPU Parsing Read Consistency Percent<br />(seconds) (seconds) Wait (seconds) Parsing<br />---------- ---------- ---------------- ----------<br />1236263327 21072.43 4277.93 0</span></pre> <span style="font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;">We can see from this output that this instance has consumed a lot of CPU seconds, but a relatively small amount of that 
was used for parsing. Of the parsing, about 20 percent of the time was waiting for read consistency, but since the parsing percent is so small, this is not impacting our system performance</span>.<br /><br /><p><big><span style="text-decoration: underline;font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;" ><big><b>Finding Free Buffers</b></big></span></big><span style="font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;"><br />If we need to scan buffers to find a free one, then we are wasting CPU resource. This can be checked with the script below:</span></p> <pre><span style="font-family:Courier New, Courier, mono;font-size:85%;">select inspected,requested,(inspected/requested) ratio<br /> from (select value inspected from v$sysstat where name = 'free buffer inspected') a,<br /> (select value requested from v$sysstat where name = 'free buffer requested') b;<br /><br /><br />INSPECTED REQUESTED RATIO<br />---------- ---------- ----------<br />5100 12288112 .000415035</span></pre> <p><span style="font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;">We see that, for the vast majority of the time, the first buffer we look at is available (low ratio). So there is no problem with getting free buffers as we need them. Let's look further.<br /><br /></span></p> <p><span style="text-decoration: underline;font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;" ><b>Active Waiting and Latch Contention</b></span><span style="font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;"><br />Active waiting can be compared to waiting at the betting window with your two dollars in your hand. 
You are not just passively standing there, but are focused on all the windows to see when a resource becomes available.</span><span style="font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;"> We can see these active waits in v$system_event with the following:</span></p> <pre><span style="font-family:Courier New, Courier, mono;font-size:85%;">select event,time_waited/100 seconds<br /> from v$system_event<br /> where event not in<br /> ('rdbms ipc message'<br /></span><span style="font-family:Courier New, Courier, mono;font-size:85%;"> </span><span style="font-family:Courier New, Courier, mono;font-size:85%;">,'SQL*Net more data to client'<br /></span><span style="font-family:Courier New, Courier, mono;font-size:85%;"> </span><span style="font-family:Courier New, Courier, mono;font-size:85%;">,'SQL*Net message from client'<br /></span><span style="font-family:Courier New, Courier, mono;font-size:85%;"> </span><span style="font-family:Courier New, Courier, mono;font-size:85%;">,'SQL*Net message to client'<br /></span><span style="font-family:Courier New, Courier, mono;font-size:85%;"> </span><span style="font-family:Courier New, Courier, mono;font-size:85%;">,'pmon timer'<br /></span><span style="font-family:Courier New, Courier, mono;font-size:85%;"> </span><span style="font-family:Courier New, Courier, mono;font-size:85%;">,'smon timer'<br /></span><span style="font-family:Courier New, Courier, mono;font-size:85%;"> </span><span style="font-family:Courier New, Courier, mono;font-size:85%;">,'wakeup time manager')<br /> order by seconds;<br /><br /><br />EVENT SECONDS<br />---------------------------------------- ----------<br />buffer deadlock 0<br />. . .<br />latch free 3422.98<br />. . .<br />control file parallel write 80356.43<br />log buffer space 106812.85 </span></pre> <p><span style="font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;">The previous code shows me that we have waited the most time for log buffer space. 
I will check that I have the log buffer sized correctly. This is still a small fraction of the CPU seconds used by the instance as we saw above; it was over one billion, so we are way under one percent of the time spent waiting.</span><span style="font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;"> This also shows us that we have been waiting for latches. So let's see which latch we are not getting when needed with the following code:</span></p> <pre><span style="font-family:Courier New, Courier, mono;font-size:85%;">Column name format a35<br />SELECT name, gets, misses, immediate_gets, immediate_misses<br /> FROM v$latch l<br /> WHERE (misses > .1*gets)<br /> or (immediate_misses > .1*(immediate_misses+immediate_gets));<br /><br /><br /><br />NAME GETS MISSES IMMEDIATE_GETS IMMEDIATE_MISSES<br />--------------------------------- ------ -------- -------------- ----------------<br />channel operations parent latch 86845 6 43300 78881</span></pre> <p><span style="font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;">This is a latch that we can't tune and we also see that the number of misses is not increasing as we rerun this query, so we will leave it alone.</span><span style="font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;"><br />Given this, we have to conclude that the instance is not using too much CPU resource for its own maintenance — so let's check the users.</span></p> <p><span style="font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;"><b><br /><span style="text-decoration: underline;">User Code Execution</span></b></span><span style="font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;"><br /></span><span style="font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;">In our instance, we have checked that the environment for the instance is not forcing us to consume more CPU resources. So we will check the code that is being executed. 
Most often, the CPU consumption problem is caused by what a user is doing.</span></p> <p><span style="text-decoration: underline;font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;" ><b>Highest Consuming Sessions</b></span><span style="font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;"><br />We can see which user is consuming the most CPU at this time by </span><span style="font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;">creating a table with the cumulative values </span><span style="font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;">so we can pull out the current usage:</span></p> <pre><span style="font-family:Courier New, Courier, mono;font-size:85%;">drop table t_CPU;<br />create table t_CPU storage<br /> (initial 64k next 64k pctincrease 0) pctfree 0<br /> as select a.sid,a.value<br /> from v$sesstat a,v$statname b<br /> where b.name = 'CPU used by this session'<br /> and a.statistic# = b.statistic#;<br /><br />column name format a25<br />column username format a20<br />column CPU heading 'CPU|Seconds'<br /><br />select a.sid, c.username,b.name,(a.value-t.value)/100 CPU<br /> from v$session c,v$sesstat a,v$statname b,t_CPU t<br /> where c.sid = a.sid<br /> and c.sid=t.sid<br /></span><span style="font-family:Courier New, Courier, mono;font-size:85%;"> </span><span style="font-family:Courier New, Courier, mono;font-size:85%;">and b.name = 'CPU used by this session'<br /></span><span style="font-family:Courier New, Courier, mono;font-size:85%;"> </span><span style="font-family:Courier New, Courier, mono;font-size:85%;">and a.statistic# = b.statistic#<br /></span><span style="font-family:Courier New, Courier, mono;font-size:85%;"> </span><span style="font-family:Courier New, Courier, mono;font-size:85%;">and a.value > 0<br /></span><span style="font-family:Courier New, Courier, mono;font-size:85%;"> </span><span style="font-family:Courier New, Courier, mono;font-size:85%;">and a.value - t.value > 0<br 
/></span><span style="font-family:Courier New, Courier, mono;font-size:85%;"> </span><span style="font-family:Courier New, Courier, mono;font-size:85%;">order by a.value-t.value;<br /></span><span style="font-family:Courier New, Courier, mono;font-size:85%;"><br />drop table t_CPU;</span><br /><span style="font-family:Courier New, Courier, mono;font-size:85%;"> CPU<br /> SID USERNAME NAME Seconds<br />------ -------------------- ------------------------- ----------<br /> 52 CPU used by this session .01<br /> . . .<br /> 23 SYSTEM CPU used by this session 35.82<br /> 177 ORADBA CPU used by this session 38.26<br /> 167 ORADBA CPU used by this session 52.07</span></pre> <p><span style="font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;">You can see that SID 167 is the current largest consumer. What we are looking for, then, is code that is in a tight loop in memory, just processing over and over. If this CPU is being consumed by normal usage, then no changes are needed. So let's see what SID 167 is running:</span></p> <pre><span style="font-family:Courier New, Courier, mono;font-size:85%;">select c.executions,trunc(a.last_call_et/60) min,b.sql_text<br /> from v$session a, v$sqltext b, v$sqlarea c<br /> where a.sid=167<br /> and a.sql_address = b.address<br /> and a.sql_address = c.address<br /> order by b.piece;<br /><br /><br />EXECUTIONS MIN SQL_TEXT<br />---------- ------ ----------------------------------------------------------------<br /> 94784 0 select daily_notes_id, center_code, TO_CHAR(day,'MM/DD/YYYY')<br /> day, notes from daily_notes where center_code = :1<br /> and day = TO_DATE(:2,'MM/DD/YYYY') order by 1</span><br /></pre> <p><span style="font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;">There is nothing wrong with this code and after watching for a while we see that this user is not running any code that jumps out as bad. 
We conclude that this session is just doing the most work.</span></p><br /><p><span style="font-weight: bold; text-decoration: underline;font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;" >Highest Consuming Statements</span><span style="font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;"><br />We can also check for the statements with the most buffer gets. These are probably working the most in memory:</span></p> <pre><span style="font-family:Courier New, Courier, mono;font-size:85%;">set linesize 140<br />column executions format 999,999,999 heading "Executions"<br />column disk_reads format 999,999,999 heading "Block Reads"<br />column buffer_gets format 999,999,999 heading "Buffer Reads"<br />column sql_text format a90 heading "Statement"<br />break on executions on buffer_gets skip 1<br /><br />--we want the 2 largest consumers<br /><br />variable max_buffer_read number<br />begin<br />select max(buffer_gets) into :max_buffer_read<br /> from v$sqlarea<br /> where parsing_user_id >0;<br /><br />select max(buffer_gets) into :max_buffer_read<br /> from v$sqlarea<br /> where parsing_user_id >0<br /> and buffer_gets < :max_buffer_read;<br />end;<br />/<br /><br />select a.executions,a.buffer_gets,b.sql_text<br /> from v$sqlarea a,v$sqltext b<br /> where a.buffer_gets >= :max_buffer_read<br /> and a.address = b.address<br /> and a.parsing_user_id > 0<br /> order by a.buffer_gets desc, piece asc;<br /><br /><br />Executions Buffer Reads Statement<br />---------- ------------ -------------------------------------------------------------<br />6,300,634 51,221,671 SELECT DISTINCT B.FIRST_NAME || ' ' || B.LAST_NAME<br /> FROM BLOCK_PHYSICIAN A,PHYSICIAN B<br /> WHERE A.PHYSICIAN_FID=B.PHYSICIAN_ID AND A.BLOCK_TIME_FID=:b1<br /><br /> 75,265 27,227,798 select client_id,greeting_lu,first_name,last_name<br /> ,home_phone,work_phone,current_status_lu,center_code<br /> ,start_time<br /> ,TO_CHAR(start_time, 'MM/DD/YYYY HH:MI AM') start_datetime<br
/></span><span style="font-family:Courier New, Courier, mono;font-size:85%;"> </span><span style="font-family:Courier New, Courier, mono;font-size:85%;">,appointment_type_lu, appointment_id , number_attending<br /> ,offsite, offsite_location,treatment_info, bgcolor<br /> ,(TO_NUMBER(TO_CHAR(start_time,'HH24'))*60)<br /> +TO_NUMBER(TO_CHAR(start_time,'MI')) time,physician<br /> ,TO_CHAR(start_time, 'HH:MI AM') start_display_time,note<br /> from appointment_view_no_phys<br /> where center_code = :1<br /> and start_time between TO_DATE(:2,'MM,DD,YYYY')<br /> and TO_DATE(:3,'MM,DD,YYYY')+1<br /> and appointment_type_lu in<br /> (:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14) <br /> and (current_status_lu not in<br /> ('Cancelled','Rescheduled','Attended','NoShowed'))<br /> order by start_time,last_name,first_name</span><br /></pre> <p><span style="font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;">Remember that these numbers are cumulative since the instance came up.</span><span style="font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;"><br />We see that the first statement is doing less than 10 buffer reads per execution on average. Because I know that these tables are in the keep buffers, this makes sense.</span><span style="font-family:Verdana, Arial, Helvetica, sans-serif;font-size:85%;"> The second statement is doing about 361 buffer gets per execution. This will be one to investigate to see how this view is joining. Maybe we can make this more efficient.</span></p><br /></div></div>khaleel shaikhttp://www.blogger.com/profile/07051406652176384225noreply@blogger.com0tag:blogger.com,1999:blog-2435017743722636286.post-8070042529554044922009-04-29T14:05:00.000+05:302009-05-19T10:59:08.007+05:30RAID Configuration<p align="center"><span style="font-size:130%;"> <b><u><span style="color: rgb(51, 102, 255);">RAID Configuration</span></u></b></span></p> <p> There is significant confusion in many organizations using ORACLE and RAID technologies. 
I will attempt to make this information very clear and understandable.<br /></p> <p><span style="font-weight: bold; text-decoration: underline;">What Is RAID?</span><br />RAID is the technology for expanding the capacity of the I-O system and providing the capability for data redundancy. It stands for Redundant Array of Inexpensive Disks or Redundant Array of Independent Disks.<br />Conceptually, RAID is the use of 2 or more physical disks to create 1 logical disk, where the physical disks operate in tandem to provide greater size and more bandwidth. RAID has become an indispensable part of the I-O fabric of any system today and is the foundation for storage technologies supported by many mass storage vendors. The use of RAID technology has re-defined the design methods used for building storage systems that support Oracle databases.<br /></p> <p><br /><span style="font-weight: bold; text-decoration: underline;">The 3 Main Concepts In RAID</span><br />When you talk about RAID, there are 3 terms that are important and relevant.<br />• Striping<br />• Mirroring<br />• Parity.<br /><br /><span style="text-decoration: underline;">1- What Is Striping? </span><br />Striping is the process of breaking down data into pieces and distributing it across multiple disks that support a logical volume – “Divide, Conquer & Rule”. This often results in a “logical volume” that is larger and has greater I-O bandwidth than a single disk. It is purely based on the linear power of incrementally adding disks to a volume to increase the size and I-O bandwidth of the logical volume. The increase in bandwidth is a result of how read/write operations are done on a striped volume.<br />Imagine that you are in a grocery store. With you are about two hundred of your closest friends and neighbors all shopping for the week’s groceries. Now consider what it’s like when you get to the checkout area and find that only one checkout line is open. 
That poor clerk can only deal with a limited number of customers per hour. The line starts to grow progressively. The same is true of your I-O sub-system. A given disk can process a specific number of I-O operations per second. Anything more than that and the requests start to queue up. Now stop and think about how great it feels when you get to the front of the store and find that all 20 lines are open. You find your way to the shortest line and you're headed out the door in no time.<br />Striping has a similar effect on your I-O system. By creating a single volume from pieces of data on several disks, we can increase the capacity to handle I-O requests in a linear fashion, by combining each disk’s I-O bandwidth. Now, when multiple I-O requests for a file on a striped volume are processed, they can be serviced by multiple drives in the volume, as the requests are sub-divided across several disks. This way all drives in the striped volume can engage and service multiple I/O requests in a more efficient manner. This “cohesive and independent” functioning of all the drives in a logical volume is relevant for both read and write operations. It must be noted that striping by itself does not reduce “response time” for servicing I-O requests. However, it does provide predictable response times and facilitates the notion of better performance, by balancing I-O requests across multiple drives in the striped volume.<br />Figure 1 depicts a 4-way striped volume (v1) with 4 disks (1-4). 
A given stripe of data (Data1) in a file on v1 will be split/striped across the 4 disks, into 4 pieces (Data11-Data14).<br /></p> <table border="1" cellpadding="2" cellspacing="2" width="100%"> <tbody> <tr> <td style="font-weight: bold;">Disk1 </td> <td style="font-weight: bold;">Disk2 </td> <td style="font-weight: bold;">Disk3 </td> <td style="font-weight: bold;">Disk4</td> </tr> <tr> <td>Data11 </td> <td>Data12 </td> <td>Data13 </td> <td>Data14</td> </tr> <tr> <td>Data21 </td> <td>Data22 </td> <td>Data23 </td> <td>Data24</td> </tr> </tbody> </table> <p>Figure 1<br /><br /><span style="text-decoration: underline;">2-What Is Mirroring?</span><br />Mirroring is the process of writing the same data to another “member” of the same volume simultaneously. Mirroring provides protection for data by writing exactly the same information to every member in the volume. Additionally, mirroring can provide enhanced read operations because the read requests can be serviced from either “member” of the volume. If you have ever made a photocopy of a document before mailing the original, then you have mirrored data. One of the common myths with mirroring is that it takes “twice as long” to write. But in many performance measurements and benchmarks, the overhead has been observed to be around 15-20%.<br />Figure 2 illustrates a 4-way striped mirrored volume (v1) with 8 Disks (1-8). A given stripe of data (Data1) in a file on v1 will be split/striped across the Disks (1-4) and then mirrored across Disks (5-8). 
Disks (1-4) and (5-8) are called "Mirror Members" of the volume v1.<br /></p> <table border="1" cellpadding="2" cellspacing="2" width="100%"> <tbody> <tr> <td style="font-weight: bold;">Disk1 </td> <td style="font-weight: bold;">Disk2 </td> <td style="font-weight: bold;">Disk3 </td> <td style="font-weight: bold;">Disk4 </td> <td style="font-weight: bold;">Disk5 </td> <td style="font-weight: bold;">Disk6 </td> <td style="font-weight: bold;">Disk7 </td> <td style="font-weight: bold;">Disk8</td> </tr> <tr> <td>Data11 </td> <td>Data12 </td> <td>Data13 </td> <td>Data14 </td> <td>Data11 </td> <td>Data12 </td> <td>Data13 </td> <td>Data14</td> </tr> <tr> <td>Data21 </td> <td>Data22 </td> <td>Data23 </td> <td>Data24 </td> <td>Data21 </td> <td>Data22 </td> <td>Data23 </td> <td>Data24</td> </tr> </tbody> </table> <p>Figure 2<br /></p> <p><br /><span style="text-decoration: underline;">3- What Is Parity?</span><br />Parity is the term for error checking. Some levels of RAID perform parity calculations when reading and writing data. The calculations are primarily done on write operations. However, if one or more disks in a volume are unavailable, then depending on the level of RAID, even read operations require parity calculations to rebuild the pieces on the failed disks. Parity is used to determine the write location and validity of each stripe that is written in a striped volume. Parity is implemented on those levels of RAID that do not use mirroring.<br />Parity algorithms contain Error Correction Code (ECC) capabilities, which calculate parity for a given "stripe or chunk" of data within a RAID volume. The size of a chunk is operating system (O-S) and hardware specific. The codes generated by the parity algorithm are used to recreate data in the event of disk failure(s). Because the algorithm can reverse this parity calculation, it can rebuild data lost as a result of disk failures. 
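In the common case the "code" is a simple XOR across the chunks of a stripe; the following sketch (with made-up one-byte chunk values) shows how XOR-ing the surviving chunks with the parity reproduces a lost chunk:

```shell
# XOR parity sketch: parity = d1 ^ d2 ^ d3 ^ d4 (chunk values are made up)
d1=11; d2=22; d3=33; d4=44
parity=$(( d1 ^ d2 ^ d3 ^ d4 ))
# suppose the disk holding d3 fails; rebuild it from the survivors plus parity
rebuilt=$(( d1 ^ d2 ^ d4 ^ parity ))
echo "rebuilt chunk: ${rebuilt}"   # prints 33, the lost value
```

Because XOR is its own inverse, any single missing chunk in the stripe can be recovered this way, which is exactly why a parity volume survives one disk failure but not two.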
It's just like solving a math problem where you know the answer (checksum) and one part of the question, e.g. 2 + X = 5, what is X? Of course, X = 3.<br />Figure 3 depicts a 4-way striped RAID 3 volume with parity – v1 with 5 Disks (1-5). A given stripe of data (Data1) in a file on v1 will be split/striped across the Disks (1-4) and the parity for Data1 will be stored on Disk 5. There are other levels of RAID that store parity differently; those will be covered in the following sections.<br /></p> <table border="1" cellpadding="2" cellspacing="2" width="100%"> <tbody> <tr> <td style="font-weight: bold;">Disk1 </td> <td style="font-weight: bold;">Disk2 </td> <td style="font-weight: bold;">Disk3 </td> <td style="font-weight: bold;">Disk4 </td> <td style="font-weight: bold;">Disk5</td> </tr> <tr> <td>Data11 </td> <td>Data12 </td> <td>Data13 </td> <td>Data14 </td> <td>Parity1</td> </tr> <tr> <td>Data21 </td> <td>Data22 </td> <td>Data23 </td> <td>Data24 </td> <td>Parity2</td> </tr> </tbody> </table> <p>Figure 3<br /><br /><span style="text-decoration: underline;">Putting It All Together</span><br />Striping yields better I/O performance, mirroring provides protection, and parity (when applicable) is a way to check the work. With these 3 aspects of RAID, we can achieve scalable, protected, highly available I/O performance.<br /></p> <p><br /><br /></p> <p> <b><u>Types of RAID available</u></b><br />Vendors typically offer the following choices for RAID configurations (Nice example <a href="http://www.acnc.com/04_01_00.html">HERE</a>)<br /></p> <p> <u>RAID0</u><br />Disk striping. RAID0 requires at least two physical disks. Data is read and written across multiple drives, so disk I/O is relatively evenly spread. Writes can occur in a block or streaming manner (similar to non-RAIDed disks) as requested by the operating system. Disk failure results in lost data. 
Compared to a single disk drive, RAID0 has the following attributes:<br />- Better read performance<br />- Better write performance<br />- Inexpensive<br />- Not fault-tolerant<br />- Storage equivalent to the sum of physical drive storage in the array<br />- Readily available from most vendors</p> <p> <u>RAID1 Shadowing/Mirroring</u><br />Disk mirroring. RAID1 requires two physical disks. Logical writes are done by physically writing the data to both disks simultaneously, and can typically be done in a block manner or streaming manner, as requested by the operating system. Reads can be done using either disk. In the event of a disk failure, data can still be retrieved from and written to the surviving disk. Compared to a single disk drive, RAID1 has the following attributes:<br />- Better read performance<br />- Similar write performance<br />- Expensive<br />- Fault-tolerant<br />- Storage equivalent to 1/2 the sum of the physical drive storage in the mirrored set<br />- Readily available from most vendors</p> <p> <u>RAID5 Striping with Rotating Parity</u><br />Disk striping with parity. RAID5 requires at least three physical disks. On a logical write, a block of data is physically written to disk, parity information is calculated using the block just written plus blocks already existing on disk, then the parity information is written to disk. In RAID5, the parity information is rotated among the physical disks to prevent bottlenecks caused by a dedicated parity disk. Note that writes occur in a block manner regardless of whether the O/S is sending a stream of data to be written or requests to write whole blocks. On a logical read, data is read from multiple disks in a manner very similar to RAID0. In the event of a disk failure, data can be reconstructed on the fly using the parity information. 
Compared to a single disk drive, RAID5 has the following attributes:<br />- Data is striped across multiple physical disks, and parity data occupies storage equivalent to one disk<br />- Better read performance<br />- Poorer write performance<br />- Inexpensive<br />- Fault-tolerant<br />- Storage equivalent to N - 1 drives, where N is the number of physical drives in the array<br />- Readily available from most vendors</p> <p> <u>RAID10 (or RAID0+1)</u><br />Mirrored stripe sets. RAID10 requires at least 4 physical drives, and combines the performance gains of RAID0 with the fault-tolerance and expense of RAID1. Data is written simultaneously to two mirrored sets of striped disks in blocks or streams. Reads can be performed against either striped set. In the event of a failure of a disk drive in one striped set, data can be written to and read from the surviving striped set. Compared to a single disk drive, RAID10 has the following attributes:<br />- Better read performance<br />- Better write performance<br />- Expensive<br />- Fault-tolerant<br />- Storage is 1/2 of the sum of the physical drives' storage<br />- Currently available from only a few vendors (at the time of this writing)<br /></p> <p> <b><u>Possible configurations using 4 physical disks:</u></b><br /><br /><table border="1" cellpadding="1" cellspacing="2" cols="5" width="100%"> <tbody> <tr> <td valign="CENTER"> <p> Configuration</p> </td> <td valign="CENTER"> <p> Number of disks</p> </td> <td valign="CENTER"> <p> Available space</p> </td> <td valign="CENTER"> <p> Max Reads/Sec</p> </td> <td valign="CENTER"> <p> Max Writes/Sec</p> </td> </tr> <tr> <td valign="CENTER"> <p> Single disk</p> </td> <td valign="CENTER"> <p> 1</p> </td> <td valign="CENTER"> <p> 4 GB</p> </td> <td valign="CENTER"> <p> 60</p> </td> <td valign="CENTER"> <p> 60</p> </td> </tr> <tr> <td valign="CENTER"> <p> RAID0</p> </td> <td valign="CENTER"> <p> 4</p> </td> <td valign="CENTER"> <p> 16 GB</p> </td> <td valign="CENTER"> <p> 
240</p> </td> <td valign="CENTER"> <p> 240</p> </td> </tr> <tr> <td valign="CENTER"> <p> RAID1</p> </td> <td valign="CENTER"> <p> 4</p> </td> <td valign="CENTER"> <p> 8 GB</p> </td> <td valign="CENTER"> <p> 240 (2 arrays)</p> </td> <td valign="CENTER"> <p> 120 (2 arrays)</p> </td> </tr> <tr> <td valign="CENTER"> <p> RAID5</p> </td> <td valign="CENTER"> <p> 4</p> </td> <td valign="CENTER"> <p> 12 GB</p> </td> <td valign="CENTER"> <p> 180</p> </td> <td valign="CENTER"> <p> 60</p> </td> </tr> </tbody> </table> </p> <p> <b><u>Possible configurations using 6 physical disks:</u></b><br /><br /><table border="1" cellpadding="1" cellspacing="2" cols="5" width="100%"> <tbody> <tr> <td valign="CENTER"> <p> Configuration</p> </td> <td valign="CENTER"> <p> Number of disks</p> </td> <td valign="CENTER"> <p> Available space</p> </td> <td valign="CENTER"> <p> Max Reads/Sec</p> </td> <td valign="CENTER"> <p> Max Writes/Sec</p> </td> </tr> <tr> <td valign="CENTER"> <p> Single disk</p> </td> <td valign="CENTER"> <p> 1</p> </td> <td valign="CENTER"> <p> 4 GB</p> </td> <td valign="CENTER"> <p> 60</p> </td> <td valign="CENTER"> <p> 60</p> </td> </tr> <tr> <td valign="CENTER"> <p> RAID0</p> </td> <td valign="CENTER"> <p> 6</p> </td> <td valign="CENTER"> <p> 24 GB</p> </td> <td valign="CENTER"> <p> 360</p> </td> <td valign="CENTER"> <p> 360</p> </td> </tr> <tr> <td valign="CENTER"> <p> RAID1</p> </td> <td valign="CENTER"> <p> 6</p> </td> <td valign="CENTER"> <p> 12 GB</p> </td> <td valign="CENTER"> <p> 360 (3 arrays)</p> </td> <td valign="CENTER"> <p> 180 (3 arrays)</p> </td> </tr> <tr> <td valign="CENTER"> <p> RAID5</p> </td> <td valign="CENTER"> <p> 6</p> </td> <td valign="CENTER"> <p> 20 GB</p> </td> <td valign="CENTER"> <p> 300</p> </td> <td valign="CENTER"> <p> 90</p> </td> </tr> </tbody> </table><br />As can be seen from the charts, RAID0 offers good read and write performance, but no fault tolerance.<br />RAID1 offers good read performance, and half as much write performance, but 
provides fault-tolerance.<br />RAID5 reclaims most of the space lost to RAID1, provides fault-tolerance, and offers reasonably good read performance, but poor write performance. (In fact, RAID5 requires 4 disks to regain the same write performance as a single disk.) Also, note that streaming logical writes, as well as block-level logical writes, to RAID5 arrays are handled as block-level physical writes. Finally, read or write workload capacity can be increased in any RAID configuration by adding physical disks.<br /></p> <p>The RAID "hierarchy" begins with RAID 0 (striping) and RAID 1 (mirroring). Combining RAID 0 and RAID 1 is called RAID-0+1 or RAID-1+0, depending on how you combine them. (RAID 0+1 is also called RAID-01, and RAID-1+0 is also called RAID-10.) The performance of RAID-10 and RAID-01 is identical, but they have different levels of data integrity.<br /></p> <p>RAID-01 (or RAID 0+1) is a mirrored pair (RAID-1) made from two stripe sets (RAID-0); hence the name RAID 0+1, because it is created by first creating two RAID-0 sets and adding RAID-1. If you lose a drive on one side of a RAID-01 array, then lose another drive on the other side of that array before the first side is recovered, you will suffer complete data loss. It is also important to note that all drives in the surviving mirror are involved in rebuilding the entire damaged stripe set, even if only a single drive was damaged. Performance is severely degraded during recovery unless the RAID subsystem allows adjusting the priority of recovery. However, shifting the priority toward production will lengthen recovery time and increase the risk of the kind of catastrophic data loss mentioned earlier.<br /><br />RAID-10 (or RAID 1+0) is a stripe set made up from N mirrored pairs. Only the loss of both drives in the same mirrored pair can result in any data loss, and the loss of that particular drive is 1/Nth as likely as the loss of some drive on the opposite mirror in RAID-01. 
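That 1/N difference can be illustrated with a quick back-of-the-envelope calculation (the stripe width here is hypothetical; a sketch, not vendor math):

```shell
# After one drive has failed in an 8-drive, 4-wide layout (N=4):
# RAID-10: only the failed drive's mirror partner is fatal -> 1 "bad" drive
# RAID-01: any drive in the opposite stripe set is fatal   -> N "bad" drives
N=4
fatal_raid10=1
fatal_raid01=$N
echo "second-failure exposure: RAID-10=${fatal_raid10} drive, RAID-01=${fatal_raid01} drives"
```

In other words, once one drive is down, a RAID-01 array has N times as many drives whose loss would be fatal as a RAID-10 array of the same size.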
Recovery only involves the replacement drive and its mirror, so the rest of the array performs at 100% capacity during recovery. Also, since only the single drive needs recovery, bandwidth requirements during recovery are lower and recovery takes far less time, reducing the risk of catastrophic data loss.<br /></p> <p>The most appropriate RAID configuration for a specific filesystem or database tablespace must be determined based on data access patterns and cost versus performance tradeoffs. RAID-0 offers no increased reliability. It can, however, supply performance acceleration at no increased storage cost. RAID-1 provides the highest performance for redundant storage, because it does not require read-modify-write cycles to update data, and because multiple copies of data may be used to accelerate read-intensive applications. Unfortunately, RAID-1 requires at least double the disk capacity of RAID-0. Also, if more than two copies of the data exist, RAID-1 arrays may be constructed to endure the loss of multiple disks without interruption. Parity RAID allows redundancy at a lower total storage cost. The read-modify-write cycle it requires, however, will reduce total throughput for small write operations (read-only or extremely read-intensive applications are fine). The loss of a single disk will cause read performance to be degraded while the system reads all other disks in the array and recomputes the missing data. Additionally, basic parity RAID does not survive the loss of multiple disks.<br /><br /></p> <p> <big><b><u>ORACLE database files on RAID</u></b></big><br />Given the information regarding the advantages and disadvantages of various RAID configurations, how does this information apply to an ORACLE instance? 
The discussion below will provide information about how database files are used by an ORACLE instance under OLTP and DSS classifications of workload.<br />Note that the perspectives presented below are very sensitive to the number of users: if your organization has a 10-20 user OLTP system (and thus a low throughput requirement), then you may get very acceptable performance with all database files stored on RAID5 arrays. On the other hand, if your organization has a 100 user OLTP system (resulting in a higher throughput requirement), then a different RAID configuration may be absolutely necessary. An initial configuration can be outlined by estimating the number of transactions (based on the number of users), performing adjustments to encompass additional activity (such as hot backups, nightly batch jobs, etc.), then performing the necessary mathematical calculations.<br />You definitely want to keep rollback segments, temp tablespaces and redo logs off RAID5, since it is too slow for these write-intensive Oracle files. They are sequentially accessed. Redo logs should have their *own* dedicated drives.<br /><br /></p> <p> <b><u>OLTP (On-line transaction processing) workloads</u></b><br />Characterized by multi-user concurrent INSERTs, UPDATEs, and DELETEs during normal working hours, plus possibly some mixture of batch jobs nightly. Large SELECTs may generate reports, but the reports will typically be "pre-defined" reports rather than ad-hoc queries. The focus, though, is on enabling update activity that completes within an acceptable response time. Ideally, each type of database file would be spread out over its own private disk subsystem, although grouping certain types of files together (when the number of disks, arrays, and controllers is less than ideal) may yield adequate performance. 
(Please see the article on Instance tuning for information regarding groupings of database files in an OLTP system.)</p> <p> <u>Redo logs</u><br />During update activity, redo logs are written to in a continuous and sequential manner, and are not read under normal circumstances. RAID5 would be the worst choice for performance. Oracle Corporation recommends placing redo logs on single non-RAIDed disk drives, under the assumption that this configuration provides the best overall performance for simple sequential writes. Redo logs should always be multiplexed at the ORACLE software level, so RAID1 provides few additional benefits. Since non-RAID and RAID0 configurations can vary with hardware from different vendors, the organization should contact their hardware vendor to determine whether non-RAIDed disks or RAID0 arrays will yield the best performance for continuous sequential writes. Note that even if redo logs are placed on RAID1 arrays, the redo logs should still be mirrored at the ORACLE level.<br /></p> <p> <u>Archive logs</u><br />As redo logs are filled, archive logs are written to disk one whole file at a time (assuming, of course, that the database is running in archivelog mode), and are not read under normal circumstances. Any RAID or non-RAID configuration could be used, depending upon the performance requirements and size of the redo logs. For instance, if the redo logs are large, then they will become full and be archived less often. If an archive log is likely to be written no more than once per minute, then RAID5 may provide acceptable performance. If RAID5 proves too slow, then a different RAID configuration can be chosen, or the redo logs can simply be made larger. Note that a fault-tolerant configuration is advisable: if the archive log destination becomes unavailable, the database will halt.</p> <p> <u>Rollback Segments</u><br />As modifications are made to the database tables, undo information is written to the buffer cache in memory. 
These rollback segments are used to maintain commitment control and read consistency. Rollback segment data is periodically flushed to disk by checkpoints, and the changes to the rollback segments are also recorded in the redo logs. However, a smaller amount of information is typically written to the rollback segments than to the redo logs, so the write rate is less stringent. A fault-tolerant configuration is advisable, since the database cannot operate without rollback segments, and recovery of common rollback segments will typically require an instance shutdown. If the transaction rate is reasonably small, RAID5 may provide adequate performance. If it does not, then RAID1 (or RAID10) should be considered.</p> <p> <u>User tables and indexes</u><br />As updates are performed, these changes are stored in memory. Periodically, a checkpoint will flush the changes to disk. Checkpoints occur under two normal circumstances: a redo log switch occurred, or the time interval for a checkpoint expired. (There are a variety of other situations that trigger a checkpoint. Please check the ORACLE documentation for more detail.) Like redo log switches and generation of archive logs, checkpoints can normally be configured so that they occur approximately once per minute. Recovery can be performed up to the most recent checkpoint, so the interval should not be too large for an OLTP system. If the volume of updated data written to disk at each checkpoint is reasonably small (i.e. the transaction rate is not extremely large), then RAID5 may provide acceptable performance. Additionally, analysis should be performed to determine the ratio of reads to writes. Recalling that RAID5 offers reasonably good read performance, if the percentage of reads is much larger than the percentage of writes (for instance, 80% to 20%), then RAID5 may offer acceptable performance for small, medium, and even some large installations. 
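As a rough sketch of that read/write ratio analysis (the counter values here are hypothetical; on a live system they might come from V$FILESTAT or an OS tool such as iostat):

```shell
# Hypothetical cumulative I/O counters for a set of datafiles
reads=8000
writes=2000
total=$(( reads + writes ))
read_pct=$(( 100 * reads / total ))
echo "read percentage: ${read_pct}%"   # 80% reads -> RAID5 worth considering
```

A read percentage around 80% or higher suggests the RAID5 write penalty will be felt on only a small fraction of the I/O load.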
A fault-tolerant configuration is preferable to maximize availability (assuming availability is an objective of the organization), although only failures damaging datafiles for the SYSTEM tablespace (and active rollback segments) require the instance to be shut down. Disk failures damaging datafiles for non-SYSTEM tablespaces can be recovered with the instance on-line, meaning that only the applications using data in tablespaces impacted by the failure will be unavailable. With this in mind, RAID0 could be considered if RAID5 does not provide the necessary performance. If high availability and high performance on a medium to large system are explicit requirements, then RAID1 or RAID10 should be considered.</p> <p> <u>Temp segments</u><br />Sorts too large to be performed in memory are performed on disk. Sort data is written to disk in a block-oriented manner. Sorts do not normally occur with INSERT/UPDATE/DELETE activity. Rather, SELECTs with ORDER BY or GROUP BY clauses and aggregate functions (i.e. operational reports), index rebuilds, etc., will use TEMP segments only if the sort is too large to perform in memory. Temp segments are good candidates for non-RAIDed drives or RAID0 arrays. Fault-tolerance is not critical: if a drive failure occurs and datafiles for a temp segment are lost, then the temp segment can either be recovered by the normal means (restore from tape and perform a tablespace recovery), or the temp segment can simply be dropped and re-created, since there is no permanent data stored in the temp segment. Note that while a temp segment is unavailable, certain reports or index creations may not execute without errors, but update activity will typically not be impacted. With this in mind, RAID1 arrays are a bit unnecessary for temp segments, and should be used for more critical database files. 
RAID5 will provide adequate performance if the sort area hit ratios are such that very few sorts are performed on disk rather than in memory.</p> <p> <u>Control files</u><br />Control files are critical to the instance operation, as they contain the structural information for the database. Control files are updated periodically (at a checkpoint and at structural changes), but the data written to the control files is a very small quantity compared to other database files. Control files, like redo logs, should be multiplexed at the ORACLE software level onto different drives or arrays. Non-RAIDed drives or any RAID configuration would be acceptable for control files, although most organizations will typically distribute the multiple copies of the control files with the other database files, given that the read and write requirements are so minimal. For control files, maintaining multiple copies in different locations should be favored over any other concern.</p> <p> <u>Software and static files</u><br />The ORACLE software, configuration files, etc. are very good candidates for RAID5 arrays. This information is not constantly updated, so the RAID5 write penalty is of little concern. Fault-tolerance is advisable: if the database software (or O/S software) becomes unavailable due to a disk failure, then the database instance will abort. Also, recovery will include restore or re-installation of ORACLE software (and possibly operating system software) as well as restore and recovery of the database files. 
RAID5 provides the necessary fault-tolerance to prevent this all-inclusive recovery, and good read performance for dynamic loading and unloading of executable components at the operating system level.<br /></p> <p> <b><u>DSS (Decision Support System) workloads</u></b><br />In comparison to OLTP systems, DSS or data warehousing systems are characterized by primarily SELECT activity during normal working hours, and batch INSERT, UPDATE, and DELETE activity run on a periodic basis (nightly, weekly, or monthly). There will typically be a large amount of variability in the number of rows accessed by any particular SELECT, and the queries will tend to be of a more ad-hoc nature. The number of users will typically be smaller than on the adjoining OLTP systems (where the data originates). The focus is on enabling SELECT activity that completes within an acceptable response time, while ensuring that the batch update activity still has capacity to complete in its allowable time window. Note that there are two areas of performance to be concerned with: periodic refreshes and ad-hoc read activity. The general directive in this case should be to configure the database such that read-only activity performed by end users is as good as it can get without rendering refreshes incapable of completion. As with OLTP systems, each type of database file would ideally have its own private disk subsystem (disks, arrays, and controller channel), but with less than ideal resources certain groupings tend to work well for DSS systems. (Please see the article on Instance tuning for information on these groupings.)</p> <p> <u>Redo logs</u><br />Redo logs are only written to while update activity is occurring. In a DSS-oriented system, a significant portion of data entered interactively during the day may be loaded into the DSS database during only a few hours. 
Given this characteristic, redo logging may tend to be more of a bottleneck on the periodic refresh processes of a DSS database than on its adjoining OLTP systems. If nightly loads are taking longer than their allotted window, then redo logging should be the first place to look. The same RAID/non-RAID suggestions that apply to redo logging in OLTP also apply to DSS systems. As with OLTP systems, redo logs should always be mirrored at the ORACLE software level, even if they are stored on fault-tolerant disk arrays.</p> <p> <u>Archive logs</u><br />Like redo logging, archive logs are only written out during update activity. If the archive log destination appears to be over-loaded with I/O requests, then consider changing the RAID configuration, or simply increase the size of the redo logs. Since there is a large volume of data being entered in a short period of time, it may be very reasonable to make the redo logs for the DSS or data warehouse much larger (10 or more times) than the redo logs used by the OLTP system. A reasonable rule of thumb is to target about one log switch per hour. With this objective met, the disk configuration and fault-tolerance can be chosen based on the same rules used for OLTP systems.</p> <p> <u>Rollback Segments</u><br />Again like redo logging, rollback segments will be highly utilized during the periodic refreshes, and virtually unused during the normal work hours. Use the same logic for determining RAID or non-RAID configurations on the DSS database that would be used for the OLTP systems.</p> <p> <u>User tables and indexes</u><br />Writes are done to tablespaces containing data and indexes during periodic refreshes, but during the normal work hours read activity on the tables and indexes will typically far exceed the update work performed on a refresh. A fault-tolerant RAID configuration is suggested to sustain availability. 
However, in most cases the business can still operate if the DSS system is unavailable for several hours due to a disk failure. Information for strategic decisions may not be available, but orders can still be entered. If the DSS has high availability requirements, select a fault-tolerant disk configuration. If RAID5 arrays can sustain the periodic refresh updates, then RAID5 is typically a reasonably good choice due to its good read performance. As seen above, the read and write workload capacities can be adjusted by adding physical drives to the array.</p> <p> <u>Temp segments</u><br />In a decision support system or data warehouse, expect temp segment usage to be much greater than what would be found in a transaction system. Recalling that temp segments do not store any permanent data and are not absolutely necessary for recovery, RAID0 may be a good choice. Keep in mind, though, that the loss of a large temp segment due to drive failure may render the DSS unusable (unable to perform sorts to answer large queries) until the failed drives are replaced. If availability requirements are high, then a fault-tolerant solution should be selected, or at least considered. If the percentage of sorts on disk is low, then RAID5 may offer acceptable performance; if this percentage is high, RAID1 or RAID10 may be required.</p> <p> <u>Control files</u><br />As with OLTP systems, control files should always be mirrored at the ORACLE software level regardless of any fault-tolerant disk configurations. Since reads and writes to these files are minimal, any disk configuration should be acceptable. Most organizations will typically disperse control files onto different disk arrays and controller cards, along with other database files.</p> <p> <u>Software and static files</u><br />Like OLTP systems, these files should be placed on fault-tolerant disk configurations. 
Since very little write activity is present, these are again good candidates for RAID5.</p> <p> Taking the above information into consideration, can an organization run an entire ORACLE database instance on a single RAID5 array? The answer is "yes". Will the organization get a good level of fault-tolerance? Again, the answer is "yes". Will the organization get acceptable performance? The answer is "it depends". This dependency includes the type of workload, the number of users, the throughput requirements, and a whole host of other variables. If the organization has an extremely limited budget, then it can always start with a single RAID5 array, perform the necessary analysis to see where improvement is needed, and proceed to correct the deficiencies.<br /><br /></p> <p><b style="text-decoration: underline;">Summary</b><br /><br />RAID 1 is mirroring - blocks are written simultaneously to both members of the RAID set, so if one member fails, the other still has a full set of data.<br /><br />RAID 5 is striping with distributed parity - data blocks are written in a stripe across all except one of the disks and a parity block is written to the last disk. Each data write goes to a different set of n-1 disks, so that the data and parity are scattered equally amongst the drives.<br /><br />RAID 1 gives a marginal read speed increase with no real write overhead. RAID 5 gives quite a high speed increase for reads but incurs a high overhead for write operations (normally the RAID5 controller will have a write-back cache attached to alleviate this).<br /><br />For high write volumes R5 is far from ideal - much better for low writes but high reads. 
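The RAID5 write overhead can be quantified with the classic small-write penalty arithmetic (the per-disk IOPS figure is hypothetical; a sketch, not vendor-specific math):

```shell
# Classic RAID5 small-write penalty: each logical write turns into
# read old data + read old parity + write new data + write new parity = 4 disk I/Os
DISKS=4
IOPS_PER_DISK=60
WRITE_PENALTY=4
max_writes=$(( DISKS * IOPS_PER_DISK / WRITE_PENALTY ))
echo "RAID5 on ${DISKS} disks: ~${max_writes} small writes/sec"
```

That 4x penalty is why the 4-disk RAID5 configuration in the earlier tables tops out at single-disk write rates, and why controllers front RAID5 sets with a write-back cache.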
Don't put actively written files on an R5 set - especially things like redo logs!<br /><br />To really tune the I/Os you need to know the chunksize of your RAID controller and tweak db_file_multiblock_read_count appropriately.<br /></p> <p><br /><big><big><span style="font-weight: bold; text-decoration: underline;">Distribution of Oracle Files</span></big></big><br />The following table shows what Oracle suggests for RAID usage:<br /></p> <table style="width: 100%; text-align: left;" border="2" cellpadding="2" cellspacing="2"> <tbody> <tr> <td style="vertical-align: top; font-weight: bold;">RAID<br /> </td> <td style="vertical-align: top; font-weight: bold;">Type of Raid<br /> </td> <td style="vertical-align: top; font-weight: bold;">Control File<br /> </td> <td style="vertical-align: top; font-weight: bold;">Database File<br /> </td> <td style="vertical-align: top; font-weight: bold;">Redo Log File<br /> </td> <td style="vertical-align: top; font-weight: bold;">Archive Log File<br /> </td> </tr> <tr> <td style="vertical-align: top; font-weight: bold;">0<br /> </td> <td style="vertical-align: top;">Striping<br /> </td> <td style="vertical-align: top;">Avoid<br /> </td> <td style="vertical-align: top;">OK<br /> </td> <td style="vertical-align: top;">Avoid<br /> </td> <td style="vertical-align: top;">Avoid<br /> </td> </tr> <tr> <td style="vertical-align: top; font-weight: bold;">1<br /> </td> <td style="vertical-align: top;">Shadowing<br /> </td> <td style="vertical-align: top;"><span style="font-weight: bold;">Best</span><br /> </td> <td style="vertical-align: top;">OK<br /> </td> <td style="vertical-align: top;"><span style="font-weight: bold;">Best</span><br /> </td> <td style="vertical-align: top;"><span style="font-weight: bold;">Best</span><br /> </td> </tr> <tr> <td style="vertical-align: top; font-weight: bold;">0 + 1<br /> </td> <td style="vertical-align: top;">Striping and Shadowing<br /> </td> <td style="vertical-align: top;">OK<br /> </td> <td 
style="vertical-align: top;"><span style="font-weight: bold;">Best</span><br /> </td> <td style="vertical-align: top;">Avoid<br /> </td> <td style="vertical-align: top;">Avoid<br /> </td> </tr> <tr> <td style="vertical-align: top; font-weight: bold;">3<br /> </td> <td style="vertical-align: top;">Striping with static parity<br /> </td> <td style="vertical-align: top;">OK<br /> </td> <td style="vertical-align: top;">OK<br /> </td> <td style="vertical-align: top;">Avoid<br /> </td> <td style="vertical-align: top;">Avoid<br /> </td> </tr> <tr> <td style="vertical-align: top; font-weight: bold;">5<br /> </td> <td style="vertical-align: top;">Striping with rotating parity<br /> </td> <td style="vertical-align: top;">OK<br /> </td> <td style="vertical-align: top;">Best if RAID0-1 not available<br /> </td> <td style="vertical-align: top;">Avoid<br /> </td> <td style="vertical-align: top;">Avoid<br /> </td> </tr> </tbody> </table> <p><br /><br /></p> khaleel shaikhttp://www.blogger.com/profile/07051406652176384225noreply@blogger.com0tag:blogger.com,1999:blog-2435017743722636286.post-81789658561920690912009-04-29T13:59:00.000+05:302009-05-19T11:31:33.210+05:30<h2><span style="text-decoration: underline;">File and Directory Navigation</span></h2> The <span style="font-weight: bold;">"</span><span style="color: rgb(204, 0, 0); font-weight: bold;">pwd</span><span style="font-weight: bold;">" </span>command displays the current directory:<br /><blockquote> <pre>root> pwd<br />/u01/app/oracle/product/9.2.0.1.0</pre> </blockquote> The <span style="font-weight: bold;">"</span><span style="color: rgb(204, 0, 0); font-weight: bold;">ls</span><span style="font-weight: bold;">" </span>command lists all files and directories in the specified directory. If no location is defined it acts on the current directory. 
The "-a" flag lists hidden "." files. The "-l" flag lists file details. <blockquote> <pre>root> ls<br />root> ls /u01<br />root> ls -al</pre> </blockquote> More Examples:<br /><span style="color:BLUE;">ls -al | pg</span> do a full directory listing and prompt to stop stuff whizzing off the page.<br /><span style="color:BLUE;">ls | wc -l </span>count the files in the current directory.<br /><span style="color:BLUE;">ls -alt</span> list files in date order<br /><span style="color:BLUE;">ls -alt | head -10 </span>as above but only display the first 10<br /><span style="color:BLUE;">ls -l $ORACLE_HOME/reports60/printer/admin/spoolcmd.sh </span>Verify that the spoolcmd.sh file has execute permissions<br /><span style="color:BLUE;">ls -s | awk '{if ($1 > 50) print $1 " " $2 }'</span> list all files over 50 blocks in size.<br /><span style="color:BLUE;">ls -alq </span>List files with hidden characters. Very useful when you cannot delete a file for an unknown reason, as sometimes a file can be created with hidden control characters. 
(very common when stty not set properly)<br /><span style="color:BLUE;">ls -1</span> Shows the files in a list (just the file names, this option is useful in shell scripts where the file names need to be fed into another program or command for manipulation)<br /><span style="color:BLUE;">ls -lh</span> The option "-h" comes in handy to display the size of the files in a human readable form.<br /><span style="color:BLUE;">ls -lr</span> The parameter -r shows the output in reverse order<br /><span style="color:BLUE;">ls -lR</span> The -R option makes the ls command run recursively, that is, it descends into the subdirectories and shows those files too<br /><br /><br />The <span style="font-weight: bold;">"</span><span style="color: rgb(204, 0, 0); font-weight: bold;">cd</span><span style="font-weight: bold;">" </span>command is used to change directories:<br /><blockquote> <pre>root> cd /u01/app/oracle</pre> </blockquote> The <span style="font-weight: bold;">"</span><span style="color: rgb(204, 0, 0); font-weight: bold;">touch</span><span style="font-weight: bold;">" </span>command is used to create a new empty file with the default permissions:<br /><blockquote> <pre>root> touch my.log</pre> </blockquote> The <span style="font-weight: bold;">"</span><span style="color: rgb(204, 0, 0); font-weight: bold;">rm</span><span style="font-weight: bold;">" </span>command is used to delete files and directories. The "-R" flag tells the command to recurse through subdirectories. <blockquote> <pre>root> rm my.log<br />root> rm -R /archive</pre> </blockquote> The <span style="font-weight: bold;">"</span><span style="color: rgb(204, 0, 0); font-weight: bold;">mv</span><span style="font-weight: bold;">" </span>command is used to move or rename files and directories. The "." 
represents the current directory <blockquote> <pre>root> mv [from] [to]<br />root> mv my.log my1.log<br />root> mv * /archive<br />root> mv /archive/* .</pre> </blockquote> The <span style="font-weight: bold;">"</span><span style="color: rgb(204, 0, 0); font-weight: bold;">cp</span><span style="font-weight: bold;">" </span>command is used to copy files and directories:<br /><blockquote> <pre>root> cp [from] [to]<br />root> cp my.log my1.log<br />root> cp * /archive<br />root> cp /archive/* .</pre> </blockquote> The <span style="font-weight: bold;">"</span><span style="color: rgb(204, 0, 0); font-weight: bold;">mkdir</span><span style="font-weight: bold;">" </span>command is used to create new directories:<br /><blockquote> <pre>root> mkdir archive</pre> </blockquote> The <span style="font-weight: bold;">"</span><span style="color: rgb(204, 0, 0); font-weight: bold;">rmdir</span><span style="font-weight: bold;">" </span>command is used to delete directories:<br /><blockquote> <pre>root> rmdir archive</pre> </blockquote> The <span style="font-weight: bold; color: rgb(204, 0, 0);">"grep"</span> command performs a search for a specified string or pattern.<br /><div style="margin-left: 40px;"><span style="font-family:monospace;">ps -eaf | grep oracle</span> Show all processes owned by oracle.<br /></div><br />The <span style="font-weight: bold;">"</span><span style="color: rgb(204, 0, 0); font-weight: bold;">find</span><span style="font-weight: bold;">" </span>command can be used to find the location of specific files. The "/" argument represents the starting directory for the search. Wildcards such as "dbms*" can be used for the filename. <blockquote> <pre>root> find / -name dbmspool.sql<br />root> find / -print | grep dbmspool.sql Search everywhere for the specified file<br />root> find . 
-exec grep "DISPLAY" {} \; -print | pg Search all files for the text string "DISPLAY" - takes a while to run!<br /></pre> </blockquote> Display only the lines in /etc/oratab where the lines do not (-v option; negation) start with # character (^ is a special character indicating beginning of line, similarly $ is end of line).<br /><pre style="margin-left: 40px;">root> grep -v '^#' /etc/oratab<br /></pre> <span style="text-decoration: underline;">Tip for Oracle Users</span><br />Oracle produces many extraneous files: trace files, log files, dump files, and so on. Unless they are cleaned periodically, they can fill up the filesystem and bring the database to a halt.<br />To ensure that doesn't happen, simply search for the files with extension "trc" and remove them if they are more than three days old. A simple command does the trick:<br /><span style="font-family:monospace;">find . -name "*.trc" -ctime +3 -exec rm -f {} \;</span><br /><br /><br />The <span style="font-weight: bold;">"</span><span style="color: rgb(204, 0, 0); font-weight: bold;">which</span><span style="font-weight: bold;">" </span>command can be used to find the location of an executable you are using. The "which" command searches your PATH setting for occurrences of the specified executable. <blockquote> <pre>oracle> which sqlplus</pre> </blockquote> The <span style="font-weight: bold;">"</span><span style="color: rgb(204, 0, 0); font-weight: bold;">PS1</span><span style="font-weight: bold;">" </span>variable changes your prompt.<br /><pre style="margin-left: 40px;">root> PS1="Diego_Master:> "<br />Diego_Master:><br /></pre> The <span style="font-weight: bold;">"</span><span style="color: rgb(204, 0, 0); font-weight: bold;">wc</span><span style="font-weight: bold;">" </span>utility displays a count of the number of characters, words and lines in a file. 
The switches for this utility are:<br />-l print line count<br />-c print character count<br />-w print word count<br /><pre style="margin-left: 40px;">root> wc -l README.txt<br /> 85 README.txt<br /></pre> The "<span style="font-weight: bold; color: rgb(204, 0, 0);">more</span>" or <span style="font-weight: bold; color: rgb(204, 0, 0);">"cat</span>" commands let you display the contents of a file:<br /><span style="font-family:monospace;">cat file1 file2 > file3 </span>Join file1 to file2 and output to file3<br /><br />The "<span style="color: rgb(204, 0, 0); font-weight: bold;">tail</span>" command lets you see a specified number of lines from the end of the file<br /><div style="margin-left: 40px;"><span style="font-family:monospace;">tail -n filename</span><br /></div><br />The "<span style="color: rgb(204, 0, 0); font-weight: bold;">head</span>" command lets you see the specified number of lines from the top of the file<br /><div style="margin-left: 40px;"><span style="font-family:monospace;">head -n filename</span><br /></div><br />The "<span style="font-weight: bold; color: rgb(204, 0, 0);">diff</span>" command displays the differences between file1 and file2. Options:<br /><div style="margin-left: 40px;"><span style="font-family:monospace;">diff README.txt README2.txt </span><br /></div> -b = ignore changes in white space<br />-i = ignore case differences (A=a)<br />Another option, -y, shows the same output, but side by side:<br /><div style="margin-left: 40px;"><span style="font-family:monospace;">diff -y file1 file2 -W 120</span><br /></div><br /><br />The <span style="font-weight: bold; color: rgb(204, 0, 0);">"alias"</span> command creates an alias for other commands. 
Examples:<br /><div style="margin-left: 40px;"><span style="font-family:monospace;">alias ls='ls -al'</span> Alias the command 'ls -al' to ls<br /><span style="font-family:monospace;">alias os='echo $ORACLE_HOME'</span> Alias the command to 'os'<br /></div> Here is a list of some very useful aliases I like to define:<br /><div style="margin-left: 40px;"><span style="font-family:monospace;">alias bdump='cd $ORACLE_BASE/admin/$ORACLE_SID/bdump'</span><br /><span style="font-family:monospace;">alias l='ls -d .* --color=tty'</span><br /><span style="font-family:monospace;">alias ll='ls -l --color=tty'</span><br /><span style="font-family:monospace;">alias oh='cd $ORACLE_HOME'</span><br /><span style="font-family:monospace;">alias os='echo $ORACLE_SID'</span><br /><span style="font-family:monospace;">alias tns='cd $ORACLE_HOME/network/admin'</span><br /></div><br />The <span style="font-weight: bold; color: rgb(204, 0, 0);">"echo"</span> command echoes strings to the screen<br /><span style="font-family:monospace;">echo $DISPLAY</span> Display the contents of the DISPLAY variable to screen.<br /><br />With the <span style="font-weight: bold; color: rgb(204, 0, 0);">"du"</span> and<span style="color: rgb(204, 0, 0);"> </span><span style="font-weight: bold; color: rgb(204, 0, 0);">"df"</span> commands, you can display hard disk information. (-k Use 1024 byte blocks instead of the default 512)<br /><div style="margin-left: 40px;"><span style="font-family:monospace;">du</span> Display disk usage for all directories and subdirectories under the current directory.<br /><span style="font-family:monospace;">df -k</span> Displays disk space free on each filesystem. 
Very useful.<br /></div><br />The<span style="font-weight: bold; color: rgb(204, 0, 0);"> "ftp"</span> command invokes the file transfer protocol file exchange program:<br /><div style="margin-left: 40px;"><span style="font-family:monospace;">ftp diego.domain.com</span> ftp to that machine (it will prompt you for a login.)<br /></div> Once logged in and at the ftp prompt, you have many options:<br /><div style="margin-left: 40px;"><span style="font-family:monospace;">bin</span> Change transfer mode to binary mode (essential for moving oracle files, dmp, zip, etc).<br /></div> <div style="margin-left: 40px;"><span style="font-family:monospace;">ascii </span>Change transfer mode to ascii mode<br /><span style="font-family:monospace;">send myfile</span> Transfer 'myfile' from your local machine to the host<br /><span style="font-family:monospace;">get fred</span> Receive the file 'fred' from the host onto your local machine.<br /><span style="font-family:monospace;">mget *</span> Transfer all files in current directory of the host to your local machine.<br /><span style="font-family:monospace;">!pwd</span> Check the directory of your local machine<br /><span style="font-family:monospace;">pwd </span>Check current directory of host machine<br /></div><br />The <span style="color: rgb(204, 0, 0); font-weight: bold;">"ln"</span> command lets you create a link to a file. You use this during the Oracle Software installation<br /><div style="margin-left: 40px;"><span style="font-family:monospace;">ln -s /etc/init.d/dbora /etc/rc0.d/K10dbora</span><br /><span style="font-family:monospace;">ln -s /etc/init.d/dbora /etc/rc3.d/S99dbora</span><br /><span style="font-family:monospace;">ln -s /etc/init.d/dbora /etc/rc5.d/S99dbora</span><br /></div><br />The <span style="font-weight: bold; color: rgb(204, 0, 0);">"sed"</span> command invokes the stream editor. 
It's helpful to do a global search and replace on a file:<br /><span style="font-family:monospace;">ls | sed 's/$/&lt;br /&gt;/g' > my_ls.html</span> Append an HTML &lt;br /&gt; tag to the end of each line of the output of 'ls'. Good for formatting the output of unix commands into html for cgi scripts.<br /><br />The <span style="font-weight: bold; color: rgb(204, 0, 0);">"awk"</span> command has its own scripting language.<br />For example, to display only the 6th field of the output from 'who am i' (field 6 is the IP address of your own terminal session / PC) you can use:<br /><div style="margin-left: 40px;"><span style="font-family:monospace;">who am i | awk '{print $6}' </span><br /></div> This can be used to automatically set the DISPLAY environment variable for users' logins.<br /><br />The <span style="color: rgb(204, 0, 0); font-weight: bold;">"cksum"</span> command provides a checksum of a file. It's very useful for comparing two files of the same size that you suspect are different.<br /><div style="margin-left: 40px;"><span style="font-family:monospace;">cksum &lt;filename&gt;</span><br /></div><br />The<span style="color: rgb(204, 0, 0);"> <span style="font-weight: bold;">"split"</span></span><span style="font-weight: bold;"> </span>command can split up a file into smaller chunks.<br /><div style="margin-left: 40px;"><span style="font-family:monospace;">split -10000 bug123456.z</span> Splits 'bug123456.z' into minifiles of 10000 lines each.<br /></div><br /><span style="font-weight: bold;"><br /></span>The <span style="font-weight: bold; color: rgb(204, 0, 0);">"gzip"</span> and <span style="font-weight: bold; color: rgb(204, 0, 0);">"compress"</span> commands allow you to compress files. The <code>gzip</code> command results in a compressed copy of the original file with a ".gz" extension. The <code>gunzip</code> command reverses this process. The <code>compress</code> command results in a compressed copy of the original file with a ".Z" extension. 
The <code>uncompress</code> command reverses this process: <blockquote> <pre>gzip myfile<br />gunzip myfile.gz<br />compress myfile<br />uncompress myfile<br /></pre> </blockquote><br />The <span style="font-weight: bold; color: rgb(204, 0, 0);">"rsync"</span> command is a great file copier and a way to synchronize directories. Here are some examples:<br />Only get diffs. Do multiple times for troublesome downloads<br /><span style="font-family:monospace;"> rsync -P rsync://rsync.server.com/path/to/file file</span> <br />Locally copy with rate limit. It's like nice for I/O<br /><span style="font-family:monospace;"> rsync --bwlimit=1000 fromfile tofile</span> <br />Mirror web site (using compression and encryption)<br /><span style="font-family:monospace;"> rsync -az -e ssh --delete ~/public_html/ remote.com:'~/public_html'</span><br />Synchronize current directory with remote one<br /><span style="font-family:monospace;"> rsync -auz -e ssh remote:/dir/ . && rsync -auz -e ssh . remote:/dir/</span><br /><br /><br />The <span style="font-weight: bold; color: rgb(204, 0, 0);">"ssh"</span> command lets you connect to a remote box. 
The <span style="font-weight: bold; color: rgb(204, 0, 0);">"scp"</span> command lets you perform remote copy operations<br />Run command on $HOST as $USER (default command=shell)<br /><span style="font-family:monospace;"> ssh $USER@$HOST command </span><br />Run GUI command on $HOSTNAME as $USER<br /><div style="margin-left: 40px;"><span style="font-family:monospace;">ssh -f -Y $USER@$HOSTNAME xeyes </span><br /></div> Copy with permissions to $USER's home directory on $HOST<br /><span style="font-family:monospace;"> scp -p -r $USER@$HOST: file dir/ </span><br />Forward connections to $HOSTNAME:8080 out to $HOST:80<br /><span style="font-family:monospace;"> ssh -g -L 8080:localhost:80 root@$HOST </span><br />Forward connections from $HOST:1434 in to imap:143<br /><div style="margin-left: 40px;"><span style="font-family:monospace;">ssh -R 1434:imap:143 root@$HOST </span><br /></div><br /><h2><a name="File_Permissions"></a><span style="text-decoration: underline;">File Permissions</span></h2> The "<span style="font-weight: bold; color: rgb(204, 0, 0);">umask</span>" command can be used to read or set default file permissions for the current user:<br /><blockquote> <pre>root> umask 022</pre> </blockquote> The umask value is subtracted from the default permissions (666 for files, 777 for directories) to give the final permission:<br /><blockquote> <pre>666 : Default permission<br />022 : - umask value<br />644 : final permission</pre> </blockquote> The "<span style="color: rgb(204, 0, 0); font-weight: bold;">chmod</span>" command is used to alter file permissions after the file has been created:<br /><blockquote> <pre>root> chmod 777 *.log<br />Owner Group World Permission<br />========= ========= ========= ======================<br />7 (u+rwx) 7 (g+rwx) 7 (o+rwx) read + write + execute<br />6 (u+rw) 6 (g+rw) 6 (o+rw) read + write<br />5 (u+rx) 5 (g+rx) 5 (o+rx) read + execute<br />4 (u+r) 4 (g+r) 4 (o+r) read only<br />2 (u+w) 2 (g+w) 2 (o+w) write only<br />1 (u+x) 1 (g+x) 1 (o+x) execute 
only</pre> </blockquote> Character equivalents can be used in the chmod command:<br /><blockquote> <pre>root> chmod o+rwx *.log<br />root> chmod g+r *.log<br />root> chmod -x *.log</pre> </blockquote> The "<span style="font-weight: bold; color: rgb(204, 0, 0);">chown</span>" command is used to change the ownership of files after creation. The "-R" flag causes the command to recurse through any subdirectories. <blockquote> <pre>root> chown -R oinstall.dba *</pre> </blockquote> Finally the "<span style="font-weight: bold; color: rgb(204, 0, 0);">chgrp</span>" command is used to change the group of a file:<br /><pre style="margin-left: 40px;">root> chgrp group file_name</pre> The following example changes the ownership on every single file in current directory and lower directories to oracle (useful if someone has done an install erroneously as root.)<br /><span style="font-family:monospace;">find . -exec chown oracle {} \; -print </span><br /><h2><br /></h2> <h2><a name="OS_Users_Management"></a><span style="text-decoration: underline;">OS Users Management</span></h2> The <span style="font-weight: bold; color: rgb(204, 0, 0);">"useradd"</span> command is used to add OS users:<br /><blockquote> <pre>root> useradd -G oinstall -g dba -d /usr/users/my_user -m -s /bin/ksh my_user</pre> </blockquote> <ul><li>The "-g" flag specifies the primary group.</li><li>The "-G" flag specifies any supplementary (secondary) groups.</li><li>The "-d" flag specifies the user's home directory.</li><li>The "-m" flag creates the home directory.</li><li>The "-s" flag specifies the default shell.</li></ul> The <span style="font-weight: bold; color: rgb(204, 0, 0);">"usermod"</span> command is used to modify the user settings after a user has been created:<br /><blockquote> <pre>root> usermod -s /bin/csh my_user</pre> </blockquote> The <span style="font-weight: bold; color: rgb(204, 0, 0);">"userdel"</span> command is used to delete existing users. The "-r" flag removes the user's home directory. 
<blockquote> <pre>root> userdel -r my_user</pre> </blockquote> The <span style="font-weight: bold; color: rgb(204, 0, 0);">"passwd"</span> command is used to set, or reset, the user's login password:<br /><blockquote> <pre>root> passwd my_user</pre> </blockquote> The<span style="font-weight: bold; color: rgb(204, 0, 0);"> "who"</span> command can be used to list all users who have OS connections:<br /><blockquote> <pre>root> who<br />root> who | head -5<br />root> who | tail -5<br />root> who | grep -i ora<br />root> who | wc -l</pre> </blockquote> <ul><li>The "head -5" command restricts the output to the first 5 lines of the who command.</li><li>The "tail -5" command restricts the output to the last 5 lines of the who command.</li><li>The "grep -i ora" command restricts the output to lines containing "ora".</li><li>The "wc -l" command returns the number of lines from "who", and hence the number of connected users.</li></ul> <h2><br /></h2> <h2><a name="Process_Management"></a><span style="text-decoration: underline;">Process Management</span></h2> The "<span style="color: rgb(204, 0, 0); font-weight: bold;">ps</span>" command lists current process information:<br /><blockquote> <pre>root> ps<br />root> ps -ef | grep -i ora</pre> </blockquote> Specific processes can be killed by specifying the process id in the <span style="color: rgb(204, 0, 0); font-weight: bold;">"kill"</span> command; the "-9" flag forces the process to be killed.<br /><blockquote> <pre>root> kill -9 12345</pre> </blockquote> <h2><br /></h2> <h2><a name="uname_and_hostname"></a><span style="text-decoration: underline;">uname and hostname</span></h2> The <span style="font-weight: bold; color: rgb(204, 0, 0);">"uname"</span> and <span style="color: rgb(204, 0, 0);">"hostname" </span>commands can be used to get information about the host:<br /><blockquote> <pre>root> uname -a<br />Linux HPLINUX 2.4.21-20.ELsmp #1 SMP Wed Aug 18 20:46:40 EDT 2004 i686 i686 i386 GNU/Linux<br /><br />root> uname -a | awk '{ 
print $2 }'<br />HPLINUX<br /><br />root> hostname<br />HPLINUX<br /><br /></pre> </blockquote> <h2 style="text-decoration: underline;"><a name="Helpful_Commands"></a>Some helpful commands</h2> <b>To enable doskey mode in Unix</b><br />set -o vi<br /><b><br />To see errors from Alert log file</b><br />grep ORA- alertSID.log<br />or<br />cat alert_LIN1.log | grep -i ORA-<br /><br /><b>To see the name of a user from his unix id (Provided your UNIX admin keeps them!)</b><br />grep userid /etc/passwd<br /><br /><b>To see if port number 1521 is reserved for Oracle</b><br />grep 1521 /etc/services<br /><br /><b>To see the latest 20 lines in the Alert log file:</b><br />tail -20 alertSID.log<br /><br /><b>To see the first 20 lines in the Alert log file:</b><br />head -20 alertSID.log<br /><br /><b>To find a file named "whereare.you" under all sub-directories of /usr/oracle</b><br />find /usr/oracle -name whereare.you -print<br /><br /><b>To remove/delete all the files under /usr/oracle which end with .tmp</b><br />find /usr/oracle -name "*.tmp" -print -exec rm -f {} \;<br /><br /><span style="font-weight: bold;">Remove/Delete files older than N days (useful for deleting log, trace and tmp files)</span><br />find . -name "*.*" -mtime +N -exec rm {} \; <br /><br /><b>To list all files under /usr/oracle which are older than a week.</b><br />find /usr/oracle -mtime +7 -print<br /><br /><b>To list all files under /usr/oracle which are modified within a week.</b><br />find /usr/oracle -mtime -7 -print -> Solaris<br />find . 
-mtime -7 -exec ls -lt {} \; -> Linux<br /><br /><b>To compress all files which end with .dmp and are more than 1 MB.</b><br />find /usr/oracle -size +1048576c -name "*.dmp" -print -exec compress {} \;<br /><br /><b>To see the space used and available on /oracle mount point</b><br /> df -k /oracle<br /><br /><b>To convert the contents of a text file to UPPERCASE</b><br />tr "[a-z]" "[A-Z]" &lt; filename &gt; newfilename<br /><br /><b>To convert the contents of a text file to lowercase.</b><br />tr "[A-Z]" "[a-z]" &lt; filename &gt; newfilename<br /><br /><b>To see the oracle processes</b><br />ps -ef | grep SIDNAME<br /><br /><b>To change all occurrences of SCOTT to TIGER in a file</b><br />sed 's/SCOTT/TIGER/g' filename > newfilename<br /><br /><b>To see lines 101 to 120 of a file</b><br />head -120 filename | tail -20<br /><br /><span style="font-weight: bold;">To remove DOS CR/LFs (^M)</span><br />sed -e 's/^M$//' filename > tempfile<br /><span style="font-weight: bold;"><br /></span><big><big><span style="font-weight: bold; text-decoration: underline;"><a name="Getting_Information_from_the_OS"></a>Getting Information from the OS</span></big></big><br /><table style="width: 1103px; height: 389px;" border="1" cellpadding="1" cellspacing="2" cols="6"> <tbody> <tr> <td bg="" valign="CENTER" style="color:WHITE;"> <p> <b><span style="color:BLUE;">OS</span></b></p> </td> <td bg="" valign="CENTER" style="color:WHITE;"> <p> <b><span style="color:BLUE;">patchlevel</span></b></p> </td> <td bg="" valign="CENTER" style="color:WHITE;"> <p> <b><span style="color:BLUE;">memory</span></b></p> </td> <td bg="" valign="CENTER" style="color:WHITE;"> <p> <b><span style="color:BLUE;">I/O Info</span></b></p> </td> <td bg="" valign="CENTER" style="color:WHITE;"> <p> <b><span style="color:BLUE;">CPU Info</span></b></p> </td> <td bg="" valign="CENTER" style="color:WHITE;"> <p> <b><span style="color:BLUE;">CPU / Memory</span></b></p> </td> </tr> <tr> <td bgcolor="LIGHTYELLOW" 
valign="CENTER"> <p> Sun Solaris</p> </td> <td style="background-color: lightyellow;"> <p> showrev -p</p> </td> <td bgcolor="LIGHTYELLOW" valign="CENTER"> <p> sysinfo<br />vmstat<br /> <i>/usr/sbin/prtconf</i><br /> for general information + memory<br /> <i>/usr/sbin/psrinfo -v</i> for CPU info<br /> </p> </td> <td bgcolor="LIGHTYELLOW" valign="CENTER"> <p>sar -d<br />iostat<br /> </p> </td> <td bgcolor="LIGHTYELLOW" valign="CENTER"> <p>/opt/RICHPse/bin/se<br />/opt/RICHPse/examples/toptool.se<br />sar -u<br />/usr/bin/mpstat<br /> </p> </td> <td bgcolor="LIGHTYELLOW" valign="CENTER"> <p>/opt/RICHPse/bin/se<br />/opt/RICHPse/examples/toptool.se<br />top<br />/etc/swap -l<br /> </p> </td> </tr> <tr> <td style="vertical-align: top;">Linux<br /> </td> <td style="vertical-align: top;"><br /> </td> <td style="vertical-align: top;"><small>grep MemTotal /proc/meminfo</small><br />free<br /> </td> <td style="vertical-align: top;">vmstat 3 5<br /> </td> <td style="vertical-align: top;">grep "model name" /proc/cpuinfo<br />cat /proc/cpuinfo<br />sar -u 2 5<br />sar -b<br /> </td> <td style="vertical-align: top;">top<br />sar -W 5 5<br /> </td> </tr> <tr> <td bgcolor="LIGHTYELLOW" valign="CENTER"> <p> HP-UX</p> </td> <td bgcolor="LIGHTYELLOW" valign="CENTER"> <p> swlist</p> </td> <td bgcolor="LIGHTYELLOW" valign="CENTER"> <p> sam</p> </td> <td bgcolor="LIGHTYELLOW" valign="CENTER"> <p><br /> </p> </td> <td bgcolor="LIGHTYELLOW" valign="CENTER"><br /> </td> <td bgcolor="LIGHTYELLOW" valign="CENTER"> <p> vmstat -n 2 200</p> </td> </tr> <tr> <td bgcolor="LIGHTYELLOW" valign="CENTER"> <p> AIX/RS-6000</p> </td> <td bgcolor="LIGHTYELLOW" valign="CENTER"> <p> instfix -ivqk</p> </td> <td bgcolor="LIGHTYELLOW" valign="CENTER"> <p> smit or sar</p> </td> <td bgcolor="LIGHTYELLOW" valign="CENTER"> <p><br /> </p> </td> <td bgcolor="LIGHTYELLOW" valign="CENTER"><br /> </td> <td bgcolor="LIGHTYELLOW" valign="CENTER"> <p> </p> <br /></td> </tr> </tbody> </table> <p class="MsoNormal" 
style="margin-top: 0px; margin-bottom: 0px;"><br /></p> <p class="MsoNormal" style="margin-top: 0px; margin-bottom: 0px;"><br /></p> <span style="font-weight: bold; text-decoration: underline;"><br />Information on RAM and CPUs </span><span style="font-weight: bold; text-decoration: underline;">(Metalink Note </span><span style="font-weight: bold; text-decoration: underline;">233753.1)</span><br /><span style="font-family:monospace;">grep MemTotal /proc/meminfo </span>Show RAM total seen by the system<br /><span style="font-family:monospace;"><br />grep "model name" /proc/cpuinfo</span> Show CPU(s) info<br /><span style="font-family:monospace;">cat /proc/cpuinfo</span><br /><p class="MsoNormal" style="margin-top: 0px; margin-bottom: 0px;"><br /><span style="font-family:monospace;">mount | column -t </span> List mounted filesystems on the system (and align output)<br /></p> <p class="MsoNormal" style="margin-top: 0px; margin-bottom: 0px;"><br /><span style="font-family:monospace;">free -m</span> (in MB)<br /><br /></p> <p class="MsoNormal" style="margin-top: 0px; margin-bottom: 0px;"><span style="font-weight: bold; text-decoration: underline;"><br />Check Swap Activity (Metalink Note </span><span style="font-weight: bold; text-decoration: underline;">225451.1)</span><br /><span style="font-family:monospace;">/sbin/swapon -s</span><br /><span style="font-family:monospace;">free -t</span><br /><span style="font-family:monospace;">cat /proc/swaps </span><br /></p> <p class="MsoNormal" style="margin-top: 0px; margin-bottom: 0px;"><br />The recommended swap size is two to three times the amount of Physical Memory for Swap space (unless the system exceeds 1 GB of Physical Memory, where two times the amount of Physical Memory for Swap space is sufficient)<br />Swap space in Linux is used when the amount of physical memory (RAM) is full. If the system needs more memory resources and the physical memory is full, inactive pages in memory are moved to the swap space. 
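The sizing guideline above can be turned into a quick check (a sketch; it assumes Linux's /proc/meminfo and applies the two-times multiplier recommended for machines over 1 GB):

```shell
# Read total RAM in kB from /proc/meminfo and suggest a swap size
# using the 2x rule of thumb described above.
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap_kb=$((ram_kb * 2))
echo "RAM: ${ram_kb} kB -> suggested swap: ${swap_kb} kB"
```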
While swap space can help machines with a small amount of RAM, it should not be considered a replacement for more RAM. Swap space is located on hard drives, which have a slower access time than physical memory.<br />Swapping is one of the Unix mechanisms to accommodate the size limitation of memory by moving entire processes to disk to reclaim memory.<br />Paging is another Unix mechanism to manage the limitations of memory. Unlike swapping, where entire processes are moved in and out of memory, paging moves only individual pages of processes to disk. Paging is not as serious a problem as swapping, as the entire program does not have to reside in memory to run. A small amount of paging may not noticeably affect the performance of a system. However, the performance of a system may degrade rapidly as paging activity increases.<br />Swap space can have a dedicated swap partition (recommended), a swap file, or a combination of swap partitions and swap files.<br />When analyzing your UNIX machine, make sure that the machine is not swapping at all and at worst paging lightly. 
This indicates a system with a healthy amount of memory available.</p> <p class="MsoNormal" style="margin-top: 0px; margin-bottom: 0px;"><br /><span style="text-decoration: underline;">How can I enable swap in Linux?</span><span style="font-weight: bold;"><b><br /></b></span>First check if swap is enabled:<br /><span style="font-family:monospace;">/sbin/swapon -s</span><br /><span style="font-family:monospace;">Filename Type Size Used Priority</span><br /><span style="font-family:monospace;">/dev/sda3 partition 2040244 453180 -1</span><br /><br />To enable swap, check for swap entries in your /etc/fstab:<br /><span style="font-family:monospace;">grep swap /etc/fstab</span><br /><span style="font-family:monospace;">/dev/sda3 swap swap defaults 0 0</span><br /><br />And use the<span style="font-family:monospace;"> '/sbin/swapon -a'</span> command to enable all swap partitions listed in /etc/fstab.<br /></p> <p class="MsoNormal" style="margin-top: 0px; margin-bottom: 0px;"><br /><span style="text-decoration: underline;">How to add a swapfile?</span><br />Determine the size of the new swap file and multiply by 1024 to determine the block size. 
For example, the block size of a 64 MB swap file is 65536.<br /><br />At a shell prompt as root, type the following command with count being equal to the desired block size:<br /><span style="font-family:monospace;">dd if=/dev/zero of=/data2/swapfile1 bs=1024 count=65536</span><br /><br />Set up the swap file with the command:<br /><span style="font-family:monospace;">/sbin/mkswap /data2/swapfile1</span><br /><br />To enable the swap file immediately but not automatically at boot time:<br /><span style="font-family:monospace;">/sbin/swapon /data2/swapfile1</span><br /><br />To enable it at boot time, edit /etc/fstab to include:<br /><span style="font-family:monospace;">/data2/swapfile1 swap swap defaults 0 0</span><br /><br />The next time the system boots, it will enable the new swap file.<br /><br /><span style="font-weight: bold;"><span style="text-decoration: underline;">Check Services Running and stop them if not used<br /></span></span>Services that should be removed: r* (shell or rsh, login or rlogin, exec or rexec, rcp), telnet, ftp, sendmail, exim, postfix, printer, qmail, http, portmap, SMBD (Samba)<br /><span style="font-family:monospace;">chkconfig --list</span> --> Show services running and their run levels<br /><span style="font-family:monospace;">chkconfig --del servicename</span> --> Remove that service<br /><span style="font-family:monospace;">chkconfig --level 345 servicename off</span> --> Stop that service for levels 3, 4 and 5<br /><br />It may also be necessary to check the file /etc/inetd.conf because it has references to some services; if any service that I want to stop is there, comment out that line and reboot the server or run:<br /><span style="font-family:monospace;">/etc/init.d/inetd restart</span><br /></p> <p class="MsoNormal" style="margin-top: 0px; margin-bottom: 0px;"><br /></p> <p class="MsoNormal" style="margin-top: 0px; margin-bottom: 0px;"><span style="font-weight: bold; text-decoration: underline;">Enable FTP and TELNET Services</span><br
/><span style="font-family:monospace;">cd /etc/xinetd.d</span><br /><span style="font-family:monospace;">vi wu-ftpd</span><br />Change the disable field from "yes" to "no" and save changes.<br /><span style="font-family:monospace;">vi telnet</span><br />Change the disable field from "yes" to "no" and save changes.<br /><br /></p> <p class="MsoNormal" style="margin-top: 0px; margin-bottom: 0px;"><span style="text-decoration: underline; font-weight: bold;"><br />Information on Network</span></p> <p class="MsoNormal" style="margin-top: 0px; margin-bottom: 0px; color: rgb(0, 0, 0);">Display network interface configuration parameters<br /></p> <div style="margin-left: 40px;"><span style="font-family:monospace;">ifconfig -a</span><br /></div> <p class="MsoNormal" style="margin-top: 0px; margin-bottom: 0px; color: rgb(0, 0, 0);"><br />Address resolution display and control<br /></p> <div style="margin-left: 40px;"><span style="font-family:monospace;">arp -a</span><br /></div> <p class="MsoNormal" style="margin-top: 0px; margin-bottom: 0px;"><br />Check Routes:<br /></p> <div style="margin-left: 40px;"><span style="font-family:monospace;">netstat -rn</span><br /></div> <p class="MsoNormal" style="margin-top: 0px; margin-bottom: 0px;"><br /><span style="font-weight: bold; text-decoration: underline;">Change network: change its IP, mask, bcast and gateway.
</span><br />The easiest way is to execute <span style="font-family:monospace;">sys-unconfig</span>.<br />After the process finishes, power down the box and move it to the new network.<br />When you boot the box it will ask the appropriate questions about the network configuration.<br /><br /></p> <p class="MsoNormal" style="margin-top: 0px; margin-bottom: 0px;"><span style="font-weight: bold; text-decoration: underline;">Important Network LINUX files:</span><br />Making the following gross assumptions:<br /><span style="font-family:monospace;">Your IP is: 192.168.0.1</span><br /><span style="font-family:monospace;">Your Gateway is: 192.168.0.254</span><br /><span style="font-family:monospace;">Your netmask is: 255.255.255.0</span><br /><span style="font-family:monospace;">Your nameservers are: 192.168.0.2, 192.168.0.3, and 192.168.0.4</span><br /><br /><span style="font-weight: bold;">/etc/sysconfig/network File</span><br /><span style="font-family:monospace;">NETWORKING=yes</span><br /><span style="font-family:monospace;">HOSTNAME=your_machine_name.company.com</span><br /><span style="font-family:monospace;">GATEWAY=192.168.0.254</span><br /><br /><span style="font-weight: bold;">/etc/hosts File</span><br /><span style="font-family:monospace;">127.0.0.1 localhost.localdomain localhost</span><br /><span style="font-family:monospace;">192.168.0.1 your_machine_name.company.com your_machine_name</span><br /><span style="font-family:monospace;">192.168.0.254 your_gateway.company.com your_gateway</span><br />(You don't absolutely *need* your gateway in the hosts file, but I feel it does sometimes speed up some operations)<br /><br /><span style="font-weight: bold;">/etc/sysconfig/network-scripts/ifcfg-eth0 File</span><br /><span style="font-family:monospace;">DEVICE=eth0</span><br /><span style="font-family:monospace;">BOOTPROTO=none</span><br /><span
style="font-family:monospace;">ONBOOT=yes</span><br /><span style="font-family:monospace;">IPADDR=192.168.0.1</span><br /><span style="font-family:monospace;">NETMASK=255.255.255.0</span><br /><br /><span style="font-weight: bold;">/etc/resolv.conf File</span><br /><span style="font-family:monospace;">search gateway company_gateway</span><br /><span style="font-family:monospace;">nameserver 192.168.0.2</span><br /><span style="font-family:monospace;">nameserver 192.168.0.3</span><br /><span style="font-family:monospace;">nameserver 192.168.0.4</span><br />(The 'search' line is optional. You can have up to 3 'nameserver' lines, and they don't need to be inside your network)<br /><br /><span style="font-weight: bold;">/etc/resolv.conf File (alternative layout)</span><br /><span style="font-family:monospace;">domain domain_name</span><br /><span style="font-family:monospace;">nameserver 192.168.0.1</span><br /><span style="font-family:monospace;">search domain_name</span><br /><br /></p> <p class="MsoNormal" style="margin-top: 0px; margin-bottom: 0px;"><br /></p> <p class="MsoNormal" style="margin-top: 0px; margin-bottom: 0px;"><span style="color:maroon;"><span style="font-weight: bold; text-decoration: underline; color: rgb(0, 0, 0);">Get OS File System Block Size</span><br /></span></p> <p class="MsoNormal" style="margin-top: 0px; margin-bottom: 0px; color: rgb(0, 0, 0);">Create a file with just 1 character in it and then perform:<br /></p> <div style="margin-left: 40px;"><span style="font-family:monospace;">du -b filename</span><br /></div> <p class="MsoNormal" style="margin-top: 0px; margin-bottom: 0px;"><span style="font-weight: bold; text-decoration: underline;">64 bit or 32 bit</span><br /><span style="text-decoration: underline;">On Linux</span><br />$uname -a<br />Linux debian 2.6.18-4-686 #1 SMP Wed May 9 23:03:12 UTC 2007 i686 GNU/Linux<br /><span
style="text-decoration: underline;">On Solaris</span><br />isainfo -b -v<br />/usr/bin/isainfo -kv<br /><span style="text-decoration: underline;">On AIX</span><br />$ getconf -a | grep KERN<br />$ file /usr/lib/boot/unix*<br /><span style="text-decoration: underline;">On HP-UX</span><br />/usr/bin/getconf KERNEL_BITS<br />/usr/bin/file /stand/vmunix</p> <br /><br /><span style="font-weight: bold;"><span style="text-decoration: underline;">OS version</span></span><br /><ul><li><var>/usr/bin/uname -s -r</var> </li><li><var>cat<a href="https://www.redhat.com/docs/manuals/linux/RHL-9-Manual/ref-guide/ch-proc.html"> /proc</a>/version</var> </li><li><var>cat /etc/issue</var></li></ul><br /><span style="font-weight: bold; text-decoration: underline;">OS kernel parameter files</span><span style="font-weight: bold; text-decoration: underline;"> (<<a target="corner" href="http://metalink.oracle.com/metalink/plsql/showdoc?db=not&id=68862.1">note:68862.1</a>>)</span> <ul><li>/sbin/sysctl -a </li><li>more /etc/sysctl.conf (the kernel.shmmax parameter shows the maximum shared memory segment size; it must be less than real memory) </li><li>/usr/src/linux/include/* </li><li>/usr/src/linux/include/linux/shm.h for shared memory </li><li>/usr/src/linux/include/linux/sem.h for semaphores </li></ul> <br /><span style="font-weight: bold; text-decoration: underline;">Max number of semaphore sets (SEMMNI)</span><br /><ul><li><var>/usr/bin/ipcs -ls</var> (max number of arrays) </li><li><var>/sbin/sysctl kernel.sem</var> (4th & last value) </li><li><var>awk '{print $4}' /proc/sys/kernel/sem</var> </li></ul><br /><span style="font-weight: bold; text-decoration: underline;">Max number of semaphores systemwide (SEMMNS)</span><br /><ul><li><var>/usr/bin/ipcs -ls</var> (max semaphores system wide)
</li><li><var>/sbin/sysctl kernel.sem</var> (2nd value) </li><li><var>awk '{print $2}' /proc/sys/kernel/sem</var></li></ul><br /><span style="font-weight: bold; text-decoration: underline;">Max number of shared segments</span><br /><ul><li><var>/sbin/sysctl kernel.shmmni</var> </li><li><var>/usr/bin/ipcs -lm</var> (max number of segments) </li><li><var>cat /proc/sys/kernel/shmmni</var></li></ul> <span style="font-weight: bold; text-decoration: underline;">Max shared segment size</span><br /><ul><li><var>/sbin/sysctl kernel.shmmax</var> </li><li><var>/usr/bin/ipcs -lm</var> (max seg size (kbytes)) </li><li><var>cat<a href="https://www.redhat.com/docs/manuals/linux/RHL-9-Manual/ref-guide/ch-proc.html"> /proc</a>/sys/kernel/shmmax</var> (max value=4Gb)</li></ul> <span style="font-weight: bold;"><span style="text-decoration: underline;">Using PMAP to determine the memory size of background processes </span></span><br />First, to determine the memory size, the process id (PID) of the Oracle background process must be found. This is done by issuing the following command:<br /><span style="font-family:monospace;">ps -ef |grep smon</span><br /><span style="font-family:monospace;"></span><span style="font-family:monospace;"> oracle <span style="color: rgb(204, 0, 0);">540</span> 1 0 Jun 25 ? 
1:55 ora_smon_DEVSOL</span><br /><br />Then enter the following command:<br /><span style="font-family:monospace;">pmap -x <span style="color: rgb(204, 0, 0);">540</span></span> (540 is the PID for the SMON process)<br /><span style="font-family:monospace;"> Address Kbytes Resident Shared Private Permissions Mapped File</span><br /><span style="font-family:monospace;">0000000100000000 50472 23640 21336 2304 read/exec oracle</span><br /><span style="font-family:monospace;">0000000103248000 712 512 368 144 read/write/exec oracle</span><br /><span style="font-family:monospace;">00000001032FA000 392 208 - 208 read/write/exec [ heap ]</span><br /><span style="font-family:monospace;">0000000380000000 1462272 1462272 - <span style="color: rgb(204, 0, 0);">1462272 </span>read/write/exec/shared [ ism <span style="color: rgb(204, 0, 0);">shmid</span>=0x65 ]</span><br /><span style="font-family:monospace;">FFFFFFFF7CE70000 72 72 - 72 read/write [ anon ]</span><br /><span style="font-family:monospace;">FFFFFFFF7CE88000 32 16 - 16 read/write [ anon ]</span><br /><span style="font-family:monospace;">FFFFFFFF7CF00000 8 8 - 8 read/write [ anon ]</span><br /><span style="font-family:monospace;">FFFFFFFF7CF10000 8 8 - 8 read/write [ anon ]</span><br /><span style="font-family:monospace;">FFFFFFFF7CF50000 136 128 - 128 read/write [ anon ]</span><br /><span style="font-family:monospace;">FFFFFFFF7CF74000 48 40 - 40 read/write [ anon ]</span><br /><span style="font-family:monospace;">FFFFFFFF7D000000 8 - - - read/write/exec [ anon ]</span><br /><span style="font-family:monospace;">FFFFFFFF7D100000 16 16 8 8 read/exec libc_psr.so.1</span><br /><span style="font-family:monospace;">FFFFFFFF7D200000 16 16 8 8 read/exec libmp.so.2</span><br /><span style="font-family:monospace;">FFFFFFFF7D304000 8 8 - 8 read/write/exec libmp.so.2</span><br /><span style="font-family:monospace;">FFFFFFFF7D400000 88 72 64 8 read/exec libm.so.1</span><br /><span
style="font-family:monospace;">FFFFFFFF7D516000 8 8 - 8 read/write/exec libm.so.1</span><br /><span style="font-family:monospace;">FFFFFFFF7D600000 8 8 - 8 read/write/exec [ anon ]</span><br /><span style="font-family:monospace;">FFFFFFFF7D700000 8 8 - 8 read/exec libkstat.so.1</span><br /><span style="font-family:monospace;">FFFFFFFF7D802000 8 8 - 8 read/write/exec libkstat.so.1</span><br /><span style="font-family:monospace;">FFFFFFFF7D900000 32 32 24 8 read/exec librt.so.1</span><br /><span style="font-family:monospace;">FFFFFFFF7DA08000 8 8 - 8 read/write/exec librt.so.1</span><br /><span style="font-family:monospace;">FFFFFFFF7DB00000 32 32 24 8 read/exec libaio.so.1</span><br /><span style="font-family:monospace;">FFFFFFFF7DC08000 8 8 - 8 read/write/exec libaio.so.1</span><br /><span style="font-family:monospace;">FFFFFFFF7DD00000 704 600 504 96 read/exec libc.so.1</span><br /><span style="font-family:monospace;">FFFFFFFF7DEB0000 56 56 - 56 read/write/exec libc.so.1</span><br /><span style="font-family:monospace;">FFFFFFFF7DEBE000 8 8 - 8 read/write/exec libc.so.1</span><br /><span style="font-family:monospace;">FFFFFFFF7E000000 32 24 8 16 read/exec libgen.so.1</span><br /><span style="font-family:monospace;">FFFFFFFF7E108000 8 8 - 8 read/write/exec libgen.so.1</span><br /><span style="font-family:monospace;">FFFFFFFF7E200000 56 40 32 8 read/exec libsocket.so.1</span><br /><span style="font-family:monospace;">FFFFFFFF7E30E000 16 16 - 16 read/write/exec libsocket.so.1</span><br /><span style="font-family:monospace;">FFFFFFFF7E400000 5328 1864 1736 128 read/exec libjox9.so</span><br /><span style="font-family:monospace;">FFFFFFFF7EA32000 384 288 - 288 read/write/exec libjox9.so</span><br /><span style="font-family:monospace;">FFFFFFFF7EA92000 8 - - - read/write/exec libjox9.so</span><br /><span style="font-family:monospace;">FFFFFFFF7EB00000 8 8 - 8 read/write/exec [ anon ]</span><br /><span style="font-family:monospace;">FFFFFFFF7EC00000 656 224 216 8 
read/exec libnsl.so.1</span><br /><span style="font-family:monospace;">FFFFFFFF7EDA4000 56 56 - 56 read/write/exec libnsl.so.1</span><br /><span style="font-family:monospace;">FFFFFFFF7EDB2000 40 - - - read/write/exec libnsl.so.1</span><br /><span style="font-family:monospace;">FFFFFFFF7EE00000 32 24 8 16 read/exec libskgxn9.so</span><br /><span style="font-family:monospace;">FFFFFFFF7EF06000 8 8 - 8 read/write/exec libskgxn9.so</span><br /><span style="font-family:monospace;">FFFFFFFF7F000000 8 8 - 8 read/write/exec [ anon ]</span><br /><span style="font-family:monospace;">FFFFFFFF7F100000 8 8 - 8 read/exec libskgxp9.so</span><br /><span style="font-family:monospace;">FFFFFFFF7F200000 8 8 8 - read/write/exec libskgxp9.so</span><br /><span style="font-family:monospace;">FFFFFFFF7F300000 8 8 - 8 read/exec libodmd9.so</span><br /><span style="font-family:monospace;">FFFFFFFF7F400000 8 8 8 - read/write/exec libodmd9.so</span><br /><span style="font-family:monospace;">FFFFFFFF7F500000 8 8 - 8 read/exec libdl.so.1</span><br /><span style="font-family:monospace;">FFFFFFFF7F600000 128 128 120 8 read/exec ld.so.1</span><br /><span style="font-family:monospace;">FFFFFFFF7F71E000 8 8 - 8 read/write/exec ld.so.1</span><br /><span style="font-family:monospace;">FFFFFFFF7F720000 8 8 - 8 read/write/exec ld.so.1</span><br /><span style="font-family:monospace;">FFFFFFFF7FFDC000 144 120 - 120 read/write [ stack ]</span><br /><span style="font-family:monospace;">---------------- ------ ------ ------ ------</span><br /><span style="font-family:monospace;"> total Kb 1522136 1490664 24472 <span style="color: rgb(204, 0, 0);">1466192</span></span><br /><br />The private memory of this SMON process is <span style="color: rgb(204, 0, 0);">1466192K </span>minus the SGA size, which is the line marked with <span style="font-weight: bold; color: rgb(204, 0, 0);">'shmid='</span> above. 
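That subtraction can be scripted; the figures below are hard-coded from the example report above (a quick sketch, not a general pmap parser):

```shell
# Estimate a background process's private memory:
# the 'total Kb' Private column from pmap -x, minus the
# [ ism shmid=... ] segment (the SGA) listed in the same report.
total_private_kb=1466192   # total Kb, Private column
sga_segment_kb=1462272     # Private Kbytes of the ism/shmid line
echo "$(( total_private_kb - sga_segment_kb ))K"    # 3920K
```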
In this case it is 1462272K.<br />The calculation is as follows: <span style="color: rgb(204, 0, 0);">1466192K</span> minus <span style="color: rgb(204, 0, 0);">1462272K </span>is 3920K.<br />So, the process memory for SMON is 3920K.<br /><br /> <h2 style="text-decoration: underline;"><a name="General_Performance"></a><big style="color: rgb(153, 0, 0);">General Performance</big></h2> <span style="text-decoration: underline;"><span style="font-weight: bold;"><big><big><a name="free"></a>free</big></big><br /></span></span> <p>The <tt>free</tt> command lets you identify the amount of memory used by all the applications on the box. If the amount of memory used is bigger than the available RAM, then the box starts to swap.<br />If you use this command with the <span style="font-family:monospace;">-m</span> option, it will show the numbers in MB.<br /><span style="font-family:monospace;"><br /># free -m</span><br /><span style="font-family:monospace;"> total used free shared buffers cached</span><br /><span style="font-family:monospace;">Mem: <span style="font-weight: bold; color: rgb(204, 0, 0);">1772 1654 117</span> 0 18 618</span><br /><span style="font-family:monospace;">-/+ buffers/cache: 1017 754</span><br /><span style="font-family:monospace;">Swap: 1983 1065 918</span><br /></p> <p>Here we can see that the box has <span style="font-weight: bold;">1772 MB</span> of RAM, currently using <span style="font-weight: bold;">1654 MB</span>, and only <span style="font-weight: bold;">117 MB</span> of free memory.<br />The next line shows the used and free figures adjusted for the buffers and cache held in memory.<br />Finally the third one shows the amount of swap memory that is being used.</p> <p>The -t option shows the totals at the end of the output (adds physical memory plus swap memory):<br /><span style="font-family:monospace;"># free -m -t</span><br /><span style="font-family:monospace;"> total used free shared
buffers cached</span><br /><span style="font-family:monospace;">Mem: 1772 1644 127 0 16 613</span><br /><span style="font-family:monospace;">-/+ buffers/cache: 1014 757</span><br /><span style="font-family:monospace;">Swap: 1983 1065 918</span><br /><span style="font-family:monospace;">Total: 3756 2709 1046</span><br /></p> <p><span style="text-decoration: underline;">Some tips</span><br />Shows the percentage of used memory:<br /><span style="font-family:monospace;"># free -m | grep Mem | awk '{print ($3 / $2)*100}' </span><br /><span style="font-family:monospace;">98.7077</span><br /></p> <p>Shows the percentage of swap memory in use:<br /><span style="font-family:monospace;">free -m | grep -i Swap | awk '{print ($3 / $2)*100}'</span></p><br /><span style="text-decoration: underline;"><span style="font-weight: bold;"><big><big><a name="top"></a>top</big></big><br /></span></span> <p>The <tt>top</tt> command is probably the most useful one for an Oracle DBA managing a database on Linux.<br />Note that unlike other commands, <tt>top</tt> does not print its output once and exit; it stays on the screen and refreshes it to display new information. So, if you just issue <tt>top</tt> and leave the screen up, the most current information is always up.
To stop and exit to shell, you can press Control-C.</p> <pre>$ top<br /><br />18:46:13 up 11 days, 21:50, 5 users, load average: 0.11, 0.19, 0.18<br />151 processes: 147 sleeping, 4 running, 0 zombie, 0 stopped<br />CPU states: cpu user nice system irq softirq iowait idle<br /> total 12.5% 0.0% 6.7% 0.0% 0.0% 5.3% 75.2%<br />Mem: 1026912k av, 999548k used, 27364k free, 0k shrd, 116104k buff<br /> 758312k actv, 145904k in_d, 16192k in_c<br />Swap: 2041192k av, 122224k used, 1918968k free 590140k cached<br /><br />PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND<br />451 oracle 15 0 6044 4928 4216 S 0.1 0.4 0:20 0 tnslsnr<br />8991 oracle 15 0 1248 1248 896 R 0.1 0.1 0:00 0 top<br /> 1 root 19 0 440 400 372 S 0.0 0.0 0:04 0 init<br /> 2 root 15 0 0 0 0 SW 0.0 0.0 0:00 0 keventd<br /> 3 root 15 0 0 0 0 SW 0.0 0.0 0:00 0 kapmd<br /> 4 root 34 19 0 0 0 SWN 0.0 0.0 0:00 0 ksoftirqd/0<br /> 7 root 15 0 0 0 0 SW 0.0 0.0 0:01 0 bdflush<br /> 5 root 15 0 0 0 0 SW 0.0 0.0 0:33 0 kswapd<br /> 6 root 15 0 0 0 0 SW 0.0 0.0 0:14 0 kscand<br /> 8 root 15 0 0 0 0 SW 0.0 0.0 0:00 0 kupdated<br /> 9 root 25 0 0 0 0 SW 0.0 0.0 0:00 0 mdrecoveryd<br /><em>... output snipped ...</em><br /></pre> <p>Let's examine the different types of information produced.<br /></p> <p>The first line:<span style="font-family:monospace;"> </span><span style="font-style: italic;">18:46:13 up 11 days, 21:50, 5 users, load average: 0.11, 0.19, 0.18</span><br />shows the current time (18:46:13) and that the system has been up for 11 days, 21 hours and 50 minutes. The load average of the system is shown (0.11, 0.19, 0.18) for the last 1, 5 and 15 minutes respectively. (By the way, you can also get this information by issuing the <tt>uptime</tt> command.)<br />If the load average is not required, press the letter "l" (lowercase L); it will turn it off. To turn it back on press l again.
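To relate the load average to capacity, you can compare it with the number of CPUs; here is a small Linux-specific sketch (the /proc paths are a Linux assumption, not portable to Solaris or AIX):

```shell
# Compare the 1-minute load average with the number of CPUs (Linux)
load=$(awk '{print $1}' /proc/loadavg)
cpus=$(grep -c '^processor' /proc/cpuinfo)
awk -v l="$load" -v c="$cpus" \
    'BEGIN { printf "load %.2f on %d CPU(s): %s\n", l+0, c+0, (l+0 < c+0) ? "OK" : "busy" }'
```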
Ideally the load average should stay below the number of CPUs (below 1 on a single-CPU box); sustained higher values mean runnable processes are queuing for CPU.<br /><br /></p> <p>The second line:<span style="font-family:monospace;"> </span><span style="font-style: italic;">151 processes: 147 sleeping, 4 running, 0 zombie, 0 stopped</span><br />shows the number of processes and their states: running, sleeping, etc.<br /><br /></p> <p>The third and fourth lines: </p> <pre>CPU states: cpu user nice system irq softirq iowait idle<br /> total 12.5% 0.0% 6.7% 0.0% 0.0% 5.3% 75.2%<br /></pre> <p>show the CPU utilization details. The above line shows that user processes consume 12.5% and system consumes 6.7%. The user processes include the Oracle processes. Press "t" to turn these three lines off and on. If there is more than one CPU, you will see one line per CPU.<br /><br /></p> <p>The next two lines:</p> <pre>Mem: 1026912k av, 1000688k used, 26224k free, 0k shrd, 113624k buff<br /> 758668k actv, 146872k in_d, 14460k in_c<br />Swap: 2041192k av, 122476k used, 1918716k free 591776k cached<br /></pre> <p>show the memory available and utilized. Total memory is "1026912k av", approximately 1GB, of which only 26224k or 26MB is free. The swap space is 2GB, but it is almost unused. To turn this display off and on, press "m".</p> <p>The rest of the display shows the processes in a tabular format.
Here is the explanation of the columns:</p> <p> <table class="bodycopy" border="1" cellpadding="2" cellspacing="0" width="550"> <tbody> <tr valign="top"> <td width="20%"><strong>Column</strong> </td> <td width="80%"><strong>Description</strong> </td> </tr> <tr valign="top"> <td>PID</td> <td>The process ID of the process</td> </tr> <tr valign="top"> <td>USER </td> <td>The user running the process</td> </tr> <tr valign="top"> <td>PRI </td> <td>The priority of the process</td> </tr> <tr valign="top"> <td>NI</td> <td>The nice value: The higher the value, the lower the priority of the task</td> </tr> <tr valign="top"> <td>SIZE </td> <td>Memory used by this process (code+data+stack)</td> </tr> <tr valign="top"> <td>RSS </td> <td>The physical memory used by this process</td> </tr> <tr valign="top"> <td>SHARE </td> <td>The shared memory used by this process</td> </tr> <tr valign="top"> <td>STAT </td> <td> <p>The status of this process, shown in code. Some major status codes are:<br />R – Running<br />S –Sleeping<br />Z – Zombie<br />T – Stopped </p> You can also see second and third characters, which indicate:<br />W – Swapped out process<br />N – positive nice value </td> </tr> <tr valign="top"> <td>%CPU </td> <td>The percentage of CPU used by this process</td> </tr> <tr valign="top"> <td>%MEM </td> <td>The percentage of memory used by this process</td> </tr> <tr valign="top"> <td>TIME </td> <td>The total CPU time used by this process</td> </tr> <tr valign="top"> <td>CPU </td> <td>If this is a multi-processor system, this column indicates the ID of the CPU this process is running on.</td> </tr> <tr valign="top"> <td>COMMAND</td> <td>The command issued by this process</td> </tr> </tbody> </table> </p> <p>While the <tt>top</tt> is being displayed, you can press a few keys to format the display as you like. Pressing the uppercase <span style="font-weight: bold;">M key</span> sorts the output by memory usage. 
(Note that using lowercase m will turn the memory summary lines on or off at the top of the display.) This is very useful when you want to find out who is consuming the memory. Here is sample output:</p> <pre>PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND<br />31903 oracle 15 0 75760 72M 72508 S 0.0 7.2 0:01 0 ora_smon_PRODB2<br />31909 oracle 15 0 68944 66M 64572 S 0.0 6.6 0:03 0 ora_mmon_PRODB2<br />31897 oracle 15 0 53788 49M 48652 S 0.0 4.9 0:00 0 ora_dbw0_PRODB2<br /></pre> <p>Now that you have learned how to interpret the output, let's see how to use the command-line parameters. </p> <p>The most useful is <span style="font-weight: bold;">-d</span>, which indicates the delay between the screen refreshes. To refresh every second, use <tt style="font-weight: bold;">top -d 1</tt>. </p> <p>The other useful option is <span style="font-weight: bold;">-p. </span>If you want to monitor only a few processes, not all, you can specify only those after the -p option. To monitor processes 13609, 13608 and 13554, issue: </p> <pre>top -p 13609 -p 13608 -p 13554<br /></pre> <p>This will show results in the same format as the <tt>top</tt> command, but only for those specific processes.</p> <!-- Tips Box start --> <div id="tipsBox"> <h3><br /></h3> <h3 style="text-decoration: underline;">Tip for Oracle Users</h3> <p>It's probably needless to say that the <tt>top</tt> utility comes in very handy for analyzing the performance of database servers.
Here is a partial <tt>top</tt> output.</p> <pre>20:51:14 up 11 days, 23:55, 4 users, load average: 0.88, 0.39, 0.27<br />113 processes: 110 sleeping, 2 running, 1 zombie, 0 stopped<br /><span style="color: rgb(204, 0, 0);">CPU states</span>: cpu <span style="color: rgb(51, 102, 255);">user</span> nice <span style="color: rgb(51, 102, 102);"><span style="color: rgb(0, 153, 0);">system</span> </span>irq softirq <span style="color: rgb(204, 51, 204);">iowait</span> <span style="color: rgb(204, 0, 0);">idle</span><br /> total <span style="color: rgb(51, 102, 255);">1.0%</span> 0.0% <span style="color: rgb(0, 153, 0);"> 5.6%</span> 2.2% 0.0% <span style="color: rgb(204, 51, 204);">91.2%</span> <span style="color: rgb(204, 0, 0);">0.0%</span><br />Mem: 1026912k av, 1008832k used, 18080k free, 0k shrd, 30064k buff<br /> 771512k actv, 141348k in_d, 13308k in_c<br />Swap: 2041192k av, 66776k used, 1974416k free 812652k cached<br /><br />PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND<br />16143 oracle 15 0 39280 32M 26608 D 4.0 3.2 0:02 0 oraclePRODB2...<br /> 5 root 15 0 0 0 0 SW 1.6 0.0 0:33 0 kswapd<br /><em>... output snipped ...</em><br /></pre> <p>Let's analyze the output carefully. The first thing you should notice is the "idle" column under CPU states; it's 0.0%—meaning, the CPU is completely occupied doing something.<br />The question is, doing what?<br />Move your attention to the column <span style="color: rgb(0, 153, 0);">"system", just slightly left; it shows 5.6%.</span> So the system itself is not doing much.<br />Go even more left to the column marked <span style="color: rgb(51, 102, 255);">"user", which shows 1.0%</span>.<br />Since user processes include Oracle as well, Oracle is not consuming the CPU cycles.<br />So, what's eating up all the CPU?<br />The answer lies in the same line, just to the right under the column "<span style="color: rgb(204, 51, 204);">iowait", which indicates 91.2%</span>. 
This explains it all: the CPU is waiting for IO 91.2% of the time. </p> <p>So why so much IO wait? The answer lies in the display. Note the PID of the highest consuming process: 16143. You can use the following query to determine what the process is doing: </p> <pre>select s.sid, s.username, s.program<br />from v$session s, v$process p<br />where spid = &server_process_id<br />and p.addr = s.paddr<br />/<br /> SID USERNAME PROGRAM<br />------------------- -----------------------------<br /> 159 SYS rman@prolin2 (TNS V1-V3) <br /></pre> <p>The rman process is generating the IO that the CPU is waiting on. This information helps you determine the next course of action. </p> </div> <!-- Tips Box end --> <h2><br /></h2> <h2 style="text-decoration: underline;"><a name="skill_and_snice"></a>skill and snice</h2> <p>From the previous discussion you learned how to identify a CPU-consuming process. What if you find that a process is consuming a lot of CPU and memory, but you don't want to kill it? Consider the <tt>top</tt> output below: </p> <pre>$ top -c -p 16514<br /><br />23:00:44 up 12 days, 2:04, 4 users, load average: 0.47, 0.35, 0.31<br />1 processes: 1 sleeping, 0 running, 0 zombie, 0 stopped<br />CPU states: cpu user nice system irq softirq iowait idle<br /> total 0.0% 0.6% 8.7% 2.2% 0.0% 88.3% 0.0%<br />Mem: 1026912k av, 1010476k used, 16436k free, 0k shrd, 52128k buff<br /> 766724k actv, 143128k in_d, 14264k in_c<br />Swap: 2041192k av, 83160k used, 1958032k free 799432k cached<br /><br />PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND<br />16514 oracle 19 4 28796 26M 20252 D N 7.0 2.5 0:03 0 oraclePRODB2...<br /></pre> <p>Now that you have confirmed that process 16514 is consuming a lot of memory, you can "freeze" it—but not kill it—using the <tt>skill</tt> command.</p> <pre>$ skill -STOP 16514<br /></pre> <p>After this, check the <tt>top</tt> output:</p> <pre>23:01:11 up 12 days, 2:05, 4 users, load average: 1.20, 0.54, 0.38<br />1 processes: 0 sleeping, 0
running, 0 zombie, 1 stopped<br />CPU states: cpu user nice system irq softirq iowait idle<br /> total 2.3% 0.0% 0.3% 0.0% 0.0% 2.3% 94.8%<br />Mem: 1026912k av, 1008756k used, 18156k free, 0k shrd, 3976k buff<br /> 770024k actv, 143496k in_d, 12876k in_c<br />Swap: 2041192k av, 83152k used, 1958040k free 851200k cached<br /><br />PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND<br />16514 oracle 19 4 28796 26M 20252 T N 0.0 2.5 0:04 0 oraclePRODB2...<br /></pre> <p>The CPU is now 94.8% idle, up from 0%. The process is effectively frozen. After some time, you may want to revive the process from its coma: </p> <pre>$ skill -CONT 16514<br /></pre> <p>This approach is immensely useful for temporarily freezing processes to make room for more important processes to complete. </p> <p>The command is very versatile. If you want to stop all processes of the user "oracle", only one command does it all: </p> <pre>$ skill -STOP oracle<br /></pre> <p>You can use a user, a PID, a command or terminal id as argument. The following stops all rman commands.</p> <pre>$ skill -STOP rman<br /></pre> <p>As you can see, <tt>skill</tt> works out what kind of argument you entered—a process ID, user ID, or command—and acts appropriately. This may cause an issue in some cases where you have a user and a command with the same name. The best example is the "oracle" process, which is typically run by the user "oracle". So, when you want to stop the process called "oracle" and you issue: </p> <pre>$ skill -STOP oracle<br /></pre> <p>all the processes of user "oracle" stop, including the session you may be on. To be completely unambiguous you can optionally pass a flag to specify the type of the argument. To stop a command called oracle, you can issue: </p> <pre>$ skill -STOP -c oracle<br /></pre> <p>The command snice is similar. Instead of stopping a process, it lowers its priority.
First, check the top output: </p> <pre> PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND<br /> 3 root 15 0 0 0 0 RW 0.0 0.0 0:00 0 kapmd<br />13680 oracle 15 0 11336 10M 8820 T 0.0 1.0 0:00 0 oracle<br />13683 oracle 15 0 9972 9608 7788 T 0.0 0.9 0:00 0 oracle<br />13686 oracle 15 0 9860 9496 7676 T 0.0 0.9 0:00 0 oracle<br />13689 oracle 15 0 10004 9640 7820 T 0.0 0.9 0:00 0 oracle<br />13695 oracle 15 0 9984 9620 7800 T 0.0 0.9 0:00 0 oracle<br />13698 oracle 15 0 10064 9700 7884 T 0.0 0.9 0:00 0 oracle<br />13701 oracle 15 0 22204 21M 16940 T 0.0 2.1 0:00 0 oracle<br /></pre> <p>Now, drop the priority of the processes of "oracle" by four points. Note that the higher the number, the lower the priority. </p> <pre>$ snice +4 -u oracle<br /><br />PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND<br />16894 oracle 20 4 38904 32M 26248 D N 5.5 3.2 0:01 0 oracle<br /></pre> <p>Note how the NI column (for nice values) is now 4 and the priority is now set to 20, instead of 15. This is quite useful in reducing priorities.<br /></p> <p><br /></p><br /><h2><a name="vmstat"></a>vmstat</h2> This utility provides a report that covers process activity, paging, memory usage, disk I/O, and CPU usage (also you can use <span style="font-weight: bold;">xosview</span>). 
When analyzing your UNIX machine, make sure that the machine is not swapping at all and at worst paging lightly.<br /><br />Having any processes in the <b>b</b> or <b>w</b> columns is a sign of a problem system.<br />Having an <b>id</b> of 0 is a sign that the CPU is overburdened.<br />Having high values in <b>pi</b> and <b>po</b> shows excessive paging.<br /><br /><br /><span style="text-decoration: underline;">Linux Version:</span> <b>$ vmstat 5 3 </b> (Displays system statistics (5 seconds apart; 3 times))<br /><br /><span style="font-family:monospace;">procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu---- </span><br /> <span style="font-family:monospace;">r b swpd free buff cache <span style="color: rgb(204, 0, 0);">si so</span> bi bo in cs us sy id wa<br />0 0 329476 54880 91600 613852 <span style="color: rgb(204, 0, 0);">0 1</span> 4 2 0 0 1 1 3 1<br />0 0 329476 54560 91600 613852 <span style="color: rgb(204, 0, 0);">0 0</span> 0 36 118 128 25 0 74 0<br />0 0 329476 54564 91600 613860 <span style="color: rgb(204, 0, 0);">0 0 </span> 1 48 127 143 25 0 74 1<br /></span><br /> <ul><li><b>procs</b> (Reports the number of processes in each of the following states) <ul><li><b>r</b> : The number of processes waiting for run time</li><li><b>b</b> : The number of processes in uninterruptible sleep. Ideally close to 0.<br /> </li><li><b>w</b> : The number of processes that have been swapped out and written to disk. If the value is not zero, then swapping occurs and the system is short of memory.<br /> </li></ul> </li><li><b>memory</b> (Reports on usage of virtual and real memory) <ul><li><b>swpd</b> : The amount of virtual memory used.
(Kbytes)</li><li><b>free</b> : Amount of free physical memory (Kbytes)</li><li><span style="font-weight: bold;">buff </span>: Amount of memory used as buffers (Kbytes)</li><li><span style="font-weight: bold;">cache </span>: Amount of memory used as cache (Kbytes)</li><li><span style="font-weight: bold;">inact: </span>the amount of inactive memory. (-a option)</li><li><span style="font-weight: bold;">active: </span>the amount of active memory. (-a option)<br /> </li></ul> </li><li><b>swap </b>(Reports swapping activity in units per second; swap-ins and swap-outs should ideally be zero) <ul><li style="color: rgb(204, 0, 0);"><b>si</b>: Amount of memory swapped in from disk. Swap-in per second. Ideally 0 </li><li><b style="color: rgb(204, 0, 0);">so</b><span style="color: rgb(204, 0, 0);">: Amount of memory swapped to disk. Swap-out per second. Ideally 0</span><br /> </li></ul> </li><li><b>io</b></li><ul><li><b>bi</b>: Blocks received from a block device (blocks/s).<br /> </li><li><span style="font-weight: bold;">bo</span>: Blocks sent to a block device (blocks/s).<br /> </li></ul><li><b>system</b> <ul><li><b>in</b> : The number of interrupts per second, including the clock.</li><li><b>cs</b> : The number of context switches per second.</li></ul> </li><li><b>cpu</b> (Reports the percentage breakdown of CPU time, averaged across all CPUs) <ul><li><b>us</b> : Time spent running non-kernel code. (user time, including nice time)<br /> </li><li><b>sy</b> : Time spent running kernel code. (system time)<br /> </li><li><span style="font-weight: bold;">id</span> : Time spent idle. 
<br /> </li><li><span style="font-weight: bold;">wa</span> : Time spent waiting for I/O.<br /> </li></ul> </li></ul> <span style="text-decoration: underline;"><br />Solaris Version</span> <b>$ vmstat -S 5 3</b> (Displays system statistics (5 seconds apart; 3 times))<br /><br /><span style="text-decoration: underline;"></span> <table border="0"> <tbody> <tr align="center"> <td style="font-family: monospace;" class="colname" colspan="3">procs</td> <td style="font-family: monospace;" class="colname" colspan="2">memory</td> <td style="font-family: monospace;" class="colname" colspan="7">page</td> <td style="font-family: monospace;" class="colname" colspan="4">disk</td> <td style="font-family: monospace;" class="colname" colspan="3">faults</td> <td style="font-family: monospace;" class="colname" colspan="3">cpu</td> </tr> <tr align="center"> <td style="font-family: monospace;" class="colname">r</td> <td style="font-family: monospace;" class="colname">b</td> <td style="font-family: monospace;" class="colname">w</td> <td style="font-family: monospace;" class="colname">swap</td> <td style="font-family: monospace;" class="colname">free</td> <td style="font-family: monospace;" class="colname">re</td> <td style="font-family: monospace;" class="colname">mf</td> <td style="font-family: monospace; color: rgb(204, 0, 0);" class="colname">pi</td> <td style="font-family: monospace; color: rgb(204, 0, 0);" class="colname">po</td> <td style="font-family: monospace;" class="colname">fr</td> <td style="font-family: monospace;" class="colname">de</td> <td style="font-family: monospace;" class="colname">sr</td> <td style="font-family: monospace;" class="colname">s0</td> <td style="font-family: monospace;" class="colname">s1</td> <td style="font-family: monospace;" class="colname">s2</td> <td style="font-family: monospace;" class="colname">s3</td> <td style="font-family: monospace;" class="colname">in</td> <td style="font-family: monospace;" class="colname">sy</td> <td style="font-family: monospace;" 
class="colname">cs</td> <td style="font-family: monospace;" class="colname">us</td> <td style="font-family: monospace;" class="colname">sy</td> <td style="font-family: monospace;" class="colname">id</td> </tr> <tr align="right"> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">28872</td> <td style="font-family: monospace;" class="collight">8792</td> <td style="font-family: monospace;" class="collight">8</td> <td style="font-family: monospace;" class="collight">5</td> <td style="font-family: monospace; color: rgb(204, 0, 0);" class="collight">172</td> <td style="font-family: monospace; color: rgb(204, 0, 0);" class="collight">142</td> <td style="font-family: monospace;" class="collight">210</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">24</td> <td style="font-family: monospace;" class="collight">3</td> <td style="font-family: monospace;" class="collight">11</td> <td style="font-family: monospace;" class="collight">17</td> <td style="font-family: monospace;" class="collight">2</td> <td style="font-family: monospace;" class="collight">289</td> <td style="font-family: monospace;" class="collight">1081</td> <td style="font-family: monospace;" class="collight">201</td> <td style="font-family: monospace;" class="collight">14</td> <td style="font-family: monospace;" class="collight">6</td> <td style="font-family: monospace;" class="collight">80</td> </tr> <tr align="right"> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">102920</td> <td style="font-family: monospace;" class="collight">1936</td> <td style="font-family: monospace;" 
class="collight">1</td> <td style="font-family: monospace;" class="collight">95</td> <td style="font-family: monospace; color: rgb(204, 0, 0);" class="collight">193</td> <td style="font-family: monospace; color: rgb(204, 0, 0);" class="collight">6</td> <td style="font-family: monospace;" class="collight">302</td> <td style="font-family: monospace;" class="collight">1264</td> <td style="font-family: monospace;" class="collight">235</td> <td style="font-family: monospace;" class="collight">12</td> <td style="font-family: monospace;" class="collight">1</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">3</td> <td style="font-family: monospace;" class="collight">240</td> <td style="font-family: monospace;" class="collight">459</td> <td style="font-family: monospace;" class="collight">211</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">2</td> <td style="font-family: monospace;" class="collight">97</td> </tr> <tr align="right"> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">102800</td> <td style="font-family: monospace;" class="collight">1960</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace; color: rgb(204, 0, 0);" class="collight">0</td> <td style="font-family: monospace; color: rgb(204, 0, 0);" class="collight">0</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">464</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: 
monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">107</td> <td style="font-family: monospace;" class="collight">146</td> <td style="font-family: monospace;" class="collight">29</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">100</td> </tr> </tbody> </table> <span style="text-decoration: underline;"></span><br /><ul><li><b>procs</b> (Reports the number of processes in each of the following states) <ul><li><b>r</b> : number of processes in the run queue</li><li><b>b</b> : number of processes blocked while waiting for resources such as disk I/O, paging, or terminal input. Ideally close to 0.<br /> </li><li><b>w</b> : number of runnable processes, or processes swapped but in a sleep state of less than twenty seconds. If this is too high, you may need more memory.<br /> </li></ul> </li><li><b>memory</b> (Reports on usage of virtual and real memory) <ul><li><b>swap</b> : Available swap space (Kbytes)</li><li><b>free</b> : Amount of free physical memory (Kbytes)</li></ul> </li><li><b>page</b> (Reports page faults and paging activity, in units per second) <ul><li><b>re</b> : The number of pages reclaimed.<br /> </li><li><b>mf</b> : The number of major and minor faults.<br /> </li><li><b style="color: rgb(204, 0, 0);">pi</b><span style="color: rgb(204, 0, 0);"> : Kbytes paged in per second. </span><br /> </li><li style="color: rgb(204, 0, 0);"><b>po</b> : Kbytes paged out per second</li><li><b>fr</b> : Kbytes freed </li><li><b>de</b> : The amount of anticipated memory needed by processes that have recently swapped in. (Kbytes)</li><li><b>sr</b> : The pages scanned by the page daemon. 
High scan rates are caused by a shortage of available memory</li></ul> </li><li><b>disk</b> (lists the number of disk operations per second and can show data for up to four disks at a time)</li><li><b>faults</b> (Reports the trap/interrupt rates, per second) <ul><li><b>in</b> : number of device interrupts per second<br /> </li><li><b>sy</b> : number of system calls</li><li><b>cs</b> : CPU context switches</li></ul> </li><li><b>cpu</b> (Reports the percentage of time spent in user mode, kernel mode, and idle) <ul><li><b>us</b> : user time</li><li><b>sy</b> : system time</li><li><b>id</b> : idle time</li></ul> </li></ul> vmstat can also report on swapping and cache flushing. The -S option adds two fields to the beginning of the paging statistics: si, which lists the number of pages swapped in per second, and so, which gives the number of entire processes swapped out.<br /><br />The buffer memory is used to hold metadata from files, such as inodes.<br />The cache memory is used for file data.<br /><br />Here there are NO pageouts (po or so) occurring on this system. It is OK and normal to have page-out (po or so) activity. You should get worried when the number of page-ins (pi or si) starts rising, as this indicates that your system is starting to page.<br />There are no processes waiting to be run (r), blocked (b), or waiting for IO (w) in the RUN QUEUE (when a process is ready to be processed by a CPU it is placed on the waiting line, or RUN-QUEUE). 
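These checks can be scripted rather than eyeballed. Below is a minimal sketch that assumes the Linux vmstat column layout shown earlier (r = column 1, si = column 7, so = column 8); the helper name `vmstat_check`, the run-queue threshold, and the sample figures are illustrative, not part of the original note:

```shell
#!/bin/sh
# Flag swap activity or a deep run queue in Linux vmstat output.
# Assumes the column layout shown above: r = $1, si = $7, so = $8.
vmstat_check() {
  awk 'NR > 2 {                        # skip the two header lines
    if ($7 > 0 || $8 > 0)
      printf "sample %d: swapping (si=%s so=%s)\n", NR - 2, $7, $8
    if ($1 > 5)
      printf "sample %d: run queue deep (r=%s)\n", NR - 2, $1
  }'
}

# Demonstration against a captured report; a live check would
# simply run:  vmstat 5 3 | vmstat_check
printf '%s\n' \
  'procs memory swap io system cpu' \
  ' r  b   swpd  free  buff  cache si so bi bo  in  cs us sy id wa' \
  ' 0  0 329476 54880 91600 613852  0  1  4  2   0   0  1  1  3  1' \
  ' 7  0 329476 54560 91600 613852  0  0  0 36 118 128 25  0 74  0' \
  | vmstat_check
```

With the sample above it reports swapping on the first data line (so=1) and a deep run queue on the second (r=7).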
You want to keep the RUN-QUEUE under 5-6 for a single CPU machine.<br /><br /><br /><h2 style="text-decoration: underline;"><a name="CPU_Usage"></a>CPU Usage</h2> <h2 style="text-decoration: underline;"><a name="sar"></a>sar</h2> <b>$ sar -u 10 8</b><br />The sar command reports CPU utilization (in this case 10 seconds apart; 8 times):<br /><table border="0"> <tbody> <tr align="left"> <td style="font-family: monospace;" class="colname">Time</td> <td style="font-family: monospace;" class="colname">%usr</td> <td style="font-family: monospace;" class="colname">%sys</td> <td style="font-family: monospace;" class="colname">%wio</td> <td style="font-family: monospace;" class="colname">%idle</td> </tr> <tr align="right"> <td style="font-family: monospace;" class="collight">11:57:31</td> <td style="font-family: monospace;" class="collight">72</td> <td style="font-family: monospace;" class="collight">28</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">0</td> </tr> <tr align="right"> <td style="font-family: monospace;" class="collight">11:57:41</td> <td style="font-family: monospace;" class="collight">70</td> <td style="font-family: monospace;" class="collight">30</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">0</td> </tr> <tr align="right"> <td style="font-family: monospace;" class="collight">11:57:51</td> <td style="font-family: monospace;" class="collight">70</td> <td style="font-family: monospace;" class="collight">30</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">0</td> </tr> <tr align="right"> <td style="font-family: monospace;" class="collight">11:58:01</td> <td style="font-family: monospace;" class="collight">68</td> <td style="font-family: monospace;" class="collight">32</td> <td 
style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">0</td> </tr> <tr align="right"> <td style="font-family: monospace;" class="collight">11:58:11</td> <td style="font-family: monospace;" class="collight">67</td> <td style="font-family: monospace;" class="collight">33</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">0</td> </tr> <tr align="right"> <td style="font-family: monospace;" class="collight">11:58:21</td> <td style="font-family: monospace;" class="collight">65</td> <td style="font-family: monospace;" class="collight">28</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">7</td> </tr> <tr align="right"> <td style="font-family: monospace;" class="collight">11:58:31</td> <td style="font-family: monospace;" class="collight">73</td> <td style="font-family: monospace;" class="collight">27</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">0</td> </tr> <tr align="right"> <td style="font-family: monospace;" class="collight">11:58:41</td> <td style="font-family: monospace;" class="collight">69</td> <td style="font-family: monospace;" class="collight">31</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">0</td> </tr> <tr align="right"> <td style="font-family: monospace;" class="collight">Average</td> <td style="font-family: monospace;" class="collight">69</td> <td style="font-family: monospace;" class="collight">30</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">1</td> </tr> </tbody> </table><br /><b>%usr</b>: Percent of CPU in user mode<br /><b>%sys</b>: Percent of CPU in system mode<br /><b>%wio</b>: Percent of CPU running idle with a process waiting for block I/O<br 
/><b>%idle</b>: Percent of CPU that is idle<br /><br />If %idle is near zero, your CPU is overloaded. If %wio is large, your disks are overloaded.<br /><br /><br /><br />Once it is established that the system has high CPU usage, the next step is to find out who is using the CPU.<br /><span style="font-family:monospace;">ps -ef | grep smon</span><br />or<br /><span style="font-family:monospace;">ps -e -o pcpu -o pid -o user -o args | sort -k 1 | tail -21r</span><br />Displays the top 20 CPU users on the system.<br /><br /><span style="font-family:monospace;">UID PID PPID C STIME TTY TIME CMD <-- Label added for clarity.</span><br /><span style="font-family:monospace;">usupport 28180 1 0 Oct 31 - 0:48 ora_smon_V734</span><br /><span style="font-family:monospace;">usupport 30262 1 0 Nov 01 - 0:00 ora_smon_VKHILL</span><br /><span style="font-family:monospace;">usupport 30900 1 0 Oct 14 - 9:03 ora_smon_V806</span><br /><span style="font-family:monospace;">usupport 31958 1 <span style="font-weight: bold;"><span style="color: rgb(204, 0, 0);">111</span> </span>Oct 24 - 3:31 ora_smon_V815 <-- Notice the C column</span><br /><span style="font-family:monospace;">usupport 37986 1 0 Nov 06 - 14:00 ora_smon_V805</span><br /><br />Here we can see the SMON process of database V815 using a lot of CPU; the C column reflects the CPU units of processing being used.<br />There are 100 units per CPU, so this number can exceed 100 because this machine has two CPUs.<br /><br /><b>$ ps -e -o pcpu -o pid -o user -o args | sort -k 1 | tail -21r</b><br />Displays the top 20 CPU users on the system.<br /><table border="0"> <tbody> <tr align="left"> <td style="font-family: monospace;" class="colname">%CPU</td> <td style="font-family: monospace;" class="colname">PID</td> <td style="font-family: monospace;" class="colname">USER</td> <td style="font-family: monospace;" class="colname">COMMAND</td> </tr> <tr align="right"> <td style="font-family: 
monospace;" class="collight">78.1</td> <td style="font-family: monospace;" class="collight">4789</td> <td style="font-family: monospace;" class="collight">oracle</td> <td style="font-family: monospace;" class="collight" align="left">ora_dbwr_DDDS2</td> </tr> <tr align="right"> <td style="font-family: monospace;" class="collight">8.5</td> <td style="font-family: monospace;" class="collight">4793</td> <td style="font-family: monospace;" class="collight">oracle</td> <td style="font-family: monospace;" class="collight" align="left">ora_lgwr_DDDS2</td> </tr> <tr align="right"> <td style="font-family: monospace;" class="collight">2.4</td> <td style="font-family: monospace;" class="collight">6206</td> <td style="font-family: monospace;" class="collight">oracle</td> <td style="font-family: monospace;" class="collight" align="left">oracleDDDS2 (LOCAL=NO)</td> </tr> <tr align="right"> <td style="font-family: monospace;" class="collight">0.1</td> <td style="font-family: monospace;" class="collight">4797</td> <td style="font-family: monospace;" class="collight">oracle</td> <td style="font-family: monospace;" class="collight" align="left">ora_smon_DDDS2</td> </tr> <tr align="right"> <td style="font-family: monospace;" class="collight">0.1</td> <td style="font-family: monospace;" class="collight">6207</td> <td style="font-family: monospace;" class="collight">oracle</td> <td style="font-family: monospace;" class="collight" align="left">oracleDDDS2 (LOCAL=NO)</td> </tr> <tr align="right"> <td style="font-family: monospace;" class="collight">etc.</td> <td style="font-family: monospace;" class="collight">etc.</td> <td style="font-family: monospace;" class="collight">etc.</td> <td style="font-family: monospace;" class="collight" align="left">etc.</td> </tr> </tbody> </table><br />The <b>PID</b> column can then be matched with the <b>SPID</b> column on the V$PROCESS view to provide more information on the process:<br /><blockquote> <pre>SELECT a.username, a.osuser, a.program, spid, 
sid, a.serial#<br />FROM v$session a, v$process b<br />WHERE a.paddr = b.addr<br /> AND spid = '&pid';</pre> </blockquote><br /><br /><h2><a name="mpstat"></a><span style="text-decoration: underline;">mpstat</span></h2> <b>$ mpstat 10 2</b><br />Reports per-processor statistics on Sun Solaris (10 seconds apart; 2 times):<br /><table border="0"> <tbody> <tr align="left"> <td style="font-family: monospace;" class="colname">CPU</td> <td style="font-family: monospace;" class="colname">minf</td> <td style="font-family: monospace;" class="colname">mjf</td> <td style="font-family: monospace;" class="colname">xcal</td> <td style="font-family: monospace;" class="colname">intr</td> <td style="font-family: monospace;" class="colname">ithr</td> <td style="font-family: monospace;" class="colname">csw</td> <td style="font-family: monospace;" class="colname">icsw</td> <td style="font-family: monospace;" class="colname">migr</td> <td style="font-family: monospace;" class="colname">smtx</td> <td style="font-family: monospace;" class="colname">srw</td> <td style="font-family: monospace;" class="colname">syscl</td> <td style="font-family: monospace;" class="colname">usr</td> <td style="font-family: monospace;" class="colname">sys</td> <td style="font-family: monospace;" class="colname">wt</td> <td style="font-family: monospace;" class="colname">idl</td> </tr> <tr align="right"> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">6</td> <td style="font-family: monospace;" class="collight">8</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">438</td> <td style="font-family: monospace;" class="collight">237</td> <td style="font-family: monospace;" class="collight">246</td> <td style="font-family: monospace;" class="collight">85</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">0</td> <td 
style="font-family: monospace;" class="collight">21</td> <td style="font-family: monospace;" class="collight">8542</td> <td style="font-family: monospace;" class="collight">23</td> <td style="font-family: monospace;" class="collight">9</td> <td style="font-family: monospace;" class="collight">9</td> <td style="font-family: monospace;" class="collight">59</td> </tr> <tr align="right"> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">29</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">744</td> <td style="font-family: monospace;" class="collight">544</td> <td style="font-family: monospace;" class="collight">494</td> <td style="font-family: monospace;" class="collight">206</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">0</td> <td style="font-family: monospace;" class="collight">95</td> <td style="font-family: monospace;" class="collight">110911</td> <td style="font-family: monospace;" class="collight">65</td> <td style="font-family: monospace;" class="collight">29</td> <td style="font-family: monospace;" class="collight">6</td> <td style="font-family: monospace;" class="collight">0</td> </tr> </tbody> </table><br /><br /><h2><a name="Automatic_Startup_Scripts_on_Linux"></a><span style="text-decoration: underline;">Automatic Startup Scripts on Linux</span></h2> Create a file in the <code>/etc/init.d/</code> directory, in this case the file is called <code>myservice</code>, containing the commands you wish to run at startup and/or shutdown.<br /><br />Use the <code>chmod</code> command to set the privileges to 750:<br /><blockquote> <pre>chmod 750 /etc/init.d/myservice</pre> </blockquote> Link the file into the appropriate run-level script directories:<br /><blockquote> <pre>ln -s /etc/init.d/myservice 
/etc/rc0.d/K10myservice<br />ln -s /etc/init.d/myservice /etc/rc3.d/S99myservice</pre> </blockquote> Associate the <code>myservice</code> service with the appropriate run levels:<br /><blockquote> <pre>chkconfig --level 345 myservice on</pre> </blockquote> The script should now be run automatically at startup and shutdown (with "start" or "stop" as a command-line parameter) like other service initialization scripts.<br /><h2><br /></h2> <h2><a style="text-decoration: underline;" name="CRON"></a><span style="text-decoration: underline;">CRON and the & Command</span><br /></h2> <span style="font-weight: bold; text-decoration: underline;">Jobs in background</span><br />You can add <span style="font-weight: bold; color: rgb(204, 0, 0);">'&'</span> at the end of any command to run it in the background<br /><div style="margin-left: 40px;"><span style="font-family:monospace;">cp * /tmp &</span><br /></div> You can also use the <span style="font-weight: bold; color: rgb(204, 0, 0);">"nohup"</span> command to keep a background job running even after the shell terminates<br /><div style="margin-left: 40px;"><span style="font-family:monospace;">nohup cp * /tmp &</span><br /></div><br />You can use <span style="font-weight: bold; color: rgb(204, 0, 0);">"bg"</span> to take a job to the background. 
Before issuing this command, press ^Z to suspend the process, then use bg to put it in the background<br /><br />You can use <span style="color: rgb(204, 0, 0); font-weight: bold;">"fg"</span> to bring a background job to the foreground.<br /><br />Finally, the command <span style="font-weight: bold; color: rgb(204, 0, 0);">"jobs" </span>will list the current jobs in the shell.<br /><br /><span style="font-weight: bold; text-decoration: underline;">Cron Commands</span><br /><span style=";font-family:Times New Roman;font-size:100%;" >Cron is a Unix utility that allows tasks to be run automatically in the background at regular intervals by the cron daemon; these tasks are often termed cron jobs.<br /></span><span style=";font-family:Times New Roman;font-size:100%;" >Crontab (CRON TABLE) is a file which contains the schedule of cron entries to be run at specified times; you can edit it with the </span>"crontab -e" command.<br /><span style="font-weight: 400;"><span style=";font-family:Times New Roman;font-size:100%;" ><br /><b>syntax</b><br />A crontab file has five fields for specifying day, date and time, followed by the command to be run at that interval. 
You can also specify a range of values.<br /><br /></span></span><span style="font-family:monospace;">* * * * * command to be executed<br />- - - - -<br />| | | | |<br />| | | | +-----> day of week (0 - 6) (sunday = 0)<br />| | | +-------> month (1 - 12)<br />| | +---------> day of month (1 - 31)<br />| +-----------> hour (0 - 23)<br />+-------------> min (0 - 59)</span><br /><br />The first 5 fields can be specified using the following rules:<br /><blockquote> <pre>* - All available values or "first-last".<br />3-4 - A single range representing each possible value from the start to the end of the range inclusive.<br />1,2,5,6 - A specific list of values.<br />1-3,5-8 - A specific 
list of ranges.<br />0-23/2 - Every other value in the specified range.</pre> </blockquote> <span style="font-weight: bold;">Examples</span><br />The following entry runs a cleanup script at 01:00 each Sunday. Any output or errors from the script are piped to /dev/null to prevent a buildup of mails to root:<span style="font-family:monospace;"><br />0 1 * * 0 /u01/app/oracle/dba/weekly_cleanup > /dev/null 2>&1</span><br /><br /># Execute the file save.sh every day at 00:05 and send the results to a log file:<br /><span style="font-family:monospace;">5 0 * * * /home/oracle/save.sh 1>>/home/oracle/log 2>&1</span><br /><br /># Execute at 2:15pm on the first day of each month and discard the results:<br /><span style="font-family:monospace;">15 14 1 * * /home/oracle/mensual.sh 1>/dev/null 2>&1</span><br /><br /># Execute from Monday to Friday at 10PM:<br /><span style="font-family:monospace;">0 22 * * 1-5 shutdown -h now 1>/dev/null 2>&1</span><br /><br /># Execute every minute<br /><span style="font-family:monospace;">* * * * * /home/</span><span style="font-weight: 400;"><span 
style="font-family:monospace;">oracle</span></span><span style="font-weight: 400;"><span style="font-family:monospace;">/espia.sh</span></span><br /><h2><br /></h2> <h2 style="text-decoration: underline;"><a name="Cluster_Wide_CRON_Jobs_On_Tru64"></a>Cluster Wide CRON Jobs On Tru64</h2> On clustered systems cron is node-specific. If you need a job to fire once per cluster, rather than once per node, you need an alternative approach to the standard cron job. One approach is put forward in the HP best practices document (<a href="http://h30097.www3.hp.com/docs/best_practices/BP_CRON/TITLE.HTM">Using cron in a TruCluster Server Cluster</a>), but in my opinion a more elegant solution is proposed by Jason Orendorf of the HP Tru64 Unix Enterprise Team (<a href="http://www.ornl.gov/its/archives/mailing-lists/tru64-unix-managers/2003/04/msg00018.html">TruCluster Clustercron</a>).<br /><br />In his solution Jason creates a file called /bin/cronrun with the following contents:<br /><blockquote> <pre>#!/bin/ksh<br />set -- $(/usr/sbin/cfsmgr -F raw /)<br />shift 12<br />[[ "$1" = "$(/bin/hostname -s)" ]] && exit 0<br />exit 1</pre> </blockquote> This script returns TRUE (0) only on the node that is the CFS server for cluster_root.<br /><br />All cluster-wide jobs should have a crontab entry on each node of the cluster like:<br /><blockquote> <pre>5 * * * * /bin/cronrun && /usr/local/bin/myjob</pre> </blockquote> Although the cron jobs fire on all nodes, the "/bin/cronrun &&" part of the entry prevents the script from running on all nodes except the current CFS server for cluster_root.<br /><h2><br /></h2> <h2 style="text-decoration: underline;"><a name="NFS_Mount_Sun"></a>NFS Mount (Sun)</h2> The following daemons must be running for the share to be seen by a PC:<br /><ul><li>/usr/lib/nfs/nfsd -a</li><li>/usr/lib/nfs/mountd</li><li>/opt/SUNWpcnfs/sbin/rpc.pcnfsd</li></ul> To see a list of the currently exported NFS shares, type:<br /><blockquote> <pre>exportfs</pre> </blockquote> 
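Before sharing anything it is worth confirming that those daemons are actually up. A rough sketch follows; the helper name `check_daemons` is made up for illustration, and a live check would simply pipe `ps -e` into it:

```shell
#!/bin/sh
# Report which of the expected NFS daemons appear in a process
# listing read from stdin. Substring matching is deliberately
# crude (e.g. a line containing "rpc.pcnfsd" also satisfies the
# "nfsd" check), which is fine for a quick sanity test.
check_daemons() {
  ps_out=$(cat)
  for d in nfsd mountd rpc.pcnfsd; do
    case "$ps_out" in
      *"$d"*) echo "$d: running" ;;
      *)      echo "$d: missing" ;;
    esac
  done
}

# Demonstration against a captured listing; live use: ps -e | check_daemons
printf '%s\n' '  201 ?  0:03 nfsd' '  205 ?  0:01 mountd' \
  | check_daemons
```

Against the captured listing above it reports nfsd and mountd as running and rpc.pcnfsd as missing.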
First the mount point must be shared so it can be seen by remote machines:<br /><blockquote> <pre>share -F nfs -o ro /cdrom</pre> </blockquote> Next the share can be mounted on a remote machine by <b>root</b> using:<br /><blockquote> <pre>mkdir /cdrom#1<br /><br />mount -o ro myhost:/cdrom /cdrom#1</pre> </blockquote> <h2><br /></h2> <h2 style="text-decoration: underline;"><a name="NFS_Mount_Tru64"></a>NFS Mount (Tru64)</h2> On the server machine:<br /><br />If NFS is not currently set up, do the following:<br /><ul><li>Application Manager -> System Admin -> Configuration -> NFS</li><li>Select the "Configure system as an NFS server" option.</li><li>Accept all defaults.</li></ul> Create the mount point directory:<br /><blockquote> <pre>mkdir /u04/backup</pre> </blockquote> Append the following entry to the "/etc/exports" file:<br /><blockquote> <pre>/u04/backup</pre> </blockquote> Make sure the correct permissions are granted on the directory:<br /><blockquote> <pre>chmod -R 777 /u04/backup</pre> </blockquote> On the client machine:<br /><br />If NFS is not currently set up, do the following:<br /><ul><li>Application Manager -> System Admin -> Configuration -> NFS</li><li>Select the "Configure system as an NFS client" option.</li><li>Accept all defaults.</li></ul> Create the mount point directory:<br /><blockquote> <pre>mkdir /backup</pre> </blockquote> Append the following entry to the "/etc/fstab" file:<br /><blockquote> <pre>nfs-server-name:/u04/backup /backup nfs rw,bg,intr 0 0</pre> </blockquote> Finally, mount the fileset:<br /><blockquote> <pre>mount /backup</pre> </blockquote> At this point you can start to use the mount point from your client machine. 
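For reference, the analogous setup on a Linux NFS server uses an /etc/exports entry plus the exportfs command. This is a hedged configuration sketch, not taken from the original text; the paths, host name, and export options are placeholders:

```shell
# Server side (Linux), assuming this /etc/exports entry exists:
#   /u04/backup  *(ro,sync)
exportfs -a               # re-read /etc/exports and publish exports
showmount -e localhost    # verify the current export list

# Client side: mount the share (paths/hostname are illustrative)
mount -t nfs nfs-server-name:/u04/backup /backup
```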
Thanks to Bryan Mills for his help with Tru64.<br /><h2><br /></h2> <h2 style="text-decoration: underline;"><a name="PC_XStation_Configuration"></a>PC XStation Configuration</h2> - Download the CygWin setup.exe from <a href="http://www.cygwin.com/">http://www.cygwin.com</a>.<br />- Install, making sure to select all the X11R6 (or XFree86 in older versions) optional packages.<br />- If you need root access add the following entry into the /etc/securettys file on each server:<br /><blockquote> <pre>&lt;client-name&gt;:0</pre> </blockquote> - From the command prompt on the PC do the following:<br /><blockquote> <pre>set PATH=%PATH%;c:\cygwin\bin;c:\cygwin\usr\X11R6\bin<br />XWin.exe :0 -query &lt;server-name&gt;</pre> </blockquote> - The X environment should start in a new window.<br /><br />- Many Linux distributions do not start XDMCP by default. To allow XDMCP access from Cygwin edit the "/etc/X11/gdm/gdm.conf" file. Under the "[xdmcp]" section set "Enable=true".<br /><br />- If you are starting any X applications during the session you will need to set the DISPLAY environment variable. 
Remember, you are acting as an XStation, not the server itself, so this variable must be set as follows:<br /><blockquote> <pre>DISPLAY=&lt;client-name&gt;:0.0; export DISPLAY</pre> </blockquote> <h2><br /></h2> <h2 style="text-decoration: underline;"><a name="Useful_Profile_Settings"></a>Useful Profile Settings</h2> The following .profile settings rely on the default shell for the user being set to the Korn shell (/bin/ksh).<br /><br />The backspace key can be configured by adding the following entry:<br /><blockquote> <pre>stty erase "^H"</pre> </blockquote> The command line history can be accessed using [Esc][k] by adding the following entry:<br /><blockquote> <pre>set -o vi</pre> </blockquote> Auto completion of paths using a double strike of the [Esc] key can be configured by adding the following entry:<br /><blockquote> <pre>set filec</pre> </blockquote> <h2><br /></h2> <h2 style="text-decoration: underline;"><a name="Useful_Files"></a>Useful Files</h2> Here are some files that may be of use:<br /><br /><table border="1" cellpadding="3" cellspacing="3"> <tbody> <tr> <td><b>Path</b></td> <td><b>Contents</b></td> </tr> <tr> <td>/etc/passwd</td> <td>User settings.</td> </tr> <tr> <td>/etc/group</td> <td>Group settings for users.</td> </tr> <tr> <td>/etc/hosts</td> <td>Hostname lookup information.</td> </tr> <tr> <td>/etc/system</td> <td>Kernel parameters for Solaris.</td> </tr> <tr> <td>/etc/sysconfigtab</td> <td>Kernel parameters for Tru64.</td></tr></tbody></table>khaleel shaikhttp://www.blogger.com/profile/07051406652176384225noreply@blogger.com1tag:blogger.com,1999:blog-2435017743722636286.post-15227661683784470842009-04-21T21:32:00.000+05:302009-05-19T10:59:08.131+05:30Oracle Installation fails with OUI-10133: Invalid staging area<span style="font-weight: bold;"><u>Problem Description</u></span><br />While installing Oracle on my Windows machine it fails with the message,<br />OUI-10133: Invalid staging area. 
There are no top level components for Windows NT, Windows 2000 available for installation in this staging area.<br /><br />On a Linux system the message is like,<br />OUI-10133: Invalid staging area. There are no top level components for Linux available for installation in this staging area.<br /><br /><span style="font-weight: bold;"><u>Cause of the problem</u></span><br />There are several causes for this problem.<br /><br />1) The products.xml file is not found. In the oraparam.ini file the SOURCE location points to products.xml, but no products.xml exists in the stage directory.<br /><br />2) Someone has deleted the products.xml file from its location and the installer could not find it.<br /><br />3) You have downloaded Oracle from e-delivery.oracle.com, which consists of three zip files, and you extracted the three zip files into different locations.<br /><br />4) The staging area, i.e. the stage folder, is corrupted.<br /><br />5) The media is corrupted.<br /><br /><span style="font-weight: bold;"><u>Solution of the Problem</u></span><br />1) Open the oraparam.ini configuration file, which resides in the install directory on both Windows and Linux.<br /><br />2) Within the file you will see a SOURCE parameter that specifies the location of products.xml.<br /><br />3) Go to that location and make sure that products.xml exists. There are several reasons the installer might not find it: perhaps you copied the Oracle software over the network and it was not fully copied, or someone deleted the file from the stage directory.<br /><br />4) If you downloaded the zipped Oracle 11g software from e-delivery.oracle.com, there will be three zip files. 
Unzip those three files into the same directory and then start installing the software.<br /><br />5) After downloading the software, run cksum on the downloaded files and verify the checksums against those published on OTN.<br /><br />6) Get a fresh copy of the Oracle software and install.khaleel shaikhttp://www.blogger.com/profile/07051406652176384225noreply@blogger.com0tag:blogger.com,1999:blog-2435017743722636286.post-90195676213153153282009-04-21T21:28:00.000+05:302009-05-19T10:59:08.124+05:30RMAN-06429: TARGET database is not compatible with this version of RMANWhenever you connect through RMAN (with or without a recovery catalog) to a target database whose version is lower than that of the RMAN executable, the connection fails with RMAN-06429: TARGET database is not compatible with this version of RMAN, as below.<br /><br /><span style="font-weight: bold;">C:\>rman target rman/rman@local2 catalog rman/rman@testdb</span><br /><br />Recovery Manager: Release 10.2.0.1.0 - Production on Tue Mar 24 05:07:14 2009<br /><br />Copyright (c) 1982, 2005, Oracle. All rights reserved.<br /><br />Recovery Manager incompatible with TARGET database: RMAN 8.0.4.0 to 10.1.0.0 req<br />uired<br />RMAN-00571: ===========================================================<br />RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============<br />RMAN-00571: ===========================================================<br />RMAN-00554: initialization of internal recovery manager package failed<br />RMAN-06429: TARGET database is not compatible with this version of RMAN<br /><br /><span style="font-weight: bold;"><u>Cause of the Problem</u></span><br />The version of the RMAN executable is higher than that of the target database (to which you are connecting). 
For RMAN to connect successfully, the RMAN executable version must be less than or equal to the target database version.<br /><br /><span style="font-weight: bold;"><u>Solution of the Problem</u></span><br />There is a rule of thumb for this compatibility issue, and it is best practice to follow it: <span style="font-weight: bold;">"always use the target database RMAN executable and the latest release of Oracle for your recovery catalog."</span><br /><br />However, if you are going to use a lower-version RMAN executable to back up higher-version databases, then remember you are restricted to the features available in that older executable, and the executable must be able to handle the features and configuration the database is using.<br /><br />In the above example a 10.2.0.1 RMAN executable tried to connect to a 10.1.0.2 database, which is not supported. Following the rule of thumb, use the 10.1.0.2 RMAN executable to connect to the 10.1.0.2 database in order to make the best use of all the available features. We could connect to the 10.1.0.2 database with a lower-version RMAN executable, but that is not recommended, as in that case we might be restricted to fewer features.<br /><br />One more thing is clear from the error message:<br />"Recovery Manager incompatible with TARGET database: RMAN 8.0.4.0 to 10.1.0.0<br />required". So any RMAN executable from release 8.0.4.0 to 10.1.0.0 could connect to this 10.1 target database, but the executable actually used was 10.2.0.1.khaleel shaikhttp://www.blogger.com/profile/07051406652176384225noreply@blogger.com0tag:blogger.com,1999:blog-2435017743722636286.post-20534174067433355522009-04-21T21:27:00.000+05:302009-05-19T10:59:08.116+05:30What is kernel, shell, shell script<span style="font-weight: bold;"><u>Kernel</u></span><br />The Linux kernel is the most important part, or in other words the heart, of the Linux operating system. 
It is the piece of code that manages both hardware and software components and mediates between them. It manages memory, processors and I/O devices on behalf of applications, and decides which applications may use which hardware resources.<br /><br /><span style="font-weight: bold;"><u>Shell</u></span><br />On a Windows machine you have seen the command prompt (command.com or cmd.exe) where you type commands like ping, ipconfig etc. Similarly, on Linux there is such a prompt where you can type commands, and that prompt is called the shell. To define it, we can say the shell is an interpreter that interprets commands as typed from a keyboard or read from a text script.<br /><br />The shell is provided so that you can interact with the computer system. The shell itself is not part of the kernel, but it uses the kernel to execute commands.<br /><br />There are various types of shells, which can be broadly divided into four categories.<br /><br /><span style="font-weight: bold;">1) Bourne-like shells (sh):</span> The Almquist shell (ash), Bourne-Again shell (bash), Debian Almquist shell (dash), Korn shell (ksh) and Z shell (zsh) fall into the Bourne-like shell category.<br /><br /><span style="font-weight: bold;">2) C shell like (csh):</span> The TENEX C shell (tcsh) falls within this category.<br /><br /><span style="font-weight: bold;">3) Non-traditional shells:</span> The es shell (es) and Scheme Shell (scsh) fall within this category.<br /><br /><span style="font-weight: bold;">4) Historical shells:</span> The Thompson shell (sh) and the PWB shell or Mashey shell (sh) fall within this category.<br /><br />To see all shells available on your system, issue cat /etc/shells.<br />On my system,<br /><span style="font-weight: bold;"># cat /etc/shells</span><br />/bin/sh<br />/bin/bash<br />/sbin/nologin<br />/bin/tcsh<br />/bin/csh<br /><br />To find out the current shell you are using, issue echo $SHELL or ps $$.<br />On my system,<br /><br /><span style="font-weight: bold;"># ps $$</span><br /> PID TTY STAT TIME COMMAND<br />4593 
pts/0 Ss 0:00 -bash<br /><span style="font-weight: bold;"># echo $SHELL</span><br />/bin/bash<br /><br /><span style="font-weight: bold;"><u>Shell Script</u></span><br />A shell script is a plain text file which contains a series of commands. Instead of running the commands one by one, we can store them all in a script and execute the script. It is similar to a batch file in MS-DOS.khaleel shaikhttp://www.blogger.com/profile/07051406652176384225noreply@blogger.com0tag:blogger.com,1999:blog-2435017743722636286.post-27633189492884028322009-04-21T21:26:00.000+05:302009-05-19T10:59:08.107+05:30A simple shell script to check whether user is rootIn many cases inside your shell script you need to check whether the user who runs the script is root or not, because based on the user you might need to take different actions inside your shell script.<br /><br />The following shell scripts might help you in this case.<br /><br /><span style="font-weight: bold;"><u>Way 01: With the system variable $LOGNAME</u></span><br />Since echo $LOGNAME shows the currently logged-in user, we can use this variable to determine whether it is the root user.<br />1) Create the script.<br /><span style="font-weight: bold;"># vi test_root.sh</span><br />if [[ $LOGNAME = "root" ]]<br />then<br />echo "You are root user"<br />else<br />echo "You are $LOGNAME user"<br />fi<br /><br />2) Grant execute permission.<br /><span style="font-weight: bold;"># chmod +x test_root.sh</span><br /><br />3) Test it by running as root.<br /><span style="font-weight: bold;"># whoami</span><br />root<br /><br /><span style="font-weight: bold;"># ./test_root.sh</span><br />You are root user<br /><br />4) Test it by running as the oracle user.<br /><span style="font-weight: bold;"># su - oracle</span><br /><br /><span style="font-weight: bold;">$ whoami</span><br />oracle<br /><br /><span style="font-weight: bold;">$ ./test_root.sh</span><br />You are oracle user<br /><br /><span style="font-weight: bold;"><u>Way 02: With the system 
variable $UID:</u></span><br />Since the system variable $UID is zero for the root user, we can use it to determine whether the user is root.<br />1) Create the script.<br /><span style="font-weight: bold;"># vi test_root2.sh</span><br />if [[ $UID = "0" ]]<br />then<br />echo "You are root user"<br />else<br />echo "You are not root user"<br />fi<br /><br />2) Grant execute permission.<br /><span style="font-weight: bold;"># chmod +x test_root2.sh</span><br /><br />3) Test it by running as the root user.<br /><span style="font-weight: bold;"># ./test_root2.sh</span><br />You are root user<br /><br /><span style="font-weight: bold;"><u>Way 03: With the command whoami</u></span><br />The whoami command shows the user who is logged in, so you can use it in your script.<br />1) Create the script.<br /><span style="font-weight: bold;"># vi test_root3.sh</span><br />if [ `whoami` = "root" ]<br />then<br />echo "You have logged on as root"<br />else<br />echo "You are not root user"<br />fi<br /><br />2) Grant execute permission.<br /><span style="font-weight: bold;"># chmod +x test_root3.sh</span><br /><br />3) Test the script by running it.<br /><span style="font-weight: bold;"># ./test_root3.sh</span><br />You have logged on as rootkhaleel shaikhttp://www.blogger.com/profile/07051406652176384225noreply@blogger.com0tag:blogger.com,1999:blog-2435017743722636286.post-31930323408231215872009-04-21T21:25:00.000+05:302009-05-19T10:59:08.091+05:30Different types of standby database in oracle data guardIn an Oracle Data Guard configuration, you need to set up one or more additional databases beside the primary database. These additional databases are called standby databases. Up to nine standby databases can be created for one primary database.<br /><br />Using a backup of the primary database you can set up a standby database and then make the standby database part of the Data Guard configuration. 
Once you have configured a standby database, Data Guard automatically maintains it by transmitting redo from the primary database and then applying that redo to the standby database.<br /><br />A standby database can be one of three types.<br /><br /><u><span style="font-weight: bold;">1) Physical Standby Database:</span></u> A physical standby database is an identical copy of the primary database. The disk structures are also identical to those of the primary database. It is kept synchronized with the primary database by Redo Apply, which means the redo data is received from the primary database and then applied to the physical standby database.<br /><br />Note that as of Oracle Database 11g release 1 (11.1), a physical standby database can receive and apply redo while it is open for read-only access. So you can use a physical standby database for query and reporting purposes along with data protection.<br /><br /><u><span style="font-weight: bold;">2) Logical Standby Database:</span></u> A logical standby database contains the same logical information as the primary database. The physical data structures need not be the same on the standby database. It is kept synchronized with the primary database by SQL Apply, which means the redo data received from the primary database is transformed into SQL statements, which are then executed on the standby database.<br /><br />You can use a logical standby database for query and reporting purposes along with data protection. A logical standby database also gives you the facility to upgrade Oracle database software and patch sets while maintaining data protection.<br /><br /><u><span style="font-weight: bold;">3) Snapshot Standby Database:</span></u> A snapshot standby database is a convertible copy of a physical standby database, but the difference from a physical or logical standby database is that the redo data it receives is not applied to it. 
The redo is applied only when the snapshot standby database is converted back to a physical standby database. You can make local updates to the snapshot standby database, and when it is converted back to a physical standby database these local updates are discarded.<br /><br />Note that in the case of a snapshot standby database, the time needed to perform a role transition is directly proportional to the amount of redo data that needs to be applied.khaleel shaikhttp://www.blogger.com/profile/07051406652176384225noreply@blogger.com2
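As a reference sketch (not part of the original post; syntax as documented for Oracle 11g, and assuming the standby is mounted with managed recovery stopped), the conversion between a physical standby and a snapshot standby is a single statement each way:

```sql
-- On a mounted physical standby: convert it to a snapshot standby.
-- Oracle implicitly creates a guaranteed restore point first.
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;

-- Later, on the (re-mounted) snapshot standby: discard the local updates
-- by flashing back to that restore point and resume redo apply.
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
```

The implicit restore point is what makes the local updates discardable, and it is also why the redo accumulated while in snapshot mode must all be applied during the conversion back.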