Archive for February, 2010

List tablespaces short of free space

February 24, 2010

If autoextend is enabled, MAXSIZE must be set for the maximum-size figure to be meaningful.

select b.tablespace_name,
b.Sumbytes/1024/1024 Total_Mbytes,
b.Maxbytes/1024/1024 Max_Mbytes,
nvl(a.Freebytes/1024/1024,0) Free_Mbytes
from
(select tablespace_name,sum(bytes) Freebytes from dba_free_space group by tablespace_name) a,
(select tablespace_name,sum(bytes) Sumbytes, sum(decode(AUTOEXTENSIBLE,'YES',MAXBYTES,BYTES)) Maxbytes from dba_data_files group by tablespace_name) b
where b.tablespace_name = a.tablespace_name(+);
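To list only the tablespaces that are genuinely short of space, the query can be wrapped with a threshold on the space still available relative to the maximum size (the 10% cut-off below is an illustrative value, not from the original post):

select tablespace_name, Total_Mbytes, Max_Mbytes, Free_Mbytes
from (
select b.tablespace_name,
b.Sumbytes/1024/1024 Total_Mbytes,
b.Maxbytes/1024/1024 Max_Mbytes,
nvl(a.Freebytes/1024/1024,0) Free_Mbytes
from (select tablespace_name,sum(bytes) Freebytes from dba_free_space group by tablespace_name) a,
(select tablespace_name,sum(bytes) Sumbytes, sum(decode(AUTOEXTENSIBLE,'YES',MAXBYTES,BYTES)) Maxbytes from dba_data_files group by tablespace_name) b
where b.tablespace_name = a.tablespace_name(+)
)
-- (Max_Mbytes - Total_Mbytes) is the room left to autoextend; Free_Mbytes is free space already allocated
where (Max_Mbytes - Total_Mbytes + Free_Mbytes) < 0.10 * Max_Mbytes;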

Categories: Oracle

Tracing a user's session quickly

February 24, 2010

If you need to trace a user's session as soon as the user logs on, you can create a database "on logon" trigger that runs:

execute immediate
'alter session set events ''10046 trace name context forever, level 12''';
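A minimal sketch of such a trigger (the trigger name and the SCOTT filter are illustrative; adjust the condition to the user you actually want to trace):

create or replace trigger trace_on_logon
after logon on database
begin
  -- only trace the session of interest, not every logon
  if user = 'SCOTT' then
    execute immediate
      'alter session set events ''10046 trace name context forever, level 12''';
  end if;
end;
/

When it is no longer needed: alter trigger trace_on_logon disable; or drop trigger trace_on_logon;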

The trigger body can contain various other conditions, for example restricting tracing to a particular user, machine or program.

Don't forget to disable or drop the trigger once it is no longer required.



Categories: Oracle

Check Database for Block Corruptions

February 23, 2010

To verify the full database, follow these steps:

1) Set your $ORACLE_SID and $ORACLE_HOME appropriately if not already set.

2) Start RMAN in nocatalog mode and connect to your database. From the operating system prompt issue either:

$ rman target / nocatalog

or

$ rman target sys/<sys_password> nocatalog

3) From the RMAN> prompt, issue the validate command with the "check logical" option.

The following example shows how to validate all datafiles:

run {
allocate channel d1 type disk;
backup check logical validate database;
release channel d1;
}

4) Execute the following PL/SQL script, which maps the corrupt blocks reported in v$database_block_corruption back to the segments they belong to:

create table segment_corrupted (owner varchar2(30), segment_name varchar2(50), segment_type varchar2(20));
truncate table segment_corrupted;

set serveroutput on

declare
nrows number := 0;
badsegs number := 0;
begin
for i in (select file#, block#, blocks corrupt_blocks from v$database_block_corruption) loop
insert into segment_corrupted select owner, segment_name, segment_type from dba_extents where file_id=i.file# and i.block# between block_id and block_id + blocks - 1;
nrows := nrows + i.corrupt_blocks;
end loop;
if nrows > 0 then
dbms_output.put_line('List of segments affected by corruption');
end if;
for i in (select distinct owner, segment_name, segment_type from segment_corrupted) loop
dbms_output.put_line(i.segment_type||' '||i.owner||'.'||i.segment_name);
badsegs := badsegs + 1;
end loop;
dbms_output.put_line('Total blocks corrupted: '||to_char(nrows)||'. Total segments affected: '||to_char(badsegs));
commit;
end;
/

5) Run the following query to see the corrupt blocks found:

 select * from v$database_block_corruption;

Note: all these steps are valid only if the database is in ARCHIVELOG mode.

Categories: Oracle

Install Oracle RAC on Linux

February 23, 2010
Categories: Oracle Rac

ORA-1000: Maximum open cursors exceeded

February 10, 2010

-- who has how many cursors open?

select a.osuser, a.sid, a.username, a.machine, count(*)
from v$session a, v$open_cursor b
where a.sid = b.sid
group by a.osuser, a.sid, a.username, a.machine
order by count(*) desc;
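To see how close each session is to the limit, the counts above can be compared against the OPEN_CURSORS initialization parameter:

-- from SQL*Plus:
show parameter open_cursors

-- or via a query:
select value from v$parameter where name = 'open_cursors';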


Categories: Oracle

Virtual Indexes in Oracle

February 8, 2010

Virtual indexes are an undocumented feature in Oracle. A virtual index has no segment associated with it and is not seen by other sessions, so it doesn't affect the normal running of your system.

Virtual indexes have little use on a development system, where a real index can simply be created, but they are handy when a query or group of queries has to be tested in production without the cost and risk of building a real index.
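A quick sketch of creating and testing a virtual index (table and column names are illustrative; note that "_use_nosegment_indexes" is a hidden, unsupported parameter):

-- create the index with no underlying segment
create index emp_ename_vix on emp(ename) nosegment;

-- make the optimizer consider virtual indexes in this session only
alter session set "_use_nosegment_indexes" = true;

-- explain the query to see whether the optimizer would use the index
explain plan for select * from emp where ename = 'KING';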


Categories: Oracle

Data Pump In Oracle 10g

February 5, 2010

Dump files are created on the server side, so we need to use the DIRECTORY parameter to specify the location of the dump file:

SQL> Create directory test_dir as '/opt/app/oracle/dump_files';

Directory created.

The directory /opt/app/oracle/dump_files must exist on the server. In order for a user to use the directory, you need to grant privileges on it:

SQL> grant read, write on directory test_dir to scott;

Data Pump Export Parameters
Following is a brief list of parameters that can be passed to a Data Pump export job. For the complete list, run expdp help=y.

DIRECTORY – Used to specify the location of the dump file, log file or SQL file.

DUMPFILE – Used to specify the name of the dump file. It replaces the FILE parameter of the traditional export utility.

FILESIZE – Used to specify the maximum size of each dump file. If the export exceeds this size, Data Pump writes to the next file, provided you gave multiple file names in the DUMPFILE parameter or used the %U wildcard in the dump file name.

PARFILE – Used to provide the name and location of a parameter file containing all the parameters required for the export job to run.

LOGFILE and NOLOGFILE – You can specify the LOGFILE parameter to create a log file for the export job. If you don't specify LOGFILE, Oracle still creates a log file named export.log; subsequent export jobs will overwrite it, as this is the default name. Specify NOLOGFILE=Y if you want to avoid creating a log file altogether.

CONTENT – Using the CONTENT parameter, you can filter what goes into the dump file. CONTENT can take 3 values: ALL, DATA_ONLY and METADATA_ONLY.

EXCLUDE and INCLUDE – Used for fine grained filtering of the objects that can go in export dump file

QUERY – Used to control the rows of the table that can go into the export dump file

ESTIMATE – Used to specify how Data Pump should estimate the disk space required for the dump file; the export itself still runs.

ESTIMATE_ONLY – Used to only estimate the space requirement for the export job. This parameter will not export any data into a dump file.

NETWORK_LINK – Used to take the export dump from some other database via the current database server. This parameter expects a database link. Example: expdp hr/hr DIRECTORY=test_dir NETWORK_LINK=finance@prod1 … Here finance is the remote database on the prod1 server from which you are exporting the data.

ATTACH – Used to connect to the currently running job.

JOBNAME – Used to provide custom name to the export jobs.

STATUS – Used to display the status of the job. This parameter takes a numeric value, in seconds, giving the refresh interval.
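Several of these parameters are typically combined in a parameter file. A sketch (the file name, schema and values here are illustrative, not from the original post):

$ cat exp_scott.par
schemas=SCOTT
directory=TEST_DIR
dumpfile=SCOTT_%U.dmp
logfile=expdpSCOTT.log
filesize=2G

$ expdp scott/tiger parfile=exp_scott.par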

Data Pump Import Parameters

Some of the import parameters are the same as in export, like PARFILE, DIRECTORY, DUMPFILE, LOGFILE and NOLOGFILE.

Some of the other import parameters are:

SQLFILE – Used to extract the DDL from the export dump file. If you specify the SQLFILE parameter, only the DDL is extracted; no import takes place.

REUSE_DATAFILES – This parameter tells Data Pump whether it should reuse existing datafiles when creating tablespaces during import. If REUSE_DATAFILES is set to Y, the import utility will write over your existing datafiles.

CONTENT – As in export, you can filter what is imported. CONTENT can take 3 values: ALL, DATA_ONLY and METADATA_ONLY.

TABLE_EXISTS_ACTION – If a table the import job is trying to load from the dump file already exists in the database, you can use this parameter to tell the job what to do. The valid values are SKIP, APPEND, TRUNCATE and REPLACE.

TRANSPORTABLE_TABLESPACE – Used when you want to transport a tablespace from one database to another.

REMAP_SCHEMA – Used to move objects from one user to another. Replaces the old FROMUSER/TOUSER parameters.

REMAP_DATAFILE – Used to remap datafile names, especially when transferring data from one platform to another, for example from Windows to UNIX.

REMAP_TABLESPACE – Sometimes you want the tablespace into which you are importing data to be different from the source database tablespace. This parameter comes in handy for doing that.

NETWORK_LINK – This parameter is used to import data directly into the database from some other database without having to create a dump file.

TRANSFORM – In case you don't want to import the objects' storage attributes but just the content, you can use this parameter. TRANSFORM instructs the Data Pump import job to modify the DDL that creates the objects during import. You can modify two basic attributes: SEGMENT_ATTRIBUTES and STORAGE.

FLASHBACK_TIME – Enables you to import data consistent as of the flashback time you specify in the import job. Replaces the old CONSISTENT parameter of the traditional import utility.
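A couple of the import parameters above in action (the schema, dump file and target names are illustrative):

-- extract only the DDL from the dump; no import is performed
impdp system/password directory=TEST_DIR dumpfile=SCOTT.dmp sqlfile=scott_ddl.sql

-- import SCOTT's objects into a different user and tablespace
impdp system/password directory=TEST_DIR dumpfile=SCOTT.dmp remap_schema=SCOTT:SCOTT_TEST remap_tablespace=USERS:USERS_TEST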


Table exports/Imports

expdp ds_repos/dsrepos@db10g tables=ACCOUNTS,DEPT directory=TEST_DIR dumpfile=DS_REPOS_TBS.dmp logfile=expdpDS_REPOS_TBS.log
impdp ds_repos/dsrepos@db10g tables=ACCOUNTS,DEPT directory=TEST_DIR dumpfile=DS_REPOS_TBS.dmp logfile=impdpDS_REPOS_TBS.log

The TABLE_EXISTS_ACTION=APPEND parameter allows data to be imported into existing tables.
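For example, to append rows to the existing tables using the same dump file as above:

impdp ds_repos/dsrepos@db10g tables=ACCOUNTS,DEPT directory=TEST_DIR dumpfile=DS_REPOS_TBS.dmp table_exists_action=APPEND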

Schemas exports/imports

expdp ds_repos/dsrepos@db10g schemas=DS_REPOS directory=TEST_DIR dumpfile=DS_REPOS.dmp logfile=expdpDS_REPOS.log
impdp ds_repos/dsrepos@db10g schemas=DS_REPOS directory=TEST_DIR dumpfile=DS_REPOS.dmp logfile=impdpDS_REPOS.log

Database exports/imports

expdp system/password@db10g full=Y directory=TEST_DIR dumpfile=NESPETS.dmp logfile=expdpNESPETS.log
impdp system/password@db10g full=Y directory=TEST_DIR dumpfile=NESPETS.dmp logfile=impNESPETS.log

Monitoring Data Pump Jobs

SQL> select owner_name , job_name , operation , job_mode, state , degree, attached_sessions from dba_datapump_jobs;

OWNER_NAME JOB_NAME                       OPERATION  JOB_MODE                       STATE                      DEGREE ATTACHED_SESSIONS
---------- ------------------------------ ---------- ------------------------------ ------------------------------ ---------- -----------------
SYSTEM     SYS_EXPORT_FULL_01             EXPORT     FULL                           EXECUTING                       1                 1

SQL> select owner_name, job_name, saddr from dba_datapump_sessions;

OWNER_NAME JOB_NAME                       SADDR
---------- ------------------------------ ----------------
SYSTEM     SYS_EXPORT_FULL_01             000000007F496CB8
SYSTEM     SYS_EXPORT_FULL_01             000000007F49ECE0
SYSTEM     SYS_EXPORT_FULL_01             000000007F48EC90

SQL> select username , opname, target_desc, sofar, totalwork,message from v$session_longops where target_desc='EXPORT';

USERNAME                       OPNAME               TARGET_DES      SOFAR  TOTALWORK MESSAGE
------------------------------ -------------------- ---------- ---------- ---------- --------------------------------------------------
SYSTEM                         SYS_EXPORT_FULL_01   EXPORT             34        150 SYS_EXPORT_FULL_01: EXPORT : 34 out of 150 MB done

Data pump performance can be improved by using the PARALLEL parameter. This should be used in conjunction with the “%U”
wildcard in the DUMPFILE parameter to allow multiple dumpfiles to be created or read.

expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR parallel=4 dumpfile=SCOTT_%U.dmp logfile=expdpSCOTT.log

The INCLUDE and EXCLUDE parameters can be used to limit the export/import to specific objects.
When the INCLUDE parameter is used, only those objects specified by it will be included in the export.
When the EXCLUDE parameter is used all objects except those specified by it will be included in the export.

expdp scott/tiger@db10g schemas=SCOTT include=TABLE:"IN ('EMP', 'DEPT')" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log

expdp scott/tiger@db10g schemas=SCOTT exclude=TABLE:"= 'BONUS'" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log

Categories: Oracle