DBA SCRIPTS



    1. Please suggest new scripts that can be added to this script collection.
    For any queries or suggestions, please post in forum.dbaclass.com.

Keep visiting us.

151 thoughts on “DBA SCRIPTS”

    1. Dear Pavan,

      There is some issue with the backend code, on which we are working. By tomorrow EOD, it will be fixed.

  1. Really great help. Makes our regular work much easier and more efficient.
    Thanks a lot, and I hope the alignment issue will be fixed soon.

    Regards
    Ranajit

    1. Dear Karteek,

      Please post the question in our forum (forum.dbaclass.com).

      Toad is the best utility for doing schema comparison. If you need, I can share the steps.

      Regards
      Rajkishore

    1. You can use the below script.

      SELECT o.OWNER , o.OBJECT_NAME , o.SUBOBJECT_NAME , o.OBJECT_TYPE ,
      t.NAME "Tablespace Name", s.growth/(1024*1024) "Growth in MB",
      (SELECT sum(bytes)/(1024*1024)
      FROM dba_segments
      WHERE segment_name=o.object_name) "Total Size(MB)"
      FROM DBA_OBJECTS o,
      ( SELECT TS#,OBJ#,
      SUM(SPACE_USED_DELTA) growth
      FROM DBA_HIST_SEG_STAT
      GROUP BY TS#,OBJ#
      HAVING SUM(SPACE_USED_DELTA) > 0
      ORDER BY 2 DESC ) s,
      v$tablespace t
      WHERE s.OBJ# = o.OBJECT_ID
      AND s.TS#=t.TS#
      AND rownum < 51
      ORDER BY 6 DESC
      /

      1. Thanks for the script, but I have a query here. I don't find anything related to 60 days in the provided script. If we want to query the result for 90 days, how can we change the given script?

  2. Hi Admin - Could you provide a query to find out where debug is enabled at the DB resource level, i.e. package, table, etc.?

    1. Hi Sai,

      Currently I don't have a zip version. But I am working on a small interactive tool, which will be a bundle of all these scripts.

      Regards
      Admin

  3. Hi Team,

    I am not able to view any of the scripts under DBA SCRIPTS. It comes up as a blank page.

    I am using the Google Chrome browser.

    Please suggest how I can view all these scripts.

    Thanks in Advance.

    Regards,
    Abhinay.

    1. Hi, it seems JavaScript/popups were blocked by your browser. These are working in my browser.
      Could you please check by removing the ad blocker, if you have one?

  4. Hi Admin,

    Please send performance scripts or commands for concurrent requests.

    Thanks,
    Pratyush

  5. Hi Admin,

    Can you please help and give us a consolidated command (.sh) or .sql file, so that if I enter a request id it will fetch all the details:
    Request timings,
    Request history (sysdate-30),
    session details and their statistics,
    sql id, sql_text,
    progress details,
    plan details (running with the same plan or a plan change),
    etc.

      1. A request id means: when a user submits a concurrent request from the Oracle EBS Applications front end, they share the request id with us to monitor that request.
        That is why we need a consolidated script.

        Thanks
        Pratyush

  6. What is the input for the DB growth per month script?
    select to_char(creation_time, 'MM-RRRR') "Month", sum(bytes)/1024/1024/1024 "Growth in GB"
    from sys.v_$datafile
    where to_char(creation_time,'RRRR')='&YEAR_IN_YYYY_FORMAT'
    group by to_char(creation_time, 'MM-RRRR')
    order by to_char(creation_time, 'MM-RRRR');

      1. I have given the year as 2018 and am getting an error:
        select to_char(creation_time, 'MM-RRRR') "Month", sum(bytes)/1024/1024/1024 "Growth in GB"
        from sys.v_$datafile
        2 3 where to_char(creation_time,'RRRR')='&YEAR_IN_YYYY_FORMAT'
        group by to_char(creation_time, 'MM-RRRR')
        order by to_char(creation_time, 'MM-RRRR'); 4 5
        Enter value for year_in_yyyy_format: year,2018
        old 3: where to_char(creation_time,'RRRR')='&YEAR_IN_YYYY_FORMAT'
        new 3: where to_char(creation_time,'RRRR')='year,2018'
        ERROR:
        ORA-00972: identifier is too long

  7. I am using 12c pluggable DB’s on Exadata with 44 cores (88 cpu’s). I can’t seem to find a convenient method for finding out how much CPU resources are being used using AWR statistics. AWR gives me the load stats as if all of the cores are available, but of course I am using the resource manager so those stats are not reflecting the resource manager settings for the CPU resource.

    Please suggest what I need to do.

    Thanks-
    Mark

  8. My application table owner granted DML privileges to PUBLIC as the grantee. I want to revoke all DML privileges from PUBLIC and allocate them to a newly created ROLE. Could you please provide me the script for this? I was unable to find it on Google. Please help.

    I need to assign the same privileges to the ROLE as PUBLIC has. PUBLIC has nearly 10k+ privileges.

    1. Dear Gopal,

      Use a dynamic script similar to the one below to generate the required SQL file.

      spool revoke_public.sql
      select 'REVOKE '||PRIVILEGE||' ON '||OWNER||'.'||TABLE_NAME||' FROM PUBLIC;' from dba_tab_privs where grantee='PUBLIC' and owner='APP' and PRIVILEGE in ('INSERT','UPDATE','DELETE');
      spool off;

      For a quick response, please post in the Q&A forum.

      Regards
      Admin

    1. What is the problem with visiting the site and accessing the scripts? Don't you have internet on your PC or mobile? These days many telecom operators provide internet service at very low cost. It is better to access the scripts from the website only.

  9. Hi Admin,

    Need your help to create a blocking-sessions script, wrapped in a shell script, which creates output
    in HTML tabular format with information on all waiter sessions and how long they have been waiting, along with object_type, pid, sid, client information, module, action, program, last_call_et, status, etc.

    Regards,
    Prasoon

  10. Hi Admin,

    These blocking-session commands are not working; they show no rows, but blocking is there.

    Can you please send the command, and also add details of how long the holding/blocking session has been blocking the waiter session?

    Thanks,
    Prasoon

  11. set lines 120
    col sess format a15
    SELECT DECODE(request,0,'Holder: ',' Waiter: ')||sid sess,id1,id2, lmode,inst_id, request, type
    FROM GV$LOCK WHERE (id1, id2, type)
    IN
    (SELECT id1, id2, type FROM GV$LOCK WHERE request>0) ORDER BY id1, request
    /

    Hi team,

    I have used this command to find blocking sessions and it shows that sessions are there, but the suggested commands are showing no rows.
    Can you please send the command, and also add details of how long the holding/blocking session has been blocking the waiter session?

    SESS ID1 ID2 LMODE INST_ID REQUEST TY
    --------------- ---------- ---------- ---------- ---------- ---------- --
    Holder: 2397 524319 548489 6 1 0 TX
    Waiter: 474 524319 548489 0 6 6 TX
    Holder: 2981 6815776 391294 6 1 0 TX
    Waiter: 542 6815776 391294 0 5 6 TX
    Holder: 1918 10223642 429877 6 2 0 TX
    Waiter: 1754 10223642 429877 0 2 6 TX
    Waiter: 2565 10223642 429877 0 1 6 TX
    Holder: 1924 11927575 573769 6 2 0 TX
    Waiter: 1918 11927575 573769 0 2 6 TX
    Holder: 2895 28180490 1078657 6 5 0 TX
    Waiter: 1905 28180490 1078657 0 2 6 TX


    SQL> SELECT
    s.inst_id,
    s.blocking_session,
    s.sid,
    s.serial#,
    s.seconds_in_wait,
    s.event
    FROM
    gv$session s
    WHERE
    blocking_session IS NOT NULL and s.seconds_in_wait > 10; 2 3 4 5 6 7 8 9 10 11

    no rows selected

    SQL>

    Please clarify and help..

    Thanks,
    Prasoon

    1. Hi Pablo,

      Thanks for the suggestions. However, we don't have expertise on Exadata. We would be glad if you can help us.

      Regards
      Admin

  12. Great portal and collection, dear brother. Good job ... excellent yar ... very useful for all DBAs.

  13. Please add some RDS-related queries also (rdsadmin.packages*), which will be helpful for people working on AWS cloud RDS Oracle.

    1. Hi Suresh,

      We don't have expertise on Amazon RDS. If you have a set of scripts for RDS, share them with us and we will happily publish them.

      Regards
      Admin

  14. Hi Admin, I was searching for DBA scripts and came across this website; it has all the scripts for day-to-day DBA tasks. I really appreciate your effort to gather all the scripts in one place.

    If you don't mind, can you please email all the scripts to pradeep13923dba@gmail.com?

    1. Dear,

      All the scripts are embedded in the website pages. I don't have them collected in one place.

      Admin

  15. Awesome website, congrats. I notice people requesting these scripts, like "send them to my email" - excuse me ...
    you already have those scripts on the website.

  16. Hello,
    Add this script for tablespace utilization; it also calculates the autoextensible size.

    set pages 50000 lines 32767
    col tablespace_name format a30
    col TABLESPACE_NAME heading “Tablespace|Name”
    col Allocated_size heading “Allocated|Size(GB)” form 99999999.99
    col Current_size heading “Current|Size(GB)” form 99999999.99
    col Used_size heading “Used|Size(GB)” form 99999999.99
    col Available_size heading “Available|Size(GB)” form 99999999.99
    col Pct_used heading “%Used (vs)|(Allocated)” form 99999999.99

    select a.tablespace_name
    ,a.alloc_size/1024/1024/1024 Allocated_size
    ,a.cur_size/1024/1024/1024 Current_Size
    ,(u.used+a.file_count*65536)/1024/1024/1024 Used_size
    ,(a.alloc_size-(u.used+a.file_count*65536))/1024/1024/1024 Available_size
    ,((u.used+a.file_count*65536)*100)/a.alloc_size Pct_used
    from dba_tablespaces t
    ,(select t1.tablespace_name
    ,nvl(sum(s.bytes),0) used
    from dba_segments s
    ,dba_tablespaces t1
    where t1.tablespace_name=s.tablespace_name(+)
    group by t1.tablespace_name) u
    ,(select d.tablespace_name
    ,sum(greatest(d.bytes,nvl(d.maxbytes,0))) alloc_size
    ,sum(d.bytes) cur_size
    ,count(*) file_count
    from dba_data_files d
    group by d.tablespace_name) a
    where t.tablespace_name=u.tablespace_name
    and t.tablespace_name=a.tablespace_name
    order by t.tablespace_name
    /

  17. Hi Team,

    Can you please add scripts for GoldenGate (start/stop, monitor, replicat)?

    Also please add scripts for PSU patching a RAC database when GoldenGate is in place and when Data Guard is configured.

    Thanks

  18. SELECT le.leseq “Current log sequence No”,
    100*cp.cpodr_bno/le.lesiz “Percent Full”,
    cp.cpodr_bno “Current Block No”,
    le.lesiz “Size of Log in Blocks”
    FROM x$kcccp cp, x$kccle le
    WHERE le.leseq =CP.cpodr_seq
    AND bitand(le.leflg,24) = 8
    /

    I have executed the above script, but I'm unable to interpret the output. Can you guide me?

  19. Dear Admin,

    Thanks for the great stuff. If time permits, please add performance administration scripts, like plan fixing, etc.

    Many thanks

  20. One particular tablespace in my database is growing rapidly and I want to know why this is happening. Is there a way I can check how much data has grown in the last few months, and which owner (user) the table that consumes the most space belongs to?

  21. Hi Team,

    Can you please add scripts for GoldenGate (start/stop, monitor, replicat)?

    Also please add scripts for PSU patching a RAC database when GoldenGate is in place and when Data Guard is configured.

    Please also add a script for relinking the CRS and Oracle binaries.

    Thanks

  22. Hello Blogger,

    There is an issue in this script.

    select module,parsing_schema_name,inst_id,sql_id,plan_hash_value,child_number,sql_fulltext,
    to_char(last_active_time,'DD/MM/YY HH24:MI:SS' ),sql_plan_baseline,executions,
    elapsed_time/executions/1000/1000,rows_processed from gv$sql
    where sql_id in ('&sql_id');

    SQL>
    SQL> select module,parsing_schema_name,inst_id,sql_id,plan_hash_value,child_number,sql_fulltext,
    to_char(last_active_time,'DD/MM/YY HH24:MI:SS' ),sql_plan_baseline,executions,
    elapsed_time/executions/1000/1000,rows_processed from gv$sql
    where sql_id in ('&sql_id'); 2 3 4
    Enter value for sql_id: 5573pf9s2vwmh
    old 4: where sql_id in ('&sql_id')
    new 4: where sql_id in ('5573pf9s2vwmh')
    ERROR:
    ORA-01476: divisor is equal to zero

    no rows selected

    1. Try the below one:

      select module,parsing_schema_name,inst_id,sql_id,plan_hash_value,child_number,sql_fulltext,
      to_char(last_active_time,'DD/MM/YY HH24:MI:SS' ),sql_plan_baseline,executions,
      elapsed_time/executions/1000/1000,rows_processed from gv$sql
      where sql_id in ('&sql_id') and executions > 0

  23. Hi Admin, first of all I want to say kudos for the great job put out here. I was
    wondering if you guys can come up with something this great for GoldenGate.
    I have a personal issue; can I please get help with a monitoring script, precisely
    setting an alert on an increment table? I received the below ticket and I have been working on it.

    The issue is that whenever there is a LOCK on the INCREMENT table, users are not able to access the application, and they reach out to the DBA team to get the information on which session the increment table got locked under; once we receive the information, we provide our confirmation to kill the session. Now we are looking to set up a proactive mechanism which will alert us before a user reports the issue. Once the issue has occurred, we expect the alert to provide us the session name and the server name details.
    Thanks in advance, Admin

  24. Excellent information... you made all our days easier with the scripts. Thank you so much; much appreciated.

  25. Hi Admin,

    Thank you very much for providing such great information in one place.
    Could you please let me know how I can download all these scripts to my desktop at once, as we do not have permission to access Google at the client location? Is there any link like that? If yes, could you please send it to sp.pasha930@gmail.com.

  26. Hi Admin,

    How can I find what bind values (input parameters) are being passed to a SQL query in the Oracle database backend? Can you suggest any option?

    "The application team inserts, updates, deletes or selects from third-party applications. I need to see the bind variable (:b1) values with the input parameters from the Oracle database." How do I enable tracing for a particular SQL ID, session, or sid in the database?

  27. Thank you for your dedication to IT. I am new to databases and this site has everything a newbie needs.

    1. These scripts are popup scripts embedded in the HTML page. Please let me know which scripts you are facing an issue with.

  28. Hi Admin,

    Can you please share a script which we can use to check ASM diskgroup growth,
    having the below columns: growth over 1 month, growth from last month, and growth over the last 6 months.

  29. Hi Admin,

    Is it possible to share a script which can be used for monitoring the ASM diskgroup growth rate?

    It would be good if it has the below columns for reference.

    1). 1-month growth
    2). Growth from last month.
    3). 6-months growth

  30. Dear Admin,

    Would you please make these scripts downloadable as .sql files?

    That would be a great help for DBAs.

    Thank you.

      1. Hi Admin,

        Can anyone please share a script (SQL query) for a tablespace report for Oracle RDS (AWS), to run from the OEM repository database?
        I'm trying to generate one OEM report for all our RDS target DBs to list tablespace and datafile growth, % usage, and free space.
        Please help with the SQL query. Thank you.

  31. Hi Admin,

    Can anyone please share a script (SQL query) for a tablespace report for Oracle RDS (AWS), to run from the OEM repository database?
    I'm trying to generate one OEM report for all our RDS target DBs to list tablespace and datafile growth, % usage, and free space.
    Please can someone help with the OEM repository SQL query. Thank you.

  32. Very handy; thanks for sharing. Can you please add some scripts on table fragmentation and reorg,
    moving into the same tablespace or a different tablespace, with online moves, advanced compression, etc.?

  33. Dear Team,

    Is there a way we can monitor tablespace usage so that if the tablespace growth is more than 20 or 30 GB, an alert is triggered?

  34. I am trying to create a new schema... I need to grant INSERT to that new schema. Can you suggest how to do this?
    The request is: create a schema and grant the INSERT privilege to it.


select sesion.sid,
sesion.username,
optimizer_mode,
hash_value,
address,
cpu_time,
elapsed_time,
sql_text
from v$sqlarea sqlarea, v$session sesion
where sesion.sql_hash_value = sqlarea.hash_value
and sesion.sql_address = sqlarea.address
and sesion.username is not null;

set echo off
set linesize 95
set head on
set feedback on
col sid head "Sid" form 9999 trunc
col serial# form 99999 trunc head "Ser#"
col username form a8 trunc
col osuser form a7 trunc
col machine form a20 trunc head "Client|Machine"
col program form a15 trunc head "Client|Program"
col login form a11
col "last call" form 9999999 trunc head "Last Call|In Secs"
col status form a6 trunc
select sid,serial#,substr(username,1,10) username,substr(osuser,1,10) osuser,
substr(program||module,1,15) program,substr(machine,1,22) machine,
to_char(logon_time,'ddMon hh24:mi') login,
last_call_et "last call",status
from gv$session where status='ACTIVE'
order by 1
/

select a.sid,substr(b.username,1,10) username,substr(b.osuser,1,10) osuser,
substr(b.program||b.module,1,15) program,substr(b.machine,1,22) machine,
a.event,a.p1,b.sql_hash_value
from v$session_wait a,V$session b
where b.sid=a.sid
and a.event not in('SQL*Net message from client','SQL*Net message to client',
'smon timer','pmon timer')
and username is not null
order by 6
/

select a.sid, a.serial#, a.username, b.used_urec used_undo_record, b.used_ublk used_undo_blocks
from v$session a, v$transaction b
where a.saddr=b.ses_addr ;

SELECT b.tablespace,
ROUND(((b.blocks*p.value)/1024/1024),2)||'M' AS temp_size,
a.inst_id as Instance,
a.sid||','||a.serial# AS sid_serial,
NVL(a.username, '(oracle)') AS username,
a.program,
a.status,
a.sql_id
FROM gv$session a,
gv$sort_usage b,
gv$parameter p
WHERE p.name = 'db_block_size'
AND a.saddr = b.session_addr
AND a.inst_id=b.inst_id
AND a.inst_id=p.inst_id
ORDER BY temp_size desc
/

set lines 2000
set pages 1000
col sid for 99999
col name for a09
col username for a14
col PROGRAM for a21
col MODULE for a25
select s.sid,sn.SERIAL#,n.name, round(value/1024/1024,2) redo_mb, sn.username,sn.status,substr (sn.program,1,21) "program", sn.type, sn.module,sn.sql_id
from v$sesstat s join v$statname n on n.statistic# = s.statistic#
join v$session sn on sn.sid = s.sid where n.name like 'redo size' and s.value!=0 order by
redo_mb desc;

col "Database Size" format a20
col "Free space" format a20
col "Used space" format a20
select round(sum(used.bytes) / 1024 / 1024 / 1024 ) || ' GB' "Database Size"
, round(sum(used.bytes) / 1024 / 1024 / 1024 ) -
round(free.p / 1024 / 1024 / 1024) || ' GB' "Used space"
, round(free.p / 1024 / 1024 / 1024) || ' GB' "Free space"
from (select bytes
from v$datafile
union all
select bytes
from v$tempfile
union all
select bytes
from v$log) used
, (select sum(bytes) as p
from dba_free_space) free
group by free.p
/

set feedback off
set pagesize 70;
set linesize 2000
set head on
COLUMN Tablespace format a25 heading 'Tablespace Name'
COLUMN autoextensible format a11 heading 'AutoExtend'
COLUMN files_in_tablespace format 999 heading 'Files'
COLUMN total_tablespace_space format 99999999 heading 'TotalSpace'
COLUMN total_used_space format 99999999 heading 'UsedSpace'
COLUMN total_tablespace_free_space format 99999999 heading 'FreeSpace'
COLUMN total_used_pct format 9999 heading '%Used'
COLUMN total_free_pct format 9999 heading '%Free'
COLUMN max_size_of_tablespace format 99999999 heading 'ExtendUpto'
COLUM total_auto_used_pct format 999.99 heading 'Max%Used'
COLUMN total_auto_free_pct format 999.99 heading 'Max%Free'
WITH tbs_auto AS
(SELECT DISTINCT tablespace_name, autoextensible
FROM dba_data_files
WHERE autoextensible = 'YES'),
files AS
(SELECT tablespace_name, COUNT (*) tbs_files,
SUM (BYTES/1024/1024) total_tbs_bytes
FROM dba_data_files
GROUP BY tablespace_name),
fragments AS
(SELECT tablespace_name, COUNT (*) tbs_fragments,
SUM (BYTES)/1024/1024 total_tbs_free_bytes,
MAX (BYTES)/1024/1024 max_free_chunk_bytes
FROM dba_free_space
GROUP BY tablespace_name),
AUTOEXTEND AS
(SELECT tablespace_name, SUM (size_to_grow) total_growth_tbs
FROM (SELECT tablespace_name, SUM (maxbytes)/1024/1024 size_to_grow
FROM dba_data_files
WHERE autoextensible = 'YES'
GROUP BY tablespace_name
UNION
SELECT tablespace_name, SUM (BYTES)/1024/1024 size_to_grow
FROM dba_data_files
WHERE autoextensible = 'NO'
GROUP BY tablespace_name)
GROUP BY tablespace_name)
SELECT c.instance_name,a.tablespace_name Tablespace,
CASE tbs_auto.autoextensible
WHEN 'YES'
THEN 'YES'
ELSE 'NO'
END AS autoextensible,
files.tbs_files files_in_tablespace,
files.total_tbs_bytes total_tablespace_space,
(files.total_tbs_bytes - fragments.total_tbs_free_bytes
) total_used_space,
fragments.total_tbs_free_bytes total_tablespace_free_space,
round(( ( (files.total_tbs_bytes - fragments.total_tbs_free_bytes)
/ files.total_tbs_bytes
)
* 100
)) total_used_pct,
round(((fragments.total_tbs_free_bytes / files.total_tbs_bytes) * 100
)) total_free_pct
FROM dba_tablespaces a,v$instance c , files, fragments, AUTOEXTEND, tbs_auto
WHERE a.tablespace_name = files.tablespace_name
AND a.tablespace_name = fragments.tablespace_name
AND a.tablespace_name = AUTOEXTEND.tablespace_name
AND a.tablespace_name = tbs_auto.tablespace_name(+)
order by total_free_pct;

select a.tablespace_name, SIZEMB, USAGEMB, (SIZEMB - USAGEMB) FREEMB
from (select sum(bytes) / 1024 / 1024 SIZEMB, b.tablespace_name
from dba_data_files a, dba_tablespaces b
where a.tablespace_name = b.tablespace_name
and b.contents = 'UNDO'
group by b.tablespace_name) a,
(select c.tablespace_name, sum(bytes) / 1024 / 1024 USAGEMB
from DBA_UNDO_EXTENTS c
where status <> 'EXPIRED'
group by c.tablespace_name) b
where a.tablespace_name = b.tablespace_name;

select a.tablespace_name tablespace,
d.TEMP_TOTAL_MB,
sum (a.used_blocks * d.block_size) / 1024 / 1024 TEMP_USED_MB,
d.TEMP_TOTAL_MB - sum (a.used_blocks * d.block_size) / 1024 / 1024 TEMP_FREE_MB
from v$sort_segment a,
(
select b.name, c.block_size, sum (c.bytes) / 1024 / 1024 TEMP_TOTAL_MB
from v$tablespace b, v$tempfile c
where b.ts#= c.ts#
group by b.name, c.block_size
) d
where a.tablespace_name = d.name
group by a.tablespace_name, d.TEMP_TOTAL_MB;

SELECT
s.inst_id,
s.blocking_session,
s.sid,
s.serial#,
s.seconds_in_wait
FROM
gv$session s
WHERE
blocking_session IS NOT NULL;

select sid,inst_id,opname,totalwork,sofar,start_time,time_remaining
from gv$session_longops
where totalwork<>sofar
/

col session_id head 'Sid' form 9999
col object_name head "Table|Locked" form a30
col oracle_username head "Oracle|Username" form a10 truncate
col os_user_name head "OS|Username" form a10 truncate
col process head "Client|Process|ID" form 99999999
col mode_held form a15
select lo.session_id,lo.oracle_username,lo.os_user_name,
lo.process,do.object_name,
decode(lo.locked_mode,0, 'None',1, 'Null',2, 'Row Share (SS)',
3, 'Row Excl (SX)',4, 'Share',5, 'Share Row Excl (SSX)',6, 'Exclusive',
to_char(lo.locked_mode)) mode_held
from v$locked_object lo, dba_objects do
where lo.object_id = do.object_id
order by 1,5
/

-- Below script will provide the dependent queries getting triggered from a procedure. 

SELECT s.sql_id, s.sql_text
FROM gv$sqlarea s JOIN dba_objects o ON s.program_id = o.object_id
and o.object_name = '&procedure_name';

- Get sid from os pid ( server process)

col sid format 999999
col username format a20
col osuser format a15
select b.spid,a.sid, a.serial#,a.username, a.osuser
from v$session a, v$process b
where a.paddr= b.addr
and b.spid='&spid'
order by b.spid;

BEGIN
FOR r IN (select sid,serial# from v$session where username = 'TEST_ANB')
LOOP
EXECUTE IMMEDIATE 'alter system kill session ''' || r.sid
|| ',' || r.serial# || '''';
END LOOP;
END;
/

select 'alter system kill session ' ||''''||SID||','||SERIAL#||' immediate ;' from v$session
where sql_id='&sql_id';

FOR RAC

select 'alter system kill session ' ||''''||SID||','||SERIAL#||',@'||inst_id||''''||' immediate ;'
from gv$session where sql_id='&sql_id'

col username for a9
col sid for a8
set lines 299
select
s.inst_id,
decode(px.qcinst_id,NULL,s.username,
' - '||lower(substr(s.program,length(s.program)-4,4) ) ) "Username",
decode(px.qcinst_id,NULL, 'QC', '(Slave)') "QC/Slave" ,
to_char( px.server_set) "Slave Set",
to_char(s.sid) "SID",
decode(px.qcinst_id, NULL ,to_char(s.sid) ,px.qcsid) "QC SID",
px.req_degree "Requested DOP",
px.degree "Actual DOP", p.spid
from
gv$px_session px,
gv$session s, gv$process p
where
px.sid=s.sid (+) and
px.serial#=s.serial# and
px.inst_id = s.inst_id
and p.inst_id = s.inst_id
and p.addr=s.paddr
order by 5 , 1 desc
/

It will generate kill session statements for all snipped sessions:

select 'alter system kill session '''||sid||','||serial#||''' immediate;' from v$session where status='SNIPED' ;

Queries in last 1 hour ( Run from Toad, for proper view)

 

Select module,parsing_schema_name,inst_id,sql_id,CHILD_NUMBER,sql_plan_baseline,sql_profile,plan_hash_value,sql_fulltext,
to_char(last_active_time,'DD/MM/YY HH24:MI:SS' ),executions, elapsed_time/executions/1000/1000,
rows_processed,sql_plan_baseline from gv$sql where last_active_time>sysdate-1/24
and executions <> 0 order by elapsed_time/executions desc


SET PAGESIZE 1000
SET VERIFY OFF

COLUMN owner FORMAT A20
COLUMN username FORMAT A20
COLUMN object_owner FORMAT A20
COLUMN object_name FORMAT A30
COLUMN locked_mode FORMAT A15

SELECT b.inst_id,
b.session_id AS sid,
NVL(b.oracle_username, '(oracle)') AS username,
a.owner AS object_owner,
a.object_name,
Decode(b.locked_mode, 0, 'None',
1, 'Null (NULL)',
2, 'Row-S (SS)',
3, 'Row-X (SX)',
4, 'Share (S)',
5, 'S/Row-X (SSX)',
6, 'Exclusive (X)',
b.locked_mode) locked_mode,
b.os_user_name
FROM dba_objects a,
gv$locked_object b
WHERE a.object_id = b.object_id
ORDER BY 1, 2, 3, 4;

SET PAGESIZE 14
SET VERIFY ON

Current open cursor

select a.value, s.username, s.sid, s.serial#
from v$sesstat a, v$statname b, v$session s
where a.statistic# = b.statistic# and s.sid=a.sid
and b.name = 'opened cursors current';

 Max allowed open cursor and total open cursor

select max(a.value) as highest_open_cur, p.value as max_open_cur
from v$sesstat a, v$statname b, v$parameter p
where a.statistic# = b.statistic# and b.name = 'opened cursors current'
and p.name= 'open_cursors'
group by p.value;

select c.username,a.SAMPLE_TIME, a.SQL_OPNAME, a.SQL_EXEC_START, a.program, a.module, a.machine, b.SQL_TEXT
from DBA_HIST_ACTIVE_SESS_HISTORY a, dba_hist_sqltext b, dba_users c
where a.SQL_ID = b.SQL_ID(+)
and a.user_id=c.user_id
and c.username='&username'
order by a.SQL_EXEC_START asc;

SELECT ROUND((1-(phy.value / (cur.value + con.value)))*100,2) "Cache Hit Ratio"
FROM v$sysstat cur, v$sysstat con, v$sysstat phy
WHERE cur.name = 'db block gets'
AND con.name = 'consistent gets'
AND phy.name = 'physical reads'
/

select username users, round(DISK_READS/Executions) DReadsExec,Executions Exec, DISK_READS DReads,sql_text
from gv$sqlarea a, dba_users b
where a.parsing_user_id = b.user_id
and Executions > 0
and DISK_READS > 100000
order by 2 desc;

set lines 123
col USERNAME for a15
col OSUSER for a8
col MACHINE for a15
col PROGRAM for a20
select b.spid, a.username, a.program , a.osuser ,a.machine, a.sid, a.serial#, a.status from gv$session a, gv$process b
where addr=paddr(+) and sid=&sid;

select sid, sql_id,serial#, status, username, program
from v$session
where PLSQL_ENTRY_OBJECT_ID in (select object_id
from dba_objects
where object_name in ('&PROCEDURE_NAME'));

col object_name format a30
col to_total format 999.99

SELECT owner, object_name, object_type, count, (count / value) * 100 to_total
FROM (
SELECT a.owner, a.object_name, a.object_type,
count(*) count
FROM dba_objects a,
x$bh b
WHERE a.object_id = b.obj
and a.owner not in ('SYS', 'SYSTEM')
GROUP BY a.owner, a.object_name, a.object_type
ORDER BY 4),
v$parameter
WHERE name = 'db_cache_size'
AND (count / value) * 100 > .005
ORDER BY to_total desc
/

select state,UNDOBLOCKSDONE,UNDOBLOCKSTOTAL,
UNDOBLOCKSDONE/UNDOBLOCKSTOTAL*100
from gv$fast_start_transactions;

alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss';
select usn, state, undoblockstotal "Total", undoblocksdone "Done", undoblockstotal-undoblocksdone "ToDo",
decode(cputime,0,'unknown',
sysdate+(((undoblockstotal-undoblocksdone) / (undoblocksdone / cputime)) / 86400)) "Estimated time to complete"
from v$fast_start_transactions;

select a.sid, a.username, b.xidusn, b.used_urec, b.used_ublk
from v$session a, v$transaction b
where a.saddr=b.ses_addr
order by 5 desc;

set lines 150
set pages 500
col table_name for a20
col column_name for a20
select a.object_name table_name, c.column_name,equality_preds, equijoin_preds, range_preds, like_preds
from dba_objects a, col_usage$ b, dba_tab_columns c
where a.object_id=b.OBJ#
and c.COLUMN_ID=b.INTCOL#
and a.object_name=c.table_name
and b.obj#=a.object_id
and a.object_name='&table_name'
and a.object_type='TABLE'
and a.owner='&owner'
order by 3 desc,4 desc, 5 desc;

col ksbddidn for a15
col ksmfsnam for a20
col ksbdddsc for a60
set lines 150 pages 5000
SELECT ksbdd.ksbddidn, ksmfsv.ksmfsnam, ksbdd.ksbdddsc
FROM x$ksbdd ksbdd, x$ksbdp ksbdp, x$ksmfsv ksmfsv
WHERE ksbdd.indx = ksbdp.indx
AND ksbdp.addr = ksmfsv.ksmfsadr
ORDER BY ksbdd.ksbddidn;

select
length(addr)*4 || '-bits' word_length
from
v$process
where
ROWNUM =1;

select
samp.dbid,
fu.name,
samp.version,
detected_usages,
total_samples,
decode(to_char(last_usage_date, 'MM/DD/YYYY, HH:MI:SS'),
NULL, 'FALSE',
to_char(last_sample_date, 'MM/DD/YYYY, HH:MI:SS'), 'TRUE',
'FALSE')
currently_used,
first_usage_date,
last_usage_date,
aux_count,
feature_info,
last_sample_date,
last_sample_period,
sample_interval,
mt.description
from
wri$_dbu_usage_sample samp,
wri$_dbu_feature_usage fu,
wri$_dbu_feature_metadata mt
where
samp.dbid = fu.dbid and
samp.version = fu.version and
fu.name = mt.name and
fu.name not like '_DBFUS_TEST%' and /* filter out test features */
bitand(mt.usg_det_method, 4) != 4 /* filter out disabled features */;

select OPERATION_NAME, DEFAULT_VALUE from
V$OPTIMIZER_PROCESSING_RATE where OPERATION_NAME
in ('IO_BYTES_PER_SEC','CPU_BYTES_PER_SEC', 'CPU_ROWS_PER_SEC');

Create a directory pointing to asm diskgroup( for dumpfiles)

SQL> create directory SOURCE_DUMP as '+NEWTST/TESTDB2/TEMPFILE';
Directory created

Create a directory pointing to a normal filesystem ( required for logfiles)

SQL> create directory EXPLOG as '/export/home/oracle';
Directory created.
export parfile

dumpfile=test.dmp
logfile=EXPLOG:test.log
directory=SOURCE_DUMP
tables=dbatest.EMPTAB
exclude=statistics
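
The parfile above can then be passed to expdp. A minimal run sketch, assuming the parameters are saved as exp_emptab.par and executed as a user with export privileges (both names here are illustrative):

expdp system parfile=exp_emptab.par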

First get the child number of the sql_id. One sql_id can have multiple child numbers (one for each plan_hash_value).

SQL> select sql_id,child_number,plan_hash_value from gv$sql where sql_id='9n2a2c8pvu6bm';
SQL_ID         CHILD_NUMBER PLAN_HASH_VALUE
-------------  ------------ ---------------
9n2a2c8pvu6bm   1             13761463

Now get the explain plan for cursor:

SELECT * from TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sqlid',&child_number));

set lines 200

SELECT * FROM table(DBMS_XPLAN.DISPLAY_AWR('&sql_id'));

col sql_text form a80
set lines 120
select sql_text from gv$sqltext where hash_value=
(select sql_hash_value from gv$session where sid=&1 and inst_id=&inst_id)
order by piece
/

Generate explain plan
-- Syntax EXPLAIN PLAN FOR < SQL STATEMENT> ;

explain plan for
select count(*) from dbaclass;
View explain plan 

select * from table(dbms_xplan.display);

--- SYNTAX

--  SELECT * FROM   TABLE(DBMS_XPLAN.display_sql_plan_baseline(plan_name=>'<SQL BASELINE NAME>'));

SELECT * FROM   TABLE(DBMS_XPLAN.display_sql_plan_baseline(plan_name=>'SQL_PLAN_gbhrw1v44209a5b2f7514'));

SELECT
sql_id,
b. LAST_CAPTURED,
t.sql_text sql_text,
b.HASH_VALUE,
b.name bind_name,
b.value_string bind_value
FROM
gv$sql t
JOIN
gv$sql_bind_capture b using (sql_id)
WHERE
b.value_string is not null
AND
sql_id='&sqlid'
/

First get the address, hash_value of the sql_id

select ADDRESS, HASH_VALUE from V$SQLAREA where SQL_ID like '5qd8a442c328k';

ADDRESS          HASH_VALUE
---------------  ------------
C000007067F39FF0  4000666812

Now flush the query

SQL> exec DBMS_SHARED_POOL.PURGE ('C000007067F39FF0, 4000666812', 'C');

Note : For RAC, the same needs to be executed on all the nodes.

--- CREATE THE BELOW TRIGGER TO ENABLE TRACE FOR ALL SESSIONS OF A USER (SCOTT)
CREATE OR REPLACE TRIGGER USER_TRACE_TRG
AFTER LOGON ON DATABASE
BEGIN
    IF USER = 'SCOTT'
  THEN
    execute immediate 'alter session set events ''10046 trace name context forever, level 12''';
  END IF;
EXCEPTION
WHEN OTHERS THEN
NULL;
END;

EXEC DBMS_SYSTEM.set_sql_trace_in_session(sid=>321, serial#=>1234, sql_trace=>FALSE);

Get the trace file name

SELECT p.tracefile FROM v$session s JOIN v$process p ON s.paddr = p.addr WHERE s.sid = 321;
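
Optionally, the raw 10046 trace file can be formatted with tkprof; a minimal sketch, with illustrative trace and output file names:

tkprof /u01/app/oracle/diag/rdbms/PROD/PROD/trace/PROD_ora_12345.trc /tmp/PROD_ora_12345.txt sys=no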

 

 

begin
dbms_sqldiag.dump_trace(p_sql_id=>'dmx08r6ayx800',
p_child_number=>0,
p_component=>'Compiler',
p_file_id=>'TEST_OBJ3_TRC');
END;
/

alter system set events 'sql_trace [sql:8krc88r46raff]';

select module,parsing_schema_name,inst_id,sql_id,plan_hash_value,child_number,sql_fulltext,
to_char(last_active_time,'DD/MM/YY HH24:MI:SS' ),sql_plan_baseline,executions,
elapsed_time/executions/1000/1000,rows_processed from gv$sql
where sql_id in ('&sql_id');

-- Set the current listener to the one you want to trace

LSNRCTL> set cur LISTENER_TEST

-- Enable Trace:

LSNRCTL> set trc_level ADMIN
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_TEST)))
LISTENER_TEST parameter "trc_level" set to admin
The command completed successfully
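
-- Disable Trace (once done, tracing can be switched off again for the same listener):

LSNRCTL> set trc_level OFF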

set lines 2000
SELECT SID, b.NAME, ROUND(a.VALUE/(1024*1024),2) MB FROM
v$sesstat a, v$statname b
WHERE (NAME LIKE '%session uga memory%' OR NAME LIKE '%session pga memory%')
AND a.statistic# = b.statistic# order by ROUND(a.VALUE/(1024*1024),2) desc

set pagesize 200
set linesize 120
col segment_name format a20
col owner format a10
select segment_name,object_type,total_physical_reads
from ( select owner||'.'||object_name as segment_name,object_type,
value as total_physical_reads
from v$segment_statistics
where statistic_name in ('physical reads')
order by total_physical_reads desc)
where rownum <=10;

SELECT SUBSTR(t.name,1,50) AS file_name,
f.phyblkrd AS blocks_read,
f.phyblkwrt AS blocks_written,
f.phyblkrd + f.phyblkwrt AS total_io
FROM v$tempstat f,v$tempfile t
WHERE t.file# = f.file#
ORDER BY f.phyblkrd + f.phyblkwrt DESC;

 

select * from (SELECT u.tablespace, s.username, s.sid, s.serial#, s.logon_time, program, u.extents, ((u.blocks*8)/1024) as MB,
i.inst_id,i.host_name
FROM gv$session s, gv$sort_usage u ,gv$instance i
WHERE s.saddr=u.session_addr and u.inst_id=i.inst_id order by MB DESC) a where rownum<10;

select round(used.bytes /1024/1024 ,2) used_mb
, round(free.bytes /1024/1024 ,2) free_mb
, round(tot.bytes /1024/1024 ,2) total_mb
from (select sum(bytes) bytes
from v$sgastat
where name != 'free memory') used
, (select sum(bytes) bytes
from v$sgastat
where name = 'free memory') free
, (select sum(bytes) bytes
from v$sgastat) tot
/

Query to get the list of top running SQLs in the past, between sysdate-1 and sysdate-23/24. You can change the range accordingly.

SELECT active_session_history.user_id,
dba_users.username,
sqlarea.sql_text,
SUM(active_session_history.wait_time +
active_session_history.time_waited)/1000000 ttl_wait_time_in_seconds
FROM v$active_session_history active_session_history,
v$sqlarea sqlarea,
dba_users
WHERE active_session_history.sample_time BETWEEN SYSDATE - 1 AND SYSDATE-23/24
AND active_session_history.sql_id = sqlarea.sql_id
AND active_session_history.user_id = dba_users.user_id
and dba_users.username not in ('SYS','DBSNMP')
GROUP BY active_session_history.user_id,sqlarea.sql_text, dba_users.username
ORDER BY 4 DESC
/

This query will list the blocking session details between SYSDATE - 1 AND SYSDATE - 23/24 (past).

set pagesize 50
set linesize 120
col sql_id format a15
col inst_id format '9'
col sql_text format a50
col module format a10
col blocker_ses format '999999'
col blocker_ser format '999999'
SELECT distinct
a.sql_id ,
a.inst_id,
a.blocking_session blocker_ses,
a.blocking_session_serial# blocker_ser,
a.user_id,
s.sql_text,
a.module,a.sample_time
FROM GV$ACTIVE_SESSION_HISTORY a,
gv$sql s
where a.sql_id=s.sql_id
and blocking_session is not null
and a.user_id <> 0 -- exclude SYS user
and a.sample_time BETWEEN SYSDATE - 1 AND SYSDATE-23/24
/

col program form a30 heading "Program"
col CPUMins form 99990 heading "CPU in Mins"
select rownum as rank, a.*
from (
SELECT v.sid, program, v.value / (100 * 60) CPUMins
FROM v$statname s , v$sesstat v, v$session sess
WHERE s.name = 'CPU used by this session'
and sess.sid = v.sid
and v.statistic#=s.statistic#
and v.value>0
ORDER BY v.value DESC) a
where rownum < 11;

For standalone db:

select sid Waiter, p1raw,
substr(rawtohex(p1),1,30) Handle,
substr(rawtohex(p2),1,30) Pin_addr
from v$session_wait where wait_time=0 and event like '%library cache%';

For RAC DB:

select a.sid Waiter,b.SERIAL#,a.event,a.p1raw,
substr(rawtohex(a.p1),1,30) Handle,
substr(rawtohex(a.p2),1,30) Pin_addr
from v$session_wait a,v$session b where a.sid=b.sid
and a.wait_time=0 and a.event like 'library cache%';

or

set lines 152
col sid for a9999999999999
col name for a40
select a.sid,b.name,a.value,b.class
from gv$sesstat a , gv$statname b
where a.statistic#=b.statistic#
and name like '%library cache%';

select to_char(SESSION_ID,'999') sid ,
substr(LOCK_TYPE,1,30) Type,
substr(lock_id1,1,23) Object_Name,
substr(mode_held,1,4) HELD, substr(mode_requested,1,4) REQ,
lock_id2 Lock_addr
from dba_lock_internal
where
mode_requested != 'None'
and mode_requested != mode_held
and session_id in ( select sid
from v$session_wait where wait_time=0
and event like '%library cache%') ;

set lines 299
column object format a30
column owner format a10
select * from gv$access where owner='&OWNER' and object='&object_name'
/

select sql_id,object_owner,object_name from V$SQL_PLAN where
operation='TABLE ACCESS' and
options='FULL' and
object_owner not in ('SYS','SYSTEM','DBSNMP');

select sum(gets) as "Gets", sum(getmisses) as "Misses",
(1-(sum(getmisses)/sum(gets)))*100 as "CACHE HIT RATIO"
from gv$rowcache;

NOTE - CACHE HIT RATIO SHOULD BE MORE THAN 95 PERCENT.

column mux format a18 heading 'Mutex Type' trunc;
column loc format a32 heading 'Location' trunc;
column sleeps format 9,999,999,990 heading 'Sleeps';
column wt format 9,999,990.9 heading 'Wait |Time (s)';
select e.mutex_type mux
, e.location loc
, e.sleeps - nvl(b.sleeps, 0) sleeps
, (e.wait_time - nvl(b.wait_time, 0))/1000000 wt
from DBA_HIST_MUTEX_SLEEP b
, DBA_HIST_MUTEX_SLEEP e
where b.snap_id(+) = &bid
and e.snap_id = &eid
and b.dbid(+) = e.dbid
and b.instance_number(+) = e.instance_number
and b.mutex_type(+) = e.mutex_type
and b.location(+) = e.location
and e.sleeps - nvl(b.sleeps, 0) > 0
order by e.wait_time - nvl(b.wait_time, 0) desc;

SELECT schema, sql_text, disk_reads, round(cpu,2) FROM
(SELECT s.parsing_schema_name schema, t.sql_id, t.sql_text, t.disk_reads,
t.sorts, t.cpu_time/1000000 cpu, t.rows_processed, t.elapsed_time
FROM v$sqlstats t join v$sql s on(t.sql_id = s.sql_id)
WHERE parsing_schema_name = 'SCOTT'
ORDER BY disk_reads DESC)
WHERE rownum <= 5;

col OBJECT_NAME for a30
col owner for a12
with bh_lc as
(select
lc.addr, lc.child#, lc.gets, lc.misses, lc.immediate_gets, lc.immediate_misses,
lc.spin_gets, lc.sleeps,
bh.hladdr, bh.tch tch, bh.file#, bh.dbablk, bh.class, bh.state, bh.obj
from
v$session_wait sw,
v$latchname ld,
v$latch_children lc,
x$bh bh
where lc.addr =sw.p1raw
and sw.p2= ld.latch#
and ld.name='cache buffers chains'
and lower(sw.event) like '%latch%'
and bh.hladdr=lc.addr
)
select bh_lc.hladdr, bh_lc.tch, o.owner, o.object_name, o.object_type,
bh_lc.child#,
bh_lc.gets, bh_lc.misses, bh_lc.immediate_gets,
bh_lc.immediate_misses, spin_gets, sleeps
from
bh_lc, dba_objects o
where bh_lc.obj = o.data_object_id(+)
order by 1,2 desc;

Set lines 160 pages 100
Column event format A35
Column name format A35
select x.event, x.sql_hash_value,
case when x.event like 'latch%' then
l.name
else ' '
end name,
x.cnt from (
select substr(w.event, 1, 28) event, s.sql_hash_value, w.p2,count(*) cnt
from v$session_wait w, v$session s, v$process p
where s.sid=w.sid
and p.addr = s.paddr
and s.username is not null
and w.event not like '%pipe%'
and w.event not like 'SQL*%'
group by substr(w.event, 1, 28), sql_hash_value,w.p2
) x,
v$latch l
where
x.p2 = l.latch#(+)
order by cnt;

Set lines 160 pages 100
Select * from x$ksmlru order by ksmlrnum;

Create tuning task

set long 1000000000
DECLARE
l_sql_tune_task_id VARCHAR2(100);
BEGIN
l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
sql_id => 'apwfwjhgc9sk8',
scope => DBMS_SQLTUNE.scope_comprehensive,
time_limit => 500,
task_name => 'apwfwjhgc9sk8_tuning_task_1',
description => 'Tuning task for statement apwfwjhgc9sk8');
DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/

Execute tuning task

EXEC DBMS_SQLTUNE.execute_tuning_task(task_name => 'apwfwjhgc9sk8_tuning_task_1');

Generate report

SET LONG 10000000;
SET PAGESIZE 100000000
SET LINESIZE 200
SELECT DBMS_SQLTUNE.report_tuning_task('apwfwjhgc9sk8_tuning_task_1') AS recommendations FROM dual;
SET PAGESIZE 24
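
Once the recommendations are reviewed, the tuning task can be dropped (same task name as above):

EXEC DBMS_SQLTUNE.drop_tuning_task(task_name => 'apwfwjhgc9sk8_tuning_task_1');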

- STATISTICS_LEVEL should be TYPICAL/ALL.

SQL> show parameter statistics_level

NAME TYPE VALUE
------------------------------------ -------------------------------- --------------------------
statistics_level string TYPICAL

select * from v$sga_target_advice order by sga_size;

 

 

SELECT shared_pool_size_for_estimate "Size of Shared Pool in MB",
shared_pool_size_factor "Size Factor",
estd_lc_time_saved "Time Saved in sec" FROM v$shared_pool_advice;

cd $ORACLE_HOME/rdbms/admin

SQL> @addmrpt.sql

Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 1058
Begin Snapshot Id specified: 1058

Enter value for end_snap: 1059
End Snapshot Id specified: 1059

col member for a56
set pagesize 299
set lines 299
select l.group#, l.thread#,
f.member,
l.archived,
l.status,
(bytes/1024/1024) "Size (MB)"
from
v$log l, v$logfile f
where f.group# = l.group#
order by 1,2
/

set heading off;
set echo off;
Set pages 999;
set long 90000;
spool ddl_tablespace.sql
select dbms_metadata.get_ddl('TABLESPACE',tb.tablespace_name) from dba_tablespaces tb;
spool off

set feedback off pages 0 long 900000 lines 20000 pagesize 20000 serveroutput on
accept USERNAME prompt "Enter username :"
--This line add a semicolon at the end of each statement
execute dbms_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR',true);
-- This will generate the DDL for the user and add his objects,system and role grants
SELECT DBMS_METADATA.GET_DDL('USER',username) as script from DBA_USERS where username='&username'
UNION ALL
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT',grantee)as script from DBA_SYS_PRIVS where grantee='&username' and rownum=1
UNION ALL
SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT',grantee)as script from DBA_ROLE_PRIVS where grantee='&username' and rownum=1
UNION ALL
SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT',grantee)as script from DBA_TAB_PRIVS where grantee='&username' and rownum=1;

col "Database Size" format a20
col "Free space" format a20
col "Used space" format a20
select round(sum(used.bytes) / 1024 / 1024 / 1024 ) || ' GB' "Database Size"
, round(sum(used.bytes) / 1024 / 1024 / 1024 ) -
round(free.p / 1024 / 1024 / 1024) || ' GB' "Used space"
, round(free.p / 1024 / 1024 / 1024) || ' GB' "Free space"
from (select bytes
from v$datafile
union all
select bytes
from v$tempfile
union all
select bytes
from v$log) used
, (select sum(bytes) as p
from dba_free_space) free
group by free.p
/

Set lines 2000
col NAME for a45
col DESCRIPTION for a100
SELECT name,description from SYS.V$PARAMETER WHERE name LIKE '\_%' ESCAPE '\'
/

set lines 200
COL ACL_OWNER FOR A12
COL ACL FOR A67
COL HOST FOR A34
col PRINCIPAL for a20
col PRIVILEGE for a13
select ACL_OWNER,ACL,HOST,LOWER_PORT,UPPER_PORT FROM dba_network_acls;

select ACL_OWNER,ACL,PRINCIPAL,PRIVILEGE from dba_network_acl_privileges;

set lines 299
SELECT TO_CHAR(TRUNC(FIRST_TIME),'Mon DD') "DG Date",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'00',1,0)),'9999') "12AM",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'01',1,0)),'9999') "01AM",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'02',1,0)),'9999') "02AM",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'03',1,0)),'9999') "03AM",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'04',1,0)),'9999') "04AM",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'05',1,0)),'9999') "05AM",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'06',1,0)),'9999') "06AM",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'07',1,0)),'9999') "07AM",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'08',1,0)),'9999') "08AM",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'09',1,0)),'9999') "09AM",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'10',1,0)),'9999') "10AM",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'11',1,0)),'9999') "11AM",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'12',1,0)),'9999') "12PM",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'13',1,0)),'9999') "1PM",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'14',1,0)),'9999') "2PM",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'15',1,0)),'9999') "3PM",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'16',1,0)),'9999') "4PM",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'17',1,0)),'9999') "5PM",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'18',1,0)),'9999') "6PM",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'19',1,0)),'9999') "7PM",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'20',1,0)),'9999') "8PM",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'21',1,0)),'9999') "9PM",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'22',1,0)),'9999') "10PM",
TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'23',1,0)),'9999') "11PM"
FROM V$LOG_HISTORY
GROUP BY TRUNC(FIRST_TIME)
ORDER BY TRUNC(FIRST_TIME) DESC
/

col name format a10
col username format a8
col osuser format a8
col start_time format a17
col status format a12
tti 'Active transactions'

select s.sid,username,t.start_time, r.name, t.used_ublk "USED BLKS",
decode(t.space, 'YES', 'SPACE TX',
decode(t.recursive, 'YES', 'RECURSIVE TX',
decode(t.noundo, 'YES', 'NO UNDO TX', t.status)
)) status
from sys.v_$transaction t, sys.v_$rollname r, sys.v_$session s
where t.xidusn = r.usn
and t.ses_addr = s.saddr
/

-- Return code 1017 ( INVALID LOGIN ATTEMPT)
-- Return code 28000 ( ACCOUNT LOCKED)

set pagesize 1299
set lines 299
col username for a15
col userhost for a13
col timestamp for a39
col terminal for a23
SELECT username,userhost,terminal,timestamp,returncode
FROM dba_audit_session
WHERE username='&USER_NAME' and returncode in (1017,28000);

--- Reference metalink id - 332494.1
-- Save as duplicate.sql and run as @duplicate.sql

REM This is an example SQL*Plus Script to detect duplicate rows from
REM a table.
REM
set echo off
set verify off heading off
undefine t
undefine c
prompt
prompt
prompt Enter name of table with duplicate rows
prompt
accept t prompt 'Table: '
prompt
select 'Table '||upper('&&t') from dual;
describe &&t
prompt
prompt Enter name(s) of column(s) which should be unique. If more than
prompt one column is specified, you MUST separate with commas.
prompt
accept c prompt 'Column(s): '
prompt
select &&c from &&t
where rowid not in (select min(rowid) from &&t group by &&c)
/

generate resize datafile script without ORA-03297 error
 

select 'alter database datafile'||' '''||file_name||''''||' resize '||round(highwater+2)||' '||'m'||';' from (
select /*+ rule */
a.tablespace_name,
a.file_name,
a.bytes/1024/1024 file_size_MB,
(b.maximum+c.blocks-1)*d.db_block_size/1024/1024 highwater
from dba_data_files a        ,
(select file_id,max(block_id) maximum
from dba_extents
group by file_id) b,
dba_extents c,
(select value db_block_size
from v$parameter
where name='db_block_size') d
where a.file_id=  b.file_id
and   c.file_id  = b.file_id
and   c.block_id = b.maximum
order by a.tablespace_name,a.file_name);

select to_char(creation_time, 'MM-RRRR') "Month", sum(bytes)/1024/1024/1024 "Growth in GB"
from sys.v_$datafile
where to_char(creation_time,'RRRR')='&YEAR_IN_YYYY_FORMAT'
group by to_char(creation_time, 'MM-RRRR')
order by to_char(creation_time, 'MM-RRRR');

select to_char(startup_time, 'DD-MM-YYYY HH24:MI:SS'),floor(sysdate-startup_time) DAYS from v$Instance;

-- Get current scn value:

select current_scn from v$database;

-- Get scn value at particular time:

select timestamp_to_scn('19-JAN-08:22:00:10') from dual;

-- Get timestamp from scn:

select scn_to_timestamp(224292)from dual;

-----  Connect to the user and run this.

BEGIN
FOR i IN (SELECT trigger_name
FROM user_triggers) LOOP

EXECUTE IMMEDIATE 'ALTER TRIGGER ' || i.trigger_name || ' DISABLE';
END LOOP;
END;
/
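
To enable the triggers again later, the same loop can be reused with ENABLE instead of DISABLE (a sketch mirroring the block above):

BEGIN
FOR i IN (SELECT trigger_name
FROM user_triggers) LOOP
EXECUTE IMMEDIATE 'ALTER TRIGGER ' || i.trigger_name || ' ENABLE';
END LOOP;
END;
/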

 

 

select table_name,
to_number(extractvalue(dbms_xmlgen.getXMLtype('select /*+ PARALLEL(8) */ count(*) cnt from "&&SCHEMA_NAME".'||table_name),'/ROWSET/ROW/CNT'))
rows_in_table from dba_TABLES
where owner='&&SCHEMA_NAME';

--- Index monitoring helps to find whether indexes are really in use or not. Unused indexes can be dropped to avoid overhead.
-- First enable monitoring usage for the indexes.

alter index siebel.S_ASSET_TEST monitoring usage;

--Below query to find the index usage:

select * from v$object_usage;
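
-- Once the observation window is over, monitoring can be switched off again for the same index (a minimal sketch):

alter index siebel.S_ASSET_TEST nomonitoring usage;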

We can spool the output of a SQL query to HTML format:

set pages 5000
SET MARKUP HTML ON SPOOL ON PREFORMAT OFF ENTMAP ON -
HEAD "<TITLE>EMPLOYEE REPORT</TITLE> -
<STYLE type='text/css'> -
<!-- BODY {background: #FFFFC6} --> -
</STYLE>" -
BODY "TEXT='#FF00Ff'" -
TABLE "WIDTH='90%' BORDER='5'"
spool report.html
Select * from scott.emp;
spool off
exit
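
If you stay in the same SQL*Plus session instead of exiting, switch the HTML markup off again before running further queries:

SET MARKUP HTML OFF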

--- From 12c onward

set lines 2000
select patch_id,status,description from dba_registry_sqlpatch;

--- For 11g and below:

set lines 2000
select * from dba_registry_history;

-- Find the orphaned Data Pump jobs:

SELECT owner_name, job_name, rtrim(operation) "OPERATION",
rtrim(job_mode) "JOB_MODE", state, attached_sessions
FROM dba_datapump_jobs
WHERE job_name NOT LIKE 'BIN$%' and state='NOT RUNNING'
ORDER BY 1,2;

-- Drop the tables

SELECT 'drop table ' || owner_name || '.' || job_name || ';'
FROM dba_datapump_jobs WHERE state='NOT RUNNING' and job_name NOT LIKE 'BIN$%'

col comp_id for a10
col comp_name for a56
col version for a12
col status for a10
set pagesize 200
set lines 200
set long 999
select comp_id,comp_name,version,status from dba_registry;

set pagesize 200
set lines 200
select parameter,value from v$nls_parameters where parameter like 'NLS_%CHAR%';

-- View current AWR retention period

select retention from dba_hist_wr_control;

-- Modify retention period to 7 days and interval to 30 min

execute dbms_workload_repository.modify_snapshot_settings (interval => 30, retention => 10080);

NOTE - 7 DAYS = 7*24*60 = 10080 minutes
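
-- Verify the new settings (a quick check against the same view as above)

select snap_interval, retention from dba_hist_wr_control;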

SELECT d.undo_size / (1024 * 1024) "ACTUAL UNDO SIZE [MByte]",
SUBSTR(e.value, 1, 25) "UNDO RETENTION [Sec]",
(TO_NUMBER(e.value) * TO_NUMBER(f.value) * g.undo_block_per_sec) /
(1024 * 1024) "NEEDED UNDO SIZE [MByte]"
FROM (SELECT SUM(a.bytes) undo_size
FROM gv$datafile a, gv$tablespace b, dba_tablespaces c
WHERE c.contents = 'UNDO'
AND c.status = 'ONLINE'
AND b.name = c.tablespace_name
AND a.ts# = b.ts#) d,
gv$parameter e,
gv$parameter f,
(SELECT MAX(undoblks / ((end_time - begin_time) * 3600 * 24)) undo_block_per_sec
FROM v$undostat) g
WHERE e.name = 'undo_retention'
AND f.name = 'db_block_size';

-- Find the AWR snapshot details.

select snap_id,begin_interval_time,end_interval_time from sys.wrm$_snapshot order by snap_id

-- Purge snapshot between snapid 612 to 700

execute dbms_workload_repository.drop_snapshot_range(low_snap_id =>612 , high_snap_id =>700);

-- Verify again

select snap_id,begin_interval_time,end_interval_time from sys.wrm$_snapshot order by snap_id

-- Check the current moving window baseline size:

select BASELINE_TYPE,MOVING_WINDOW_SIZE from dba_hist_baseline;

-- Modify window_size to (7 days):

execute dbms_workload_repository.modify_baseline_window_size(window_size=> 7);

set pagesize 200
set lines 200
col db_link for a19
set long 999
SELECT db_link,
owner_id,
logged_on,
heterogeneous,
open_cursors,
in_transaction,
update_sent
FROM gv$dblink
ORDER BY db_link;

SELECT le.leseq "Current log sequence No",
100*cp.cpodr_bno/le.lesiz "Percent Full",
cp.cpodr_bno "Current Block No",
le.lesiz "Size of Log in Blocks"
FROM x$kcccp cp, x$kccle le
WHERE le.leseq =CP.cpodr_seq
AND bitand(le.leflg,24) = 8
/

select * from (
select c.table_name, co.column_name, co.position column_position
from user_constraints c, user_cons_columns co
where c.constraint_name = co.constraint_name
and c.constraint_type = 'R'
minus
select ui.table_name, uic.column_name, uic.column_position
from user_indexes ui, user_ind_columns uic
where ui.index_name = uic.index_name
)
order by table_name, column_position;

select
a.constraint_name cons_name
,a.table_name tab_name
,b.column_name cons_column
,nvl(c.column_name,'***No Index***') ind_column
from user_constraints a
join
user_cons_columns b on a.constraint_name = b.constraint_name
left outer join
user_ind_columns c on b.column_name = c.column_name
and b.table_name = c.table_name
where constraint_type = 'R'
order by 2,1;

set pagesize 200
set lines 200
col name for a21
col stat_name for a25
col value for a13
col comments for a56
select STAT_NAME,to_char(VALUE) as VALUE ,COMMENTS from v$osstat where
stat_name IN ('NUM_CPUS','NUM_CPU_CORES','NUM_CPU_SOCKETS')

union
select STAT_NAME,VALUE/1024/1024/1024 || ' GB' ,COMMENTS from
v$osstat where stat_name IN ('PHYSICAL_MEMORY_BYTES');

set heading off
set feedback off
select 'Incarnation Destination Configuration' from dual;
select '*************************************' from dual;
set heading on
set feedback on

select INCARNATION# INC#, RESETLOGS_CHANGE# RS_CHANGE#, RESETLOGS_TIME,
PRIOR_RESETLOGS_CHANGE# PRIOR_RS_CHANGE#, STATUS,
FLASHBACK_DATABASE_ALLOWED FB_OK from v$database_incarnation;

SELECT version FROM v$timezone_file;

SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;

set pagesize 2000
set lines 2000
set long 999
col path for a54
select name, path, header_status, total_mb, free_mb, trunc(bytes_read/1024/1024) read_mb, trunc(bytes_written/1024/1024) write_mb from v$asm_disk;

SELECT name, free_mb, total_mb, free_mb/total_mb*100 as percentage
FROM v$asm_diskgroup;

-----Dropping one disk:

alter diskgroup data drop disk DATA_ASM0001;

-----Dropping multiple disk:

alter diskgroup data drop disk DATA_ASM0001,DATA_ASM00002, DATA_ASM0003 rebalance power 100;

---- Monitoring the rebalance operation:

select * from v$asm_operation;

set pagesize 299
set lines 2999
select GROUP_NUMBER,OPERATION,STATE,POWER,
ACTUAL,EST_MINUTES from gv$asm_operation;

Runcluvfy.sh script is available after unzipping the grid software.

syntax - ./runcluvfy.sh stage -pre crsinst -n host1,host2,host3 -verbose

./runcluvfy.sh stage -pre crsinst -n classpredb1,classpredb2 -verbose

An ASM file can be copied to a remote ASM instance (diskgroup) using the asmcmd cp command.

SYNTAX - asmcmd> cp --port asm_port file_name remote_asm_user/remote_asm_pwd@remote_host.instance_name:TARGET_ASM_PATH

ASMCMD> cp --port 1521 s_srv_new21.dbf sys/oracle@172.20.17.69.+ASM1:+ARCL/s_srv_new21.dbf

To mount a diskgroup (this is instance specific; for mounting on all nodes, run the same on all nodes):

SQL>alter diskgroup DATA mount; 

or

asmcmd>mount DATA

To dismount a diskgroup (this is instance specific; for dismounting on all nodes, run the same on all nodes):

SQL>alter diskgroup DATA dismount; 

or

asmcmd>umount DATA

To mount/Dismount all the diskgroups

SQL>alter diskgroup ALL mount; 

SQL>alter diskgroup ALL dismount; 

 

-- To drop a diskgroup, make sure the diskgroup has been dismounted from all the remote nodes. It should be mounted only on the local node, where we will run the drop command.

drop diskgroup DSMREDOA including contents;

Clock Synchronization across the cluster nodes

cd $GRID_HOME/bin
cluvfy comp clocksync -n all

 Check whether ctss or ntp is running

crsctl check ctss
CRS-4700: The Cluster Time Synchronization Service is in Observer mode.

Observer means - Time sync between nodes are taken care by NTP
Active means - Time sync between nodes are taken care by CTSS

Check the asm disk labelling
#/etc/init.d/oracleasm querydisk /dev/sdn1
Device "/dev/sdn" is not marked as an ASM disk

Create asm disk
# /etc/init.d/oracleasm createdisk ARCDATA /dev/sdn1
Marking disk "ARCDATA" as an ASM disk: [ OK ]

Check the asm disk labelling
# /etc/init.d/oracleasm querydisk /dev/sdn1
Device "/dev/sdn1" is marked an ASM disk with the label "ARCDATA"

List the asm disks present
# /etc/init.d/oracleasm listdisks
ARCDATA

-- Oracle Grid Infrastructure installed for a standalone (non-RAC) server is known as Oracle Restart, where only the HAS (High Availability Services) component is available.

crsctl stop has

crsctl start has

For oracle 12c only:

ASMCMD> pwcreate --dbuniquename {db_unique_name} {file_path} {sys_password}

ASMCMD> pwcreate --dbuniquename PRDPRE +DATA/PWDFILE/pwdPRDPRE oracle

For all version.

orapwd file='+DATA/orapwPRODPRE' ENTRIES=10 DBUNIQUENAME='PRODPRE'

-- Default value of asm_power_limit.

SQL> show parameter asm_power_limit
NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------------------------
asm_power_limit                      integer     1

-- Check for ongoing rebalance operations and their power.

select INST_ID,GROUP_NUMBER, OPERATION, STATE, POWER, EST_RATE, EST_MINUTES from GV$ASM_OPERATION;

- Alter the asm rebalance. 

alter diskgroup SALDATA rebalance power 4;

-- list asm users 

ASMCMD> lspwusr
Username sysdba sysoper sysasm
SYS      TRUE   TRUE     TRUE
ASMSNMP  TRUE   FALSE    FALSE

 

-- Modify user password 

ASMCMD> orapwusr --modify asmsnmp
Enter password: ********

--- Run from Toad or SQL Developer

select * from V$ASM_DISK_IOSTAT;

-- Setting DBI_TRACE enables tracing of the asmcmd utility (asmcmd is Perl/DBI based)

$ export DBI_TRACE=1
$ asmcmd

-- Set the ASM SID before invoking asmcmd

$ export ORACLE_SID=+ASM
$ asmcmd

ASMCMD> orapwusr --modify --password sys
Enter password: ******
ASMCMD> exit

Alternatively, we can use orapwd to recreate the password file.
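
For ASM, a minimal sketch of recreating the password file with orapwd (assuming the +DATA diskgroup; the utility prompts for the SYS password):

orapwd file='+DATA/orapwasm' asm=y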

-- To find current asm spfile:

$asmcmd
ASMCMD> spget

--- To update asm spfile to a new pfile:

$asmcmd
ASMCMD> spset +OCR_VOTING/CLUSTER/ASMPARAMETERFILE/spfileASM.ora

ASMCMD> spcopy +DATA/ASM/ASMPARAMETERFILE/registry.263.3728290 +MGMT/spfileCopyASM.ora

-- SYNTAX FOR STOP DB
--- srvctl stop database -d db_name [-o stop_options] where stop_options is normal/immediate(default)/transactional/abort
e.g
srvctl stop database -d PRODB -o normal
srvctl stop database -d PRODB -o immediate
srvctl stop database -d PRODB -o transactional
srvctl stop database -d PRODB -o abort

-- SYNTAX FOR START DB
-- srvctl start database -d db_name [-o start_options] where start_option is nomount/mount/open(default)

e.g 
srvctl start database -d PRODB -o nomount
srvctl start database -d PRODB -o mount
srvctl start database -d PRODB -o open

SYNTAX FOR REMOVING DB SERVICE: 
---srvctl remove database -d db_unique_name [-f] [-y] [-v]
e.g:
srvctl remove database -d PRODB -f -y

SYNTAX FOR ADDING DB SERVICE :
-- srvctl add database -d db_unique_name -o ORACLE_HOME  [-p spfile]
e.g:
srvctl add database -d PRODB -o /u01/app/oracle/product/12.1.0.2/dbhome_1 -p +DATA/PRODDB/parameterfile/spfilePRODB.ora

SYNTAX FOR REMOVING INSTANCE
---srvctl remove instance -d DB_UNIQUE_NAME -i INSTANCE_NAME
e.g
srvctl remove instance -d PRODB -i PRODB1

SYNTAX FOR ADDING INSTANCE 
--- srvctl add instance -d db_unique_name -i inst_name -n node_name
e.g
srvctl add instance -d PRODB -i PRODB1 -n rachost1

SYNTAX FOR STOPPING INSTANCE
-- srvctl stop instance -d db_unique_name [-i "instance_name_list"]} [-o stop_options] [-f]
e.g
srvctl stop instance -d PRODB  -i PRODB1 

SYNTAX FOR STARTING INSTANCE
-- srvctl start instance -d db_unique_name  [-i "instance_name_list"} [-o start_options]
e.g
srvctl start instance -d PRODB -i PRODB1 

-- ENABLE - Reenables management by Oracle Restart for a component.
-- DISABLE - Disables management by Oracle Restart for a component.

srvctl enable instance -d DB_UNIQUE_NAME -i INSTANCE_NAME
srvctl disable instance -d DB_UNIQUE_NAME -i INSTANCE_NAME
srvctl enable database -d DB_UNIQUE_NAME
srvctl disable database -d DB_UNIQUE_NAME

SYNTAX -

srvctl relocate service -d {database_name} -s {service_name} -i {old_inst_name} -t {new_inst_name}

EXAMPLE:(Relocating service PRDB_SRV from PREDB2 to PREDB1)

srvctl relocate service -d PREDB -s PRDB_SVC -i PREDB2 -t PREDB1

Check the status of service

srvctl status service -d PREDB -s PRDB_SVC

SYNTAX:
---------
srvctl start service -d {DB_NAME} -s {SERVICE_NAME}
srvctl stop service -d {DB_NAME} -s {SERVICE_NAME}

EXAMPLE:
---------------
srvctl start service -d PREDB -s PRDB_SRV
srvctl stop service -d PREDB -s PRDB_SRV

ADDING A SERVICE:
--------------------
SYNTAX:
------------
srvctl add service -d {DB_NAME} -s {SERVICE_NAME} -r {"preferred_list"} -a {"available_list"} [-P {BASIC | NONE | PRECONNECT}]

EXAMPLE:
---------------
srvctl add service -d PREDB -s PRDB_SRV -r "PREDB1,PREDB2" -a "PREDB2" -P BASIC

REMOVING A SERVICE:
------------------------------------------

SYNTAX:
-------------
srvctl remove service -d {DB_NAME} -s {SERVICE_NAME}

EXAMPLE:
--------
srvctl remove service -d PREDB -s PRDB_SRV

-- set this to enable trace at os

SRVM_TRACE=true
export SRVM_TRACE

-- run any srvctl command
srvctl status database -d ORACL

-- setenv to set env variables.(ORCL is the db_unique_name)

srvctl setenv database -db ORCL -env "ORACLE_HOME=/oracle/app/oracle/product/12.1.0.2/dbhome_1"
srvctl setenv database -db ORCL -env "TNS_ADMIN=/oracle/app/oracle/product/12.1.0.2/dbhome_1/network/admin"

--getenv to view the env setting:

srvctl getenv database -db ORCL

-- check status of mgmtdb in oracle 12c RAC

srvctl status mgmtdb

-- stop and start MGMT db.

srvctl stop mgmtdb
srvctl start mgmtdb

srvctl config asm

srvctl config asm -detail

 Run as root user

$GRID_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.

$GRID_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.

$GRID_HOME/bin/cemutlo -n

or

$GRID_HOME/bin/olsnodes -c

-- stop crs ( run from root)

$GRID_HOME/bin/crsctl stop crs

-- start crs( run from root)

$GRID_HOME/bin/crsctl start crs

 

Find voting disk location

$GRID_HOME/bin/crsctl query css votedisk

Find OCR location.

$GRID_HOME/bin/ocrcheck

SYNTAX - $GRID_HOME/bin/crsctl query crs softwareversion

$GRID_HOME/bin/crsctl query crs softwareversion 

$GRID_HOME/bin/crsctl stat res -t

$GRID_HOME/bin/crsctl check crs

$GRID_HOME/bin/crsctl check cssd

$GRID_HOME/bin/crsctl check crsd

$GRID_HOME/bin/crsctl check evmd

$GRID_HOME/bin/oifcfg getif

app-ipmp0 172.21.39.128 global public
loypredbib0 172.16.3.192 global cluster_interconnect
loypredbib1 172.16.4.0 global cluster_interconnect

 

select NAME,IP_ADDRESS from v$cluster_interconnects;

NAME IP_ADDRESS
--------------- ----------------
loypredbib0 172.16.3.193
loypredbib1 172.16.4.1

 

List down the backups of OCR

$GRID_HOME/bin/ocrconfig -showbackup

 

Take manual OCR backup

$GRID_HOME/bin/ocrconfig -manualbackup

$GRID_HOME/bin/crsctl replace votedisk +NEW_DG

Check the status using below command

$GRID_HOME/bin/crsctl query css votedisk

Disk timeout from node to voting disk(disktimeout)

crsctl get css disktimeout

CRS-4678: Successful get disktimeout 200 for Cluster Synchronization Services.

Network latency in the node interconnect (Misscount)

crsctl get css misscount

CRS-4678: Successful get misscount 30 for Cluster Synchronization Services.
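
-- If these values ever need to be changed, run as root (the values below are only illustrative; changing the defaults is normally done only under Oracle Support guidance)

crsctl set css misscount 45
crsctl set css disktimeout 250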

-- List of nodes in the cluster
olsnodes

-- Nodes with node number
olsnodes -n

-- Node with vip
olsnodes -i
olsnodes -s -t

-- Leaf or Hub
olsnodes -a

-- Getting private ip details of the local node
olsnodes -l -p

-- Get cluster name
olsnodes -c

oifcfg iflist -p -n

backup0 172.21.56.0 PRIVATE 255.255.254.0
cdnet0 162.168.1.0 PRIVATE 255.255.255.0
cdnet0 169.254.0.0 PUBLIC 255.255.128.0
cdnet1 162.168.2.0 PRIVATE 255.255.255.0
cdnet1 169.254.128.0 PUBLIC 255.255.128.0
pap-ipmp0 172.20.179.128 PUBLIC 255.255.255.128
tan-ipmp0 172.20.128.0 PRIVATE 255.255.252.0
dppp0 162.168.224.0 PRIVATE 255.255.255.0

-- OLR(ORACLE LOCAL REGISTRY)

Get current OLR location:(run from root only)

$GRID_HOME/bin/ocrcheck -local

List the OLR backups:

$GRID_HOME/bin/ocrconfig -local -showbackup

Take manual OLR backup:

$GRID_HOME/bin/ocrconfig -local -manualbackup

$ crsctl get cluster configuration
Name          : dbaclass-cluster
Configuration : Cluster
Class         : Standalone Cluster
Type          : flex
The cluster is not extended.
--------------------------------------------------------------------------------
MEMBER CLUSTER INFORMATION

Name             Version             GUID            Deployed           Deconfigured
================================================================================
================================================================================

 

$ crsctl get node role status -all
Node 'hostnode1' active role is 'hub'
Node 'hostnode2' active role is 'hub'

 

-- Create baseline for a sql_id from the cursor cache

DECLARE
  l_plans_loaded  PLS_INTEGER;
BEGIN
  l_plans_loaded := DBMS_SPM.load_plans_from_cursor_cache(
    sql_id => '&sql_id');
END;
/

-- Create baseline with a particular hash value

DECLARE
  l_plans_loaded  PLS_INTEGER;
BEGIN
  l_plans_loaded := DBMS_SPM.load_plans_from_cursor_cache(
    sql_id => '&sql_id', plan_hash_value => '&plan_hash_value');
END;
/

-- Drop a sql plan baseline

declare
drop_result pls_integer;
begin
drop_result := DBMS_SPM.DROP_SQL_PLAN_BASELINE(
plan_name => '&sql_plan_baseline_name');
dbms_output.put_line(drop_result);
end;
/

You can get the sql baseline from a sql_id from below command:

SELECT sql_handle, plan_name FROM dba_sql_plan_baselines WHERE signature IN
( SELECT exact_matching_signature FROM gv$sql WHERE sql_id='&SQL_ID');

 

-- Load all plans of a parsing schema from the cursor cache into baselines

DECLARE
nRet NUMBER;
BEGIN
nRet := dbms_spm.load_plans_from_cursor_cache(
attribute_name => 'PARSING_SCHEMA_NAME',
attribute_value => '&schema_name'
);
END;
/

-- Drop a sql profile

BEGIN
DBMS_SQLTUNE.drop_sql_profile (
name => '&sql_profile',
ignore => TRUE);
END;
/

You can get the respective sql_profile of a sql_id from below:

select distinct
p.name sql_profile_name,
s.sql_id
from
dba_sql_profiles p,
DBA_HIST_SQLSTAT s
where
p.name=s.sql_profile and s.sql_id='&sql_id';


Create tuning task:

DECLARE
l_sql_tune_task_id VARCHAR2(100);
BEGIN
l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
sql_id => '12xca9smf3hfy',
scope => DBMS_SQLTUNE.scope_comprehensive,
time_limit => 500,
task_name => '12xca9smf3hfy_tuning_task',
description => 'Tuning task1 for statement 12xca9smf3hfy');
DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/

Execute tuning task:

EXEC DBMS_SQLTUNE.execute_tuning_task(task_name => '12xca9smf3hfy_tuning_task');

Get the tuning advisory report

set long 65536
set longchunksize 65536
set linesize 100
select dbms_sqltune.report_tuning_task('12xca9smf3hfy_tuning_task') from dual;

EXEC DBMS_SQLTUNE.ALTER_SQL_PROFILE('&sql_profile_name','STATUS','DISABLED');

Pass the sql_id to get the respective sql baseline

SELECT sql_handle, plan_name FROM dba_sql_plan_baselines WHERE
signature IN ( SELECT exact_matching_signature FROM gv$sql WHERE sql_id='&SQL_ID');

--- To disable a baseline:

DECLARE
  l_plans_altered PLS_INTEGER;
BEGIN
  l_plans_altered := dbms_spm.alter_sql_plan_baseline(sql_handle =>'SQL_SQL_5818768f40d7be2a',
  plan_name => 'SQL_PLAN_aaxsg8yktm4h100404251',
  attribute_name=> 'enabled',
  attribute_value=>'NO');
END;
/
DECLARE
  l_plans_altered PLS_INTEGER;
BEGIN
  l_plans_altered := dbms_spm.alter_sql_plan_baseline(sql_handle =>'SQL_SQL_5818768f40d7be2a',
  plan_name => 'SQL_PLAN_aaxsg8yktm4h100404251',
  attribute_name=> 'fixed',
  attribute_value=>'NO');
END;
/

-- To enable again, just modify the attribute_value to YES,
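
-- For example, a minimal sketch of re-enabling the same baseline (reusing the handle and plan name shown above):

DECLARE
  l_plans_altered PLS_INTEGER;
BEGIN
  l_plans_altered := dbms_spm.alter_sql_plan_baseline(sql_handle =>'SQL_SQL_5818768f40d7be2a',
  plan_name => 'SQL_PLAN_aaxsg8yktm4h100404251',
  attribute_name=> 'enabled',
  attribute_value=>'YES');
END;
/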

-- SYNTAX : ALTER TABLE <SCHEMA_NAME>.<TABLE_NAME> ADD PARTITION < PARTITION_NAME> VALUES LESS THAN < HIGH_VALUE> TABLESPACE <TABLESPACE_NAME > < UPDATE GLOBAL INDEXES(optional)>;
-- NOTE: UPDATE GLOBAL INDEXES is required if GLOBAL INDEX is present

ALTER TABLE CMADMIN.DBACLASS ADD PARTITION DBACLASS_JAN VALUES
LESS THAN (TO_DATE('01-FEB-2016','DD-MON-YYYY')) TABLESPACE USERS  UPDATE GLOBAL INDEXES;

 

-- In oracle 12c(new feature), we can add multiple partition in one command:

ALTER TABLE CMADMIN.DBACLASS ADD
PARTITION DBACLASS_JAN VALUES LESS THAN (TO_DATE('01-FEB-2016','DD-MON-YYYY')) TABLESPACE USERS,
PARTITION DBACLASS_FEB VALUES LESS THAN (TO_DATE('01-MAR-2016','DD-MON-YYYY')) TABLESPACE USERS,
PARTITION DBACLASS_MAR VALUES LESS THAN (TO_DATE('01-APR-2016','DD-MON-YYYY')) TABLESPACE USERS
UPDATE GLOBAL INDEXES;

-- SYNTAX : ALTER TABLE <SCHEMA_NAME>.<TABLE_NAME> DROP PARTITION < PARTITION_NAME> < UPDATE GLOBAL INDEXES(optional)>;
--- NOTE: UPDATE GLOBAL INDEXES is required if GLOBAL INDEX is present

ALTER TABLE CMADMIN.DBACLASS DROP PARTITION DBACLASS_JAN UPDATE GLOBAL INDEXES;

--- In oracle 12c, we can drop multiple partitions in one command

ALTER TABLE CMADMIN.DBACLASS DROP PARTITIONS DBACLASS_JAN, DBACLASS_FEB, DBACLASS_MAR UPDATE GLOBAL INDEXES;

- SYNTAX : ALTER TABLE <SCHEMA_NAME>.<TABLE_NAME> TRUNCATE PARTITION < PARTITION_NAME> < UPDATE GLOBAL INDEXES(optional)>;
--- NOTE: UPDATE GLOBAL INDEXES is required if GLOBAL INDEX is present

ALTER TABLE CMADMIN.DBACLASS TRUNCATE PARTITION DBACLASS_JAN UPDATE GLOBAL INDEXES;

--- In oracle 12c, we can truncate multiple partitions in one command

ALTER TABLE CMADMIN.DBACLASS TRUNCATE PARTITIONS DBACLASS_JAN, DBACLASS_FEB, DBACLASS_MAR UPDATE GLOBAL INDEXES;

-- From the oracle 12.2.0.1 release onwards, we can make a few partitions of a table read only.

SQL> alter table dbatest.ORDER_TAB modify partition CREATED_2105_P10 read only;

Table altered.

SQL> select partition_name,read_only from dba_tab_partitions where table_name='ORDER_TAB';

PARTITION_NAME READ
-------------------------------- ----
CREATED_2105_P10 YES
CREATED_2105_P11 NO
CREATED_2105_P12 NO
CREATED_2105_P8 NO
CREATED_2105_P9 NO
CREATED_MX NO

6 rows selected.

SQL> alter table order_tab split partition CREATED_MX into
(partition CREATED_2106_P2 VALUES LESS THAN (TO_DATE('01/03/2016', 'DD/MM/YYYY')),PARTITION CREATED_MX) ONLINE; 

Table altered.

SQL> select partition_name,read_only,high_value from dba_tab_partitions where table_name='ORDER_TAB';

 

 

-- In Oracle 12cR2, we can convert a non-partitioned table to a partitioned table online using the alter table command.

alter table BSSTDBA.ORDER_TAB modify
PARTITION BY RANGE (CREATED)
(partition created_2105_p8 VALUES LESS THAN (TO_DATE('01/09/2015', 'DD/MM/YYYY')),
partition created_2105_p9 VALUES LESS THAN (TO_DATE('01/10/2015', 'DD/MM/YYYY')),
partition created_2105_p10 VALUES LESS THAN (TO_DATE('01/11/2015', 'DD/MM/YYYY')),
partition created_2105_p11 VALUES LESS THAN (TO_DATE('01/12/2015', 'DD/MM/YYYY')),
partition created_2105_p12 VALUES LESS THAN (TO_DATE('01/01/2016', 'DD/MM/YYYY')),
PARTITION Created_MX VALUES LESS THAN (MAXVALUE)) ONLINE;

ALTER TABLE employee RENAME PARTITION  TAB3 TO TAB4;

set serverout on size 1000000
set verify off
declare
sql_stmt varchar2(1024);
row_count number;
cursor get_tab is
select table_name,partition_name
from dba_tab_partitions
where table_owner=upper('&&TABLE_OWNER') and table_name='&&TABLE_NAME';
begin
dbms_output.put_line('Checking Record Counts for table_name');
dbms_output.put_line('Log file to numrows_part_&&TABLE_OWNER.lst ....');
dbms_output.put_line('....');
for get_tab_rec in get_tab loop
BEGIN
sql_stmt := 'select count(*) from &&TABLE_OWNER..'||get_tab_rec.table_name
||' partition ( '||get_tab_rec.partition_name||' )';

EXECUTE IMMEDIATE sql_stmt INTO row_count;
dbms_output.put_line('Table '||rpad(get_tab_rec.table_name
||'('||get_tab_rec.partition_name||')',50)
||' '||TO_CHAR(row_count)||' rows.');
exception when others then
dbms_output.put_line
('Error counting rows for table '||get_tab_rec.table_name);
END;
end loop;
end;
/
set verify on

--- describes the partitioning key columns for all partitioned objects of a schema

set pagesize 200
set lines 200
set long 999
col owner for a12
col name for a20
col object_type for a20
col column_name for a32
SELECT owner, NAME, OBJECT_TYPE,column_name
FROM dba_part_key_columns where owner='&OWNER'
ORDER BY owner, NAME;

- Move a single partition to a new tablespace

ALTER TABLE SCOTT.EMP MOVE PARTITION EMP_Q1 TABLESPACE TS_USERS;

--- Move a single partition to a new tablespace WITH PARALLEL

ALTER TABLE SCOTT.EMP MOVE PARTITION EMP_Q1 TABLESPACE TS_USERS PARALLEL(DEGREE 4) NOLOGGING;

- Dynamic script to move all partitions of a table

select
'ALTER TABLE '||TABLE_OWNER ||'.'||table_name||' MOVE PARTITION '||partition_name||' TABLESPACE TS_USERS PARALLEL(DEGREE 4) NOLOGGING;'
from dba_tab_partitions where table_name='&TABLE_NAME' and table_owner='&SCHEMA_NAME';

BEGIN
DBMS_STATS.GATHER_TABLE_STATS (
ownname => 'SCOTT',
tabname => 'TEST',
cascade => true, ---- For collecting stats for respective indexes
method_opt=>'for all indexed columns size 1',
granularity => 'ALL',
estimate_percent =>dbms_stats.auto_sample_size,
degree => 8);
END;
/

-- For a single table partition

BEGIN
DBMS_STATS.GATHER_TABLE_STATS (
ownname => 'SCOTT',
tabname => 'TEST', --- TABLE NAME
partname => 'TEST_JAN2016', --- PARTITION NAME
method_opt=>'for all indexed columns size 1',
GRANULARITY => 'APPROX_GLOBAL AND PARTITION',
degree => 8);
END;
/

Begin
dbms_stats.gather_schema_stats(
ownname => 'SCOTT', --- schema name
options => 'GATHER AUTO',
estimate_percent => dbms_stats.auto_sample_size,
method_opt => 'for all columns size repeat',
degree => 24
);
END;
/

--- Lock  statistics

EXEC DBMS_STATS.lock_schema_stats('SCOTT');
EXEC DBMS_STATS.lock_table_stats('SCOTT', 'TEST');
EXEC DBMS_STATS.lock_partition_stats('SCOTT', 'TEST', 'TEST_JAN2016');

-- Unlock statistics

EXEC DBMS_STATS.unlock_schema_stats('SCOTT');
EXEC DBMS_STATS.unlock_table_stats('SCOTT', 'TEST');
EXEC DBMS_STATS.unlock_partition_stats('SCOTT', 'TEST', 'TEST_JAN2016');

--- check stats status:

SELECT stattype_locked FROM dba_tab_statistics WHERE table_name = 'TEST' and owner = 'SCOTT';

--- Create staging table to store the statistics data

exec dbms_stats.create_stat_table(ownname => 'SCOTT', stattab => 'STAT_BACKUP',tblspace=>'USERS');

-- Export stats

exec dbms_stats.export_table_stats(ownname=>'SCOTT', tabname=>'EMP', stattab=>'STAT_BACKUP', cascade=>true);

-- Import stats

exec dbms_stats.import_table_stats(ownname=>'SCOTT', tabname=>'EMP', stattab=>'STAT_BACKUP', cascade=>true);

 

STALE STATS FOR TABLE

select owner,table_name,STALE_STATS from dba_tab_statistics where owner='&SCHEMA_NAME' and table_name='&TABLE_NAME';

FOR INDEX

select owner,INDEX_NAME,TABLE_NAME,STALE_STATS from DBA_IND_STATISTICS where owner='&SCHEMA_NAME' and index_name='&INDEX_NAME';

For getting history of TABLE statistics

set lines 200
col owner for a12
col table_name for a21
select owner,TABLE_NAME,STATS_UPDATE_TIME from dba_tab_stats_history where table_name='&TABLE_NAME';

Publish Pending stats for table

EXEC DBMS_STATS.PUBLISH_PENDING_STATS ('SCHEMA_NAME','TABLE_NAME');

Publish pending stats for a schema

exec dbms_stats.publish_pending_stats('SCHEMA_NAME',null);

 

 

Setting Publish preference

exec dbms_stats.set_table_prefs('SCOTT','EMP','PUBLISH','FALSE');

Check the publish preference status

select dbms_stats.get_prefs('PUBLISH', 'SCOTT','EMP') FROM DUAL;

Similarly for schema also use as below:

select dbms_stats.get_prefs('PUBLISH', 'SCOTT') from dual;

exec dbms_stats.SET_SCHEMA_PREFS('DBATEST','PUBLISH','FALSE');

--- FOR INDEX

SET_INDEX_STATS
GET_INDEX_STATS

-- FOR DATABASE

SET_DATABASE_PREFS
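
-- For example, a minimal sketch of setting a preference at database level (the PUBLISH preference is used here only as an illustration):

exec dbms_stats.set_database_prefs('PUBLISH','FALSE');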

-- View current stats retention

select dbms_stats.get_stats_history_retention from dual;

-- Modify the stats retention

exec DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(60);

--- Space currently used to store statistics in SYSAUX in KBytes,

select occupant_desc, space_usage_kbytes from v$sysaux_occupants
where OCCUPANT_DESC like '%Statistics%';

-- Check the status of incremental pref

select dbms_stats.get_prefs('INCREMENTAL', tabname=>'EMPLOYEE',ownname=>'SCOTT') from dual;

FALSE

-- Enable incremental stats collection

SQL> exec DBMS_STATS.SET_TABLE_PREFS('SCOTT','EMPLOYEE','INCREMENTAL','TRUE');

PL/SQL procedure successfully completed.

-- Check the pref again:

select dbms_stats.get_prefs('INCREMENTAL', tabname=>'EMPLOYEE',ownname=>'SCOTT') from dual;

TRUE

Delete statistics of the complete database

EXEC DBMS_STATS.delete_database_stats;

-- Delete statistics of a single schema

EXEC DBMS_STATS.delete_schema_stats('DBACLASS');

-- Delete statistics of a single table
EXEC DBMS_STATS.delete_table_stats('DBACLASS', 'DEPT');

-- Delete statistics of a column
EXEC DBMS_STATS.delete_column_stats('DBACLASS', 'DEPT', 'CLASS');

--Delete statistics of an index

EXEC DBMS_STATS.delete_index_stats('DBACLASS', 'CLASS_IDX');

--Delete dictionary stats in the db

EXEC DBMS_STATS.delete_dictionary_stats;

-- If the stats staging table was created in a different database version,
-- then before importing it into the target database, we need to upgrade the stats table.

EXECUTE DBMS_STATS.UPGRADE_STAT_TABLE(OWNNAME =>'RAJ',STATTAB =>'STAT_TEST');

ALTER TABLE DBACLASS.EMP ENABLE ROW MOVEMENT;
FLASHBACK TABLE DBACLASS.EMP TO TIMESTAMP
TO_TIMESTAMP('2017-01-10 09:00:00', 'YYYY-MM-DD HH24:MI:SS');

Restore the dropped table with same name:

SQL>flashback table DBACLASS.EMP to before drop;

Restore the dropped table with a new name

SQL>Flashback table DBACLASS.EMP to before drop rename to EMP_BACKUP;

Note - To recover the table, table should be present in recyclebin:

select * from dba_recyclebin;

SELECT * FROM DBACLASS.EMP AS OF TIMESTAMP
TO_TIMESTAMP('2017-01-07 10:00:00', 'YYYY-MM-DD HH:MI:SS');

SELECT * FROM DBACLASS.EMP AS OF TIMESTAMP SYSDATE -1/24;

Make sure database is in archivelog mode

alter system set db_recovery_file_dest_size=10G scope=both;
alter system set db_recovery_file_dest='/dumparea/FRA/B2BRBMT3' scope=both;
alter database flashback on;

-- To create a guarantee flashback restore point;

SQL>create restore point BEFORE_UPG guarantee flashback database;

-- Check the restore_points present in database

SQL>select * from v$restore_point;

-- Drop restore point;

SQL> drop restore point BEFORE_UPG;

--- Below are the steps for flashback database to a guaranteed restore point;

1. Get the restore point name:

SQL> select NAME,time from v$restore_point;

NAME                                                            TIME
--------------------------------          -----------------------------------------------
GRP_1490100093811                         21-MAR-17 03.41.33.000000000 PM

2. Shutdown database and start db in Mount stage:

shutdown immediate;
startup mount;

3. flashback db to restore point:

flashback database to restore point GRP_1490100093811;

4. Open with resetlog:

alter database open resetlogs;

--- Like tables, if a package/procedure has been dropped or recreated, we can use flashback query to retrieve its code as it was before the change.

Get the object_id:

SQL> select object_id from dba_objects where owner='DBACLASS' and object_name='VOL_DISCOUNT_INSERT';

OBJECT_ID
----------
2201943

Now get the flashback code using timestamp

select SOURCE from sys.source$ as of timestamp
to_timestamp('23-Apr-2017 10:00:20','DD-Mon-YYYY hh24:MI:SS')
where obj#=2201943 ;

--How Far Back Can We Flashback To (Time)

select to_char(oldest_flashback_time,'dd-mon-yyyy hh24:mi:ss') "Oldest Flashback Time"
from v$flashback_database_log;

--How Far Back Can We Flashback To (SCN)

col oldest_flashback_scn format 99999999999999999999999999
select oldest_flashback_scn from v$flashback_database_log;

ALTER TABLE DBACLASS.EMP ENABLE ROW MOVEMENT;
FLASHBACK TABLE DBACLASS.EMP TO TIMESTAMP
TO_TIMESTAMP('2017-01-10 09:00:00', 'YYYY-MM-DD HH24:MI:SS');


configure backup optimization on;
configure controlfile autobackup on;
configure controlfile autobackup format for device type disk to '/archiva/backup/%F';
configure maxsetsize to unlimited;
configure device type disk parallelism 4;
run
{
allocate channel c1 type disk format '/archiva/backup/%I-%Y%M%D-%U' maxpiecesize 3G;
allocate channel c2 type disk format '/archiva/backup/%I-%Y%M%D-%U' maxpiecesize 3G;
allocate channel c3 type disk format '/archiva/backup/%I-%Y%M%D-%U' maxpiecesize 3G;
allocate channel c4 type disk format '/archiva/backup/%I-%Y%M%D-%U' maxpiecesize 3G;
backup as compressed backupset incremental level 0 check logical database plus archivelog;
release channel c1 ;
release channel c2 ;
release channel c3 ;
release channel c4 ;
}

configure backup optimization on;
configure controlfile autobackup on;
configure controlfile autobackup format for device type disk to '/archiva/backup/%F';
configure maxsetsize to unlimited;
configure device type disk parallelism 4;
run
{
allocate channel c1 type disk format '/archiva/backup/%I-%Y%M%D-%U' maxpiecesize 3G;
allocate channel c2 type disk format '/archiva/backup/%I-%Y%M%D-%U' maxpiecesize 3G;
allocate channel c3 type disk format '/archiva/backup/%I-%Y%M%D-%U' maxpiecesize 3G;
allocate channel c4 type disk format '/archiva/backup/%I-%Y%M%D-%U' maxpiecesize 3G;
backup as compressed backupset incremental level 1 check logical database plus archivelog;
release channel c1 ;
release channel c2 ;
release channel c3 ;
release channel c4 ;
}

configure controlfile autobackup on;
configure controlfile autobackup format for device type disk to '/archiva/backup/%F';
configure maxsetsize to unlimited;
configure device type disk parallelism 4;
run
{
allocate channel c1 type disk format '/archiva/backup/%I-%Y%M%D-%U' maxpiecesize 3G;
allocate channel c2 type disk format '/archiva/backup/%I-%Y%M%D-%U' maxpiecesize 3G;
backup tablespace USERS,TOOLS;
release channel c1 ;
release channel c2 ;
}

configure controlfile autobackup on;
configure controlfile autobackup format for device type disk to '/archiva/backup/%F';
configure maxsetsize to unlimited;
configure device type disk parallelism 4;
run
{
allocate channel c1 type disk format '/archiva/backup/%I-%Y%M%D-%U' maxpiecesize 3G;
allocate channel c2 type disk format '/archiva/backup/%I-%Y%M%D-%U' maxpiecesize 3g;
backup datafile 3,4;
release channel c1 ;
release channel c2 ;
}

DELETE ARCHIVELOG ALL COMPLETED BEFORE 'sysdate-1';
CROSSCHECK ARCHIVELOG ALL;
DELETE EXPIRED ARCHIVELOG ALL;

Backup all archivelogs known to controlfile

backup archivelog all;

Backup all archivelogs known to controlfile and delete them once backed up

backup archivelog all delete input ;

Backup only those archivelogs known to the controlfile that have not already been backed up at least once

backup archivelog all not backed up 1 times;

--- Copy archive log from ASM to regular mount point using RMAN:
--- Connect to RMAN in RAC db

RMAN> copy archivelog '+B2BSTARC/thread_2_seq_34.933' to '/data/thread_2_seq_34.933';

For taking backup of archivelog between seq number 1000 to 1050

RMAN> backup format '/archive/%d_%s_%p_%c_%t.arc.bkp'
archivelog from sequence 1000 until sequence 1050;

For RAC, we need to mention the thread number also:

RMAN> backup format '/archive/%d_%s_%p_%c_%t.arc.bkp'
archivelog from sequence 1000 until sequence 1050 thread 2;

 

 

-- To diagnose rman script, use trace as below.

spool trace to '/tmp/rman_trace.out';
report schema;
list backup summary;
list backup of datafile 1;
list copy of datafile 1;
spool trace off;

RMAN> recover table SCOTT.SALGRADE until time "to_date('08/09/2016 18:49:40','mm/dd/yyyy hh24:mi:ss')"
auxiliary destination '/u03/arch/TEST/BACKUP'
datapump destination '/u03/arch/TEST/BACKUP';

 

auxiliary destination – Location where all the related files for auxiliary instance will be placed

datapump destination – Location where the export dump of the table will be placed

NOTE - This feature is available only in oracle 12c and later.

SELECT SID, SERIAL#, CONTEXT, SOFAR, TOTALWORK,
ROUND(SOFAR/TOTALWORK*100,2) "%_COMPLETE"
FROM V$SESSION_LONGOPS
WHERE OPNAME LIKE 'RMAN%'
AND OPNAME NOT LIKE '%aggregate%'
AND TOTALWORK != 0
AND SOFAR <> TOTALWORK;

-----Below  script will restore the archive sequences from 7630 to 7640 to /dumparea location

connect target sys/******@CRM_DB
connect catalog RMAN_tst/*****@catdb

run
{
allocate channel t1 type SBT_TAPE parms 'ENV=(NSR_SERVER=nwwerpw,NSR_CLIENT=tsc_test01,NSR_DATA_VOLUME_POOL=DD086A1)' connect sys/****@CRM_DB;
set archivelog destination to '/dumparea/';
restore archivelog from sequence 7630 until sequence 7640;
release channel t1;
}

-- Enable block change tracking

alter database enable block change tracking using file
'/export/home/oracle/RMAN/TESTDB/TRACKING_FILE/block_change_TESTDB.log';

-- Check status:

select filename,status from v$block_change_tracking;
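
-- To disable block change tracking again (a simple one-liner):

alter database disable block change tracking;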

--- check the syntax of RMAN commands interactively without actually executing the commands

$ rman checksyntax

Recovery Manager: Release 12.1.0.2.0 - Production on Sun Jan 29 12:04:24 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

-- Now put the command for checking syntax

RMAN> backup database;

The command has no syntax errors

SYNTAX :

create user <USER_NAME> identified by <PASSWORD> default tablespace <TABLESPACE_NAME> temporary tablespace <TEMP_TABLESPACE_NAME>;

Eg:

create user SCOTT identified by oracle#41234
default tablespace users
temporary tablespace TEMP;

-- To create a user which will be prompted for a new password upon login:

create user SCOTT identified by oracle#41234
default tablespace users
temporary tablespace TEMP
password expire;

-- Change password of an user

ALTER USER SCOTT identified by NEW_PWD;

-- Change user profile;

ALTER USER SCOTT PROFILE SIEBEL_PROFILE;

-- Unlock/lock a user

ALTER USER SCOTT account unlock;
ALTER USER SCOTT account lock;

-- Expire the account password, so that upon next login the user will be asked for a new one

ALTER USER SCOTT password expire;

-- Get default tablespace of a user:

set lines 200
col username for a23
select username,DEFAULT_TABLESPACE from dba_users where username='SCOTT';

USERNAME               DEFAULT_TABLESPACE
----------------------- ------------------------------
SCOTT                          USERS

-- Change default tablespace of a user:

ALTER USER SCOTT DEFAULT TABLESPACE DATATS;

select username,DEFAULT_TABLESPACE from dba_users where username='SCOTT';

USERNAME               DEFAULT_TABLESPACE
----------------------- ------------------------------
SCOTT                           DATATS

-- Get the current tablespace quota information of a user
set lines 299
select TABLESPACE_NAME,BYTES/1024/1024 "UTILIZIED_SPACE" ,MAX_BYTES/1024/1024 "QUOTA_ALLOCATED" from dba_ts_quotas where username='&USER_NAME';

TABLESPACE_NAME                                 UTILIZIED_SPACE                  QUOTA_ALLOCATED
------------------------------         ---------------------------        --------------------------
USERS                                           .0625                                    1024

--- Change the tablespace quota for the user to 5G

ALTER USER SCOTT QUOTA 5G ON USERS;

--- Grant unlimited tablespace quota:

ALTER USER SCOTT QUOTA UNLIMITED ON USERS;

-- System privileges granted to a user (SCOTT)

SELECT * FROM DBA_SYS_PRIVS where grantee='SCOTT';

-- Roles granted to a user (SCOTT)

SELECT * FROM DBA_ROLE_PRIVS where grantee='SCOTT';

-- Object privileges granted to a user (SCOTT)

SELECT * FROM DBA_TAB_PRIVS WHERE GRANTEE='SCOTT';

-- Column specific privileges granted

SELECT * FROM DBA_COL_PRIVS WHERE GRANTEE='SCOTT';

-- Table privileges

GRANT READ ANY TABLE TO SCOTT;

GRANT SELECT ANY TABLE TO SCOTT;

GRANT INSERT, UPDATE, DELETE ON TESTUSER1.EMPTABL TO SCOTT;
GRANT ALL ON TESTUSER1.EMPTABL TO SCOTT;

-- Grant privilege on few columns of a table
--Only INSERT,UPDATE can be granted at COLUMN level.

GRANT insert (emp_id) ON TESTUSER1.EMPTABL TO SCOTT;
GRANT UPDATE(emp_id) ON TESTUSER1.EMPTABL TO SCOTT;

--- You can connect as another user without knowing its password, using the grant connect through (proxy) privilege.
--- Suppose user TEST1 wants to connect as TEST2 and create a table, and we don't know the password of TEST2.

Conn / as sysdba
SQL >alter user TEST2 grant connect through TEST1;

User altered.

SQL >conn TEST1[TEST2]
Enter password:< Give password for TEST1>

SQL >show user
USER is "TEST2"
SQL >create table emp_test as select * from emp;

Table created.

SQL > conn / as sysdba
connected
SQL > select owner from dba_tables where table_name='EMP_TEST';

OWNER
------
TEST2

--- A user that is present in the root container and in every PDB is known as a common user. It needs to be created after connecting to the CDB root.

create user c##dbaclass identified by dbaclass container=all;

-- Similar to user, common role we can create in CDB root.

Create role C##DBAROLE;

SELECT NAME, type#, ctime, ptime, exptime, ltime
FROM sys.user$
WHERE NAME IN ('SYS', 'SYSTEM')
ORDER BY NAME;

NOTE:

CTIME -> USER CREATION TIME
PTIME -> LAST PASSWORD CHANGE TIME
EXPTIME -> PASSWORD EXPIRY DATE
LTIME -> ACCOUNT LOCK TIME
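
-- The same information is also exposed through DBA_USERS (a quick check using standard dictionary columns):

select username, account_status, lock_date, expiry_date, created from dba_users where username in ('SYS','SYSTEM');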

-- CREATE PROFILE

CREATE PROFILE "APP_PROFILE"
LIMIT
COMPOSITE_LIMIT UNLIMITED
SESSIONS_PER_USER UNLIMITED
CPU_PER_SESSION UNLIMITED
CPU_PER_CALL UNLIMITED
LOGICAL_READS_PER_SESSION UNLIMITED
LOGICAL_READS_PER_CALL UNLIMITED
IDLE_TIME 90
CONNECT_TIME UNLIMITED
PRIVATE_SGA UNLIMITED
FAILED_LOGIN_ATTEMPTS 10
PASSWORD_LIFE_TIME 180
PASSWORD_REUSE_TIME UNLIMITED
PASSWORD_REUSE_MAX UNLIMITED
PASSWORD_VERIFY_FUNCTION NULL
PASSWORD_LOCK_TIME UNLIMITED
PASSWORD_GRACE_TIME UNLIMITED;

-- ALTER PROFILE:

ALTER PROFILE APP_PROFILE LIMIT PASSWORD_LIFE_TIME UNLIMITED;

 

*SESSIONS_PER_USER - No. of allowed concurrent sessions for a user
*CPU_PER_SESSION - CPU time limit for a session, expressed in hundredths of seconds.
*CPU_PER_CALL - CPU time limit for a call (a parse, execute, or fetch), expressed in hundredths of seconds.
*CONNECT_TIME - Total elapsed time limit for a session, expressed in minutes.
*IDLE_TIME - Permitted period of continuous inactive time during a session, expressed in minutes.
*LOGICAL_READS_PER_SESSION - Permitted number of data blocks read in a session, including blocks read from memory and disk.
*LOGICAL_READS_PER_CALL - Permitted number of data blocks read for a call to process a SQL statement (a parse, execute, or fetch).
*PRIVATE_SGA - Amount of private space a session can allocate in the shared pool of the system global area (SGA), expressed in bytes.

*FAILED_LOGIN_ATTEMPTS - No. of failed attempts to log in to the user account before the account is locked
*PASSWORD_LIFE_TIME - No. of days the password can be used before it expires
*PASSWORD_REUSE_TIME - Number of days before which a password cannot be reused
*PASSWORD_REUSE_MAX - Number of password changes required before a password can be reused
*PASSWORD_LOCK_TIME - Number of days the user account remains locked after failed logins
*PASSWORD_GRACE_TIME - Number of grace days for the user to change the password
*PASSWORD_VERIFY_FUNCTION - PL/SQL function that can be used for password verification
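
-- To review the limits currently defined for a profile (a simple check against DBA_PROFILES):

select resource_name, resource_type, limit from dba_profiles where profile='APP_PROFILE' order by resource_type, resource_name;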

-- List default users ( valid from 12c onwards)
select username from dba_users where ORACLE_MAINTAINED='Y';

-- Create New tablespace

Create tablespace DATA datafile '/u01/dbaclass/oradata/data01.dbf' size 5G autoextend on next 500M;

-- Create tablespace on ASM diskgroup

Create tablespace DATA datafile '+DATAG' size 5G autoextend on next 500M;

-- Create big tablespace:

CREATE BIGFILE TABLESPACE BIGTS datafile '/u01/dbaclass/oradata/bigts01.dbf' size 100G autoextend on NEXT 1G;

SQL> select file_id,file_name,tablespace_name from dba_data_files where file_id=37;
   FILE_ID FILE_NAME						    TABLESPACE_NAME
---------- -------------------------------------------------------- ------------------------------
	37 /home/oracle/app/oracle/oradata/cdb1/testin1.dbf	    TESTING
 
 
--- Rename the tablespace_name from TESTING to PRODUCING;
 
SQL> alter tablespace TESTING rename to PRODUCING;
Tablespace altered.
SQL> select file_id,file_name,tablespace_name from dba_data_files where file_id=37;
   FILE_ID FILE_NAME						    TABLESPACE_NAME
---------- -------------------------------------------------------- ------------------------------
	37 /home/oracle/app/oracle/oradata/cdb1/testin1.dbf	    PRODUCING
-- Drop a tablespace without removing the physical database files.

SQL> drop tablespace TESTING;
Tablespace dropped.

SQL> select file_name from dba_data_files where tablespace_name='TESTING';

no rows selected

-- Drop tablespace including the physical datafiles.

SQL> drop tablespace TESTING including contents and datafiles;
Tablespace dropped.

 

-- Add a datafile to a tablespace

Alter tablespace USERS add datafile '/u01/data/users02.dbf' size 5G;

-- Enable autoextend on for a datafile;

Alter database datafile '/u01/data/users02.dbf' autoextend on;

-- Resize a datafile

alter database datafile '/u01/data/users02.dbf' resize 10G;

-- Make a datafile offline/online

Alter database datafile '/u01/data/users02.dbf' offline;

Alter database datafile '/u01/data/users02.dbf' online;

-- Drop a datafile:

Alter tablespace USERS drop datafile '/u01/data/users02.dbf';

-- Add tempfile to temp tablespace:

alter tablespace TEMP1 add tempfile '/u01/dbaclass/tempfile/temp02.dbf' size 1G autoextend on next 200M;

-- Resize temp file:

alter database tempfile '/u01/dbaclass/tempfile/temp02.dbf' resize 2G;

-- Drop tempfile:

ALTER DATABASE TEMPFILE '/u01/dbaclass/tempfile/temp02.dbf' DROP INCLUDING DATAFILES;

 For oracle 12c, move or rename of datafile can be done online with one line:

SQL> alter database move datafile '/home/oracle/producing1.dbf' to '/home/oracle/app/oracle/oradata/cdb1/testin1.dbf';

For 11g, you have to follow the below steps (it needs downtime for the datafile):

-- Make the datafile offline:

alter database datafile '/home/oracle/app/oracle/oradata/cdb1/testin1.dbf' offline;

-- Move the file physically to a new location.

mv /home/oracle/app/oracle/oradata/cdb1/testin1.dbf /home/oracle/producing1.dbf

-- Rename at db level

alter database rename file '/home/oracle/app/oracle/oradata/cdb1/testin1.dbf' to '/home/oracle/producing1.dbf';

-- Recover the datafile:

recover datafile 37;

-- Make the datafile online:

alter database datafile '/home/oracle/producing1.dbf' online;

-- REFERENCE - ORAFAQ

set feed off
set pagesize 10000
set linesize 500
break on grantee skip 1
column datum new_value datum noprint
column file_nr format 999999 heading 'File#'
column checkpoint_time format A20 heading 'Checkpoint|Time'
column file_name format A59 heading 'Filename'

select FILE# file_nr,
to_char(CHECKPOINT_TIME,'DD.MM.YYYY:HH24:MI:SS') checkpoint_time,
name file_name
from v$datafile_header;

select occupant_name,occupant_desc,space_usage_kbytes
from v$sysaux_occupants;

SQL> select dbid,name,open_mode,TOTAL_SIZE/1024/1024 from v$pdbs;

DBID NAME OPEN_MODE TOTAL_SIZE/1024/1024
---------- ------------------------------ ---------- --------------------
3987628790 PDB$SEED READ ONLY 830
1360187792 PDB1 READ WRITE 905
3819422575 PDB2 MOUNTED 0

SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 MOUNTED

SET LINES 132 PAGES 100
COL con_name FORM A15 HEAD "Container|Name"
COL tablespace_name FORM A15
COL fsm FORM 999,999,999,999 HEAD "Free|Space Meg."
COL apm FORM 999,999,999,999 HEAD "Alloc|Space Meg."
--
COMPUTE SUM OF fsm apm ON REPORT
BREAK ON REPORT ON con_id ON con_name ON tablespace_name
--
WITH x AS (SELECT c1.con_id, cf1.tablespace_name, SUM(cf1.bytes)/1024/1024 fsm
FROM cdb_free_space cf1
,v$containers c1
WHERE cf1.con_id = c1.con_id
GROUP BY c1.con_id, cf1.tablespace_name),
y AS (SELECT c2.con_id, cd.tablespace_name, SUM(cd.bytes)/1024/1024 apm
FROM cdb_data_files cd
,v$containers c2
WHERE cd.con_id = c2.con_id
GROUP BY c2.con_id
,cd.tablespace_name)
SELECT x.con_id, v.name con_name, x.tablespace_name, x.fsm, y.apm
FROM x, y, v$containers v
WHERE x.con_id = y.con_id
AND x.tablespace_name = y.tablespace_name
AND v.con_id = y.con_id
UNION
SELECT vc2.con_id, vc2.name, tf.tablespace_name, null, SUM(tf.bytes)/1024/1024
FROM v$containers vc2, cdb_temp_files tf
WHERE vc2.con_id = tf.con_id
GROUP BY vc2.con_id, vc2.name, tf.tablespace_name
ORDER BY 1, 2;

select a.name,b.FILE_ID,b.tablespace_name,b.file_name from V$CONTAINERS a , CDB_TEMP_FILES b where a.con_id=b.con_id;

set lines 299
set pagesize 299
col db_name for a10
col CLONED_FROM_PDB_NAME for a12
col pdb_name for a18
SELECT DB_NAME, CON_ID, PDB_NAME, OPERATION, OP_TIMESTAMP, CLONED_FROM_PDB_NAME FROM CDB_PDB_HISTORY;

SQL> show con_name

CON_NAME
------------------------------
PDB1

SQL> select sys_context('USERENV','CON_NAME') FROM DUAL;

SYS_CONTEXT('USERENV','CON_NAME')
-----------------------------------
PDB1

-- Open/close all the pluggable db:
-- Connect to root container:

alter pluggable database all open;

alter pluggable database all close immediate;

-- Stop/start a pluggable db:

SQL> alter session set container=PDB1;

Session altered.

SQL> startup
Pluggable Database opened.
SQL> shutdown
Pluggable Database closed.

-- Need to run from root container;

SQL> show con_name

CON_NAME
------------------------
CDB$ROOT

ALTER PLUGGABLE DATABASE PDB1 CLOSE IMMEDIATE;

DROP PLUGGABLE DATABASE PDB1 INCLUDING DATAFILE;

-- Local undo mode means that each container has its own undo tablespace for every instance in which it is open.
-- Shared undo mode means that there is one active undo tablespace for a single-instance CDB

select * from database_properties where property_name='LOCAL_UNDO_ENABLED';

-- If the output is YES, it is a multitenant (container) database; otherwise it is a normal non-CDB database.

SQL> SELECT CDB FROM V$DATABASE;
CDB
---
YES

COLUMN NETWORK_NAME FOR A34
COLUMN PDB FOR A15
COLUMN CON_ID FOR 999
SELECT PDB, NETWORK_NAME, CON_ID FROM CDB_SERVICES WHERE PDB IS NOT NULL AND CON_ID > 2 ORDER BY PDB;

COLUMN NAME FORM A8
SELECT NAME, CON_ID, DBID, CON_UID, GUID FROM V$CONTAINERS;

Enable a job

EXECUTE DBMS_SCHEDULER.ENABLE('SCOTT.MONTHLYBILLING');

Disable a job

EXECUTE DBMS_SCHEDULER.DISABLE('SCOTT.MONTHLYBILLING');

Stop a running job

EXECUTE DBMS_SCHEDULER.STOP_JOB('SCOTT.MONTHLYBILLING');

Drop a running job

EXECUTE DBMS_SCHEDULER.DROP_JOB('SCOTT.MONTHLYBILLING');

Run a job immediately

EXECUTE DBMS_SCHEDULER.RUN_JOB('SCOTT.MONTHLYBILLING');

-- TO schedule a job, first create a schedule, then a program and then a job
--Create a schedule

BEGIN
DBMS_SCHEDULER.CREATE_SCHEDULE (
Schedule_name => 'DAILYBILLINGJOB_SCHED',
Start_date => SYSTIMESTAMP,
Repeat_interval =>'FREQ=DAILY;BYHOUR=11; BYMINUTE=30',
Comments => 'DAILY BILLING JOB'
);
END;
/

-- Create a program

BEGIN
DBMS_SCHEDULER.CREATE_PROGRAM (
program_name => 'DAILYBILLINGJOB',
program_type => 'STORED_PROCEDURE',
program_action => 'DAILYJOB.BILLINGPROC',
number_of_arguments =>0,
enabled => TRUE,
comments => 'DAILY BILLING JOB'
);
END;
/

-- Now create the job:

 

BEGIN DBMS_SCHEDULER.CREATE_JOB (
job_name => 'DAILYBILLINGJOB_RUN',
program_name => 'DAILYBILLINGJOB',
schedule_name => 'DAILYBILLINGJOB_SCHED',
enabled => FALSE,
comments => 'daily billing job'
);
END;
/

-- ENABLE THE JOB

EXECUTE DBMS_SCHEDULER.ENABLE('DAILYBILLINGJOB_RUN');

-- Drop the schedule

BEGIN
DBMS_SCHEDULER.DROP_SCHEDULE(
schedule_name => 'DAILYBILLINGJOB_SCHED',
force => TRUE
);
END;
/

-- This feature is available from oracle 12c onwards

-- Create a credential store:

BEGIN
dbms_credential.create_credential (
CREDENTIAL_NAME => 'ORACLEOSUSER',
USERNAME => 'oracle',
PASSWORD => 'oracle@98765',
DATABASE_ROLE => NULL,
WINDOWS_DOMAIN => NULL,
COMMENTS => 'Oracle OS User',
ENABLED => true
);
END;
/

-- Create the job:

exec dbms_scheduler.create_job(-
job_name=>'myscript4',-
job_type=>'external_script',-
job_action=>'/export/home/oracle/ttest.2.sh',-
enabled=>true,-
START_DATE=>sysdate,-
REPEAT_INTERVAL =>'FREQ=MINUTELY; byminute=1',-
auto_drop=>false,-
credential_name=>'ORACLEOSUSER');

-- Monitor currently running jobs

SELECT job_name, session_id, running_instance, elapsed_time FROM dba_scheduler_running_jobs;

-- View the job run details

select * from DBA_SCHEDULER_JOB_RUN_DETAILS;

-- View the job related logs:

select * from DBA_SCHEDULER_JOB_LOG;

--ALL SCHEDULER WINDOWS
--Reference : Gwen Shapira

set pagesize 300 linesize 200
select * from dba_scheduler_windows;

set pagesize 200
set lines 299
col START_DATE for a45
col REPEAT_INTERVAL for a45
col schedule_name for a34
select schedule_name, schedule_type, start_date, repeat_interval from dba_scheduler_schedules;

set pagesize 299
set lines 299
col JOB_NAME for a24
col actual_start_date for a56
col RUN_DURATION for a34
select job_name,status,actual_start_date,run_duration from DBA_SCHEDULER_JOB_RUN_DETAILS order by ACTUAL_START_DATE desc;

set pagesize 299
set lines 299
col job_name for a24
col log_date for a40
col operation for a19
col additional_info for a79
select job_name,log_date,status,OPERATION,ADDITIONAL_INFO from dba_scheduler_job_log order by log_date desc;

syntax -> select dbms_metadata.get_ddl('PROCOBJ','JOB_NAME','JOB_OWNER') from dual;

select dbms_metadata.get_ddl('PROCOBJ','DUP_ACC','SCOTT') from dual;

select CON_ID, JOB_NAME,JOB_TYPE,ENABLED, STATE,NEXT_RUN_DATE, REPEAT_INTERVAL from cdb_scheduler_jobs;

Below will copy job my_job_2 from user scott to dbaclass.

exec dbms_scheduler.copy_job('SCOTT.MY_JOB_2','DBACLASS.MY_JOB_2');

-- First get the job_id and owner;
select job,log_user,schema_user from dba_jobs;

743,DBATEST
--- connect to the owner , and get the definition of the job

alter session set current_schema=DBATEST;

set serveroutput on
SQL> DECLARE
callstr VARCHAR2(500);
BEGIN
dbms_job.user_export(743, callstr);
dbms_output.put_line(callstr);
END;
/

-- Get the job number from dba_jobs.

select job "jobno",schema_user,what from dba_jobs;

-- Disable a job

EXEC DBMS_IJOB.BROKEN(jobno,TRUE);

-- Enable a job

EXEC DBMS_IJOB.BROKEN(jobno,FALSE);

--REMOVE A DBMS_JOBS:

EXEC DBMS_IJOB.remove(jobno) ;

SELECT DATABASE_ROLE, DB_UNIQUE_NAME INSTANCE, OPEN_MODE, PROTECTION_MODE, PROTECTION_LEVEL, SWITCHOVER_STATUS FROM V$DATABASE;

SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY ;

SELECT MESSAGE FROM V$DATAGUARD_STATUS;

select 'Last Log applied : ' Logs, to_char(next_time,'DD-MON-YY:HH24:MI:SS') Time
from v$archived_log
where sequence# = (select max(sequence#) from v$archived_log where applied='YES')
union
select 'Last Log received : ' Logs, to_char(next_time,'DD-MON-YY:HH24:MI:SS') Time
from v$archived_log
where sequence# = (select max(sequence#) from v$archived_log);

set lines 100 pages 999
col member format a70
select st.group#
, st.sequence#
, ceil(st.bytes / 1048576) mb
, lf.member
from v$standby_log st
, v$logfile lf
where st.group# = lf.group#
/

-- Applicable for 2 NODE RAC ALSO

column applied_time for a30
set linesize 140
select to_char(sysdate,'mm-dd-yyyy hh24:mi:ss') "Current Time" from dual;
SELECT DB_NAME, APPLIED_TIME, LOG_ARCHIVED-LOG_APPLIED LOG_GAP ,
(case when ((APPLIED_TIME is not null and (LOG_ARCHIVED-LOG_APPLIED) is null) or
(APPLIED_TIME is null and (LOG_ARCHIVED-LOG_APPLIED) is not null) or
((LOG_ARCHIVED-LOG_APPLIED) > 5))
then 'Error! Log Gap is '
else 'OK!'
end) Status
FROM
(
SELECT INSTANCE_NAME DB_NAME
FROM GV$INSTANCE
where INST_ID = 1
),
(
SELECT MAX(SEQUENCE#) LOG_ARCHIVED
FROM V$ARCHIVED_LOG WHERE DEST_ID=1 AND ARCHIVED='YES' and THREAD#=1
),
(
SELECT MAX(SEQUENCE#) LOG_APPLIED
FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND APPLIED='YES' and THREAD#=1
),
(
SELECT TO_CHAR(MAX(COMPLETION_TIME),'DD-MON/HH24:MI') APPLIED_TIME
FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND APPLIED='YES' and THREAD#=1
)
UNION
SELECT DB_NAME, APPLIED_TIME, LOG_ARCHIVED-LOG_APPLIED LOG_GAP,
(case when ((APPLIED_TIME is not null and (LOG_ARCHIVED-LOG_APPLIED) is null) or
(APPLIED_TIME is null and (LOG_ARCHIVED-LOG_APPLIED) is not null) or
((LOG_ARCHIVED-LOG_APPLIED) > 5))
then 'Error! Log Gap is '
else 'OK!'
end) Status
from (
SELECT INSTANCE_NAME DB_NAME
FROM GV$INSTANCE
where INST_ID = 2
),
(
SELECT MAX(SEQUENCE#) LOG_ARCHIVED
FROM V$ARCHIVED_LOG WHERE DEST_ID=1 AND ARCHIVED='YES' and THREAD#=2
),
(
SELECT MAX(SEQUENCE#) LOG_APPLIED
FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND APPLIED='YES' and THREAD#=2
),
(
SELECT TO_CHAR(MAX(COMPLETION_TIME),'DD-MON/HH24:MI') APPLIED_TIME
FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND APPLIED='YES' and THREAD#=2
)
/

select to_char(START_TIME,'DD-MON-YYYY HH24:MI:SS') "Recovery Start Time",to_char(item)||' = '||to_char(sofar)||' '||to_char(units) "Progress"
from v$recovery_progress where start_time=(select max(start_time) from v$recovery_progress);

--Cancel MRP(media recovery) process in standby:

alter database recover managed standby database cancel;

--Start MRP(media recovery):

alter database recover managed standby database disconnect from session;

-- For real time media recovery

alter database recover managed standby database using current logfile disconnect from session;

-- Find the lob segment details

select table_name,COLUMN_NAME,SEGMENT_NAME,TABLESPACE_NAME from dba_lobs where OWNER='DBACLASS';

-- Move to new tablespace

alter table DBACLASS.LOB_SEG1 move lob (PAYLOAD) store as SYS_LOB0000100201C00011$$ ( tablespace USERS);

 

set pagesize 200
set lines 200
set long 999
col owner for a15
col table_name for a20
col column_name for a21
select a.owner,a.table_name,a.column_name, data_type
from dba_lobs a, dba_tab_columns b
where a.column_name=b.column_name
and a.table_name = b.table_name
and a.owner = b.owner
and b.owner not in ('SYS','SYSTEM','DBSNMP','WMSYS');

-- Add column

alter table scott.emp add( empname varchar2(20));
alter table scott.emp add( empid number,deptid number);

-- Drop column

alter table scott.emp drop (empname);
alter table scott.emp drop (empid,deptid);

-- Rename column

alter table scott.emp rename column empname to asocname;

-- set column unused

alter table scott.emp set unused (empname);

-- Drop unused columns from a table

alter table scott.emp drop unused columns;

SPACE USED BY LOB COLUMN:

SELECT s.bytes FROM dba_segments s JOIN dba_lobs l USING (owner, segment_name)
WHERE l.table_name = '&table_name';

ACTUAL SPACE USED BY LOB:

SELECT nvl((sum(dbms_lob.getlength( &lob_column ))),0) AS bytes FROM &table_name;

-- First, analyze the table as below:

ANALYZE TABLE SCOTT.EMPTABLE LIST CHAINED ROWS;

-- Then check the row_count in chained_row table

select count(*) from chained_rows where table_name='EMPTABLE';

The output of this query returns the number of chained rows in that table.
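
Note - ANALYZE ... LIST CHAINED ROWS needs the CHAINED_ROWS table; if it does not already exist, it can be created with the script shipped with the database (assuming the default $ORACLE_HOME layout):

SQL> @?/rdbms/admin/utlchain.sql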

set lines 132 pages 1000
col object_name format a30 heading "Object Name";
col object_type format a10 heading "Object|Type";
col created format a30 heading "Created";
col status format a30 heading "Status";
select OWNER,object_name,object_type,created,status from dba_objects
where (object_name = lower(object_name) or
object_name = initcap(lower(object_name)))
and object_name != upper(object_name);

--- Script to find nested tables of a schema:

set pagesize 200
set lines 200
set long 999
col owner for a18
col table_name for a20
col table_type_owner for a20
col table_type_name for a20
col parent_table_name for a20
col parent_table_column for a20
SELECT owner, table_name, table_type_owner, table_type_name,
parent_table_name, parent_table_column,
LTRIM (storage_spec) storage_spec, LTRIM (return_type) return_type
FROM dba_nested_tables
WHERE owner='&SCHEMA_NAME'
And upper(table_name) like '&&TABLE_NAME'
ORDER BY owner;

-- Create public database link

Create public database link LINK_PUB connect to system identified by oracle using 'PRODB';

where PRODB -> tnsname of the target db added in tnsnames.ora

-- Create private database link under Scott

connect scott/tiger

create database link LINK_PRIV connect to system identified by oracle using 'PRODB';

-- Drop public database link

drop public database link TEST_LINK ;

-- Drop private database link

connect scott/tiger

drop database link LINK_PRIV;

NOTE - Private database link can be dropped only by the owner of the database link
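
-- A quick way to verify a link is working (using the public link created above):

select sysdate from dual@LINK_PUB;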

--- Find index names and their sizes for a table

SELECT idx.index_name, SUM(seg.bytes)/1024/1024/1024 size_gb
FROM dba_segments seg,
dba_indexes idx
WHERE idx.table_name='&TABLE_NAME'
AND idx.owner = seg.owner
AND idx.index_name = seg.segment_name
GROUP BY idx.index_name ORDER BY 1;

-- Find total_index_size of respective tables in a schema

SELECT idx.table_name, SUM(bytes/1024/1024/1024)
FROM dba_segments seg,
dba_indexes idx
WHERE idx.table_owner = 'SIEBEL'
AND idx.owner = seg.owner
AND idx.index_name = seg.segment_name
GROUP BY idx.table_name order by 1;

-- Create public synonym

CREATE PUBLIC SYNONYM emp_view FOR scott.emp;

-- Create private synonym

CREATE SYNONYM priv_view FOR scott.emp;

-- Drop synonym

DROP PUBLIC SYNONYM emp_view;
DROP SYNONYM priv_view;

-- View synonym related info

SELECT * FROM DBA_SYNONYMS;

set lines 150
set pages 500
col table_name for a20
col column_name for a20
select a.object_name table_name, c.column_name,equality_preds, equijoin_preds, range_preds, like_preds
from dba_objects a, col_usage$ b, dba_tab_columns c
where a.object_id=b.OBJ#
and c.COLUMN_ID=b.INTCOL#
and a.object_name=c.table_name
and b.obj#=a.object_id
and a.object_name='&table_name'
and a.object_type='TABLE'
and a.owner='&owner'
order by 3 desc,4 desc, 5 desc;

--Below script gives the space required for an index, before it is actually created.
--- Let's check for: create index DBACLASS.INDEX1 on DBACLASS.EMP(EMPNO)

SET SERVEROUTPUT ON
DECLARE
v_used_bytes NUMBER(10);
v_Allocated_Bytes NUMBER(10);
BEGIN

DBMS_SPACE.CREATE_INDEX_COST
(
'create index DBACLASS.INDEX1 on DBACLASS.EMP(EMPNO)',
v_used_Bytes,
v_Allocated_Bytes
);
DBMS_OUTPUT.PUT_LINE('Used Bytes MB: ' || round(v_used_Bytes/1024/1024));
DBMS_OUTPUT.PUT_LINE('Allocated Bytes MB: ' || round(v_Allocated_Bytes/1024/1024));
END;
/

-- To compile all objects in database

@$ORACLE_HOME/rdbms/admin/utlrp.sql

-- Compile objects of a particular schema:

EXEC DBMS_UTILITY.compile_schema(schema => 'APPS');

-- Compiling a package;

ALTER PACKAGE APPS.DAIL_REPORT COMPILE;

ALTER PACKAGE APPS.DAIL_REPORT COMPILE BODY;

-- Compiling a procedure:

ALTER PROCEDURE APPS.REPORT_PROC COMPILE;

-- Compiling a view:

ALTER VIEW APPS.USERSTATUS_VW COMPILE;

-- Compiling a function:

ALTER FUNCTION APPS.SYNC_FUN COMPILE;

select * from dba_dependencies where owner='&SCHEMA_NAME' and name='&OBJECT_NAME';
select * from dba_dependencies where referenced_owner = 'USER_NAME' and referenced_name = 'OBJECT_NAME';

-- Index rebuild online

alter index TEST_INDX rebuild online ;

--- Fast Index rebuild

alter index TEST_INDX rebuild online parallel 8 nologging;
alter index TEST_INDX noparallel;
alter index TEST_INDX logging;

SQL> select VISIBILITY from dba_indexes where index_name='IDX_SESS' and owner='DBACLASS';
VISIBILIT
---------
VISIBLE

SQL> ALTER INDEX DBACLASS.IDX_SESS INVISIBLE;
Index altered.

SQL> select VISIBILITY from dba_indexes where index_name='IDX_SESS' and owner='DBACLASS';

VISIBILIT
---------
INVISIBLE

-- Revert again to visible.

SQL> ALTER INDEX DBACLASS.IDX_SESS VISIBLE;
Index altered.

-- Auditing is disabled, when audit_trail is set to NONE,

SQL> show parameter audit_trail
NAME               TYPE               VALUE
------------------------------------ ----------- --------------------------
audit_trail       string                NONE

-- Either set audit_trail to DB or DB,EXTENDED.

alter system set audit_trail=DB scope=spfile;
(or)
alter system set audit_trail=DB,EXTENDED scope=spfile;

-- Restart the database.

shutdown immediate;
startup;

SQL> show parameter audit_trail
NAME                 TYPE                   VALUE
------------------------------------ ----------- --------------------------
audit_trail           string               DB

col user_name for a12 heading "User name"
col audit_option format a30 heading "Audit Option"
set pages 1000
prompt
prompt System auditing options across the system and by user
select user_name,audit_option,success,failure from sys.dba_stmt_audit_opts
order by user_name, proxy_name, audit_option
/

col user_name for a12 heading "User name"
col privilege for a30 heading "Privilege"
set pages 1000
prompt
prompt System Privileges audited across system
select user_name,privilege,success,failure from dba_priv_audit_opts
order by user_name, proxy_name, privilege
/

-- Audit trail records for a given user over the last &HOURS hours

col user_name for a12 heading "User name"
col timest format a13
col userid format a8 trunc
col obn format a10 trunc
col name format a13 trunc
col object_name format a10
col object_type format a6
col priv_used format a15 trunc
set verify off
set pages 1000
SET PAGESIZE 200
SET LINES 299
select username userid, to_char(timestamp,'dd-mon hh24:mi') timest ,
action_name acname, priv_used, obj_name obn, ses_actions
from sys.dba_audit_trail
where timestamp>sysdate-&HOURS*(1/24) and username='&USER_NAME'
order by timestamp
/

-- Enable auditing of SYS operations

SQL> ALTER SYSTEM SET audit_sys_operations=true SCOPE=spfile;

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> show parameter audit_sys_operations

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_sys_operations                 boolean     TRUE

-- FALSE means unified auditing is not enabled (i.e. mixed mode auditing is in use).

SELECT value FROM v$option WHERE parameter = 'Unified Auditing';
VALUE
-----------------
FALSE

-- Relink the oracle binary to enable unified auditing

shutdown immediate;

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk uniaud_on ioracle

startup

SELECT value FROM v$option WHERE parameter = 'Unified Auditing';

VALUE
-----------------
TRUE

-- Audit policies present in db:

select distinct POLICY_NAME from AUDIT_UNIFIED_POLICIES;

-- Enabled audit policies in db:

select distinct policy_name from AUDIT_UNIFIED_ENABLED_POLICIES;

-- Get the audit options included in a policy

select AUDIT_OPTION from AUDIT_UNIFIED_POLICIES where POLICY_NAME='ORA_SECURECONFIG';

-- Unified audit report for the last 1 hour:
set lines 299
col SQL_TEXT for a23
col action_name for a18
col UNIFIED_AUDIT_POLICIES for a23
select action_name,SQL_TEXT,UNIFIED_AUDIT_POLICIES ,EVENT_TIMESTAMP from unified_AUDIT_trail
where EVENT_TIMESTAMP > sysdate -1/24;

-- Create audit policy with audit options:

create audit policy test_case2
ACTIONS CREATE TABLE,
INSERT ON bsstdba.EMP_TAB,
TRUNCATE TABLE,
select on bsstdba.PROD_TAB;

select POLICY_NAME,audit_option,AUDIT_CONDITION,OBJECT_SCHEMA,OBJECT_NAME FROM
AUDIT_UNIFIED_POLICIES where POLICY_NAME='TEST_CASE2';

-- Enable policy:

audit policy TEST_CASE2;

select distinct policy_name from AUDIT_UNIFIED_ENABLED_POLICIES where policy_name='TEST_CASE2';

-- Create policy

create audit policy expdp_aduit actions component=datapump export;

-- Enable policy

audit policy expdp_aduit;

-- View audit report:

select DBUSERNAME,DP_TEXT_PARAMETERS1 from UNIFIED_AUDIT_TRAIL where DP_TEXT_PARAMETERS1 is not null;

-- Moving aud$ table to new tablespace AUDIT_DATA

BEGIN
DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
audit_trail_location_value => 'AUDIT_DATA');
END;
/
-- Query to view new tablespace

select owner,segment_name,segment_type,tablespace_name,bytes/1024/1024 from
dba_segments where segment_name='AUD$';

-- Encryption wallet path and status:

SELECT * FROM gv$encryption_wallet;

-- Before encrypting/decrypting, make sure TDE is enabled in the database.
-- Encrypt a column

alter table SCOTT.EMP modify ( emp_name encrypt);

alter table SCOTT.EMP modify ( emp_name encrypt using 'AES256');

-- Decrypt a column:

alter table SCOTT.EMP modify ( emp_name decrypt);

NOTE - This activity can take time depending on the table size and may block other sessions.
It is better to schedule downtime before doing this activity.
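
-- As a quick check (a sketch; the SCOTT filter is only an example), DBA_ENCRYPTED_COLUMNS
-- shows which columns are currently encrypted and with which algorithm.

select owner, table_name, column_name, encryption_alg
from dba_encrypted_columns
where owner='SCOTT';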

-- List down redaction policy details
SQL> select * from redaction_policies;

-- List down redacted column details

SQL>select * from REDACTION_COLUMNS;

-- Find the policy details

SELECT OBJECT_OWNER,OBJECT_NAME,POLICY_NAME FROM REDACTION_POLICIES ;

-- Drop the policy

BEGIN
DBMS_REDACT.drop_policy (
object_schema => 'SCOTT',
object_name => 'CREDIT_CARD_TAB',
policy_name => 'CREDIT_HIDE_POLICY'
);
END;
/

-- Set the current listener to the one you want to trace

LSNRCTL> set cur LISTENER_TEST

-- Enable Trace:

LSNRCTL> set trc_level ADMIN

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_TEST)))
LISTENER_TEST parameter "trc_level" set to admin
The command completed successfully
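
-- A possible follow-up (a sketch in the same lsnrctl session): review the trace setting and turn tracing off once done.

LSNRCTL> show trc_level
LSNRCTL> set trc_level OFF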

-- Create public database link

Create public database link LINK_PUB connect to system identified by oracle using 'PRODB';

where PRODB -> the TNS alias of the target db, as defined in tnsnames.ora
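
-- A quick way to verify the link works (a simple sketch using the LINK_PUB link created above):

select * from dual@LINK_PUB;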

-- Create private database link under Scott

connect scott/tiger

create database link LINK_PRIV connect to system identified by oracle using 'PRODB';

-- Drop public database link

drop public database link TEST_LINK ;

-- Drop private database link

connect scott/tiger

drop database link LINK_PRIV;

NOTE - A private database link can be dropped only by the owner of the link.
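
-- A quick reference query to list the database links already present in the database:

select owner, db_link, username, host from dba_db_links;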

-- Create a public database link using a full connect descriptor (no tnsnames.ora entry needed)

create public database link IMFP connect to iwf identified by thr3iwf USING
'(DESCRIPTION=(ADDRESS_LIST=(
ADDRESS=(PROTOCOL=TCP)(HOST=testoracle.com)(PORT=1522)))
(CONNECT_DATA=(SERVICE_NAME=IMFP)))';

-- Modify the scan listener to use new port 1523:

srvctl modify scan_listener -p 1523

-- Restart scan_listener

$GRID_HOME/bin/srvctl stop scan_listener
$GRID_HOME/bin/srvctl start scan_listener

-- update remote_listener in database

Alter system set remote_listener='orcl-scan.stc.com.sa:1523' scope=both sid='*';
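
-- To confirm the scan listener now uses the new port (assuming you run this as the grid/oracle software owner):

$GRID_HOME/bin/srvctl config scan_listener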

-- Sample listener.ora entry for a non-default listener (LISTENER_DBACLASS)

LISTENER_DBACLASS =
        (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.20.211.236)(PORT = 1527))
           )
SID_LIST_LISTENER_DBACLASS =
        (SID_LIST =
        (SID_DESC =
         (ORACLE_HOME = /oracle/app/oracle/product/12.1.0/dbhome_1)
         (SID_NAME = DBACLASS)
     )
     )

lsnrctl start LISTENER_DBACLASS 

-- stop/start listener

lsnrctl stop LISTENER_DBACLASS
lsnrctl start LISTENER_DBACLASS

-- Reload listener

lsnrctl reload LISTENER_DBACLASS

-- Check status of listener
lsnrctl status LISTENER_DBACLASS

--- view listener version
lsnrctl version LISTENER_DBACLASS

-- View listener services

lsnrctl services LISTENER_DBACLASS

-- View listener service acl summary:

lsnrctl servacls LISTENER_DBACLASS

-- Create ACLS
exec DBMS_NETWORK_ACL_ADMIN.CREATE_ACL('scott_utl_mail.xml','Allow mail to be send','SCOTT', TRUE, 'connect');

-- Assign ACL to network

exec DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL('scott_utl_mail.xml','*',25);

-- grant privilege to user:

exec DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE('scott_utl_mail.xml','SCOTT', TRUE, 'connect');
exec DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE('scott_utl_mail.xml' ,'SCOTT', TRUE, 'resolve');

--Unassign network from ACL:

exec DBMS_NETWORK_ACL_ADMIN.UNASSIGN_ACL('scott_utl_mail.xml','*',25);

-- Remove privilege from a user:

exec DBMS_NETWORK_ACL_ADMIN.DELETE_PRIVILEGE('scott_utl_mail.xml','SCOTT', TRUE, 'connect');

-- Drop ACL:

exec DBMS_NETWORK_ACL_ADMIN.DROP_ACL ('scott_utl_mail.xml' );

--- Show all the registered services for the database.

col NETWORK_NAME for a25
set pagesize 299
set lines 299
select NAME,INST_ID,NETWORK_NAME,CREATION_DATE,GOAL,GLOBAL from GV$ACTIVE_SERVICES where name not like 'SYS$%';

-- Make sure a listener is already running on that port (i.e. 1524 here)

alter system set LOCAL_LISTENER='(ADDRESS = (PROTOCOL = TCP)(HOST = 162.20.217.15)(PORT = 1524))' scope=both;
alter system register;

select type, value from v$listener_network where TYPE='LOCAL LISTENER';

-- View network ACL assignments and privileges

set lines 200
COL ACL_OWNER FOR A12
COL ACL FOR A67
COL HOST FOR A34
col PRINCIPAL for a20
col PRIVILEGE for a13
select ACL_OWNER,ACL,HOST,LOWER_PORT,UPPER_PORT FROM dba_network_acls;
select ACL_OWNER,ACL,PRINCIPAL,PRIVILEGE from dba_network_acl_privileges;

----stop/start oms in oem 12c/13c.
cd $ORACLE_HOME/bin

emctl stop oms
emctl start oms

-- status of oms

emctl status oms

--- stop/start agent in oem 12c/13c

cd $AGENT_HOME/bin

./emctl start agent
./emctl stop agent

---- status of agent

./emctl status agent

--- View OEM repository details (the repository database stores all target information)

cd $OMS_HOME/bin
./emctl config oms -list_repos_details

-- OMS URL Details

cd $OMS_HOME/bin
./emctl status oms -details

-- agent url details

cd $AGENT_HOME/bin

./emctl status agent -details

-- Run from oms server($OMS_HOME/bin)

-- List all the targets

./emcli get_targets

-- List the target types present:

./emcli get_target_types

-- List targets of particular target_type(say oracle_database)

./emcli get_targets -targets="oracle_database"

-- Run from OMS server

-- List of plugins installed on OMS server.

./emcli list_plugins_on_server

-- List of plugins installed on the target agents.

./emcli list_plugins_on_agent

-- List plugins deployed on particular agent

./emcli list_plugins_on_agent -agent_names="172.15.36.93"

-- Syntax to update sysman password in oms repository

./emctl config oms -change_repos_pwd -use_sys_pwd -sys_pwd <sys password> -new_pwd <new sysman password>

-- Example (need only existing sys password)

./emctl config oms -change_repos_pwd -use_sys_pwd -sys_pwd oracle1234 -new_pwd oracle1234

-- Restart oms

./emctl stop oms
./emctl start oms

-- Check whether EM Express is enabled (an output of 0 means EM Express is disabled)

select dbms_xdb.getHttpPort() from dual;
select dbms_xdb_config.getHttpsPort() from dual;

-- Enable emexpress with https:

SQL> exec dbms_xdb_config.sethttpsport(5500);

-- Enable emexpress with http:

SQL> exec dbms_xdb_config.sethttpport(8080);

-- Disable em express (set port to 0)

SQL> exec dbms_xdb_config.sethttpsport(0);

SQL> exec dbms_xdb_config.sethttpport(0);

-- Create the directory if not present
create directory EXPDIR as '/export/home/oracle/ORADUMP';

-- Below is the parfile for full db export

cat compressed.par

dumpfile=schema.dmp
logfile=tables.log
directory=EXPDIR
FULL=Y
compression=ALL

-- Run expdp command
expdp parfile=compressed.par

-- Create the directory if not present
create directory EXPDIR as '/export/home/oracle/ORADUMP';

-- Par file for export with parallel degree 4

cat parallel.par

dumpfile=parallel_%U.dmp
logfile=tables.log
directory=EXPDIR
schemas=PROD_DATA
parallel=4

NOTE - Set the parallel value according to the number of CPU cores available.
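
-- A rough guide for picking the parallel value (a simple check of the CPU count reported by the instance):

show parameter cpu_count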

-- Run expdp command

expdp parfile=parallel.par

The command is the same for impdp.

-- Create the directory if not present

create directory EXPDIR as '/export/home/oracle/ORADUMP';

-- Par file for export of the schemas PROD_DATA and DEV_DATA

cat schema.par

dumpfile=schema.dmp
logfile=tables.log
directory=EXPDIR
schemas=PROD_DATA,
DEV_DATA

-- Run expdp

expdp parfile=schema.par

For impdp, use a similar command.

-- Create the directory if not present

create directory EXPDIR as '/export/home/oracle/ORADUMP';

--- Par file for export of multiple tables

cat tables.par

dumpfile=tables.dmp
logfile=tables.log
directory=EXPDIR
tables=PROD_DATA.EMPLOYEE,
PROD_DATA.DEPT,
DEV_DATA.STAGING

-- Run expdp command

expdp parfile=tables.par

--- For exporting table data with query condition

----select * from DBACLASS.EMP_TAB WHERE created > sysdate -40;

-- Parfile

cat expdp_query.par

dumpfile=test.dmp
logfile=test1.log
directory=TEST
tables=dbaclass.EMP_TAB
QUERY=dbaclass.EMP_TAB:"WHERE created > sysdate -40"

SQLFILE option in IMPDP:

SQLFILE can be used only with impdp and helps in generating the DDLs from a dumpfile.
Suppose we have a dump file of table DBACLASS.DEP_TAB. If you need the DDL of the table, then use the SQLFILE parameter with the impdp command as below.

PARFILE SAMPLE:

dumpfile=test.dmp
logfile=test1.log
directory=TEST
tables=DBACLASS.DEP_TAB
sqlfile=emp_tab.sql

NOTE - The DDL output will be written to the emp_tab.sql file.
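
-- Assuming the parfile above is saved as sqlfile.par (the file name is only an example), run:

impdp parfile=sqlfile.par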

TABLE_EXISTS_ACTION option in IMPDP:

TABLE_EXISTS_ACTION - Action to take if the imported object already exists.
Valid keywords are: APPEND, REPLACE, [SKIP] and TRUNCATE.

TABLE_EXISTS_ACTION=SKIP:

This is the default option with impdp, i.e. if the table already exists, it will skip that table.
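
For completeness, an explicit example mirroring the commands below (SKIP is the default, so the parameter is optional here):

impdp dumpfile=emp_tab.dmp logfile=emp_tab.log directory=VEN table_exists_action=SKIP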

TABLE_EXISTS_ACTION=APPEND:

While importing, if the table exists in the database, it will append the data on top of the existing data in the table.

impdp dumpfile=emp_tab.dmp logfile=emp_tab.log directory=VEN table_exists_action=APPEND

TABLE_EXISTS_ACTION=TRUNCATE:

While importing, if the table exists in the database, it will truncate the table and then load the data from the dump.

impdp dumpfile=emp_tab.dmp logfile=emp_tab.log directory=VEN table_exists_action=TRUNCATE

TABLE_EXISTS_ACTION=REPLACE:

While importing, if the table exists in the database, it will drop the table and recreate it from the dump.

impdp dumpfile=emp_tab.dmp logfile=emp_tab.log directory=VEN table_exists_action=REPLACE

EXCLUDE/INCLUDE option:

These two options can be used in both expdp and impdp to exclude or include particular objects or object types:

Export the DBACLASS schema, excluding the tables EMP_TAB and DEPT:

dumpfile=test.dmp
logfile=test1.log
directory=TEST
exclude=TABLE:"IN ('EMP_TAB','DEPT')"
schemas=DBACLASS
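
Assuming the parfile above is saved as exclude_tab.par (the file name is only an example), run:

expdp parfile=exclude_tab.par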

Exclude a few schemas during import:

dumpfile=test.dmp
logfile=test1.log
directory=TEST
EXCLUDE=SCHEMA:"IN ('WMSYS', 'OUTLN')"

Export/import only TABLE and INDEX object types:

dumpfile=FULL.dmp
logfile=full.log
directory=DBATEST
INCLUDE=TABLE,INDEX

Suppose you wish to take an expdp backup of a big table,
but you don't have sufficient space in a single mount point to keep the dump.
In this case, we can write the expdp dump to multiple directories.

Create directories pointing to different paths:

SQL> create directory DIR1 as '/home/oracle/DIR1';

Directory created.

SQL> create directory DIR2 as '/home/oracle/DIR2';

Directory created.

parfile content

dumpfile=DIR1:test_%U.dmp,
DIR2:test_%U.dmp
logfile=test.log
directory=DIR1
parallel=2
tables=raj.test
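
Assuming the parfile above is saved as multi_dir.par (the file name is only an example), run:

expdp parfile=multi_dir.par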

Create a directory pointing to an ASM diskgroup (for dumpfiles)

SQL> create directory SOURCE_DUMP as '+NEWTST/TESTDB2/TEMPFILE';
Directory created

Create a directory pointing to a normal filesystem (required for logfiles)

SQL> create directory EXPLOG as '/export/home/oracle';
Directory created.

export parfile

dumpfile=test.dmp
logfile=EXPLOG:test.log
directory=SOURCE_DUMP
tables=dbatest.EMPTAB
exclude=statistics

In a RAC database, if you are taking an export with the parallel option and the
datapump directory is not shared between the nodes, then set CLUSTER=N in expdp/impdp.

parfile content:

dumpfile=asset_%U.dmp
logfile=asset.log
directory=VEN
parallel=32
cluster=N
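
Assuming this parfile is saved as asset_exp.par (the file name is only an example), run the export from one node:

expdp parfile=asset_exp.par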