ORACLE DBA Activity Checklist

December 28, 2016 | Author: Vimlendu Kumar

Purpose: This document details the daily, weekly, monthly, nightly, and one-time performance and maintenance tasks for checking the status of one or more Oracle databases. These proactive activities provide a complete health-check report of the databases.

Daily Activity
1. Check whether the Oracle database instance is running.
2. Check whether the database listener is running.
3. Check for any session blocking another session.
4. Check the alert log for errors.
5. Check whether any DBMS jobs are running and check their status.
6. Check the top sessions using the most physical I/O.
7. Check the number of log switches per hour.
8. Check how much redo is generated per hour (how_much_redo_generated_per_hour.sql).
9. Run the Statspack report.
10. Detect locked objects.
11. Check the SQL queries consuming a lot of resources.
12. Check the usage of the SGA.
13. Display database sessions using rollback segments.
14. Check the state of all DB block buffers.

Weekly Activity
1. Check for fragmented objects.
2. Check for chained and migrated rows.
3. Check the size of tables and whether they need to be partitioned.
4. Check for block corruption.
5. Check for tables without a primary key.
6. Check for tables having no indexes.
7. Check for tables having too many indexes.
8. Check for tables having a foreign key but no supporting index.
9. Check for objects having the most extents.
10. Check the frequently pinned objects, place them in a separate tablespace and in the keep cache, and check for objects reloaded into memory many times.
11. Check the free space at OS level.
12. Check CPU and memory usage at OS level and define thresholds for them.
13. Check used and free blocks at object level as well as at tablespace level.
14. Check for objects reaching their maximum extents.
15. Check free space in the tablespaces.
16. Check for invalid objects in the database.
17. Check that open cursors are not reaching the maximum limit.
18. Check that locks are not reaching the maximum limit.

19. Check the free quota available for each user.
20. Check the I/O of each data file.

Monthly Activity
1. Check the database size and compare it with the previous size to find the exact growth of the database.
2. Find the tablespace status, segment management, initial and max extents, and extent management.
3. Check the location of the data files and whether they are auto-extendable or not.
4. Check the default tablespace and temporary tablespace of each user.
5. Check for indexes that have not been used yet.
6. Check the extents of each object and compare whether any object's extent settings override those defined at the tablespace level.
7. Check whether tablespaces need coalescing.
8. Check the overall database statistics.
9. Perform trend analysis of objects with tablespace, last analyzed date, number of rows, growth in days, and growth in KB.

Nightly Activity
1. Analyze the objects routinely.
2. Check which indexes need to be rebuilt.
3. Check the tablespaces for the respective tables and indexes.

One Time Activity
1. Database user creation with the required privileges.
2. Make a portal of Oracle predefined errors with possible solutions.
3. Check the database startup time (if not 24x7).
4. Check the location of the control files.
5. Check the location of the log files.
6. Prepare the backup strategy and test all the recovery scenarios.

Daily Activity

1. Check whether the Oracle database instance is running
Run the script instance_running.sql to verify whether all Oracle databases are running. If any instance is not running, run the startup.sql script.

select name, open_mode from v$database;

2. Check whether the database listener is running
Run "lsnrctl status" at the terminal to verify whether the database listener is running. If it is not running, run "lsnrctl start" to start it. Follow the listener_troubleshooting document for more details.

LSNRCTL STATUS

3. Check for any session blocking another session
Run block_session.sql to check whether any session is blocking another session. If so, kill the blocking session, following the document "How to Kill Session".

select * from v$lock;
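As a quicker check than scanning v$lock directly, the blocker/waiter pairs can be read from v$session (Oracle 10g and later); a minimal sketch, assuming access to the dynamic performance views:

-- List each waiting session together with the session that is blocking it (10g+)
select blocking_session blocker_sid,
       sid              waiter_sid,
       serial#,
       event,
       seconds_in_wait
from   v$session
where  blocking_session is not null
order  by blocking_session;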

4. Check the alert log for errors
Verify that there are no errors in the alert log file. If any error is found, resolve it based on the error reported. (A query to locate the alert log directory is sketched after item 5.)

5. Check whether any DBMS jobs are running and check their status
Verify that all DBMS jobs ran successfully using the script check_dbms_jobs.sql. If any job fails, follow the dbms_job_troubleshooting document.

select * from dba_jobs;
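The alert log location depends on the Oracle version; a sketch for finding it, assuming 11g or later with the Automatic Diagnostic Repository (on older releases, check the background_dump_dest parameter instead):

-- Directory that contains alert_<SID>.log under the 11g+ ADR layout
select value from v$diag_info where name = 'Diag Trace';

-- Pre-11g: the alert log is written to background_dump_dest
show parameter background_dump_dest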

6. Check the top sessions using the most physical I/O
Run the script top_session_using_more_physical_IO.sql to identify the sessions using the most physical I/O.

select sid, username, round(100 * total_user_io / total_io, 2) tot_io_pct
from   (select b.sid sid,
               nvl(b.username, p.name) username,
               sum(value) total_user_io
        from   sys.v_$statname c, sys.v_$sesstat a, sys.v_$session b, sys.v_$bgprocess p
        where  a.statistic# = c.statistic#
        and    p.paddr (+) = b.paddr
        and    b.sid = a.sid
        and    c.name in ('physical reads', 'physical writes',
                          'physical writes direct', 'physical reads direct',
                          'physical writes direct (lob)', 'physical reads direct (lob)')
        group  by b.sid, nvl(b.username, p.name)),
       (select sum(value) total_io
        from   sys.v_$statname c, sys.v_$sesstat a
        where  a.statistic# = c.statistic#
        and    c.name in ('physical reads', 'physical writes',
                          'physical writes direct', 'physical reads direct',
                          'physical writes direct (lob)', 'physical reads direct (lob)'))
order  by 3 desc;

7. Check the number of log switches per hour
Run the script log_switch_per_hour.sql to check the log switches per hour. If there are more than 5 log switches in an hour, the redo log files need to be made larger. Follow the Troubleshooting_On_Redo_Log document to increase the size of the redo logs.

select substr(first_time,1,5) day,
       to_char(sum(decode(substr(first_time,10,2),'00',1,0)),'99') "00",
       to_char(sum(decode(substr(first_time,10,2),'01',1,0)),'99') "01",
       to_char(sum(decode(substr(first_time,10,2),'02',1,0)),'99') "02",
       to_char(sum(decode(substr(first_time,10,2),'03',1,0)),'99') "03",
       to_char(sum(decode(substr(first_time,10,2),'04',1,0)),'99') "04",
       to_char(sum(decode(substr(first_time,10,2),'05',1,0)),'99') "05",
       to_char(sum(decode(substr(first_time,10,2),'06',1,0)),'99') "06",
       to_char(sum(decode(substr(first_time,10,2),'07',1,0)),'99') "07",
       to_char(sum(decode(substr(first_time,10,2),'08',1,0)),'99') "08",
       to_char(sum(decode(substr(first_time,10,2),'09',1,0)),'99') "09",
       to_char(sum(decode(substr(first_time,10,2),'10',1,0)),'99') "10",
       to_char(sum(decode(substr(first_time,10,2),'11',1,0)),'99') "11",
       to_char(sum(decode(substr(first_time,10,2),'12',1,0)),'99') "12",
       to_char(sum(decode(substr(first_time,10,2),'13',1,0)),'99') "13",
       to_char(sum(decode(substr(first_time,10,2),'14',1,0)),'99') "14",
       to_char(sum(decode(substr(first_time,10,2),'15',1,0)),'99') "15",
       to_char(sum(decode(substr(first_time,10,2),'16',1,0)),'99') "16",
       to_char(sum(decode(substr(first_time,10,2),'17',1,0)),'99') "17",
       to_char(sum(decode(substr(first_time,10,2),'18',1,0)),'99') "18",
       to_char(sum(decode(substr(first_time,10,2),'19',1,0)),'99') "19",
       to_char(sum(decode(substr(first_time,10,2),'20',1,0)),'99') "20",
       to_char(sum(decode(substr(first_time,10,2),'21',1,0)),'99') "21",
       to_char(sum(decode(substr(first_time,10,2),'22',1,0)),'99') "22",
       to_char(sum(decode(substr(first_time,10,2),'23',1,0)),'99') "23"
from   v$log_history
group  by substr(first_time,1,5)
/

8. Check how much redo is generated per hour
Run the script how_much_redo_generated_per_hour.sql to find the amount of redo generated per hour.

SELECT start_date, start_time, num_logs,
       ROUND(num_logs * (vl.bytes / (1024 * 1024)), 2) AS mbytes,
       vdb.name AS dbname
FROM   (SELECT TO_CHAR(vlh.first_time, 'YYYY-MM-DD') AS start_date,
               TO_CHAR(vlh.first_time, 'HH24') || ':00' AS start_time,
               COUNT(vlh.thread#) num_logs
        FROM   v$log_history vlh
        GROUP  BY TO_CHAR(vlh.first_time, 'YYYY-MM-DD'),
                  TO_CHAR(vlh.first_time, 'HH24') || ':00') log_hist,
       v$log vl,
       v$database vdb
WHERE  vl.group# = 1
ORDER  BY log_hist.start_date, log_hist.start_time;

9. Run the Statspack report
Run Statspack twice a day to collect database statistics and compare them with the previous statistics. Follow the Statspack document to create the Statspack report (a minimal snapshot sketch is given after item 10).

10. Check the SQL queries consuming a lot of resources
Run the Statspack report or problematic_sql_query.sql to identify the queries consuming a lot of resources, then find the explain plan of each such query and compare it with the previous plan. Follow the SQL Tuning troubleshooting document.

break on user_name on disk_reads on buffer_gets on rows_processed
select a.user_name, b.disk_reads, b.buffer_gets, b.rows_processed, c.sql_text
from   v$open_cursor a, v$sqlarea b, v$sqltext c
where  a.user_name = upper('&&user')
and    a.address = c.address
and    a.address = b.address
order  by a.user_name, a.address, c.piece;
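For item 9, a minimal sketch of taking a Statspack snapshot and producing the report, assuming Statspack has already been installed under the PERFSTAT schema (via spcreate.sql):

-- Take a snapshot (run as PERFSTAT); schedule it, for example, twice a day
exec statspack.snap;

-- Generate a report between two snapshot IDs (prompts for begin/end snap)
@?/rdbms/admin/spreport.sql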

11. Check the usage of the SGA
Run the share_pool_used.sql script to find the usage of the shared pool. If the used space is more than 80%, increase the shared pool, following the document "How to increase the shared pool".

select 100 - round(a.bytes / b.sm * 100, 2) pctused
from   (select bytes from v$sgastat
        where  name = 'free memory' and pool = 'shared pool') a,
       (select sum(bytes) sm from v$sgastat
        where  pool = 'shared pool') b;
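In addition to the shared-pool check above, the overall SGA component sizes can be listed from v$sgainfo (10g and later); a minimal sketch:

-- Current size of each SGA component, largest first
select name, round(bytes / 1024 / 1024) size_mb, resizeable
from   v$sgainfo
order  by bytes desc;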

12. Detect locked objects
Run the script detect_lock_object.sql to find the locked objects.

select o.object_name, l.oracle_username, l.os_user_name, l.session_id,
       decode(l.locked_mode, 2, 'Row-S', 3, 'Row-X', 4, 'Share',
              5, 'S/RowX', 6, 'Exclusive', 'NULL') locked_mode
from   user_objects o, v$locked_object l
where  o.object_id = l.object_id;

13. Display database sessions using rollback segments
Run the script session_using_rollback_segment.sql to monitor the sessions using rollback segments.

col RBS     format a5 trunc
col SID     format 9990
col USER    format a10 trunc
col COMMAND format a78 trunc
col status  format a6 trunc

select r.name "RBS", s.sid, s.serial#, s.username "USER", t.status,
       t.cr_get, t.phy_io, t.used_ublk, t.noundo,
       substr(s.program, 1, 78) "COMMAND"
from   sys.v_$session s, sys.v_$transaction t, sys.v_$rollname r
where  t.addr = s.taddr
and    t.xidusn = r.usn
order  by t.cr_get, t.phy_io;

14. Check the state of all DB block buffers
Run the script state_of_all_the_DB_BLOCK_BUFFERS.sql to find the status of the DB block buffers. If few blocks are free or available, force a checkpoint. Note that the query reads sys.x$bh, so it must be run as SYS.

set serverout on size 1000000
set verify off
select decode(state, 0, 'Free',
                     1, decode(lrba_seq, 0, 'Available', 'Being Modified'),
                     2, 'Not Modified',
                     3, 'Being Read',
                        'Other') "BLOCK STATUS",
       count(*) cnt
from   sys.x$bh
group  by decode(state, 0, 'Free',
                        1, decode(lrba_seq, 0, 'Available', 'Being Modified'),
                        2, 'Not Modified',
                        3, 'Being Read',
                           'Other')
/
set verify on
spool off

Weekly Activity

1. Check for fragmented tables
Run the script check_table_fragmented_or_not.sql to check the maximum extents and fragmented tables. Follow the "How to identify whether a table is fragmented and resolve it" document to remove the fragmentation.

select table_name,
       round((blocks * 8), 2) tablesize,
       round((num_rows * avg_row_len / 1024), 2) actualsize
from   dba_tables
where  table_name = 'T';

2. Check for chained and migrated rows
Run Chain_Row.sql to find the chained and migrated rows in a table, and follow the remove_chaning_migrated_rows document to remove them.

select chain_cnt,
       round(chain_cnt / num_rows * 100, 2) pct_chained,
       avg_row_len, pct_free, pct_used
from   user_tables
where  table_name = 'ROW_MIG_CHAIN_DEMO';
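To locate the individual chained or migrated rows rather than just the count, ANALYZE ... LIST CHAINED ROWS can be used; a sketch, assuming the CHAINED_ROWS table has been created with ?/rdbms/admin/utlchain.sql and using the demo table from the query above:

-- Populate CHAINED_ROWS with the ROWIDs of chained/migrated rows
analyze table row_mig_chain_demo list chained rows into chained_rows;

-- Review how many rows are affected
select count(*) from chained_rows where table_name = 'ROW_MIG_CHAIN_DEMO';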

3. Check the size of tables and whether they need to be partitioned
Run the script table_size.sql to check the size of the tables, and follow the Partitioning in Oracle document to identify the tables that are candidates for partitioning.

select table_name,
       round((blocks * 8), 2) tablesize,
       round((num_rows * avg_row_len / 1024), 2) actualsize
from   dba_tables
where  table_name = 'T';

4. Check for block corruption
Use the DBV utility (follow the check_block_corruption document) to check for and rectify block corruption, or use the following queries.

-- Read from v$backup_corruption
select distinct 'Data Block# ' || block# || ' of Data File ' || name || ' is corrupted.'
from   v$backup_corruption a, v$datafile b
where  a.file# = b.file#;

-- Read from v$copy_corruption
select distinct 'Data Block# ' || block# || ' of Data File ' || name || ' is corrupted.'
from   v$copy_corruption a, v$datafile b
where  a.file# = b.file#;
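A minimal DBVERIFY invocation from the OS prompt, assuming an 8 KB database block size and a hypothetical datafile path:

dbv file=/u01/oradata/ORCL/users01.dbf blocksize=8192 logfile=dbv_users01.log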

5. Check for tables without a primary key
Run the script table_with_no_pk.sql to check which tables lack a primary key. If any are found, recommend creating one. Follow the Create_PK_Table document to create the primary key.

select sysdate, owner, table_name
from   dba_tables dt
where  not exists (select 'TRUE'
                   from   dba_constraints dc
                   where  dc.table_name = dt.table_name
                   and    dc.constraint_type = 'P')
and    owner not in ('SYS','SYSTEM')
order  by owner, table_name;

6. Check for tables having no indexes
Run the script table_without_index.sql to find tables without any index and recommend creating one where appropriate. Follow the create_index document to create the indexes.

select *
from   (select owner, table_name from dba_tables
        minus
        select table_owner, table_name from dba_indexes) orasnap_noindex
where  owner not in ('SYS','SYSTEM')
order  by owner, table_name;

7. Check for tables having too many indexes
Run the script table_more_than_5_index.sql to identify tables having more than five indexes, and analyze whether the indexes are created on the same leading column. Follow the have_more_index document to analyze them.

select sysdate, owner, table_name, count(*) index_count
from   dba_indexes
where  owner not in ('SYS','SYSTEM')
group  by owner, table_name
having count(*) > 5
order  by count(*) desc, owner, table_name;

8. Check for tables having a foreign key but no supporting index
Run FK_CONST_without_index_child_table.sql to find tables that have a foreign key but no index on the FK columns; an unindexed foreign key can cause locking problems between the parent and child tables. Follow the create_index document to create the missing indexes.

select sysdate, acc.owner, acc.constraint_name, acc.column_name, acc.position,
       'No Index' problem
from   dba_cons_columns acc, dba_constraints ac
where  ac.constraint_name = acc.constraint_name
and    ac.constraint_type = 'R'
and    acc.owner not in ('SYS','SYSTEM')
and    not exists (select 'TRUE'
                   from   dba_ind_columns b
                   where  b.table_owner = acc.owner
                   and    b.table_name = acc.table_name
                   and    b.column_name = acc.column_name
                   and    b.column_position = acc.position)
order  by acc.owner, acc.constraint_name, acc.column_name, acc.position;

9. Check for objects having the most extents
Run the script max_extent_table.sql to find the segments with the most allocated extents. Follow the Max_extent document to analyze whether this will affect performance.

select sysdate, segment_name table_name, count(*) extents
from   dba_segments
where  owner not in ('SYS', 'SYSTEM')
group  by segment_name
having count(*) = (select max(count(*))
                   from   dba_segments
                   group  by segment_name);

10. Check the frequently loaded objects and place them in a separate tablespace and in the cache
Run the script frequent_load_object.sql to find the objects that need to be pinned most of the time, so that they can be placed in a separate tablespace as well as in the keep cache. Follow the how_move_objects_tablespace and how_to_put_object_cache documents.

select owner, name || ' - ' || type object, loads
from   v$db_object_cache
where  loads > 3
and    type in ('PACKAGE','PACKAGE BODY','FUNCTION','PROCEDURE')
order  by loads desc;
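Pinning a frequently loaded object in the shared pool can be done with DBMS_SHARED_POOL.KEEP; a sketch, assuming the package has been installed via ?/rdbms/admin/dbmspool.sql and using a hypothetical package name:

-- Pin a package in the shared pool ('P' covers packages, procedures and functions)
exec dbms_shared_pool.keep('SCOTT.MY_PACKAGE', 'P');

-- Confirm which objects are currently kept in the library cache
select owner, name, type, kept from v$db_object_cache where kept = 'YES';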

11. Check the free space at OS level
Follow the Stats_OS_level document to check the free space at OS level and compare it with the threshold limit.

12. Check the CPU and memory usage at OS level and define thresholds for them
Follow the Stats_OS_level document to check the statistics at OS level and compare them with the threshold limits.

13. Check the used and free blocks (high water mark) at object level

select blocks from dba_segments
where  owner = upper('&owner') and segment_name = upper('&table');

analyze table &owner..&table estimate statistics;

select empty_blocks from dba_tables
where  owner = upper('&owner') and table_name = upper('&table');

Thus, the table's HWM = (query result 1) - (query result 2) - 1.

14. Check for objects reaching their maximum extents
Run the script object_reach_max_extents.sql to identify the objects reaching their maximum extents, then compare against the threshold limit. Follow the Max_Extent document to troubleshoot.

select sysdate, owner "Owner", segment_name "Segment Name", segment_type "Type",
       tablespace_name "Tablespace", extents "Ext", max_extents "Max"
from   dba_segments
where  (max_extents - extents) <= &extent_threshold;  -- choose a suitable remaining-extent margin

Nightly Activity

1. Analyze the objects routinely

exec dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'TEST', cascade => TRUE);
exec dbms_stats.gather_schema_stats(ownname => 'SCOTT', cascade => TRUE);

2. Check which indexes need to be rebuilt
Run the script Index_rebuild_need.sql to find the indexes that need to be rebuilt, then run the script index_rebuild.sql to rebuild them.

select name, height, del_lf_rows, distinct_keys, rows_per_key, blks_gets_per_access
from   index_stats;
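INDEX_STATS is populated for the current session only after validating an index, so the query above must be preceded by an ANALYZE; a sketch, assuming a hypothetical index SCOTT.EMP_NAME_IDX:

-- Populate INDEX_STATS for this index (one index at a time, current session only)
analyze index scott.emp_name_idx validate structure;

-- Common rule of thumb: consider a rebuild when HEIGHT > 4 or deleted leaf rows exceed roughly 20%
select name, height, lf_rows, del_lf_rows,
       round(del_lf_rows / greatest(lf_rows, 1) * 100, 2) pct_deleted
from   index_stats;

-- Rebuild if required (ONLINE keeps the table available for DML during the rebuild)
alter index scott.emp_name_idx rebuild online;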

3. Check the tablespaces for the respective tables and indexes
Run the script check_table_index_seprate_tablespace.sql to verify that the indexes and tables are in separate tablespaces. Follow the move_index_seprate_tablespace document to place the indexes in a separate tablespace. Do this activity at night.

select owner, segment_name, segment_type, tablespace_name
from   dba_segments
where  owner = 'OWNER_NAME'
order  by segment_type;
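Moving an index into its own tablespace is normally done with a rebuild; a minimal sketch, assuming hypothetical object and tablespace names:

-- Relocate the index into a dedicated index tablespace
alter index scott.emp_name_idx rebuild tablespace indx_ts online;

-- If a table itself is moved, its indexes become UNUSABLE and must be rebuilt afterwards
alter table scott.emp move tablespace data_ts;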

4. Check the number of DML operations performed since the last analysis

select inserts, updates, deletes, table_owner, table_name
from   sys.dba_tab_modifications
where  table_name = 'TABLE_NAME';
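DBA_TAB_MODIFICATIONS is refreshed only periodically, so for up-to-date figures the in-memory DML monitoring data can be flushed first; a sketch, assuming a hypothetical schema name:

-- Flush the in-memory DML monitoring counters into DBA_TAB_MODIFICATIONS
exec dbms_stats.flush_database_monitoring_info;

select table_owner, table_name, inserts, updates, deletes, timestamp
from   sys.dba_tab_modifications
where  table_owner = 'SCOTT';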

5. Check the date of the last analysis and the number of records in the table

select table_name, num_rows, last_analyzed
from   dba_tables
where  table_name = 'TABLE_NAME';

6. How to determine whether a table needs to be analyzed
===============================================================
Note: Compare the number of rows with the number of inserted, deleted, and updated records. If the DML count is more than 10% of the number of rows, the table has stale statistics and needs to be analyzed again. A query implementing this rule is sketched below.
===============================================================
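A minimal sketch of the 10% rule above, joining the DML counters to the last-gathered row counts (owner filters can be added as needed):

-- Tables whose DML since the last analysis exceeds 10% of the analyzed row count
select m.table_owner, m.table_name, t.num_rows, t.last_analyzed,
       m.inserts + m.updates + m.deletes dml_count,
       round((m.inserts + m.updates + m.deletes) / greatest(t.num_rows, 1) * 100, 2) pct_changed
from   sys.dba_tab_modifications m, dba_tables t
where  m.table_owner = t.owner
and    m.table_name  = t.table_name
and    (m.inserts + m.updates + m.deletes) > t.num_rows * 0.10
order  by pct_changed desc;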

Find fragmentation in the tables

SELECT *
FROM   (SELECT SUBSTR(table_name, 1, 21) table_name,
               num_rows,
               avg_row_len rowlen,
               blocks,
               ROUND((avg_row_len + 1) * num_rows / 1000000, 0) net_mb,
               ROUND(blocks * (8000 - 23 * ini_trans) * (1 - pct_free / 100) / 1000000, 0) gross_mb,
               ROUND((blocks * (8000 - 23 * ini_trans) * (1 - pct_free / 100)
                      - (avg_row_len + 1) * num_rows) / 1000000) "WASTED_MB"
        FROM   dba_tables
        WHERE  num_rows IS NOT NULL
        AND    owner LIKE 'SAP%'
        AND    partitioned = 'NO'
        AND    (iot_type != 'IOT' OR iot_type IS NULL)
        ORDER  BY 7 DESC)
WHERE  ROWNUM <= 10;  -- top 10; adjust as required

Check blocking locks

select l1.sid || ' is blocking ' || l2.sid blocking_info
from   v$lock l1, v$lock l2
where  l1.block = 1
and    l2.request > 0
and    l1.id1 = l2.id1
and    l1.id2 = l2.id2;

Check tablespaces of tables & indexes
==========================================================
select count(segment_type) from dba_segments
where  segment_type = 'INDEX' and tablespace_name = 'PSAPHCM';

select count(segment_type) from dba_segments
where  segment_type = 'TABLE' and tablespace_name = 'PSAPHCM';
==========================================================

UNDO tablespace requirement

set linesize 120
set pagesize 60
alter session set nls_date_format = "dd-Mon-yyyy hh24:mi:ss";
COL txncount         FOR 99,999,999 HEAD 'Txn. Cnt.'
COL maxquerylen      FOR 99,999,999 HEAD 'Max|Query|Sec'
COL maxconcurrency   FOR 9,999      HEAD 'Max|Concr|Txn'
COL bks_per_sec      FOR 99,999,999 HEAD 'Blks per|Second'
COL kb_per_second    FOR 99,999,999 HEAD 'KB per|Second'
COL undo_mb_required FOR 999,999    HEAD 'MB undo|Needed'
COL ssolderrcnt      FOR 9,999      HEAD 'ORA-01555|Count'
COL nospaceerrcnt    FOR 9,999      HEAD 'No Space|Count'
break on report
compute max of txncount maxquerylen maxconcurrency bks_per_sec kb_per_second undo_mb_required on report
compute sum of ssolderrcnt nospaceerrcnt on report

SELECT begin_time,
       txncount - lag(txncount) over (order by end_time) as txncount,
       maxquerylen,
       maxconcurrency,
       undoblks / ((end_time - begin_time) * 86400) as bks_per_sec,
       (undoblks / ((end_time - begin_time) * 86400)) * t.block_size / 1024 as kb_per_second,
       ((undoblks / ((end_time - begin_time) * 86400)) * t.block_size / 1024)
         * TO_NUMBER(p2.value) / 1024 as undo_mb_required,
       ssolderrcnt,
       nospaceerrcnt
FROM   v$undostat s, dba_tablespaces t, v$parameter p, v$parameter p2
WHERE  t.tablespace_name = UPPER(p.value)
AND    p.name  = 'undo_tablespace'
AND    p2.name = 'undo_retention'
ORDER  BY begin_time;

Undo extent statuses:
ACTIVE    - the undo extent is active, i.e. in use by a transaction.
EXPIRED   - the undo extent has expired (it exceeded the undo retention) and is free.
UNEXPIRED - the undo extent may still be required to honour UNDO_RETENTION.

Undo extents are shrunk by SMON every 12 hours, or earlier if more space is required.

SELECT DISTINCT status, SUM(bytes), COUNT(*)
FROM   dba_undo_extents
GROUP  BY status;

Check the size of each data file

SELECT SUBSTR(df.name, 1, 40) file_name,
       df.bytes / 1024 / 1024 allocated_mb,
       ((df.bytes / 1024 / 1024) - NVL(SUM(dfs.bytes) / 1024 / 1024, 0)) used_mb,
       NVL(SUM(dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM   v$datafile df, dba_free_space dfs
WHERE  df.file# = dfs.file_id(+)
GROUP  BY dfs.file_id, df.name, df.file#, df.bytes
ORDER  BY file_name;

Check the tablespace usage, assuming the autoextend parameter is always on

select tablespace_name,
       sum(user_bytes) / 1024 / 1024 "USED(MB)",
       sum(maxbytes) / 1024 / 1024 "MAXBYTES(MB)",
       sum(maxbytes - user_bytes) / 1024 / 1024 "FREESPACE(MB)",
       sum((user_bytes * 100) / 1024 / 1024) / (sum(maxbytes) / 1024 / 1024) "USED%"
from   dba_data_files
group  by tablespace_name;

Check the tablespace usage, considering whether autoextend is on or off

SELECT bb.tablespace_name, bb.used, bb.maxbytes, bb.freespace, bb.usedp
FROM   (select tablespace_name,
               ROUND(sum(user_bytes) / 1024 / 1024, 2) used,
               ROUND(sum(maxbytes) / 1024 / 1024, 2) maxbytes,
               ROUND(sum(maxbytes - user_bytes) / 1024 / 1024, 2) freespace,
               ROUND(sum((user_bytes * 100) / 1024 / 1024) / (sum(maxbytes) / 1024 / 1024), 2) usedp
        from   dba_data_files d
        where  d.autoextensible = 'YES'
        group  by tablespace_name) bb
WHERE  bb.usedp > 85
UNION
select aa.tb, aa.used_space, aa.total_space, aa.free_space,
       (used_space * 100 / total_space) perc
from   (select fs.tablespace_name tb,
               sum(fs.bytes / 1024 / 1024) free_space,
               sum(df.bytes / 1024 / 1024) total_space,
               sum(df.bytes / 1024 / 1024) - sum(fs.bytes / 1024 / 1024) used_space
        from   dba_free_space fs, dba_data_files df
        where  fs.tablespace_name = df.tablespace_name
        and    df.autoextensible = 'NO'
        group  by fs.tablespace_name) aa
where  (used_space * 100 / total_space) > 85;
