Thursday, January 17, 2013

Oracle Interview Questions - Part I

Skillbuilders

Keyword/Topics for an Oracle Interview :  
  • Oracle 9i, 10g, 11g Architecture (Oracle Internals - Part of SGA, PGA)
    • General :
    • 9i : 
    • 10g :
    • 11g : 
    • 12c
    • RAC :
    • Summary
      • Oracle Server = Oracle Instance + Oracle Database
      • Oracle Instance = Background Processes and Memory Structures
      • Oracle Database = Set of Files
      • Oracle Memory Structure
        • SGA (System Global Area) - Part of Oracle Memory shared by all processes in an instance
          • Fixed SGA
          • Variable SGA
            • DB Buffer Cache = db_block_buffers
            • Shared Pool = shared_pool_size
              • Library Cache, comprising the Shared SQL Area (SQL text and execution plans) + UGA (in shared server configurations)
              • Reserved Pool
              • DD Cache
              • Result Cache (11g)
              • Enqueues and Latches
            • Redo Log Buffer = log_buffer
            • Java Pool
            • Large Pool (optional) = large_pool_size
            • KEEP Pool
            • Recycle Buffer Pool
            • Streams Pool (10g)
            • Other structures like lock and latch management
          • Links
        • PGA - Program Global Area - Part of Oracle Memory private to each process.
          • Session Memory
            • Private SQL Area
              • Bind Variables
              • Query Execution State info
            • SQL Work Areas
              • Sorting Data
              • Hash Tables for hash join
      • Processes
        • Server Processes
        • Background Processes
          • Server (Snnn)
          • Dispatcher (Dnnn)
          • Queue Monitor (QMNn)
          • System Monitor (SMON)
          • Process Monitor (PMON)
          • Database Writer (DBW0 or DBWn)
          • Log Writer (LGWR)
          • Checkpoint (CKPT)
          • Recoverer (RECO)
          • Lock Manager Server (LMS) – Real Application Clusters only
          • Archiver (ARCn)
      • Files
        • Data files
        • Undo files
        • Redo files
        • Control files
        • Parameter file
        • Network .ora files (tnsnames.ora, listener.ora, sqlnet.ora)
        • Alert log
        • Trace files
        • Archive files
        • Backup dump files 
    • SMON - System Monitor performs instance recovery at startup and cleans up unused temporary segments.
    • PMON - Process Monitor releases the resources held by a failed process and performs process cleanup/recovery.
    • SGA - System Global Area is the part of Oracle memory that stores data and control information common to all Oracle user processes.
    • PGA - Program Global Area is the part of Oracle memory that stores data and control information for a single Oracle user process.
    • UGA - User Global Area
    • PFILE - Oracle parameter file; specifies the paths of the control files, how memory is allocated, and other characteristics of the Oracle Instance.
    • Redo Log File - records changes made to the database.
    • CKPT - at a checkpoint, signals DBWn and updates the headers of all control files and data files with the checkpoint information.
    • A checkpoint occurs when DBWR writes all modified buffers in the SGA buffer cache to the datafiles.
    • A Tablespace in Oracle comprises one or more data files.
    • PCTFREE - a parameter which specifies how much space should be left free in a block for future updates.
    • DBWn - Database Writer writes the modified blocks from the database buffer cache in the SGA to the data files.
    • LGWR - Log Writer writes the contents of the redo log buffer in the SGA to the online redo log files.
    • A Chained Row is a row that is stored across multiple database blocks as it is too large to fit into a single database block
    • HWM - High Water Mark
      • The database doesn't know what is on a block unless and until it reads the block.
        You can think of the high-water mark (HWM) as the rightmost block that ever contained data in a segment.
      • The High water mark is the boundary between used and unused space in a segment.
      • High Water Mark = TOTAL BLOCKS - UNUSED BLOCKS
      • OR High Water Mark = USER_SEGMENTS.BLOCKS - USER_TABLES.EMPTY_BLOCKS - 1
      • TRUNCATE resets the HWM of a table back to zero.
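      The HWM arithmetic above can be checked against the dictionary views; a sketch (assumes statistics have been gathered and the table name EMP is illustrative):

      ```sql
      -- Approximate high-water mark (in blocks) for one table,
      -- using HWM = BLOCKS - EMPTY_BLOCKS - 1.
      SELECT s.blocks - t.empty_blocks - 1 AS hwm_blocks
      FROM   user_segments s,
             user_tables   t
      WHERE  s.segment_name = 'EMP'   -- illustrative table name
      AND    t.table_name   = 'EMP';
      ```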
    • SCN - System Change Number                           http://gerardnico.com/wiki/database/oracle/scn
      • A system change number (SCN) is a logical, internal time stamp used by Oracle Database. SCNs order events that occur within the database, which is necessary to satisfy the ACID properties of a transaction.
        Oracle Database uses SCNs to mark the SCN before which all changes are known to be on disk so that recovery avoids applying unnecessary redo. The database also uses SCNs to mark the point at which no redo exists for a set of data so that recovery can stop.
        SCNs occur in a monotonically increasing sequence. Oracle Database can use an SCN like a clock because an observed SCN indicates a logical point in time and repeated observations return equal or greater values. If one event has a lower SCN than another event, then it occurred at an earlier time with respect to the database. Several events may share the same SCN, which means that they occurred at the same time with respect to the database.
        (Note: SCNs are generated by the database itself, for example at commit; the SAVEPOINT command marks a rollback point within a transaction but does not itself create an SCN.)
      • Transaction

        Every transaction has an SCN. For example, if a transaction updates a row, then the database records the SCN at which this update occurred. Other modifications in this transaction have the same SCN. When a transaction commits, the database records an SCN for this commit.

      • Process

        Oracle Database increments SCNs in the system global area (SGA). When a transaction modifies data, the database writes a new SCN to the undo data segment assigned to the transaction. The log writer process then writes the commit record of the transaction immediately to the online redo log. The commit record has the unique SCN of the transaction. Oracle Database also uses SCNs as part of its instance recovery and media recovery mechanisms.
      • ORA_ROWSCN
        • For each row, ORA_ROWSCN returns the conservative upper bound system change number (SCN) of the most recent change to the row. This pseudocolumn is useful for determining approximately when a row was last updated. It is not absolutely precise, because Oracle tracks SCNs by transaction committed for the block in which the row resides.
        • SELECT MAX(ORA_ROWSCN), SCN_TO_TIMESTAMP(MAX(ORA_ROWSCN)) FROM SCOTT.EMP;
    • 9i vs 10g vs 11g
    • How does Oracle execute a query internally
      • All SQL statements have to go through the following stages of execution:
        • 1) PARSE: Every SQL statement has to be parsed, which includes checking the syntax, validating, ensuring that all references to objects are correct, and ensuring that the relevant privileges on those objects exist.
        • 2) BIND: After parsing, the Oracle server knows the meaning of the statement but may still not have enough information (values for variables) to execute it. The process of obtaining these values is called binding.
        • 3) EXECUTE: After binding, the Oracle server executes the statement.
        • 4) FETCH: In the fetch stage, rows are selected and ordered, and each successive fetch retrieves another row of the result until the last row has been fetched. This stage applies only to statements that return rows, such as SELECT.
      • A SELECT query passes through the following phases:
        • 1) The user submits a request from the client, or the request is received from an application via JDBC/ODBC.
        • 2) The request goes from the client to the listener, which identifies the database service and hands the connection to a server process.
        • 3) The server process checks which type of statement this is - SQL or PL/SQL.
        • 4) If SQL: the server process checks whether the statement is already in the library cache under the shared pool and acquires a handle; if not, it places the parsed code in the SQL area if space is available, else applies LRU to create space.
        • 5) If PL/SQL: the server process puts the SQL code in the SQL area as above, then checks whether the PL/SQL code is in the PL/SQL area of the library cache; if not, it places the parsed code there if space is available, else applies LRU to create space.
        • 6) Parsing (syntax check; privilege checks on the user issuing the query).
        • 7) Check whether an execution plan is already available in the shared SQL area by computing a hash key and doing a lookup.
        • 8) If not, create an execution plan based on the statistics available in the data dictionary cache. Check whether space is available in the shared pool to place the plan; if so, place it there, wait for a latch and use it. If the shared pool is full, apply the LRU algorithm before placing the SQL in the shared SQL area. If the SQL and plan are already available, use the existing execution plan directly.
        • 9) Check whether the data is available in the result cache; if so, use it.
        • 10) Otherwise check whether the data blocks containing the required data are in the buffer cache.
        • 11) If yes, the server process picks the data from the buffer cache and returns it to the user. If not, it reads the data blocks from the data files into the buffer cache and then returns the data.
      • The order of Query Execution is following (Right to Left, Bottom Up)
        • 1)From Clause
        • 2)Where Clause
        • 3)Group By Clause
        • 4)Having Clause
        • 5)Select
        • 6)Order By Clause
    • The sum of the sizes of all the data files specified in CREATE TABLESPACE determines the initial size of a tablespace.
    • INIT size of Diff between 9i, 10g, 11g
    • Optimistic and Pessimistic Locking
    • Data Concurrency and Consistency
    • Session (login) vs Transaction (commit) vs Unit of Work (begin end block)
      • Use the BEGIN_DISCRETE_TRANSACTION procedure to improve the performance of short, non-distributed transactions.
      • Use the SET TRANSACTION command with the USE ROLLBACK SEGMENT parameter to explicitly assign a transaction to an appropriate rollback segment. This can eliminate the need to dynamically allocate additional extents, which can reduce overall system performance.
      • Use the SET TRANSACTION command with the ISOLATION LEVEL set to SERIALIZABLE to get ANSI/ISO serializable transactions.
      • Establish standards for writing SQL statements so that you can take advantage of shared SQL areas. Oracle recognizes identical SQL statements and allows them to share memory areas. This reduces memory usage on the database server and increases system throughput.
      • Use the ANALYZE command to collect statistics that can be used by Oracle to implement a cost-based approach to SQL statement optimization. You can supply additional "hints" to the optimizer as needed.
      • Call the DBMS_APPLICATION_INFO.SET_ACTION procedure before beginning a transaction to register and name a transaction for later use when measuring performance across an application. You should specify what type of activity a transaction performs so that the system tuners can later see which transactions are taking up the most system resources.
      • Increase user productivity and query efficiency by including user-written PL/SQL functions in SQL expressions as described in "Calling Stored Functions from SQL Expressions".
      • Create explicit cursors when writing a PL/SQL application.
      • When writing precompiler programs, increasing the number of cursors using MAX_OPEN_CURSORS can often reduce the frequency of parsing and improve performance.
      • To commit a transaction, use the COMMIT command. The following two statements are equivalent and commit the current transaction:
        COMMIT WORK;
        COMMIT;
        
        
        The COMMIT command lets you include the COMMENT parameter along with a comment (less than 50 characters) that provides information about the transaction being committed. This option is useful for including information about the origin of the transaction when you commit distributed transactions:
        COMMIT COMMENT 'Dallas/Accts_pay/Trans_type 10B';
      • A read-only transaction is started with a SET TRANSACTION statement that includes the READ ONLY option. For example:
        SET TRANSACTION READ ONLY;
      • A transaction explicitly acquires the specified table locks when a LOCK TABLE statement is executed. A LOCK TABLE statement manually overrides default locking. When a LOCK TABLE statement is issued on a view, the underlying base tables are locked. The following statement acquires exclusive table locks for the EMP_TAB and DEPT_TAB tables on behalf of the containing transaction:
        LOCK TABLE Emp_tab, Dept_tab
            IN EXCLUSIVE MODE NOWAIT;
      • LOCK TABLE Emp_tab IN ROW SHARE MODE;
        LOCK TABLE Emp_tab IN ROW EXCLUSIVE MODE;
      • Table 7-2 Nondefault Locking Behavior (row lock / table lock acquired in each case)

        Statement                    Case 1        Case 2        Case 3
                                     row   table   row   table   row   table
        SELECT                       -     -       -     S       -     S
        INSERT                       X     SRX     X     RX      X     SRX
        UPDATE                       X     SRX     X     SRX     X     SRX
        DELETE                       X     SRX     X     SRX     X     SRX
        SELECT ... FOR UPDATE        X     RS      X     S       X     S
        LOCK TABLE ... IN
          ROW SHARE MODE             -     RS      -     RS      -     RS
          ROW EXCLUSIVE MODE         -     RX      -     RX      -     RX
          SHARE MODE                 -     S       -     S       -     S
          SHARE ROW EXCLUSIVE MODE   -     SRX     -     SRX     -     SRX
          EXCLUSIVE MODE             -     X       -     X       -     X
        DDL statements               -     X       -     X       -     X
      • Table 7-3 Summary of ANSI Isolation Levels

        Isolation Level    Dirty Read (1)   Non-Repeatable Read (2)   Phantom Read (3)
        READ UNCOMMITTED   Possible         Possible                  Possible
        READ COMMITTED     Not possible     Possible                  Possible
        REPEATABLE READ    Not possible     Not possible              Possible
        SERIALIZABLE       Not possible     Not possible              Not possible

        Notes:
        (1) A transaction can read uncommitted data changed by another transaction.
        (2) A transaction rereads data committed by another transaction and sees the new data.
        (3) A transaction can execute a query again, and discover new rows inserted by another committed transaction.
        The behavior of Oracle with respect to these isolation levels is summarized below:

        READ UNCOMMITTED - Oracle never permits "dirty reads." Although some other database products use this undesirable technique to improve throughput, it is not required for high throughput with Oracle.
        READ COMMITTED - Oracle meets the READ COMMITTED isolation standard. This is the default mode for all Oracle applications. Because an Oracle query only sees data that was committed at the beginning of the query (the snapshot time), Oracle actually offers more consistency than is required by the ANSI/ISO SQL92 standard for READ COMMITTED isolation.
        REPEATABLE READ - Oracle does not normally support this isolation level, except as provided by SERIALIZABLE.
        SERIALIZABLE - You can set this isolation level using the SET TRANSACTION command or the ALTER SESSION command.
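        Setting the isolation level, as described above, can be done per transaction or per session; a minimal sketch:

        ```sql
        -- Per transaction: must be the first statement of the transaction.
        SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

        -- Per session: applies to all subsequent transactions in the session.
        ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE;
        ```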
        Table 7-4 Read Committed Versus Serializable Transactions

        Operation                                   Read Committed    Serializable
        Dirty write                                 Not possible      Not possible
        Dirty read                                  Not possible      Not possible
        Non-repeatable read                         Possible          Not possible
        Phantoms                                    Possible          Not possible
        Compliant with ANSI/ISO SQL 92              Yes               Yes
        Read snapshot time                          Statement         Transaction
        Transaction set consistency                 Statement level   Transaction level
        Row-level locking                           Yes               Yes
        Readers block writers                       No                No
        Writers block readers                       No                No
        Different-row writers block writers         No                No
        Same-row writers block writers              Yes               Yes
        Waits for blocking transaction              Yes               Yes
        Subject to "can't serialize access" error   No                Yes
        Error after blocking transaction aborts     No                No
        Error after blocking transaction commits    No                Yes

    • Locking and Blocks
    • 2 Phase Commit
    • Drop in 10g vs 9i
    • Distributed Transaction - a transaction that includes one or more statements that update data on two or more distinct nodes of a distributed database.
    • Flash back syntax and recycle bin
    • Note: the recycle bin receives a table only on DROP TABLE (without PURGE); TRUNCATE does not place anything in the recycle bin.
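    A sketch of the recycle-bin flashback syntax (the table name is illustrative; only DROP, not TRUNCATE, places a table in the recycle bin):

    ```sql
    -- See what is in the recycle bin.
    SELECT object_name, original_name, type FROM recyclebin;

    -- Drop a table (goes to the recycle bin), then restore it.
    DROP TABLE emp_copy;                       -- illustrative table
    FLASHBACK TABLE emp_copy TO BEFORE DROP;

    -- Drop bypassing the recycle bin entirely.
    DROP TABLE emp_copy PURGE;
    ```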
    • Difference between DB Buffer Cache and Result Cache in 11g.
      • Server Result Cache
      • Query result cache in Oracle 11g
      • There are three new result caching features in 11g:
        • Query Result Cache;
        • PL/SQL function result cache; and
        • Client OCI result cache.
        • Query Result Cache : As its name suggests, the query result cache is used to store the results of SQL queries for re-use in subsequent executions. By caching the results of queries, Oracle can avoid having to repeat the potentially time-consuming and intensive operations that generated the resultset in the first place (for example, sorting/aggregation, physical I/O, joins etc). The cache results themselves are available across the instance (i.e. for use by sessions other than the one that first executed the query) and are maintained by Oracle in a dedicated area of memory. Unlike our homegrown solutions using associative arrays or global temporary tables, the query result cache is completely transparent to our applications. It is also maintained for consistency automatically, unlike our own caching programs.
        • A brief explanation of each of these parameters is as follows.
          • result_cache_mode: the result cache can be enabled in three ways: via hint, alter session or alter system. Default is MANUAL which means that we need to explicitly request caching via the RESULT_CACHE hint;
          • result_cache_max_size: this is the size of the result cache in bytes. The cache is allocated directly from the shared pool but is maintained separately (for example, flushing the shared pool will not flush the result cache);
          • result_cache_max_result: this specifies the highest percentage of the cache that is able to be used by a single resultset (default 5%); and
          • result_cache_remote_expiration: this specifies the number of minutes for which a result set based on a remote object can remain valid. The default is 0, which means that result sets dependent on remote objects will not be cached.
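          A minimal sketch of requesting result caching under the default MANUAL mode (the table name is illustrative):

          ```sql
          -- With result_cache_mode = MANUAL, caching must be requested via hint.
          SELECT /*+ RESULT_CACHE */
                 deptno, SUM(sal) AS total_sal
          FROM   emp
          GROUP  BY deptno;

          -- Cache contents can be inspected through V$RESULT_CACHE_OBJECTS.
          SELECT type, status, name FROM v$result_cache_objects;
          ```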





  • Oracle 11g New Features

    • Oracle 12c New Features
    • Codd 12 Rules
    • Normalization
      • 1NF - A table is in first normal form if it contains no repeating groups
      • 2NF - A relation is in second normal form (2NF) if and only if it is in 1NF and every nonkey attribute is fully dependent on the primary key. That is No non-prime attribute in the table is functionally dependent on a proper subset of any candidate key. All non-prime attributes should depend on the entire primary key.
      • 3NF - Every non-prime attribute is non-transitively dependent on every candidate key in the table.
      • BCNF / 3.5 NF - Every determinant must be a candidate key.
      • 4NF - A relation is in 4NF if it has no multi-valued dependencies.
      • Links
    • Data Types
    • MAX LIMITS
      • Logical Database Limits
      • Max cols in a table = 1000
      • Indexes per table = Unlimited
      • Limit to max no of chars in DBMS_OUTPUT.PUT_LINE = 255 (9i), 32K (10g)
      • 1 BEGIN--9i
        2 DBMS_OUTPUT.PUT_LINE( RPAD('x',256,'x') );
        3 END;
        4 /
        xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
        xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
        xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
        xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx PL/SQL procedure successfully completed.
      •   1 BEGIN--10g
          2    DBMS_OUTPUT.PUT_LINE( RPAD('x',32767,'x') || 'even more char');
          3  END;
          4  /
        
        BEGIN
        *
        ERROR at line 1:
        ORA-20000: ORU-10028: line length overflow, limit of 32767
        bytes per line
        • SQL> set serveroutput on size 1000
          
          SP2-0547: size option 1000 out of range (2000 through 1000000)
        • SQL> set serveroutput on size unlimited 
        • SQL> show serveroutput
        • serveroutput ON SIZE UNLIMITED FORMAT WORD_WRAPPED
        • SQL> set serveroutput on
        • SQL> show serveroutput
        • serveroutput ON SIZE UNLIMITED FORMAT WORD_WRAPPED
      • MAX limit on the number of static values in an IN list = 1000
      • MAX limit on the number of subqueries in the WHERE clause = 255
      • MAX limit on the number of subqueries in the FROM clause = no limit
      • Triggers = 11
    • Programming Structures
      • The FOR loop index i CANNOT be referenced or incremented outside the LOOP construct.
    • PSEUDO Columns
      • ROWID - Physical location of a row on the disk
      • ORA_ROWSCN - SCN no for the row 
      • ROWNUM - Logical Number of Row in a resultset
      • NEXTVAL - Next Value of a Sequence
      • CURRVAL - Current Value of a Sequence
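      A sketch exercising the pseudocolumns listed above (assumes an EMP table and a sequence named emp_seq exist; both names are illustrative):

      ```sql
      -- ROWID, ROWNUM and ORA_ROWSCN in one query.
      SELECT rowid, rownum, ora_rowscn, ename
      FROM   emp
      WHERE  rownum <= 3;

      -- Sequence pseudocolumns: NEXTVAL must be referenced
      -- before CURRVAL in a session.
      SELECT emp_seq.NEXTVAL FROM dual;
      SELECT emp_seq.CURRVAL FROM dual;
      ```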
    • SQL
      • LIKE is used for wild card compares with % and _
      • Concatenation operator ||
      • A many-to-many relationship is implemented using an intersection table (bridge table).
      • A subquery in an IN clause can't have an ORDER BY clause.
      • MERGE and Conditional MERGE for 3 operations
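      A sketch of a conditional MERGE covering the three operations (update, insert, conditional delete); table and column names are illustrative:

      ```sql
      MERGE INTO emp_target t
      USING emp_source s
      ON (t.empno = s.empno)
      WHEN MATCHED THEN
        UPDATE SET t.sal = s.sal
        DELETE WHERE s.terminated = 'Y'   -- conditional delete of matched rows
      WHEN NOT MATCHED THEN
        INSERT (empno, ename, sal)
        VALUES (s.empno, s.ename, s.sal);
      ```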
      • Delete from parent when child exist (CASCADE OPT)
      • SELECT * FROM emp WHERE pos = 'sales' OR pos = 'pur' AND sal > 1500 - what executes first, AND or OR? AND has higher precedence, so the condition is read as pos = 'sales' OR (pos = 'pur' AND sal > 1500). Refer Operator Precedence.
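      Because AND binds tighter than OR, the first two queries below are equivalent; parentheses are needed to get the other reading (a sketch):

      ```sql
      -- Interpreted as: pos = 'sales' OR (pos = 'pur' AND sal > 1500)
      SELECT * FROM emp WHERE pos = 'sales' OR pos = 'pur' AND sal > 1500;

      -- Explicit parentheses for the other intent:
      SELECT * FROM emp WHERE (pos = 'sales' OR pos = 'pur') AND sal > 1500;
      ```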
      • Single row sub query returns more than 1 row error can be avoided using In clause
      • What is the output of SELECT * FROM emp WHERE rownum = 2? No rows, whereas
      • SELECT * FROM emp WHERE rownum <= 2 returns rows. ROWNUM is assigned as each candidate row is fetched: the first candidate gets ROWNUM 1, fails the rownum = 2 filter and is discarded, then the next candidate is again assigned ROWNUM 1, so ROWNUM never reaches 2. For the same reason the filter doesn't work with > or = to values above 1.
      • But <= 1, = 1 and >= 1 all give a result.
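      The usual workaround for the ROWNUM behaviour above is to materialize ROWNUM in an inline view before filtering on it (a sketch):

      ```sql
      -- ROWNUM = 2 alone returns no rows; assign it in an inline view first.
      SELECT *
      FROM  (SELECT e.*, ROWNUM AS rn
             FROM   emp e)
      WHERE rn = 2;
      ```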
      • Correlated subqueries are O(m*n) instead of O(m+n) for IN.
      • ANSI compliant equivalent statement
      • Selecting from a cursor / collection
      • X RIGHT OUTER JOIN Y will return all rows from the table Y on the RIGHT
      • In outer join (+) side table can have NULL Cols
      • Oracle creates implicit cursor for DML operations Insert/Update/Delete
      • Delete one of the duplicate rows
        • Delete from tab t1 where t1.rowid > (Select min(t2.rowid) from tab t2 where t2.emp_no = t1.emp_no);
      • 4 table join will have min 3 join conditions
      • The NULL statement in an IF branch means do nothing and pass control to the next statement.
      • Drop Table with purge option drops the table including from the recycle bin
    • Handling NULLs
      • NVL - The NVL function allows you to replace null values with a default value. If the value in the first parameter is null, the function returns the value in the second parameter. If the first parameter is any value other than null, it is returned unchanged.
      • DECODE - The DECODE function is not specifically for handling null values, but it can be used in a similar way to the NVL function, as shown by the following example.
      • NULLIF - Returns NULL for a condition. The NULLIF function was introduced in Oracle 9i. It accepts two parameters and returns null if both parameters are equal. If they are not equal, the first parameter value is returned.
      • NVL2 The NVL2 function accepts three parameters. If the first parameter value is not null it returns the value in the second parameter. If the first parameter value is null, it returns the third parameter.
      • COALESCE The COALESCE function was introduced in Oracle 9i. It accepts two or more parameters and returns the first non-null value in a list. If all parameters contain null values, it returns null.
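      The NULL-handling functions above, side by side in one query against DUAL (a sketch):

      ```sql
      SELECT NVL(NULL, 'dflt')            AS nvl_res,      -- 'dflt'
             NVL2(NULL, 'a', 'b')         AS nvl2_res,     -- 'b'
             NULLIF(10, 10)               AS nullif_res,   -- NULL
             COALESCE(NULL, NULL, 'x')    AS coalesce_res, -- 'x'
             DECODE(NULL, NULL, 'isnull') AS decode_res    -- 'isnull': DECODE treats two NULLs as equal
      FROM dual;
      ```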
    • PL/SQL
      • Variables types as per scope : Local / Global / Constant
      • DDL in PL/SQL (Yes its possible using EXECUTE IMMEDIATE)
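      A minimal sketch of DDL from PL/SQL via EXECUTE IMMEDIATE (the table name is illustrative):

      ```sql
      BEGIN
        -- DDL cannot appear directly in static PL/SQL; it must be dynamic.
        EXECUTE IMMEDIATE 'CREATE TABLE tmp_demo (id NUMBER)';
        EXECUTE IMMEDIATE 'DROP TABLE tmp_demo';
      END;
      /
      ```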
    • OPERATORS
      • Operator precedence (highest to lowest):
        1. Arithmetic operators *, / then +, -
        2. Comparison conditions: =, !=, <, >, <=, >=, IS NULL, LIKE, BETWEEN, IN, EXISTS
        3. NOT
        4. AND
        5. OR
      • For comparison, MySQL's operator precedence, highest to lowest (note: DIV, REGEXP, && and XOR below are MySQL, not Oracle):
        INTERVAL
        BINARY, COLLATE
        !
        - (unary minus), ~ (unary bit inversion)
        ^
        *, /, DIV, %, MOD
        -, +
        <<, >>
        &
        |
        = (comparison), <=>, >=, >, <=, <, <>, !=, IS, LIKE, REGEXP, IN
        BETWEEN, CASE, WHEN, THEN, ELSE
        NOT
        AND, &&
        XOR
        OR, ||
        = (assignment), :=
    • Data Dictionary Tables
      • user_views - views owned by the current schema (USER_* views list objects owned by the current user)
      • all_views - views in all schemas that the current schema has privileges to access
      • dba_views - all views in the database, irrespective of access
      • user_objects - lists the objects owned by the user
      • user_tables - lists the user's tables
      • user_col_privs - column privileges granted to/by the user
      • user_tab_columns - columns of the user's tables
      • dba_data_files - tablespaces and the sizes and locations of their data files
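      Sample lookups against the dictionary views listed above (a sketch; the EMP table name is illustrative):

      ```sql
      -- All tables owned by the current schema.
      SELECT table_name FROM user_tables;

      -- Columns of one table.
      SELECT column_name, data_type
      FROM   user_tab_columns
      WHERE  table_name = 'EMP';

      -- Datafile sizes per tablespace (requires DBA privileges).
      SELECT tablespace_name, file_name, bytes / 1024 / 1024 AS mb
      FROM   dba_data_files;
      ```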
    • EXTERNAL Tables
    • CURSOR SHARING AND TYPES
    • MATERIALIZED VIEW (aka. time based snapshots aka summary tables)
      • A materialized view stores both the definition of the view and the rows resulting from executing it. It is more efficient to use materialized views if the query involves summaries, large or multiple joins, or both. It is a pre-computed table comprising aggregated or joined data from fact and possibly dimension tables. Also known as a summary or aggregate table, it is mainly used for improving query performance or providing replicated data.
      • A materialized view is a replica of a target master from a single point in time. The master can be either a master table at a master site or a master materialized view at a materialized view site. Whereas in multi-master replication tables are continuously updated by other master sites, materialized views are updated from one or more masters through individual batch updates, known as refreshes, from a single master site or master materialized view site, as illustrated in Figure 3-1. The arrows in Figure 3-1 represent database links.
      • Types of materialized views
        • There are three types of materialized views :
          • Read-only materialized view
          • Updateable materialized view
          • Writeable materialized view 
      • Techniques used to refresh a Materialized View (Procedure used to refresh a mview)
        • Materialized Views Refresh Techniques
          • Manual Refresh: Can be performed using DBMS_MVIEW package. (REFRESH, REFRESH_DEPENDENT, REFRESH_ALL_VIEWS)
          • Automatic Refresh: Can be performed in two ways:
            • a)ON COMMIT – Materialized view gets updated whenever changes to one of these tables are committed.
            • b)ON DEMAND – At Specified Time – Refresh is scheduled to occur for specified time by using START WITH & NEXT clauses. For such refreshes, instance must initiate a process with JOB_QUEUE_PROCESSES.
        • Materialized Views Refresh Methods
          • FAST Clause 
            • The FAST refreshes use the materialized view logs (as seen above) to send the rows that have changed from master tables to the materialized view.
            • You should create a materialized view log for the master tables if you specify the REFRESH FAST clause.
            • SQL> CREATE MATERIALIZED VIEW LOG ON emp;
            • Materialized view log created.
            • Materialized views are not eligible for fast refresh if the defined subquery contains an analytic function. 
          • COMPLETE Clause (also known as full refresh)
            • The complete refresh re-creates the entire materialized view. If you request a complete refresh, Oracle performs a complete refresh even if a fast refresh is possible. 
          • FORCE Clause
            • When you specify a FORCE clause, Oracle will perform a fast refresh if one is possible or a complete refresh otherwise. If you do not specify a refresh method (FAST, COMPLETE, or FORCE), FORCE is the default.
          • PRIMARY KEY and ROWID Clause 
            • WITH PRIMARY KEY is used to create a primary key materialized view i.e. the materialized view is based on the primary key of the master table instead of ROWID (for ROWID clause). PRIMARY KEY is the default option. To use the PRIMARY KEY clause you should have defined PRIMARY KEY on the master table or else you should use ROWID based materialized views.
            • PRIMARY KEY materialized views allow materialized view master tables to be reorganized without affecting the eligibility of the materialized view for fast refresh.
            • ROWID materialized views should have a single master table and cannot contain any of the following :
              • DISTINCT or AGGREGATE functions
              • GROUP BY Subqueries , JOINS & SET operations
          • TIMING the refresh
            • The START WITH clause tells the database when to perform the first replication from the master table to the local base table. It should evaluate to a future point in time. The NEXT clause specifies the interval between refreshes.
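          A sketch combining the refresh clauses above (the master table EMP and the view name are illustrative):

          ```sql
          -- Fast refresh requires a materialized view log on the master table.
          CREATE MATERIALIZED VIEW LOG ON emp;

          CREATE MATERIALIZED VIEW emp_mv
          REFRESH FAST
          START WITH SYSDATE
          NEXT  SYSDATE + 1/24          -- refresh every hour
          AS SELECT * FROM emp;

          -- Manual refresh through DBMS_MVIEW ('C' = complete, 'F' = fast).
          EXEC DBMS_MVIEW.REFRESH('EMP_MV', 'F');
          ```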
      • REFRESHING materialized view
        • Refreshing a materialized view synchronizes it with its master table. Oracle performs the following operations when refreshing a materialized view. In the case of a complete refresh (using dbms_mview.refresh):
          1. sys.snap$ and sys.mlog$ are updated to reflect the time of the refresh.
          2. The materialized base view is truncated.
          3. All rows selected from the master table are inserted into the snapshot base table.
          4. sys.slog$ is updated to reflect the time of the refresh.
        • In the case of a fast refresh, the steps are :
          1. sys.snap$ and sys.mlog$ are updated to reflect the time of the refresh.
          2. Rows in the materialized base view are deleted.
          3. All rows selected from the master table are inserted into the snapshot base table.
          4. sys.slog$ is updated to reflect the time of the refresh.
          5. Rows that are not needed anymore for a refresh by any materialized view are deleted from the materialized view log (.MLOG$_table)
      • Whether a materialized view is currently being refreshed can be checked by querying v$lock: if the lock type is JI, a refresh is being performed.
      • The following query checks for this:
        select o.owner "Owner", o.object_name "Mat View", username "Username", s.sid "Sid"
        from   v$lock l, dba_objects o, v$session s
        where  o.object_id = l.id1
        and    l.type = 'JI'
        and    l.lmode = 6
        and    s.sid = l.sid
        and    o.object_type = 'TABLE';
      • Can a mview be partitioned - Yes
      • Can you create an index on a mview - Yes
      • SNAPSHOT too old error
        • The ORA-01555 error is caused by Oracle's read-consistency mechanism. If you have a long-running SQL statement that starts at 10:30 AM, Oracle ensures that all rows are as they appeared at 10:30 AM, even if the query runs until noon!
        • Oracle does this by reading the "before image" of changed rows from the online undo segments. If you have lots of updates, long-running SQL and too small an UNDO, the ORA-01555 error will appear.
          From the docs we see that the ORA-01555 error relates to insufficient undo storage or a too small value for the undo_retention parameter:

          ORA-01555: snapshot too old: rollback segment number string with name "string" too small.
          Cause: Rollback records needed by a reader for consistent read are overwritten by other writers.
          Action: If in Automatic Undo Management mode, increase the setting of UNDO_RETENTION. Otherwise, use larger rollback segments.

          The ORA-01555 snapshot too old error can be addressed by several remedies:
          1. Re-schedule long-running queries when the system has less DML load.
          2. Increase the size of your undo tablespace (rollback segments).  The ORA-01555 snapshot too old also relates to your setting for automatic undo retention.
          3. Don't fetch between commits.
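          For remedy 2, assuming Automatic Undo Management, the retention target can be raised; the value here is only illustrative:

          ```sql
          -- Ask Oracle to keep committed undo for at least 3 hours (10800 seconds)
          ALTER SYSTEM SET UNDO_RETENTION = 10800;
          ```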
      • QUERY RE-WRITE
        • Query rewrite is the process of modifying a query to use the materialized view rather than the base table.
        • Users do not need explicit privileges on the materialized view; permissions on the underlying base tables are required.
        • The optimizer rewrites queries automatically in order to utilize materialized views.
        • The GLOBAL QUERY REWRITE and QUERY REWRITE system privileges allow a user to enable materialized views for query rewrite.
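        A sketch of enabling query rewrite (object names are hypothetical):

        ```sql
        CREATE MATERIALIZED VIEW dept_sales_mv
        ENABLE QUERY REWRITE
        AS SELECT deptno, SUM(amount) total
           FROM   sales
           GROUP BY deptno;

        -- With QUERY_REWRITE_ENABLED = TRUE, the optimizer may transparently
        -- answer this query from dept_sales_mv instead of scanning SALES:
        SELECT deptno, SUM(amount) FROM sales GROUP BY deptno;
        ```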
      • Links
    • GTT
      • Definition
        • Applications often use some form of temporary data store for processes that are too complicated to complete in a single pass. Often, these temporary stores are defined as database tables or PL/SQL tables. From Oracle 8i onward, the maintenance and management of temporary tables can be delegated to the server by using Global Temporary Tables.
      • Common Errors
      • Use
        • Improve Performance
        • Reporting
      • Global Temporary Tables have three major benefits:
        • Non-interference between private sets of data.
        • Ease of getting rid of 'scratch' data. In a heap table you either rollback, or delete it. But in a GTT, you can truncate explicitly, without affecting anyone else (or allow the implicit "truncate on commit / exit" effect to do the same thing).
        • Decreased redo generation: data changes in a GTT are not logged (although the undo for them is, which itself generates some redo).
      • A typical remote-data processing pattern with a GTT:
        • Get your remote data into a cursor using a database link.
        • Process your data based on your custom requirements (such as filtering/modifying).
        • Insert the changed data into a global temporary table.
        • Join the global temporary table with other local Oracle tables based on your requirements. This reduces complexity and the number of joins.
        • Put the resultant data into its final destination table in your local database or in another remote database.
        • Limitation:
          • Mixing Global Temporary Tables (GTTs) with permanent tables usually causes some grief to the CBO. It has no information about the number of rows in the GTT, and therefore guesses (badly).
          • Even if you analyze table .. or dbms_stats.gather_table_stats() you don't get stats on the temporary table.
          • Set the init parameter optimizer_dynamic_sampling to at least 2 so that GTTs are sampled at run-time.
        • Note: Every DDL statement includes two implicit commits (before and after), so any rows in a GTT specified with ON COMMIT DELETE ROWS will be emptied when DDL is issued in the session.
      • ON COMMIT
          • PRESERVE ROWS - rows survive commits; the table is truncated when the session ends.
          • DELETE ROWS - (the default) the table is truncated after each commit.
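          Both variants, as a minimal sketch:

          ```sql
          -- Rows survive COMMIT, disappear at end of session
          CREATE GLOBAL TEMPORARY TABLE gtt_preserve (id NUMBER, val VARCHAR2(30))
          ON COMMIT PRESERVE ROWS;

          -- Rows are removed at each COMMIT (the default)
          CREATE GLOBAL TEMPORARY TABLE gtt_delete (id NUMBER, val VARCHAR2(30))
          ON COMMIT DELETE ROWS;
          ```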
      • Links
      • Miscellaneous Features
        • If the TRUNCATE statement is issued against a temporary table, only the session-specific data is truncated. There is no effect on the data of other sessions.
        • Data in temporary tables is stored in temp segments in the temp tablespace.
        • Data in temporary tables is automatically deleted at the end of the database session, even if it ends abnormally.
        • Indexes can be created on temporary tables. The content and scope of the index are the same as the database session.
        • Views can be created against temporary tables and combinations of temporary and permanent tables.
        • Temporary tables can have triggers associated with them.
        • Export and Import utilities can be used to transfer the table definitions, but no data rows are processed.
        • There are a number of restrictions related to temporary tables but these are version specific.
    • VIEWS
      • FORCE - Creates view even if the underlying table does not exist
      • WITH CHECK OPTION - Restricts data for insert statements on view
      • Read Only - Disallows DML on the view
      • Indexes on views - Only possible in MView, not possible in regular view
      • CREATE OR REPLACE is used to add new columns to a view, as it helps preserve existing grants.
      • If a table / view is dropped nothing happens to its synonym 
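      The view options above, sketched with hypothetical names:

      ```sql
      -- FORCE: created even if the base table does not yet exist
      -- (the view stays invalid until it does)
      CREATE OR REPLACE FORCE VIEW pending_v AS SELECT * FROM not_yet_created;

      -- WITH CHECK OPTION: DML through the view must satisfy the predicate
      CREATE OR REPLACE VIEW dept10_v AS
        SELECT * FROM emp WHERE deptno = 10
        WITH CHECK OPTION;

      -- READ ONLY: no DML allowed through the view
      CREATE OR REPLACE VIEW emp_ro_v AS SELECT * FROM emp WITH READ ONLY;
      ```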
    • SEQUENCE
      • Sequence and uses
      • Syntax for sequence create sequence seq1 start with 1
      • NOCACHE
      • CYCLE
      • NEXTVAL
      • CURRVAL
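      A sketch covering the keywords above:

      ```sql
      CREATE SEQUENCE seq1
        START WITH 1
        INCREMENT BY 1
        MAXVALUE 999
        CYCLE      -- restart at MINVALUE after reaching MAXVALUE
        NOCACHE;   -- do not pre-allocate values in memory

      SELECT seq1.NEXTVAL FROM dual;  -- advances and returns the next value
      SELECT seq1.CURRVAL FROM dual;  -- last value returned in this session
      ```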
    • Packages/Procedures/Functions/Blocks
      • Adv of a package
      • Package spec without a body
      • Adv proc vs sql vs pkg
      • Encapsulation
      • Overloading in pkg & outside for proc
      • Overriding
      • NOCOPY Hint and Other Hints
      • Pipelined functions - PIPE ROW () Table ()
      • EXECUTE
      • EXEC
      • Call functions in sql
      • A PRAGMA passes information to the compiler: EXCEPTION_INIT, SERIALLY_REUSABLE, AUTONOMOUS_TRANSACTION, RESTRICT_REFERENCES (purity level), INLINE
        • A package that is marked SERIALLY_REUSABLE has the following properties:
          • Its package variables are meant for use only within the work boundaries, which correspond to calls to the server (either OCI call boundaries or PL/SQL RPC calls to the server).

            Note:
            If the application programmer makes a mistake and depends on a package variable that is set in a previous unit of work, then the application program can fail. PL/SQL cannot check for such cases.

          • A pool of package instantiations is kept, and whenever a "unit of work" needs this package, one of the instantiations is "reused", as follows:
            • The package variables are reinitialized (for example, if the package variables have default values, then those values are reinitialized).
            • The initialization code in the package body is run again.
          • At the "end work" boundary, cleanup is done.
            • If any cursors were left open, then they are silently closed.
            • Some non-reusable secondary memory is freed (such as memory for collection variables or long VARCHAR2s).
            • This package instantiation is returned back to the pool of reusable instantiations kept for this package.
          • Serially reusable packages cannot be accessed from database triggers or other PL/SQL subprograms that are called from SQL statements. If you try, then Oracle Database generates an error.
          • With SERIALLY_REUSABLE packages, application developers have a way of modelling their applications to manage their memory better for scalability. Package state that they care about only for the duration of a call to the server should be captured in SERIALLY_REUSABLE packages.
        • SERIALLY_REUSABLE 
          The global memory for serially reusable packages is pooled in the System Global Area (SGA), not allocated to individual users in the User Global Area (UGA). That way, the package work area can be reused. When the call to the server ends, the memory is returned to the pool. Each time the package is reused, its public variables are initialized to their default values or to NULL.
        • SERIALLY_REUSABLE pragma can appear in the specification of a bodiless package, or in both the specification and body of a package. The pragma cannot appear only in the body of a package.
        • Serially reusable packages cannot be accessed from database triggers or other PL/SQL subprograms that are called from SQL statements. If you try, the database generates an error.
        • The EXCEPTION_INIT pragma associates a user-defined exception name with an Oracle Database error number. You can intercept any Oracle Database error number and write an exception handler for it, instead of using the OTHERS handler.
        • The RESTRICT_REFERENCES pragma asserts that a user-defined subprogram does not read or write database tables or package variables.
          Subprograms that read or write database tables or package variables are difficult to optimize, because any call to the subprogram might produce different results or encounter errors.
        • The RESTRICT_REFERENCES pragma is deprecated. Oracle recommends using DETERMINISTIC and PARALLEL_ENABLE (described in Function Declaration and Definition) instead of RESTRICT_REFERENCES.
        • The INLINE pragma specifies that a subprogram call is, or is not, to be inlined. Inlining replaces a subprogram call (to a subprogram in the same program unit) with a copy of the called subprogram.
        • Example
        • DECLARE
             deadlock_detected EXCEPTION;
             PRAGMA EXCEPTION_INIT(deadlock_detected, -60);
          BEGIN
             NULL; -- Some operation that causes an ORA-00060 error
          EXCEPTION
             WHEN deadlock_detected THEN
                NULL; -- handle the error
          END;
          /
          The syntax of raise_application_error (DBMS_STANDARD.RAISE_APPLICATION_ERROR) is:
          raise_application_error(
                error_number, message[, {TRUE | FALSE}]);
          A handler can also name multiple exceptions with OR:
          EXCEPTION
            WHEN invalid_number OR STANDARD.INVALID_NUMBER THEN
              -- handle the error
          END;
        • The AUTONOMOUS_TRANSACTION pragma marks a routine as autonomous; that is, independent of the main transaction.
        • In this context, a routine is one of the following:
          • Top-level (not nested) anonymous PL/SQL block
          • Standalone, packaged, or nested subprogram
          • Method of a SQL object type
          • Database trigger
            You cannot apply this pragma to an entire package, but you can apply it to each subprogram in a package.
            You cannot apply this pragma to an entire object type, but you can apply it to each method of a SQL object type.
            Unlike an ordinary trigger, an autonomous trigger can contain transaction control statements, such as COMMIT and ROLLBACK, and can issue DDL statements (such as CREATE and DROP) through the EXECUTE IMMEDIATE statement.
            In the main transaction, rolling back to a savepoint located before the call to the autonomous subprogram does not roll back the autonomous transaction. Remember, autonomous transactions are fully independent of the main transaction.
            If an autonomous transaction attempts to access a resource held by the main transaction (which cannot resume until the autonomous routine exits), a deadlock can occur. The database raises an exception in the autonomous transaction, which is rolled back if the exception goes unhandled.
            If you try to exit an active autonomous transaction without committing or rolling back, the database raises an exception. If the exception goes unhandled, or if the transaction ends because of some other unhandled exception, the transaction is rolled back.
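      A common use is an error logger that commits independently of the caller's transaction. A minimal sketch (table and procedure names are hypothetical):

      ```sql
      CREATE OR REPLACE PROCEDURE log_error (p_msg VARCHAR2) IS
        PRAGMA AUTONOMOUS_TRANSACTION;
      BEGIN
        INSERT INTO error_log (logged_at, msg) VALUES (SYSDATE, p_msg);
        COMMIT;  -- commits only the autonomous transaction,
                 -- not the caller's pending changes
      END;
      /
      ```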
      • AUTONOMOUS TRANSACTION defn and commit
      • Adv of PL/SQL Procedure over block (AUTHID, DEFINER)
      • INTO clause variable should be declared
      • RETURNING CLAUSE
    • HIERARCHICAL Queries
      • Used to navigate a hierarchy
      • CONNECT BY PRIOR ... START WITH
        • In the hr.employees table, the employee Steven King is the head of the company and has no manager. Among his employees is John Russell, who is the manager of department 80. If we update the employees table to set Russell as King's manager, we will create a loop in the data:
          UPDATE employees SET manager_id = 145
             WHERE employee_id = 100;
          
          SELECT last_name "Employee", 
             LEVEL, SYS_CONNECT_BY_PATH(last_name, '/') "Path"
             FROM employees
             WHERE level <= 3 AND department_id = 80
             START WITH last_name = 'King'
             CONNECT BY PRIOR employee_id = manager_id AND LEVEL <= 4;
          ERROR:
          ORA-01436: CONNECT BY loop in user data
          
          
          The NOCYCLE parameter in the CONNECT BY condition causes Oracle to return the rows in spite of the loop. The CONNECT_BY_ISCYCLE pseudocolumn shows you which rows contain the cycle:
          SELECT last_name "Employee", CONNECT_BY_ISCYCLE "Cycle",
             LEVEL, SYS_CONNECT_BY_PATH(last_name, '/') "Path"
             FROM employees
             WHERE level <= 3 AND department_id = 80
             START WITH last_name = 'King'
             CONNECT BY NOCYCLE PRIOR employee_id = manager_id AND LEVEL <= 4;
          
        • CONNECT_BY_ROOT : The following example returns the last name of each employee in department 110, each manager above that employee in the hierarchy, the number of levels between manager and employee, and the path between the two:
          
          SELECT last_name "Employee",
          CONNECT_BY_ROOT last_name "Manager",
             LEVEL-1 "Pathlen", SYS_CONNECT_BY_PATH(last_name, '/') "Path"
             FROM employees
             WHERE LEVEL > 1 and department_id = 110
             CONNECT BY PRIOR employee_id = manager_id;
        • Simple Example :
          SELECT            last_name, employee_id, manager_id, LEVEL
          FROM              employees
          START WITH        employee_id = 100
          CONNECT BY PRIOR  employee_id = manager_id
          ORDER SIBLINGS BY last_name;
    • INDEXES
      • An index is useful when 80%+ of the column's values are not null.
      • FULL TABLE scan is better than Index when fetching more than 20% of the records in the table.
      • Covering Indexes
      • Invisible Indexes
      • Indexes Compression
      • Types of indexes btree vs bitmap vs function vs reverse key
      • Bitmap join index
        • Bitmap join indexes
        • Difference between bitmap join indexes and materialized view
        • In Oracle 8i performance improvements were made using materialized views to store the resulting rows of queries. The benefits of this mechanism are still relevant, but a certain subset of the queries used in a data warehouse may benefit from the use of Bitmap Join Indexes.
        • Oracle 11g bitmap join indexes
        • A bitmap join index would allow you to index (for example) the DNAME column from DEPT against the EMP table (EMP has DEPTNO; DEPT maps DEPTNO to DNAME). Instead of querying and joining to find everyone in the RESEARCH department, you can bitmap-join-index the data and just query the EMP table. To do the same with a materialized view, you would have to replicate the entire EMP table plus (at least) the DNAME column, and then create a bitmap index on it.
        • A bitmap join index allows you to denormalize in the index, not in the tables. 
        • In a Bitmap Index, each distinct value for the specified column is associated with a bitmap where each bit represents a row in the table. A '1' means that row contains that value, a '0' means it doesn't. 
          Bitmap Join Indexes extend this concept such that the index contains the data to support the join query, allowing the query to retrieve the data from the index rather than referencing the join tables. Since the information is compressed into a bitmap, the size of the resulting structure is significantly smaller than the corresponding materialized view.

          CREATE BITMAP INDEX cust_sales_bji
          ON sales(customers.state)
          FROM sales, customers
          WHERE sales.cust_id = customers.cust_id;

          SELECT SUM(sales.dollar_amount)
          FROM sales, customers
          WHERE sales.cust_id = customers.cust_id
          AND customers.state = 'California';

          Restrictions
          Bitmap Join Indexes have the following restrictions:
          • Parallel DML is currently only supported on the fact table. Parallel DML on one of the participating dimension tables will mark the index as unusable. 
          • Only one table can be updated concurrently by different transactions when using the bitmap join index. 
          • No table can appear twice in the join.
            • You cannot create a bitmap join index on an index-organized table or a temporary table.
            • The columns in the index must all be columns of the dimension tables.
            • The dimension table join columns must be either primary key columns or have unique constraints.
            • If a dimension table has composite primary key, each column in the primary key must be part of the join.
        • Clusters and Indexes
        • What init param needs to be set for function indexes to work
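        As a sketch of a function-based index; note that in older releases (pre-10g) the optimizer would only use it with QUERY_REWRITE_ENABLED = TRUE (and QUERY_REWRITE_INTEGRITY = TRUSTED):

        ```sql
        CREATE INDEX emp_upper_name_idx ON emp (UPPER(ename));

        -- Needed in older releases for the index to be considered:
        -- ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;
        SELECT * FROM emp WHERE UPPER(ename) = 'KING';
        ```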
      • CURSORS
        • Def : Named Pointer to a query
        • Types : Static (Regular) and RefCursor (Dynamic)
        • RefCursors
        • Steps to Use : DECLARE, OPEN, FETCH, CLOSE
        • Opening an already open cursor throws an error.
          • %ISOPEN
          • %FOUND
          • %NOTFOUND - returns false if a fetch statement returns at least one row
          • %ROWCOUNT
        • Ref Cursors
          • Used when you need to dynamically reassign different queries to the same named cursor
      CREATE OR REPLACE Function FindCourse
      ( name_in IN varchar2 )
      RETURN number
      IS
        cnumber number;
        CURSOR c1
        IS
           SELECT course_number
           from courses_tbl
           where course_name = name_in;
      BEGIN
        open c1;
        fetch c1 into cnumber;
        if c1%notfound then
           cnumber := 9999;
        end if;
        close c1;
      RETURN cnumber;
      END;
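      By contrast, a ref cursor can be pointed at different queries at run time; a sketch using the predefined SYS_REFCURSOR type (names are illustrative, reusing courses_tbl from above):

      ```sql
      CREATE OR REPLACE PROCEDURE get_courses (
        by_name IN  BOOLEAN,
        rc      OUT SYS_REFCURSOR )
      IS
      BEGIN
        -- The same named cursor variable is dynamically assigned
        -- one of two different queries
        IF by_name THEN
          OPEN rc FOR SELECT course_number FROM courses_tbl ORDER BY course_name;
        ELSE
          OPEN rc FOR SELECT course_number FROM courses_tbl ORDER BY course_number;
        END IF;
      END;
      /
      ```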
        • Oracle/PLSQL: Cursor Attributes
          While dealing with cursors, you may need to determine the status of your cursor. The following is a list of the cursor attributes that you can use.
          %ISOPEN
          - Returns TRUE if the cursor is open, FALSE if the cursor is closed.
          %FOUND
          - Raises INVALID_CURSOR if the cursor is declared but not open, or if the cursor has been closed.
          - Returns NULL if the cursor is open, but no fetch has been executed.
          - Returns TRUE if a successful fetch has been executed.
          - Returns FALSE if no row was returned.
          %NOTFOUND
          - Raises INVALID_CURSOR if the cursor is declared but not open, or if the cursor has been closed.
          - Returns NULL if the cursor is open, but no fetch has been executed.
          - Returns FALSE if a successful fetch has been executed.
          - Returns TRUE if no row was returned.
          %ROWCOUNT
          - Raises INVALID_CURSOR if the cursor is declared but not open, or if the cursor has been closed.
          - Returns the number of rows fetched so far.
          - The ROWCOUNT attribute doesn't give the real row count until you have iterated through the entire cursor. In other words, you shouldn't rely on this attribute to tell you how many rows are in a cursor after it is opened.
        • COLLECTIONS
        • PERFORMANCE TUNING
          • Check if the STATISTICS are uptodate
          • Fastest Access Path is ACCESS BY ROWID
          • NVL, NULL, NOT will NOT invoke index on the column
          • Steps in execution of a query
            • How Oracle Processes SQL Statements
              • All Oracle SQL statements must be processed the first time they execute (unless they are cached in the library cache). SQL execution steps include:
                1. A syntax check - Are all keywords present "select . . . from", etc . .
                2. A semantic check against the dictionary - Are all table names spelled correctly, etc.
                3. The creation of the cost-based decision tree of possible plans
                4. The generation of the lowest cost execution plan
                5. Binding the execution plan - This is where the table--> tablespace --> datafile translation occurs.
                6. Executing the query and fetching the rows. 
                Parse Phase - During the parse phase, Oracle opens the statement handle, checks whether the statement is OK (both syntactically and whether the involved objects exist and are accessible) and creates an execution plan for executing this statement. The parse call does not return an error if the statement is not syntactically correct.
                Parsing can be a very expensive operation that takes a lot of resources to execute. A special problem is so-called "hard parses", which happen when there is no previously parsed version of the SQL to reuse.
                Once the execution plan is created, it is stored in the library cache (part of the shared_pool_size) to facilitate re-execution.  There are two types of parses:
                • Hard parse - A new SQL statement must be parsed from scratch.  (See hard parse ratio, comparing hard parses to executes.)  If the database is parsing every statement that is executing, the parse-to-execute ratio will be close to 1% (high hard parses), often indicating non-reentrant SQL that does not use host variables (see cursor_sharing=force).
                   
                • Soft parse - A reentrant SQL statement where the only unique features are host variables. (See soft parse ratio, comparing soft parses to executes.)  The best-case scenario is a parse-to-execute ratio of 100%, which would indicate an application with fully reentrant SQL that "parses SQL once and executes many times" (also see your setting for session_cached_cursors, as this affects the reentrancy of an SQL statement).
                Excessive hard parsing can occur when your shared_pool_size is too small (and reentrant SQL is paged out), or when you have non-reusable SQL statements without host variables.  See the cursor_sharing parameter for an easy way to make SQL reentrant, and remember that you should always use host variables in your SQL so that statements can be reentrant.
                • Bind Phase - Once the plan is syntactically created, Oracle gathers the parameters from the client program needed for the execution. It makes the addresses of the program variables “known” to Oracle.
                • Execute Phase - During the execute phase, Oracle executes the statement, reports any possible errors, and if everything is as it should be, forms the result set. Unless the SQL statement being executed is a query, this is the last step of the execution.
                • Define Phase - Define is analogous to binds, only “output oriented”. The OCI define makes addresses of the output variables “known” to the Oracle process in order to make it possible to the fetch call to know where to put the output variables. The define call is frequently skipped and is largely considered unnecessary because of the automatic variables allocation in PHP.
                • Fetch Phase - During the fetch phase, Oracle brings the rows of the result to the program and makes them accessible by the PHP interpreter. Once more, the define and fetch phases are relevant for queries only. The Oracle OCI interface and the PHP OCI8 module contain calls to facilitate each of those phases
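            The hard/soft parse distinction above is mostly about host (bind) variables; a small SQL*Plus sketch:

            ```sql
            -- Literal values: each distinct literal is a new statement -> hard parse
            SELECT last_name FROM employees WHERE department_id = 80;

            -- Bind variable: one shared cursor, re-executed -> soft parse
            VARIABLE dept NUMBER
            EXEC :dept := 80
            SELECT last_name FROM employees WHERE department_id = :dept;
            ```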
            • SQL execution steps
              • The Stages in Processing a SQL Statement

            • Optimization
            • A syntax check - Are all keywords present "select . . . from", etc . .
            • A semantic check against the dictionary - Are all table names spelled correctly, etc.
            • Object privileges are evaluated.
            • The creation of the cost-based decision tree of possible plans
            • The generation of the lowest cost execution plan
            • Binding the execution plan - This is where the table--> tablespace --> data file translation occurs.
            • Executing the query and fetching the rows.
          • SQL TRACE vs TK PROF vs STATSPACK vs AWR
            • SQL Trace
            • AWR : Oracle have provided many performance gathering and reporting tools over the years. Originally the UTLBSTAT/UTLESTAT scripts were used to monitor performance metrics. Oracle8i introduced the statspack functionality which Oracle9i extended. In Oracle 10g statspack has evolved into the Automatic Workload Repository (AWR).
            • AWR
          • SQL PROFILER (Trace + Timing + No of Execs) vs SQL TRACE (Just Trace)
          • SOFT PARSE vs HARD PARSE
            • The parse/bind/execute/define/fetch phases are described under "Steps in execution of a query" above. In short: a hard parse builds a new execution plan from scratch because no reusable cursor exists in the library cache; a soft parse reuses a previously parsed statement whose only differences are host (bind) variable values.
            • Before DBMS_PROFILER was introduced, tuning long PL/SQL programs was a great pain. Tuning PL/SQL was done with SQL TRACE and TKPROF; code of thousands of lines was monitored via DBMS_UTILITY.GET_TIME to measure the time consumed by individual statements.
            • With DBMS_PROFILER it has become quite easy to monitor the performance of PL/SQL programs and identify the culprit statements consuming too much time during the run of the program.
          • HASH JOIN vs MERGE JOIN vs NESTED LOOPS
          • FAST FULL SCAN vs SKIP SCAN
            • Refer Hints section
          • SET AUTOTRACE ON OPTIONS
            • AUTOTRACE OPTIONS: -
              1) SET AUTOTRACE OFF - No AUTOTRACE report is generated.
              This is the default. Queries are run as normal.
              2) SET AUTOTRACE ON EXPLAIN - The query is run as normal,
              and the AUTOTRACE report shows only the optimizer
              execution path.
              3) SET AUTOTRACE ON STATISTICS - The query is run as normal,
              and the AUTOTRACE report shows only the SQL statement
              execution statistics.
              4) SET AUTOTRACE ON - The query execution takes place,
              and the AUTOTRACE report includes both the optimizer
              execution path and the SQL statement execution statistics.
              5) SET AUTOTRACE TRACEONLY - Like SET AUTOTRACE ON, but
              suppresses the printing of the query output, if any.
              6) SET AUTOTRACE TRACEONLY STATISTICS - Like SET AUTOTRACE
              TRACEONLY, but suppresses the display of the query plan.
              It shows only the execution statistics.
              7) SET AUTOTRACE TRACEONLY EXPLAIN - Like SET AUTOTACE
              TRACEONLY, but suppresses the display of the execution
              statistics, showing only the query plan. This setting
              does not actually execute the query. It only parses
              and explains the query.
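            Typical SQL*Plus usage (the statistics output assumes the PLUSTRACE role has been granted):

            ```sql
            SET AUTOTRACE TRACEONLY EXPLAIN
            SELECT COUNT(*) FROM employees;   -- parsed and explained, not executed

            SET AUTOTRACE ON
            SELECT COUNT(*) FROM employees;   -- rows + plan + execution statistics

            SET AUTOTRACE OFF
            ```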
          • SQL*LOADER DIRECT ACCESS PATH v/s CONVENTIONAL PATH
            • Initial Loading Full data
            • Lazy Loading
            • Loader Modes
            • Conventional path load (the default) uses the SQL INSERT statement and a bind array buffer to load data into database tables. This method is used by all Oracle tools and applications.
              When SQL*Loader performs a conventional path load, it competes equally with all other processes for buffer resources. This can slow the load significantly. Extra overhead is added as SQL statements are generated, passed to Oracle, and executed.
              The Oracle database looks for partially filled blocks and attempts to fill them on each insert. Although appropriate during normal use, this can slow bulk loads dramatically.
            • Direct Path Loading : Instead of filling a bind array buffer and passing it to the Oracle database with a SQL INSERT statement, a direct path load uses the direct path API to pass the data to be loaded to the load engine in the server. The load engine builds a column array structure from the data passed to it.
              The direct path load engine uses the column array structure to format Oracle data blocks and build index keys. The newly formatted database blocks are written directly to the database (multiple blocks per I/O request using asynchronous writes if the host platform supports asynchronous I/O).
              Internally, multiple buffers are used for the formatted blocks. While one buffer is being filled, one or more buffers are being written if asynchronous I/O is available on the host platform. Overlapping computation with I/O increases load performance.
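As a sketch, conventional and direct path loads differ only in the sqlldr invocation; the control file below is a hypothetical minimal example (file, table, and column names are assumptions):

```sql
-- emp_load.ctl : minimal SQL*Loader control file
LOAD DATA
INFILE 'emp_data.csv'
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename, sal)
```

Run `sqlldr scott/tiger control=emp_load.ctl direct=true` for a direct path load; omit `direct=true` (or pass `direct=false`) for the conventional path.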
          • PARTITION EXCHANGE
          • TRANSPORTABLE TABLESPACES
          • REWRITING OUTER JOINS USING INLINE VIEWS CORRELATED QUERIES
          • FORALL and BULK COLLECT (INSERT, UPDATE, SELECT ... BULK COLLECT INTO collection)
          • Using INDICES OF and VALUES OF with Non-Consecutive Index Values
          • FORALL i IN INDICES OF cust_tab
            INSERT INTO valid_orders
            (cust_name, amount)
              VALUES
              (cust_tab(i), amount_tab(i));
          • FORALL i IN VALUES OF rejected_order_tab
              INSERT INTO rejected_orders VALUES (cust_tab(i), amount_tab(i));
          • DECLARE
              -- collections to hold a set of customer names and amounts 
             SUBTYPE cust_name IS valid_orders.cust_name%TYPE;
             TYPE cust_typ IS TABLE OF cust_name;
             cust_tab cust_typ;
             SUBTYPE order_amount IS valid_orders.amount%TYPE;
             TYPE amount_typ IS TABLE OF NUMBER;
             amount_tab amount_typ;
              -- collections to point into the CUST_TAB collection.
             TYPE index_pointer_t IS TABLE OF PLS_INTEGER;
             big_order_tab index_pointer_t := index_pointer_t();
             rejected_order_tab index_pointer_t := index_pointer_t();
          • PROCEDURE setup_data 
          • IS 
            BEGIN
              -- Set up sample order data, with some invalid and 'big' orders
              cust_tab := cust_typ('Company1', 'Company2', 'Company3',
              'Company4', 'Company5');
              amount_tab := amount_typ(5000.01, 0, 150.25, 4000.00,NULL);
            END setup_data;
            BEGIN
              setup_data;
              dbms_output.put_line('--- Original order data ---');
              FOR i IN 1..cust_tab.LAST
              LOOP
                dbms_output.put_line('Cust#' || i || ', '|| cust_tab(i) ||
                ': $'||amount_tab(i));
              END LOOP;
              -- Delete invalid orders (where amount is null or 0)
              FOR i IN 1..cust_tab.LAST
              LOOP
                IF amount_tab(i) IS NULL OR amount_tab(i) = 0 THEN
                  cust_tab.delete(i);
                  amount_tab.delete(i);
                END IF;
              END LOOP;
              dbms_output.put_line('--Data with deleted invalid orders--');
              FOR i IN 1..cust_tab.LAST LOOP
                IF cust_tab.EXISTS(i) THEN
                  dbms_output.put_line('Cust#' || i || ', ' || cust_tab(i) ||
                  ': $'||amount_tab(i));
                END IF;
              END LOOP;
              -- Since the subscripts of our collections are not consecutive,
              -- we use FORALL ... INDICES OF to iterate the subscripts

              FORALL i IN INDICES OF cust_tab
              INSERT INTO valid_orders
              (cust_name, amount)
              VALUES
              (cust_tab(i), amount_tab(i));
              -- Now let's process the order data differently extracting
              --  2 subsets and storing each subset in a different table.
             
              setup_data; -- Reinitialize the CUST_TAB and AMOUNT_TAB collections
              FOR i IN cust_tab.FIRST .. cust_tab.LAST
              LOOP
                IF amount_tab(i) IS NULL OR amount_tab(i) = 0 THEN
                  -- add a new element to the collection
                  rejected_order_tab.EXTEND;
                  -- record original collection subscript
                  rejected_order_tab(rejected_order_tab.LAST) := i;
                END IF;
                IF amount_tab(i) > 2000 THEN
                  -- Add a new element to the collection
                  big_order_tab.EXTEND;
                  -- record original collection subscript 
                  big_order_tab(big_order_tab.LAST) := i;
                END IF;
              END LOOP;
              -- run one DML statement on one subset of elements,
              -- and another DML statement on a different subset.
             
              FORALL i IN VALUES OF rejected_order_tab
              INSERT INTO rejected_orders VALUES (cust_tab(i), amount_tab(i));
              FORALL i IN VALUES OF big_order_tab
              INSERT INTO big_orders VALUES (cust_tab(i), amount_tab(i));
              COMMIT;
            END;
            /
          • BULK BIND, BULK EXCEPTION, BULK ROWCOUNT
            • FORALL index_name IN lower_boundary .. upper_boundary
              [SAVE EXCEPTIONS]
              sql_statement;
            • FORALL index_name IN INDICES OF collection
              [BETWEEN lower_boundary AND upper_boundary]
              [SAVE EXCEPTIONS]
              sql_statement;
            • FORALL index_name IN VALUES OF index_collection
              [SAVE EXCEPTIONS]
              sql_statement;
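A minimal sketch of SAVE EXCEPTIONS with the %BULK_EXCEPTIONS pseudo-collection (a table t with a NOT NULL id column is an assumption):

```sql
DECLARE
  TYPE num_tab IS TABLE OF NUMBER;
  ids       num_tab := num_tab(1, 2, NULL, 4);  -- the NULL will violate NOT NULL
  bulk_errs EXCEPTION;
  PRAGMA EXCEPTION_INIT(bulk_errs, -24381);     -- ORA-24381: error(s) in array DML
BEGIN
  FORALL i IN 1 .. ids.COUNT SAVE EXCEPTIONS
    INSERT INTO t (id) VALUES (ids(i));
EXCEPTION
  WHEN bulk_errs THEN
    FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
      DBMS_OUTPUT.PUT_LINE('Row '  || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX ||
                           ' ORA-' || SQL%BULK_EXCEPTIONS(j).ERROR_CODE);
    END LOOP;
END;
/
```

SQL%ROWCOUNT reports the total rows processed, and SQL%BULK_ROWCOUNT(i) the rows affected by the i-th iteration.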
          • P.T. FULL TABLE SCANS cost
          • Diff between ESTIMATE STATISTICS AND COMPUTE STATISTICS
            • Compute : Computes Statistics for whole Table(for all rows).
            • Estimate : Computes Statistics on sample of rows over the table. 
              - By default, the sample is 1,064 rows.
              - You can specify rows in numbers or in percentage of total rows. 
              - If the number or percentage of rows exceeds 50%, Statistics will be computed on all rows in Table.
          • When is the cost in the explain plan NULL? When the statement was optimized by the rule-based optimizer (RBO), which does not compute costs.
          • Loading ~1 trillion records: consider MERGE, PARTITION EXCHANGE, COPY, SQL*Loader direct path, external tables, or expdp/impdp
          • Inserting ~10 billion records (all except one subset): consider CREATE TABLE AS SELECT with the exclusion predicate
          • Difference between DBMS_STATS and ANALYZE
            • The cost-based optimizer (CBO) is the preferred optimizer mode, and it needs accurate statistics to work well. Before Oracle 8i, the ANALYZE command was used to gather statistics.

              The DBMS_STATS package was introduced in Oracle 8i, and since then Oracle has strongly recommended it over ANALYZE. The notes below (written against Oracle 10g) cover:

              1. Why does Oracle recommend DBMS_STATS?
              2. What are the advantages of DBMS_STATS over ANALYZE?
              3. How do we use DBMS_STATS to analyze a table?
              4. What DBMS_STATS features were added in each version?


              Why Oracle has recommended DBMS_STATS since Oracle 8i:

              1. Statistics can be gathered in parallel. This option is not available with ANALYZE.

              2. It can collect stale statistics. See the stale-statistics topic for details.

              3. DBMS_STATS is a PL/SQL package, so it is easy to call from code; ANALYZE is not.

              4. It can collect statistics on external tables; ANALYZE cannot.

              5. It can collect system statistics; ANALYZE cannot.

              6. ANALYZE sometimes produces inaccurate statistics; DBMS_STATS does not.

              7. ANALYZE cannot gather statistics at the partition or subpartition level, but DBMS_STATS can target any specific partition or subpartition. This is especially useful for partitioned tables: there is no need to re-analyze historical partitions whenever the current partition is refreshed.

              8. Statistics gathered with DBMS_STATS can be transferred from one database to another; this cannot be done with statistics collected by ANALYZE. See the statistics-transfer topic for details.
            • Example for collecting statistics on a table: 

              DBMS_STATS.GATHER_TABLE_STATS(
              OWNNAME => 'TANDEB',
              TABNAME => 'CUSTOMER',
              PARTNAME => 'PART092009',
              GRANULARITY => 'PARTITION',
              ESTIMATE_PERCENT => 10,
              METHOD_OPT => 'FOR ALL COLUMNS SIZE 1',
              CASCADE => TRUE,
              NO_INVALIDATE => TRUE);

              Example for collecting statistics on a schema: 

              DBMS_STATS.GATHER_SCHEMA_STATS(
              OWNNAME => 'SCOMPPRD',
              ESTIMATE_PERCENT => 10,
              METHOD_OPT => 'FOR ALL COLUMNS SIZE 1',
              OPTIONS => 'GATHER',
              CASCADE => TRUE,
              NO_INVALIDATE => TRUE);

              Example for collecting system statistics:

              DBMS_STATS.GATHER_SYSTEM_STATS(
              GATHERING_MODE => 'INTERVAL',
              INTERVAL => 10);

              Example for collecting database statistics: 

              DBMS_STATS.GATHER_DATABASE_STATS(
              ESTIMATE_PERCENT => 10,
              METHOD_OPT => 'FOR ALL COLUMNS SIZE 1',
              CASCADE => TRUE,
              NO_INVALIDATE => TRUE,
              GATHER_SYS => FALSE);
          • HINTS
            • Join Order ORDERED vs LEADING
              • /*+ LEADING */ Hint  specifies the set of tables to be used as the prefix in the execution plan.
                • The "leading" hint is ignored if the tables specified cannot be joined first in the order specified
                • If you specify two or more conflicting LEADING hints – all of them are ignored
                • The ORDERED hint overrides all LEADING hints
              • /*+ ORDERED */ Hint Oracle joins tables in the order in which they appear in the FROM clause
                • The optimizer normally chooses the join order itself, but that choice can be slow to compute and wrong if the CBO statistics (especially histograms) are stale or missing
                • You may want to specify the "ordered" hint if you know something about the number of rows selected from each table that the optimizer does not
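For instance (scott's emp/dept schema used illustratively), both hints below pin the join order to dept first:

```sql
-- LEADING names the prefix explicitly:
SELECT /*+ LEADING(d e) USE_NL(e) */ d.dname, e.ename
FROM   dept d, emp e
WHERE  e.deptno = d.deptno;

-- ORDERED takes the order from the FROM clause instead:
SELECT /*+ ORDERED */ d.dname, e.ename
FROM   dept d, emp e
WHERE  e.deptno = d.deptno;
```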
            • Optimizer Hints in Oracle 11g and hint overkill
            • Hints Ref
            • Index scans
            • Hints
            • Predicate Pushing
            • Hints of Acceptability
            • INDEX_SS
            • An index skip scan uses logical subindexes of a composite index. It was introduced to allow Oracle to “skip” leading-edge predicates in a multi-column index; you can force one with the /*+ index_ss */ hint. For example, consider the following concatenated index on a very low-cardinality column, followed by a very selective column: 
            • create index  sex_emp_id on emp (sex, emp_id);
              Prior to Oracle9i, this index could only be used when both sex and emp_id were present in the SQL query, or when the sex column was specified. The following query would not be able to use the concatenated index:

              select emp_id from emp where emp_id = 123;

              The Oracle 9i skip scan execution plan allows our concatenated index to be used even though sex is not specified in the WHERE clause. The index skip scan means there is no need to build a second index on the emp_id column, since it would be technically redundant with the multi-column index. Oracle acknowledges that an index skip scan is not as fast as a direct index lookup, but it is faster than a full-table scan.

              INDEX_FFS

              The INDEX_FFS hint causes a fast full index scan to be performed rather than a full table scan.

              A fast full scan is a full index scan in which the database reads all index blocks in no particular order. The database accesses the data in the index itself, without accessing the table.

              This is the optimal access method when all of the information required to answer a query is contained in indexed columns. Thus we will start the demo by first finding indexed columns and then building a query that uses only those columns. And, to make the demo as reliable as possible only those where the column is the first column in the index.

              INDEX_COMBINE

              The INDEX_COMBINE hint explicitly chooses a bitmap access path for the table. If no indexes are given as arguments for the INDEX_COMBINE hint, then the optimizer uses whatever Boolean combination of bitmap indexes has the best cost estimate for the table. If certain indexes are given as arguments, then the optimizer tries to use some Boolean combination of those particular bitmap indexes.

              INDEX_JOIN

              The INDEX_JOIN hint explicitly instructs the optimizer to use an index join as an access path. For the hint to have a positive effect, a sufficiently small number of indexes must exist that contain all the columns required to resolve the query.

              APPEND - direct-path mode (enable parallel mode) 

              The APPEND hint lets you enable direct-path INSERT if your database is running in serial mode. Your database is in serial mode if you are not using Enterprise Edition. Conventional INSERT is the default in serial mode, and direct-path INSERT is the default in parallel mode.

              In direct-path INSERT, data is appended to the end of the table, rather than using existing space currently allocated to the table. As a result, direct-path INSERT can be considerably faster than conventional INSERT.

              NOAPPEND - conventional path mode (disable parallel mode)

              The NOAPPEND hint enables conventional INSERT by disabling parallel mode for the duration of the INSERT statement. (Conventional INSERT is the default in serial mode, and direct-path INSERT is the default in parallel mode).
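A direct-path insert under these rules might be sketched as follows (table names are illustrative):

```sql
INSERT /*+ APPEND */ INTO sales_hist
SELECT * FROM sales WHERE sale_date < DATE '2012-01-01';

COMMIT;  -- required: after a direct-path insert the table cannot be
         -- queried in the same session until the transaction commits
```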

              CACHE

              The CACHE hint specifies that the blocks retrieved for the table are placed at the most recently used end of the LRU list in the buffer cache when a full table scan is performed. This option is useful for small lookup tables.
            • Hints by category (11g; * marks a legacy spelling):
              • Optimization goals and approaches (2): ALL_ROWS, FIRST_ROWS, RULE
              • Access path (17): CLUSTER, FULL, HASH, INDEX / NO_INDEX, INDEX_ASC, INDEX_DESC, INDEX_COMBINE, INDEX_JOIN, INDEX_FFS / NO_INDEX_FFS, INDEX_SS / NO_INDEX_SS, INDEX_SS_ASC, INDEX_SS_DESC
              • Join operation (7): USE_HASH / NO_USE_HASH, USE_MERGE / NO_USE_MERGE, USE_NL / NO_USE_NL, USE_NL_WITH_INDEX, NATIVE_FULL_OUTER_JOIN / NO_NATIVE_FULL_OUTER_JOIN
              • Join order (2): ORDERED, LEADING
              • Query transformation (13): FACT / NO_FACT, MERGE / NO_MERGE, NO_EXPAND, USE_CONCAT, REWRITE / NO_REWRITE (NOREWRITE*), UNNEST / NO_UNNEST, STAR_TRANSFORMATION / NO_STAR_TRANSFORMATION, NO_QUERY_TRANSFORMATION
              • XML (2): NO_XMLINDEX_REWRITE, NO_XML_QUERY_REWRITE
              • Parallel execution (5): PARALLEL / NO_PARALLEL (NOPARALLEL*), PARALLEL_INDEX / NO_PARALLEL_INDEX (NOPARALLEL_INDEX*), PQ_DISTRIBUTE
              • Other (20): APPEND, NOAPPEND, CACHE, NOCACHE, CURSOR_SHARING_EXACT, DRIVING_SITE, DYNAMIC_SAMPLING, MODEL_MIN_ANALYSIS, MONITOR / NO_MONITOR, OPT_PARAM, PUSH_PRED / NO_PUSH_PRED, PUSH_SUBQ / NO_PUSH_SUBQ, PX_JOIN_FILTER / NO_PX_JOIN_FILTER, QB_NAME, RESULT_CACHE / NO_RESULT_CACHE



      • BIND Variables



      • 11g Oracle's Buffer Cache



      • BIND Variable PEEKING 11g



      • ADAPTIVE CURSOR SHARING IN 11G



      • SHARING, BIND Variable PEEKING and HISTOGRAMS



      • Links



      • Query to check long running queries

        • select vlo.*, vs.*
          from   v$session_longops vlo,
                 v$sql vs
          where  vlo.sql_hash_value = vs.hash_value
          and    vlo.sql_address    = vs.address
          and    vs.child_number    = 0
          and    vlo.sofar / vlo.totalwork * 100 <> 100;


      • SCHEDULING

        • DBMS_JOB
        • DBMS_SCHEDULER
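A minimal DBMS_SCHEDULER sketch (the job name and schedule are illustrative assumptions):

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_STATS',       -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.GATHER_SCHEMA_STATS(USER); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2', -- every day at 02:00
    enabled         => TRUE);
END;
/
```

DBMS_JOB.SUBMIT is the older equivalent; DBMS_SCHEDULER adds calendar syntax, job chains, and logging.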




      • TRIGGERS

        • 12 TYPES of triggers before, after, row level, statement level, system, non system
        • Call Sequence of Execution of triggers
        • AUTONOMOUS TRANSACTION  in triggers
        • INSTEAD OF TRIGGER = trigger on view
        • Create trigger to add user and source code to log table
        • Call a procedure from the trigger body to work around the 32K maximum trigger size
        • How to avoid mutating trigger error change it to a statement level trigger, use view, use GTT
        • A table can have triggers of up to 12 distinct types (BEFORE/AFTER x row/statement x INSERT/UPDATE/DELETE); there is no fixed limit on the total number of triggers
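Tying two of the points above together, a statement-level trigger that logs DML through an autonomous transaction might be sketched as follows (emp_audit_log is an assumed table):

```sql
CREATE OR REPLACE TRIGGER trg_emp_audit
AFTER INSERT OR UPDATE OR DELETE ON emp   -- statement level: no FOR EACH ROW
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO emp_audit_log (who, action_time, action)
  VALUES (USER, SYSDATE,
          CASE WHEN INSERTING THEN 'INSERT'
               WHEN UPDATING  THEN 'UPDATE'
               ELSE 'DELETE' END);
  COMMIT;  -- commits only the autonomous transaction, not the triggering DML
END;
/
```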


      • PARTITIONING

        • Partitions (how to specify)
        • Updating the partition key fails with ORA-14402 unless row movement is enabled (ALTER TABLE ... ENABLE ROW MOVEMENT)
        • Horizontal vs Vertical partitioning, range, list, hash
        • Sub partitions
        • Global Indexes
        • Local Indexes (Always create local indexes for partitioned tables)
        • Global vs Local partitioned index
        • Object partitioning in Oracle
        • Partition by analytical function
        • Partitioned tables and indexes
        • Partition Pruning
          • Partition pruning improves response time
          • Partition pruning tips
            • Beginning with Oracle 11g, partition pruning can use bloom filtering instead of subquery pruning. A bloom filter tests whether an element is a member of a set. Bloom-filter pruning is activated automatically for any join with a partitioned object: it is not a cost-based decision and requires no additional resources, so this transparent enhancement improves the performance of partition pruning.
        • Managing Partitioned Tables and Indexes in Oracle 10g
        • ALTER TABLE Maintenance Operations for Table Partitions
          Maintenance Operation | Range | Hash | List | Composite: Range/Hash | Composite: Range/List
          Adding partitions | ADD PARTITION | ADD PARTITION | ADD PARTITION | ADD PARTITION; MODIFY PARTITION ... ADD SUBPARTITION | ADD PARTITION; MODIFY PARTITION ... ADD SUBPARTITION
          Coalescing partitions | n/a | COALESCE PARTITION | n/a | MODIFY PARTITION ... COALESCE SUBPARTITION | n/a
          Dropping partitions | DROP PARTITION | n/a | DROP PARTITION | DROP PARTITION | DROP PARTITION; DROP SUBPARTITION
          Exchanging partitions | EXCHANGE PARTITION | EXCHANGE PARTITION | EXCHANGE PARTITION | EXCHANGE PARTITION; EXCHANGE SUBPARTITION | EXCHANGE PARTITION; EXCHANGE SUBPARTITION
          Merging partitions | MERGE PARTITIONS | n/a | MERGE PARTITIONS | MERGE PARTITIONS | MERGE PARTITIONS; MERGE SUBPARTITIONS
          Modifying default attributes | MODIFY DEFAULT ATTRIBUTES | MODIFY DEFAULT ATTRIBUTES | MODIFY DEFAULT ATTRIBUTES | MODIFY DEFAULT ATTRIBUTES; MODIFY DEFAULT ATTRIBUTES FOR PARTITION | MODIFY DEFAULT ATTRIBUTES; MODIFY DEFAULT ATTRIBUTES FOR PARTITION
          Modifying real attributes | MODIFY PARTITION | MODIFY PARTITION | MODIFY PARTITION | MODIFY PARTITION; MODIFY SUBPARTITION | MODIFY PARTITION; MODIFY SUBPARTITION
          Adding list values | n/a | n/a | MODIFY PARTITION ... ADD VALUES | n/a | MODIFY SUBPARTITION ... ADD VALUES
          Dropping list values | n/a | n/a | MODIFY PARTITION ... DROP VALUES | n/a | MODIFY SUBPARTITION ... DROP VALUES
          Setting subpartition template | n/a | n/a | n/a | SET SUBPARTITION TEMPLATE | SET SUBPARTITION TEMPLATE
          Moving partitions | MOVE PARTITION | MOVE PARTITION | MOVE PARTITION | MOVE SUBPARTITION | MOVE SUBPARTITION
          Renaming partitions | RENAME PARTITION | RENAME PARTITION | RENAME PARTITION | RENAME PARTITION; RENAME SUBPARTITION | RENAME PARTITION; RENAME SUBPARTITION
          Splitting partitions | SPLIT PARTITION | n/a | SPLIT PARTITION | SPLIT PARTITION | SPLIT PARTITION; SPLIT SUBPARTITION
          Truncating partitions | TRUNCATE PARTITION | TRUNCATE PARTITION | TRUNCATE PARTITION | TRUNCATE PARTITION; TRUNCATE SUBPARTITION | TRUNCATE PARTITION; TRUNCATE SUBPARTITION
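For example, exchanging a loaded staging table into one partition is a near-instant, metadata-only operation (table and partition names are illustrative):

```sql
ALTER TABLE sales
  EXCHANGE PARTITION p_2012_q4
  WITH TABLE sales_stage
  INCLUDING INDEXES
  WITHOUT VALIDATION;
```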



      • EXCEPTION HANDLING



      • ANALYTICAL FUNCTIONS





      • TRANSPOSING DATA

        • NORMALIZATION - UNION
        • DENORMALIZATION - CASE
        • LISTAGG
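The two directions can be sketched as follows (the emp table is used illustratively; LISTAGG requires 11gR2 or later):

```sql
-- Denormalize rows into one delimited column:
SELECT deptno,
       LISTAGG(ename, ',') WITHIN GROUP (ORDER BY ename) AS emp_list
FROM   emp
GROUP  BY deptno;

-- Denormalize with CASE: one column per job category:
SELECT deptno,
       SUM(CASE WHEN job = 'CLERK'   THEN sal END) AS clerk_sal,
       SUM(CASE WHEN job = 'MANAGER' THEN sal END) AS manager_sal
FROM   emp
GROUP  BY deptno;
```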


      • ORACLE DBMS_**** and Other UTILITIES

        • Built in packages
        • DBMS_TRACE, DBMS_PROFILER, DBMS_SUPPORT, DBMS_MONITOR
        • The WRAP utility obfuscates PL/SQL source code (it is not true encryption).
        • ADIDENT utility is used to find the version of any file
        • DBMS_ALERT - Package DBMS_ALERT lets you use database triggers to alert an application when specific database values change. The alerts are transaction based and asynchronous (that is, they operate independently of any timing mechanism). For example, a company might use this package to update the value of its investment portfolio as new stock and bond quotes arrive.
        • DBMS_PIPE - Package DBMS_PIPE allows different sessions to communicate over named pipes. (A pipe is an area of memory used by one process to pass information to another.) You can use the procedures PACK_MESSAGE and SEND_MESSAGE to pack a message into a pipe, then send it to another session in the same instance or to a waiting application such as a UNIX program.
          At the other end of the pipe, you can use the procedures RECEIVE_MESSAGE and UNPACK_MESSAGE to receive and unpack (read) the message. Named pipes are useful in many ways. For example, you can write a C program to collect data, then send it through pipes to stored procedures in an Oracle database.
        • UTL_FILE -Package UTL_FILE lets PL/SQL programs read and write operating system (OS) text files. It provides a restricted version of standard OS stream file I/O, including open, put, get, and close operations.
          When you want to read or write a text file, you call the function FOPEN, which returns a file handle for use in subsequent procedure calls. For example, the procedure PUT_LINE writes a text string and line terminator to an open file, and the procedure GET_LINE reads a line of text from an open file into an output buffer.
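A minimal write/read round trip with UTL_FILE (the DATA_DIR directory object is an assumption):

```sql
DECLARE
  f    UTL_FILE.FILE_TYPE;
  line VARCHAR2(100);
BEGIN
  -- Assumes: CREATE DIRECTORY data_dir AS '/tmp'; plus read/write grants.
  f := UTL_FILE.FOPEN('DATA_DIR', 'hello.txt', 'w');
  UTL_FILE.PUT_LINE(f, 'Hello from PL/SQL');
  UTL_FILE.FCLOSE(f);

  f := UTL_FILE.FOPEN('DATA_DIR', 'hello.txt', 'r');
  UTL_FILE.GET_LINE(f, line);
  DBMS_OUTPUT.PUT_LINE(line);
  UTL_FILE.FCLOSE(f);
END;
/
```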


      • GRANTS AND PRIVILEGES

        • Grant granter rights (WITH GRANT OPTION)
        • Roles help grant privileges to a group of users


      • CONSTRAINTS (alter table enable /disable constraint)

        • DISABLE CONSTRAINT
        • DISABLE CONSTRAINT NO CHECK
        • DISABLE ALL CONSTRAINT
        • ENABLE NOVALIDATE
        • A constraint created NOT DEFERRABLE cannot be changed to DEFERRABLE; it must be dropped and re-created
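Typical bulk-load usage of these options (table and constraint names are illustrative):

```sql
ALTER TABLE emp DISABLE CONSTRAINT fk_emp_dept;
-- ... bulk load ...
ALTER TABLE emp ENABLE NOVALIDATE CONSTRAINT fk_emp_dept;
-- NOVALIDATE trusts existing rows and checks only new DML;
-- ENABLE VALIDATE would re-check every existing row.
```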


      • COMPILATION

        • When will an object become invalid 
        • Conditional Compilation in 11g
        Audit
        • Select d.name, l.sessions_current, s.value, l.sessions_highwater from v$database d, v$sysstat s, v$license l  
        Debugging and code maintainability :
        • Add RETURN or EXIT till the point you want to run
        • Prefer to use GTT over Collections
        • To find missing records in a join, use an outer join (ANSI syntax)


      • Impact Analysis and Dependencies

        • all_dependencies or dba_dependencies
        • Table for Indirect Dependencies


      • DB LINKs: accessing a remote database (a node)
      • Shutdown / Startup Options

        • Shutdown: NORMAL, TRANSACTIONAL, IMMEDIATE, ABORT
        • Startup: NOMOUNT, MOUNT, OPEN (optionally FORCE, RESTRICT)


      • Standards

        • Database Object Naming Rules: names are limited to 30 bytes/chars, except database names (8 bytes) and database links (128 bytes)


      • Puzzles/Tricks

        • Num to word
          • to_char(to_date(col,'j'),'jsp')
        • Convert multiple rows to a single column
          • SELECT SUBSTR (SYS_CONNECT_BY_PATH (NAME , ','), 2) FRUITS_LIST
            FROM (SELECT NAME , ROW_NUMBER () OVER (ORDER BY NAME ) RN,
            COUNT (*) OVER () CNT
            FROM FRUITS)
            WHERE RN = CNT
            START WITH RN = 1
            CONNECT BY RN = PRIOR RN + 1;
        • Oracle tips tricks
        • select split('123,45,6,7', 1, ',') from dual  -- split here is a user-defined function; Oracle has no built-in SPLIT


      • Unix and Oracle

        • Common Unix Commands & Usage
        • ksh shell script 'helloworld' after 25 secs


      • Additional Reference :




      • Links to popular blogs