Oracle Midlands Presentation

Ever wonder why your SQL execution plan has changed? Wondering why your boss is shouting something about “traders can’t trade” or “shoppers can’t spend” or “well that’s the Airport shut!” ?

When you write SQL, Oracle runs it through the Optimiser to determine the fastest way to access the data. This access path sometimes changes without anyone noticing, sometimes with sub-optimal results. I’m giving a presentation at an Oracle Midlands event that will show the 7 main reasons why your execution plan will change, and what you can do about it.

Register, and come along on 24th November. There are even free refreshments! You might even learn something – even if it’s just that you know more than the presenter :-)

12C Paging through Data and Restricting Result Sets

Prior to Oracle 12C, if you wanted to page through result sets, only returning a bit at a time, or only wanted the first dozen lines of data, it was a right pain. Writing your SQL, then wrapping it in an in-line view with an ORDER BY and a ROWNUM on the outer query, and then wrapping the whole of that in another in-line view with a named ROWNUM around it to create a sliding window. Horrid. Horrid. Horrid.
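For anyone who hasn’t had the pleasure, the old pattern looked something like this (table and column names are illustrative), here returning rows 91 to 100:

SELECT *
  FROM (SELECT q.*, ROWNUM rn
          FROM (SELECT col1, col2
                  FROM my_table
                 ORDER BY col1) q
         WHERE ROWNUM <= 100)
 WHERE rn > 90;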

I have been covetous of the TOP clause in SQL Server’s T-SQL for years and years.

Well FINALLY! In 12C we have our very own Oracle syntax to rival SQL Server’s TOP clause. Something to make sliding cursors at least a bit more readable. Step forward FETCH FIRST n ROWS!
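For example, something along these lines (the table is illustrative) skips the first 10 rows and returns the next 10 – FIRST and NEXT are interchangeable:

SELECT col1, col2
  FROM my_table
 ORDER BY col1
OFFSET 10 ROWS FETCH NEXT 10 ROWS ONLY;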



(note: this means the first record should be record 11 – one more than the offset)

It also has other flexibility: FETCH FIRST 20 PERCENT ROWS ONLY, or FETCH FIRST 10 ROWS WITH TIES (i.e. any extra rows that tie on the ORDER BY values of the last row are returned too), and FIRST can be swapped for NEXT, etc.

The numbers supplied can be bind variables, too. As we’d expect.
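A quick sketch with a bind variable and WITH TIES (again, the table and columns are made up):

VARIABLE top_n NUMBER
EXEC :top_n := 10

SELECT emp_name, salary
  FROM my_employees
 ORDER BY salary DESC
 FETCH FIRST :top_n ROWS WITH TIES;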

It’s worth noting that this SQL will optimise differently to the ROWNUM method of paging through a result set. That doesn’t mean better or worse, just different. In some cases it will be better, and in other cases it may well be worse. It depends upon the table design and the SQL query.

Under the covers, Oracle is actually doing a query transformation, rewriting the query using a ROW_NUMBER() OVER (ORDER BY ...) analytic function. This is the other way of paging through a result set, effectively doing:

SELECT *
  FROM (SELECT column, ROW_NUMBER() OVER (ORDER BY column) R FROM table)
 WHERE R BETWEEN 51 AND 100;

I think you’ll agree that, if nothing else, the new 12C syntax is a lot easier to read.

For a much better post about how it works, you should swing by Tim Hall’s website. Damn it Tim, I found that half way through writing this (which is why it’s short). Stop being so good at blogging :-)

Oracle 12C Problem with datapatch. Part 2, the “fix”

So after much to-ing and fro-ing with Oracle Support, we finally have a solution to my datapatch problem: prevent the timeout from happening when running datapatch! We know which part of the code was timing out, and we can do this by setting an undocumented event: 18219841.

Here’s a short test to show you how that was done:

SQL> select dbms_sqlpatch.verify_queryable_inventory from dual;

ORA-20008: timed out. Job Load_opatch_inventory_1execution time is more than 120Secs

(wait a while for the background job to clean up)

SQL> alter session set events '18219841 trace name context forever';

Session Altered

SQL> select dbms_sqlpatch.verify_queryable_inventory from dual;


patch 19769480 installed in all nodes
patch 20299023 installed in all nodes
patch 28031110 installed in all nodes

and switch it back off

alter session set events '18219841 trace name context off';

You may note above that I was changing the context for the session only, as a test. To get it to work with datapatch, you will need to use “alter system” commands before and after running datapatch.
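In other words, something along these lines around the datapatch run:

alter system set events '18219841 trace name context forever';
-- ...run ./datapatch -verbose from the OS at this point...
alter system set events '18219841 trace name context off';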

I would recommend that you do not set this event without first contacting Oracle Support; you may wish to quote “Bug 21121707”. I would also recommend you un-set it afterwards – you don’t know what it’s doing, and you don’t know whether there are any negative side-effects.

To check if the event is set in your session/system:

in sqlplus...

SQL> var level number
SQL> exec dbms_system.read_ev(18219841, :level);
SQL> print :level


Addendum: after a Twitter conversation, Martin Berger (@martinberx) noted that the nature of the query had changed with the event set, and that it was no longer calling the scheduler jobs. I did note, however, that it’s still calling the same pre-processor, but without the timeout (so it will run until it’s finished, however long that may be): $ORACLE_HOME/QOpatch/qopiprep.bat – it’s getting information from GV$QPX_INVENTORY, whatever that view does. Answers on a postcard (or comment) please.

Oracle 12C Problem with datapatch

I was minding my own business, patching an Oracle 12C RAC cluster, when the darned patch failed:

./datapatch -verbose

SQL Patching tool version on Mon Aug  3 20:47:31 2015
Copyright (c) 2015, Oracle.  All rights reserved.
Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_9841_2015_08_03_20_47_31/sqlpatch_invocation.log

Connecting to database...OK
Bootstrapping registry and package to current versions...done

Queryable inventory could not determine the current opatch status.
Execute 'select dbms_sqlpatch.verify_queryable_inventory from dual'
and/or check the invocation log
for the complete error.
Prereq check failed, exiting without installing any patches.

Please refer to MOS Note 1609718.1 and/or the invocation log
for information on how to resolve the above errors.

SQL Patching tool complete on Mon Aug 3 20:49:37 2015

So, following the instructions, I ran the query used to ID the inventory

[oracle@rac12c01 OPatch]$ sqlplus / as sysdba
SQL*Plus: Release Production on Mon Aug 3 20:59:37 2015

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

20:59:37 SYS @ ORCL1 > select dbms_sqlpatch.verify_queryable_inventory from dual;

ORA-20008: Timed out, Job Load_opatch_inventory_2execution time is more than 120Secs

op_scpt_path :/u01/app/oracle/product/12.1.0/dbhome_1/QOpatchpos1

The code has timed out! So, there are some clues about what’s going on with this code. This procedure has a preprocessor, which needs to get information into the database. To do this, it invokes a scheduler job [Load_opatch_inventory_n, where ‘n’ represents the cluster node executing the command] to pull in information from the O/S. Here’s a job running, with its definition – note the job name is mentioned in the failure above:

select job_name,state,job_action from dba_scheduler_jobs where job_name like '%PATCH%' order by job_name;
JOB_NAME                STATE    JOB_ACTION
----------------------- -------- ----------
LOAD_OPATCH_INVENTORY   DISABLED BEGIN dbms_qopatch.opatch_inv_refresh_job(); END;
LOAD_OPATCH_INVENTORY_1 DISABLED BEGIN dbms_qopatch.opatch_inv_refresh_job(); END;
LOAD_OPATCH_INVENTORY_2 RUNNING  BEGIN dbms_qopatch.opatch_inv_refresh_job(); END;

and dbms_qopatch.opatch_inv_refresh_job() also invokes a preprocessor. This preprocessor is a batch job: a shell script in /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch called qopiprep.bat. And this shell script basically runs…

opatch lsinventory -xml

to get the current patches, formatted as XML. It then tacks a delimiter onto the end and returns it to the calling job before cleaning up after itself. This command can be slow-ish to execute, but on the problem cluster in question, it was running particularly slowly.

This command runs on all nodes, which return the output to the calling job, which then has to process the inbound XML files and determine if all is OK in the world of opatch before continuing. You have 2 minutes. GO! If you don’t finish in 2 minutes, the whole thing fails, with no obvious remedial action possible.

One option you have, should you encounter this problem and be desperate for a workaround, is to preprocess the XML file yourself, and just get the batch file to return your handiwork:

NOTE: This is very much a one-off to keep things moving, and you must return the preprocessor code to its normal state and raise a call with Oracle Support to get this fixed properly. Do this workaround at your own risk. I don’t recommend it. Raise a call with Oracle Support instead.

save the pre-processor to put back later:

cp -p qopiprep.bat qopiprep.bat.sav

run the commands in the file:

$ORACLE_HOME/OPatch/opatch lsinventory -xml $ORACLE_HOME/QOpatch/xml_file.xml -retry 0 -invPtrLoc $ORACLE_HOME/oraInst.loc >> $ORACLE_HOME/QOpatch/stout.txt

tack the delimiter string onto the end of the XML file

echo "UIJSVTBOEIZBEFFQBL" >> $ORACLE_HOME/QOpatch/xml_file.xml

edit the script (qopiprep.bat) and remove the bottom 5 commands EXCEPT the “echo `cat…” bit, which returns the output to the calling code:

# Option: "-all" gives information on all Oracle Homes
# installed in the central inventory. With that information, the
# patches of non-RDBMS homes could be fetched.
$ORACLE_HOME/OPatch/opatch lsinventory -xml $ORACLE_HOME/QOpatch/xml_file.xml -retry 0 -invPtrLoc $ORACLE_HOME/oraInst.loc >> $ORACLE_HOME/QOpatch/stout.txt
`echo "UIJSVTBOEIZBEFFQBL" >> $ORACLE_HOME/QOpatch/xml_file.xml`
echo `cat $ORACLE_HOME/QOpatch/xml_file.xml`
rm $ORACLE_HOME/QOpatch/xml_file.xml
 rm $ORACLE_HOME/QOpatch/stout.txt

Now when you run the dbms_sqlpatch command, you have taken the slow opatch lsinventory aspect of the preprocessor out of the equation:

21:04:13 SYS @ ORCL1 > select dbms_sqlpatch.verify_queryable_inventory from dual;
op_scpt_path :/u01/app/oracle/product/12.1.0/dbhome_1/QOpatchpos1
patch 19769480 installed in all nodes
patch 20299023 installed in all nodes
patch 20831110 installed in all nodes

Don’t forget to put the correct preprocessing code back once the datapatch -verbose has completed successfully. And you need to clean up all of the output files you created, otherwise the code will fail again.

Hey, let’s be careful out there.

Oracle Management Database in Grid Infrastructure: -MGMTDB


Why? Why is it now mandatory to have the cluster management database as an Oracle CDB, with a PDB having the same name as the cluster? It’s not that I object to having another 1GB of memory lost to this DB, and up to 10GB of disk in the initial ASM Disk Group. It’s this:

You have called it -MGMTDB

That means I now have directories all over my Linux / Unix directory structure called “-MGMTDB”. Directories I may want to look in, to view a log to discover why the “-MGMTDB” failed to create on install.

That leading dash, that MINUS sign. That’s the problem. THAT! WHY? I can’t use normal commands any more! Everything thinks the “-” is a switch and fails, unless I neutralise the “-” with a “--” prefix.

[oracle@vi-t5-oradev02 X]$ ls -l
total 4
drwxrwxr-x 2 oracle oracle 4096 Jul 20 12:39 -MGMTDB
[oracle@vi-t5-oradev02 X]$ cd -MGMTDB
bash: cd: -M: invalid option
cd: usage: cd [-L|-P] [dir]

[oracle@vi-t5-oradev02 X]$ cd "-MGMTDB"
bash: cd: -M: invalid option
cd: usage: cd [-L|-P] [dir]

[oracle@vi-t5-oradev02 X]$ cd *
bash: cd: -M: invalid option
cd: usage: cd [-L|-P] [dir]

[oracle@vi-t5-oradev02 X]$ cd -- -MGMTDB


So, why wasn’t it a plus, like +ASM? That was OK.

Oracle on 4096 (4k) sector disks doesn’t work (ish)

I recently came across 4K (4096 byte) sector drives. They are a fairly new thing and have come about so drives can exceed the 2TB limit imposed by having 512-byte sectors. The details behind this can be found here, in much greater detail than I need to understand.

What I do understand is that Oracle doesn’t deal with 4K sectors (4Kn) very well, and it shows up in a couple of ways. Don’t get me wrong: from Oracle 11.2, 4Kn databases are supported, albeit not with every feature. Here are 2 of them:
1. ACFS doesn’t like 4K sectors. There’s some fudging around identifying physical v logical 4k sectors but you need to check out the asm parameter “ORACLEASM_USE_LOGICAL_BLOCK_SIZE” to see if you can get it to work for you.
2. I was installing Grid Infrastructure – pretty recent, I hear you all say! That only came out in July 2014. One important aspect of it is that the management database was migrated from being a Berkeley DB to an Oracle single-instance CDB with a single PDB. It’s called “-MGMTDB” (this was optional prior to this release).

However, when installing Grid Infrastructure, when it got to the bit at the end, after it’s all kind-of fully installed, it creates the -MGMTDB, and if you have 4K sector disks in ASM, it fails rather cryptically:

The key lines are the ORA-17503/ORA-15173 errors about the spfile below. They aren’t obviously the problem, but they are the cause.

CRS-2674: Start of 'ora.mgmtdb' on 'server01' failed
[Thread-102] [ 2015-07-01 15:35:29.079 BST ] [HADatabaseUtils.start:1240]
Error starting mgmt database in local node, PRCR-1013 : Failed to start resource ora.mgmtdb
PRCR-1064 : Failed to start resource ora.mgmtdb on node server01
CRS-5017: The resource action "ora.mgmtdb start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA_DG/_mgmtdb/spfile-MGMTDB.ora'
ORA-17503: ksfdopn:2 Failed to open file +DATA_DG/_mgmtdb/spfile-MGMTDB.ora
ORA-15056: additional error message
ORA-17503: ksfdopn:2 Failed to open file +DATA_DG/_mgmtdb/spfile-mgmtdb.ora
ORA-15173: entry 'spfile-mgmtdb.ora' does not exist in directory '_mgmtdb'
ORA-06512: at line 4
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/server01/crs/trace/crsd_oraagent_grid.trc".
KJHA:2phase clscrs_flag:840 instSid:
KJHA:2phase ctx 2 clscrs_flag:840 instSid:-MGMTDB
KJHA:2phase clscrs_flag:840 dbname:
KJHA:2phase ctx 2 clscrs_flag:840 dbname:_mgmtdb
KJHA:2phase WARNING!!! Instance:-MGMTDB of kspins type:1 does not support 2 phase CRS

The fundamental problem is that, if you have 4K sectors and are using ASM, having your SPFILE in ASM doesn’t work. This was spotted a while ago (Doc: 16870214.8) but it’s not fixed in the base release, which means the -MGMTDB will always fail to create. It is fixed by patch set 3 (patch 20485724).

However, you’ve then got a broken -MGMTDB, which you’ll need to recreate: [Doc ID 1589394.1]

## Stop and disable ora.crf resource.
## On each node, as root user:
crsctl stop res ora.crf -init
crsctl modify res ora.crf -attr ENABLED=0 -init
## As Grid User, locate the node that the Management Database is running on by executing:
srvctl status mgmtdb
## and, on that node, issue the DBCA command to delete the management database:
dbca -silent -deleteDatabase -sourceDB -MGMTDB
## Rebuild mgmt
## Set the GI HOME
export GI_HOME=/u01/app/grid/12.1
## As Grid User on any node execute the following DBCA command with the desired <DG Name>:
dbca -silent -createDatabase -sid -MGMTDB -createAsContainerDatabase true -templateName MGMTSeed_Database.dbc -gdbName _mgmtdb -storageType ASM -diskGroupName DATA01 -datafileJarLocation $GI_HOME/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck
## Create a PDB within the MGMTDB using DBCA.
## As Grid User on any node execute the following DBCA command:
## NOTE: The CLUSTER_NAME needs to have any hyphens (“-“) replaced with underscores (“_”)

dbca -silent -createPluggableDatabase -sourceDB -MGMTDB -pdbName **MY_CLUSTER_NAME_HERE** -createPDBFrom RMANBACKUP -PDBBackUpfile $GI_HOME/assistants/dbca/templates/mgmtseed_pdb.dfb -PDBMetadataFile $GI_HOME/assistants/dbca/templates/mgmtseed_pdb.xml -createAsClone true -internalSkipGIHomeCheck
## Secure the Management Database credentials.
## As Grid User, confirm the node on which MGMTDB is running by executing:
$GI_HOME/bin/srvctl status MGMTDB
Database is enabled
 Instance -MGMTDB is running on node <NODE_NAME>
## and secure the credentials on that node (run mgmtca as the Grid User)
## Enable and start ora.crf resource.
## On each node, as root user:

$GI_HOME/bin/crsctl modify res ora.crf -attr ENABLED=1 -init
$GI_HOME/bin/crsctl start res ora.crf -init

Good luck. And don’t use 4K sector sizes. It probably means your spindles are too big anyway. If “disk is cheap”, why do they have to keep buying such large-capacity spindles with such low IOPS-per-GB for such huge quantities of money?


Locking Privileges in Oracle

What permissions do you need to lock rows on an Oracle table?
What about to lock the whole table?

It’s not quite as much as you may think!

Let’s have a couple of users: schema_owner and user1

SQL> show user
SQL> create user schema_owner identified by schema_owner;
User created.
SQL> grant connect,resource to schema_owner;
Grant succeeded.
SQL> grant unlimited tablespace to schema_owner;
Grant succeeded.
SQL> create user user1 identified by user1;
User created.
SQL> grant create session to user1;
Grant succeeded.

Now for a table and grants

SQL> conn schema_owner/schema_owner
SQL> create table tab1 (col1 date, col2 number);
Table created.
SQL> insert into tab1 values (sysdate,1);
1 row created.
SQL> commit;
Commit complete.
SQL> select * from tab1;
COL1      COL2
--------- ----------
14-JUL-15          1
SQL> grant select on tab1 to user1;
Grant succeeded.

So, what can USER1 do with that table?

SQL> conn user1/user1
SQL> select * from schema_owner.tab1;
COL1      COL2
--------- ----------
14-JUL-15          1


SQL> update schema_owner.tab1 set col2=2 where col2=1;
update schema_owner.tab1 set col2=2 where col2=1
ERROR at line 1:
ORA-01031: insufficient privileges


SQL> insert into schema_owner.tab1 values (sysdate,2);
insert into schema_owner.tab1 values (sysdate,2)
ERROR at line 1:
ORA-01031: insufficient privileges


SQL> delete from schema_owner.tab1;
delete from schema_owner.tab1
ERROR at line 1:
ORA-01031: insufficient privileges


SQL> select * from schema_owner.tab1 for update;
COL1      COL2
--------- ----------
14-JUL-15          1


SQL> lock table schema_owner.tab1 in exclusive mode;
Table(s) Locked.

What?!? Is this real? Has that REALLY locked the entire table with only SELECT permissions? Can I delete from that table from a different session + user which has permissions?

SQL> show user
SQL> select * from schema_owner.tab1;
COL1      COL2
--------- ----------
14-JUL-15	   1
SQL> delete from schema_owner.tab1;
(no return....)

A quick look in gv$session will show you that USER1 is indeed blocking SCHEMA_OWNER despite only having SELECT privileges on the table:

select .... from gv$session;
------ --- --------------- ------------- -------- ---------- ------ ---------------------------
     3	47 USER1			 INACTIVE NO HOLDER  BLOCK  SQL*Net message from client
     3	55 SCHEMA_OWNER    5n1hw77std3h5 ACTIVE   VALID      47     enq: TM - contention
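A query along these lines (the column choice is mine; the one above was abbreviated) will show the blocking relationship:

select inst_id, sid, username, sql_id, status,
       blocking_session_status, blocking_session, event
  from gv$session
 where username in ('USER1','SCHEMA_OWNER');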

SQL> select * from dba_blockers;

--------------- ------
47                   3

SQL> select * from dba_waiters;

--------------- -------------- --------------- -------------- -------------------------- ---------------------------------------- ---------------------------------------- ---------- ----------
 55                          3              47              3 DML                 Exclusive Row-X (SX) 96178 0

This is because of a side effect of an Oracle philosophy: “don’t do now what you may never need to do”. If Oracle can defer an action, such as writing a dirty buffer to disk, or checking whether a session has permission to perform an update when all you have done is request a lock, then it will, if possible, do it later.

You may request the lock, so Oracle checks that you can access the object (SELECT), but you may never actually try to change the row or table, so it’s not necessary to check whether you can modify the object…

This is a pretty problematic security hole. In Oracle 12c, a new table privilege has appeared: READ. If we re-run the above with GRANT READ instead of GRANT SELECT…
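As schema_owner, the re-grant would be something along these lines (the revoke is my addition, to undo the earlier grant):

SQL> revoke select on tab1 from user1;
Revoke succeeded.
SQL> grant read on tab1 to user1;
Grant succeeded.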

SQL> show user
SQL> select grantee,privilege from user_tab_privs where table_name = 'TAB1';
GRANTEE              PRIVILEGE
-------------------- ----------
USER1                READ
SQL> select * from schema_owner.tab1;
COL1      COL2
--------- ----------
14-JUL-15          1


SQL> select * from schema_owner.tab1 for update;
select * from schema_owner.tab1 for update
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> lock table schema_owner.tab1 in exclusive mode;
lock table schema_owner.tab1 in exclusive mode
ERROR at line 1:
ORA-01031: insufficient privileges

That’s better!

So the next time someone says “it’s only SELECT permissions”, it’s not. You might want to check out using READ.

