UKOUG Tech 15

It’s nearly December, so it must be time for the UK Oracle User Group conferences – JDE15, Apps15 and Tech15.

There are some absolutely wonderful presentations, and there are the two I’m giving as well:

An Introduction to RAC (12c) at 9am on Monday 6th, and Troubleshooting Goldengate at 12:20 on Tuesday 7th.

I’m also helping out with a new-style database keynote this year. We are inviting questions from the audience and having them answered by senior Oracle technologists Dominic Giles, Maria Colgan and Penny Avril.

There’s also a Performance Panel Q&A with Jonathan Lewis, Christian Antognini, Maria Colgan and Nigel Bayliss with a similar format.

Feel free to send me your questions for these two events (marking them either Performance or Keynote) – but you’ll need to turn up to hear them answered!


Hope to see you there. Come over. Say hi! Ask for a badge. Buy me a drink? Hey, you’re a bit forward, aren’t you? Here’s your badge.



Decoding DBA_TAB_COLUMNS high_value and low_value

When performance tuning, it can be important to understand the statistics in the database. It is worth noting that every column records its lowest (low_value) and highest (high_value) values in DBA_TAB_COLUMNS when you gather stats, like a mini-histogram. If the optimizer believes it knows the range of values within a column, plus the number of distinct values (cardinality), it can make informed guesses about the suitability and cost of index access when a predicate references that column.

This is especially important for predicates on values outside of the low_value-to-high_value range, as Oracle crudely estimates the probability that such a value exists. It does this by taking the total range of values (high_value – low_value) and lowering the calculated cost for predicates increasingly further outside of the known range, until it decides there’s no realistic chance of finding the value you have asked for.

For example:

If low_value was 40 and high_value was 50, this gives us a range of 10. Therefore, any predicate selecting outside of the values 40-to-50 will have its cost lowered by a percentage of the range (10). If we assume an access cost of 1000, this is how the index access cost may change:

where X = 40 ; cost = 1000 - within low-to-high value
where X = 50 ; cost = 1000 - within low-to-high value
where X = 55 ; cost = 500  - outside of low-to-high value by 50% of the range
where X = 32 ; cost = 200  - outside of low-to-high value by 80% of the range
where X = 61 ; cost = 1    - outside of low-to-high value by more than 100% of the range
where X = 29 ; cost = 1    - outside of low-to-high value by more than 100% of the range

This is called statistical decay – the calculated lowering of the cost because Oracle doesn’t think it will find the value in the index.

Querying the LOW_VALUE and HIGH_VALUE

The low_value and high_value in DBA_TAB_COLUMNS are stored in an Oracle internal RAW format, so you need to use DBMS_STATS.CONVERT_RAW_VALUE to turn them into something human-readable.
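
In isolation, the conversion is a simple overloaded procedure call. Here’s a minimal sketch, assuming the NC1 table described below and its NUMBER column C4 (CONVERT_RAW_VALUE has overloads for NUMBER, DATE, VARCHAR2, BINARY_FLOAT and BINARY_DOUBLE):

set serveroutput on
declare
  l_raw raw(1000);
  l_num number;
begin
  -- grab the stored internal value for one column...
  select low_value into l_raw
  from   user_tab_columns
  where  table_name = 'NC1' and column_name = 'C4';
  -- ...and let DBMS_STATS decode it into the matching datatype
  dbms_stats.convert_raw_value(l_raw, l_num);
  dbms_output.put_line('low_value = '||l_num);
end;
/

Here’s a fuller example: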

Table Definition:

08:08:24 SQL> desc nc1
Name    Null?    Type
-----   -----    ----
C1               DATE
C2               TIMESTAMP(6)
C3               VARCHAR2(10)
C4               NUMBER
C5               BINARY_FLOAT
C6               BINARY_DOUBLE

Select all of the data out of the table (a simple select * from nc1) so we can see what we have:

C1                  C2                          C3                 C4         C5         C6
------------------- ----------------------------- ---------- ---------- ---------- ----------
21/07/0001 01:02:03 21/07/0001 01:02:03.456789    Lower               1   1.0E+000   1.0E+000
21/07/9999 01:02:03 21/07/9999 01:02:03.456789    Upper              99   9.9E+001   9.9E+001
31/12/2015 00:00:00 12/11/2015 08:08:24.586342    Now                50   5.0E+001   5.0E+001
31/12/2015 00:00:00 12/11/2015 08:08:24.585173    Now                50   5.0E+001   5.0E+001
31/12/2015 00:00:00 12/11/2015 08:08:24.579977    Now                50   5.0E+001   5.0E+001
31/12/2015 00:00:00 12/11/2015 08:08:24.585896    Now                50   5.0E+001   5.0E+001

Stats have been gathered, so we can look at DBA_TAB_COLUMNS (or USER_TAB_COLUMNS) and decode the high and low values:

TABLE COLUM DATA_TYPE       LOW_VALUE                      HIGH_VALUE                     LOW_DECODE                     HIGH_DECODE
----- ----- --------------- ------------------------------ ------------------------------ ------------------------------ ------------------------------
NC1   C1    DATE            64650715020304                 C7C70715020304                 0001-07-21 01:02:03            9999-07-21 01:02:03
NC1   C2    TIMESTAMP(6)    646507150203041B3A0C08         C7C707150203041B3A0C08         0001-07-21 01:02:03            9999-07-21 01:02:03
NC1   C3    VARCHAR2        4C6F776572                     5570706572                     Lower                          Upper
NC1   C4    NUMBER          C102                           C164                           1                              99
NC1   C5    BINARY_FLOAT    BF800000                       C2C60000                       1.0E+000                       9.9E+001
NC1   C6    BINARY_DOUBLE   BFF0000000000000               C058C00000000000               1.0E+000                       9.9E+001

So how did I decode all of that? I used the 12C capability of putting in-line functions in a SQL statement using subquery factoring (the WITH clause). Pre-12C, you could create the functions as standalone schema objects instead – there’s a sketch of that after the query.

with
FUNCTION raw_to_date(i_var in raw) return date as
  o_var date;
begin
  dbms_stats.convert_raw_value(i_var,o_var);
  return o_var;
end;
FUNCTION raw_to_number(i_var in raw) return number as
  o_var number;
begin
  dbms_stats.convert_raw_value(i_var,o_var);
  return o_var;
end;
FUNCTION raw_to_varchar2(i_var in raw) return varchar2 as
  o_var varchar2(32767);
begin
  dbms_stats.convert_raw_value(i_var,o_var);
  return o_var;
end;
FUNCTION raw_to_float(i_var in raw) return binary_float as
  o_var binary_float;
begin
  dbms_stats.convert_raw_value(i_var,o_var);
  return o_var;
end;
FUNCTION raw_to_double(i_var in raw) return binary_double as
  o_var binary_double;
begin
  dbms_stats.convert_raw_value(i_var,o_var);
  return o_var;
end;
select table_name,column_name,data_type,low_value,high_value,
decode(data_type,
'DATE'         , to_char(raw_to_date(low_value),'YYYY-MM-DD HH24:MI:SS'),
'TIMESTAMP(6)' , to_char(raw_to_date(low_value),'YYYY-MM-DD HH24:MI:SS'),
'NUMBER'       , to_char(raw_to_number(low_value)),
'VARCHAR2'     , to_char(raw_to_varchar2(low_value)),
'BINARY_FLOAT' , to_char(raw_to_float(low_value)),
'BINARY_DOUBLE', to_char(raw_to_double(low_value)),'Unknown') low_decode,
decode(data_type,
'DATE'         , to_char(raw_to_date(high_value),'YYYY-MM-DD HH24:MI:SS'),
'TIMESTAMP(6)' , to_char(raw_to_date(high_value),'YYYY-MM-DD HH24:MI:SS'),
'NUMBER'       , to_char(raw_to_number(high_value)),
'VARCHAR2'     , to_char(raw_to_varchar2(high_value)),
'BINARY_FLOAT' , to_char(raw_to_float(high_value)),
'BINARY_DOUBLE', to_char(raw_to_double(high_value)),'Unknown') high_decode
from user_tab_columns
where table_name = 'NC1'
/
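
Pre-12c, each helper would be an ordinary schema-level function instead – a sketch of one of them:

-- Pre-12c alternative: create the helper once as a schema object
create or replace function raw_to_number(i_var in raw) return number as
  o_var number;
begin
  dbms_stats.convert_raw_value(i_var, o_var);
  return o_var;
end;
/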

NOTE: There is no CONVERT_RAW_VALUE overload to decode TIMESTAMPs. You can use them as input to the DATE conversion (as above) but you will lose the fractional seconds, or you can convert them manually with something like this:

select column_name, low_value,
       lpad((to_number(substr(low_value, 1,2),'XX')*100
            +to_number(substr(low_value, 3,2),'XX')) - 10100,4,'0') ||'-'
     ||lpad( to_number(substr(low_value, 5,2),'XX'),2,'0')||'-'
     ||lpad( to_number(substr(low_value, 7,2),'XX'),2,'0')||' '
     ||lpad((to_number(substr(low_value, 9,2),'XX')-1),2,'0')||':'
     ||lpad((to_number(substr(low_value,11,2),'XX')-1),2,'0')||':'
     ||lpad((to_number(substr(low_value,13,2),'XX')-1),2,'0')||'.'
     ||      to_number(substr(low_value,15,8),'XXXXXXXX') low_decode,
       high_value,
       lpad((to_number(substr(high_value, 1,2),'XX')*100
            +to_number(substr(high_value, 3,2),'XX')) - 10100,4,'0') ||'-'
     ||lpad( to_number(substr(high_value, 5,2),'XX'),2,'0')||'-'
     ||lpad( to_number(substr(high_value, 7,2),'XX'),2,'0')||' '
     ||lpad((to_number(substr(high_value, 9,2),'XX')-1),2,'0')||':'
     ||lpad((to_number(substr(high_value,11,2),'XX')-1),2,'0')||':'
     ||lpad((to_number(substr(high_value,13,2),'XX')-1),2,'0')||'.'
     ||      to_number(substr(high_value,15,8),'XXXXXXXX') high_decode
from user_tab_columns
where data_type like 'TIMESTAMP%'
/
COLUM LOW_VALUE                      LOW_DECODE                     HIGH_VALUE                     HIGH_DECODE
----- ------------------------------ ------------------------------ ------------------------------ ------------------------------
C2    646507150203041B3A0C08         0001-07-21 01:02:03.45678899   C7C707150203041B3A0C08         9999-07-21 01:02:03.45678899

OEM and monitoring the -MGMTDB GIMR Database

When you create 12c Grid Infrastructure, you are presented with a new (annoyingly named) database: “-MGMTDB”

This database is a standard 12C CDB/PDB storing transient performance and other information (the Grid Infrastructure Management Repository). If it is lost, no biggie – just re-create it in your voting disk DG. (Aside: better, create a new MGMTDB_DG, move your voting disk there, re-create -MGMTDB, then move your voting disk back out to the proper multiple voting volumes.)
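
The moves in that aside are just crsctl commands – a sketch of the shuffle, where MGMTDB_DG is the new diskgroup and +ORIGINAL_DG stands in for whatever your voting diskgroup is really called:

# where do the voting files live now?
crsctl query css votedisk

# move them to the new, dedicated diskgroup;
# -MGMTDB is then re-created in the voting-disk diskgroup (e.g. via dbca)
crsctl replace votedisk +MGMTDB_DG

# ...re-create -MGMTDB...

# and move the voting files back again
crsctl replace votedisk +ORIGINAL_DG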

However, Oracle Enterprise Manager Cloud Control (up to and including the current release) sees this Oracle database – PDB, listener and all – and decides to discover it. This is incorrect and should not happen; it is supposed to be “masked off” from OEM. Monitoring this database system will only lead to false positives and problems where none really exist, and all of the targets should be ignored (as per the attached picture).


In a future release of Oracle Enterprise Manager Cloud Control, these targets will no longer be discovered and will automatically remain hidden from view within OEM – once the team has fixed the bug, the root cause of which (I was very reliably informed) was discovered today.

Oracle Midlands Presentation

Ever wondered why your SQL execution plan has changed? Wondering why your boss is shouting something about “traders can’t trade” or “shoppers can’t spend” or “well, that’s the airport shut!”?

When you write SQL, Oracle runs it through the optimiser to determine the fastest way to access the data. This access path sometimes changes without anyone noticing, sometimes with sub-optimal results. I’m giving a presentation at an Oracle Midlands event that will show the 7 main reasons why your execution plan can change, and what you can do about it.

Register, and come along on 24th November. There are even free refreshments! You might even learn something – even if it’s just that you know more than the presenter :-)

12C Paging through Data and Restricting Result Sets

Prior to Oracle 12C, if you wanted to page through a result set, only returning a bit at a time, or if you only wanted the first dozen rows of data, it was a right pain: writing your SQL, then wrapping it in an in-line view with an ORDER BY and a ROWNUM on the outer query, and then wrapping the whole of that in another in-line view with a second, aliased ROWNUM around it to create a sliding window. Horrid. Horrid. Horrid.
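
Just as a reminder, here’s a sketch of that pattern (assuming a hypothetical emp table), fetching rows 91 to 100:

-- the old way: nested in-line views plus two ROWNUMs for a sliding window
select *
from  (select q.*, rownum rn
       from  (select empno, ename
              from   emp
              order  by empno) q
       where  rownum <= 100)   -- upper bound of the window
where  rn > 90;                -- lower bound of the window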

I have been covetous of the TOP clause in SQL Server’s T-SQL for years and years.

Well, FINALLY! In 12C we have our very own Oracle syntax to rival SQL Server’s TOP. Something to make sliding cursors at least a bit more readable. Step forward FETCH FIRST n ROWS ONLY!
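
For example (a sketch against the same hypothetical emp table), skipping the first 10 rows and returning the next 10:

select empno, ename
from   emp
order  by empno
offset 10 rows
fetch  next 10 rows only;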



(note: with the OFFSET of 10 in the example above, the first record returned should be record 11 – one more than the offset)

It also has other flexibility: FETCH FIRST 20 PERCENT ROWS ONLY, FETCH FIRST 10 ROWS WITH TIES (i.e. any additional rows that tie with the last row on the ORDER BY values are returned too), combining an OFFSET with a FETCH NEXT, etc.

The numbers supplied can be bind variables, too. As we’d expect.
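
A quick sketch of that in SQL*Plus:

var n_offset number
var n_rows   number
exec :n_offset := 10
exec :n_rows   := 10

select empno, ename
from   emp
order  by empno
offset :n_offset rows
fetch  next :n_rows rows only;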

It’s worth noting that the SQL will be optimized differently to the ROWNUM method of paging through a result set. That doesn’t mean better or worse, just differently. In some cases it will be better, and in other cases it may well be worse; it depends upon the table design and the SQL query.

Under the covers, Oracle is actually doing a query transformation, rewriting the query using a ROW_NUMBER() OVER (ORDER BY …) analytic function. This is the other way of paging through a result set, effectively doing:

SELECT *
FROM  (SELECT column, ROW_NUMBER() OVER (ORDER BY column) R FROM table)
WHERE  R BETWEEN 51 AND 100;

I think you’ll agree that, if nothing else, the new 12C syntax is a lot easier to read.

For a much better post about how it works, you should swing by Tim Hall’s website. Damn it Tim, I found that half way through writing this (which is why it’s short). Stop being so good at blogging :-)

Oracle 12C Problem with datapatch. Part 2, the “fix”

So after much to-ing and fro-ing with Oracle Support, we finally have a solution to my datapatch problem: prevent the timeout from happening when running datapatch! We know which part of the code was timing out, and we can stop it timing out by setting an undocumented event: 18219841.

Here’s a short test to show you how that was done:

SQL: select dbms_sqlpatch.verify_queryable_inventory from dual;

ORA-20008: timed out. Job Load_opatch_inventory_1execution time is more than 120Secs

(wait a while for the background job to clean up)

SQL: alter session set events '18219841 trace name context forever';

Session Altered

SQL: select dbms_sqlpatch.verify_queryable_inventory from dual;


patch 19769480 installed in all nodes
patch 20299023 installed in all nodes
patch 28031110 installed in all nodes

And switch it back off:

alter session set events '18219841 trace name context off';

You may note above that I was changing the context for the session only, as a test. To get it to work with datapatch, you will need to use “alter system” commands before and after running datapatch.
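
A minimal sketch of that sequence:

-- system-wide, so that the datapatch sessions pick the event up
alter system set events '18219841 trace name context forever';

-- ...now run ./datapatch -verbose from the OS...

-- and un-set it again afterwards
alter system set events '18219841 trace name context off';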

I would recommend that you do not set this event without first contacting Oracle Support – you may wish to quote “Bug 21121707”. I would also recommend that you un-set it afterwards: you don’t know what it’s doing, and you don’t know whether there are any negative side-effects.

To check if the event is set in your session/system:

in sqlplus...

sql: var level number
sql: exec dbms_system.read_ev(18219841, :level);
sql: print :level


Addendum: after a Twitter conversation, it was noted by Martin Berger (@martinberx) that the nature of the query had changed with the event set, and that it was no longer calling the scheduler jobs. I did note, however, that it’s still calling the same pre-processor, $ORACLE_HOME/QOpatch/qopiprep.bat, but without the timeout (so it will run until it’s finished, however long that may be). It’s getting information from GV$QPX_INVENTORY, whatever that view does. Answers on a postcard (or comment) please.

Oracle 12C Problem with datapatch

I was minding my own business, patching an Oracle 12C RAC cluster, when the darned patch failed:

./datapatch -verbose

SQL Patching tool version on Mon Aug  3 20:47:31 2015
Copyright (c) 2015, Oracle.  All rights reserved.
Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_9841_2015_08_03_20_47_31/sqlpatch_invocation.log

Connecting to database...OK
Bootstrapping registry and package to current versions...done

Queryable inventory could not determine the current opatch status.
Execute 'select dbms_sqlpatch.verify_queryable_inventory from dual'
and/or check the invocation log
for the complete error.
Prereq check failed, exiting without installing any patches.

Please refer to MOS Note 1609718.1 and/or the invocation log
for information on how to resolve the above errors.

SQL Patching tool complete on Mon Aug 3 20:49:37 2015

So, following the instructions, I ran the query used to ID the inventory:

[oracle@rac12c01 OPatch]$ sqlplus / as sysdba
SQL*Plus: Release Production on Mon Aug 3 20:59:37 2015

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

20:59:37 SYS @ ORCL1 > select dbms_sqlpatch.verify_queryable_inventory from dual;

ORA-20008: Timed out, Job Load_opatch_inventory_2execution time is more than 120Secs

op_scpt_path :/u01/app/oracle/product/12.1.0/dbhome_1/QOpatchpos1

The code has timed out! But this gives us some clues about what’s going on. The procedure has a pre-processor, which needs to get information into the database. To do this, it invokes a scheduler job [Load_opatch_inventory_n, where ‘n’ represents the cluster node executing the command] to pull the information in from the O/S. Here’s the job running, with its definition – note that the job name is mentioned in the failure above:

select job_name,state,job_action from dba_scheduler_jobs where job_name like '%PATCH%' order by job_name;
JOB_NAME                STATE    JOB_ACTION
----------------------- -------- ----------
LOAD_OPATCH_INVENTORY   DISABLED BEGIN dbms_qopatch.opatch_inv_refresh_job(); END;
LOAD_OPATCH_INVENTORY_1 DISABLED BEGIN dbms_qopatch.opatch_inv_refresh_job(); END;
LOAD_OPATCH_INVENTORY_2 RUNNING  BEGIN dbms_qopatch.opatch_inv_refresh_job(); END;

and dbms_qopatch.opatch_inv_refresh_job() also invokes a pre-processor. This pre-processor is a batch job: a shell script in /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch called qopiprep.bat. And this shell script basically runs…

opatch lsinventory -xml

to get the current patches, formatted as XML. It then tacks a delimiter onto the end and returns the output to the calling job before cleaning up after itself. This command can be slow-ish to execute, but on the problem cluster in question it was running particularly slowly.

This command runs on all nodes, each of which returns its output to the calling job, which then has to process the inbound XML files and determine if all is OK in the world of opatch before continuing. You have 2 minutes. GO! If you don’t finish in 2 minutes, the whole thing fails, with no obvious remedial action possible.
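
If you want to see how close you are getting to that limit, you can watch the refresh job while it runs – a sketch:

-- watch the inventory refresh job ticking towards the 120s timeout
select job_name, session_id, elapsed_time
from   dba_scheduler_running_jobs
where  job_name like 'LOAD_OPATCH_INVENTORY%';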

One option you have, should you encounter this problem and be desperate for a workaround, is to pre-process the XML file yourself and just get the batch file to return your handiwork:

NOTE: This is very much a one-off to keep things moving, and you must return the pre-processor code to its normal state and raise a call with Oracle Support to get this fixed properly. Do this workaround at your own risk. I don’t recommend it. Raise a call with Oracle Support instead.

Save the pre-processor so you can put it back later:

cp -p qopiprep.bat qopiprep.bat.sav

Run the commands in the file by hand:

$ORACLE_HOME/OPatch/opatch lsinventory -xml $ORACLE_HOME/QOpatch/xml_file.xml -retry 0 -invPtrLoc $ORACLE_HOME/oraInst.loc >> $ORACLE_HOME/QOpatch/stout.txt

Tack the delimiter string onto the end of the XML file:

echo "UIJSVTBOEIZBEFFQBL" >> $ORACLE_HOME/QOpatch/xml_file.xml

Edit the script (qopiprep.bat) and remove the bottom 5 commands EXCEPT the “echo `cat…`” bit, which returns the output to the calling code:

# Option: "-all" gives information on all Oracle Homes
# installed in the central inventory. With that information, the
# patches of non-RDBMS homes could be fetched.
$ORACLE_HOME/OPatch/opatch lsinventory -xml $ORACLE_HOME/QOpatch/xml_file.xml -retry 0 -invPtrLoc $ORACLE_HOME/oraInst.loc >> $ORACLE_HOME/QOpatch/stout.txt
`echo "UIJSVTBOEIZBEFFQBL" >> $ORACLE_HOME/QOpatch/xml_file.xml`
echo `cat $ORACLE_HOME/QOpatch/xml_file.xml`
rm $ORACLE_HOME/QOpatch/xml_file.xml
rm $ORACLE_HOME/QOpatch/stout.txt

Now when you run the dbms_sqlpatch command, you have taken the slow opatch lsinventory aspect of the pre-processor out of the equation:

21:04:13 SYS @ ORCL1 > select dbms_sqlpatch.verify_queryable_inventory from dual;
op_scpt_path :/u01/app/oracle/product/12.1.0/dbhome_1/QOpatchpos1
patch 19769480 installed in all nodes
patch 20299023 installed in all nodes
patch 20831110 installed in all nodes

Don’t forget to put the correct pre-processing code back once the datapatch -verbose has completed successfully, and to clean up all of the output files you created, otherwise the code will fail again.
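
Something like this, using the saved copy from earlier:

# restore the original pre-processor and remove the handmade files
cp -p qopiprep.bat.sav qopiprep.bat
rm $ORACLE_HOME/QOpatch/xml_file.xml
rm $ORACLE_HOME/QOpatch/stout.txt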

Hey, let’s be careful out there.

