Pre-creating Interval Partitions

One of the major problems with interval-based partitioning is statistics. Partitions appear dynamically, as needed, based upon data being inserted or updated, and any partition magically appearing isn’t going to have any statistics.

This is generally a stability issue, as you will, at best, be using dynamic statistics for your optimizations. So how do we deal with it? My preferred method is to pre-create the partitions and copy statistics from a good partition into the pre-created partition. But how do we get the partitions to appear? Here are 2 options:

  1. Insert a row with the partition key for the partition we wish to create, then roll back. This can be tricky, especially with tables containing many NOT NULL columns, and is subject to failure as the table evolves (see the sketch after this list).
  2. Lock the relevant partition in share mode using the command LOCK TABLE <table> PARTITION FOR (<partition-key>) IN SHARE MODE;

    This will place a shared lock on the non-existent partition, which Oracle will create so it can lock it. A much less messy solution, and not one I had thought of until shown the light by Dan Jankowski.
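
Here’s a minimal sketch of option 1, assuming the same INTERVAL_TABLE used in the example below; the key value is illustrative. The new partition persists even though the insert is rolled back, because the partition creation is recursive DDL:

insert into interval_table (id, created_date)
values (-1, to_date('01-JUN-2016','DD-MON-YYYY'));
rollback;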

So does option 2 work? Here’s a quick example:

10:51:55 NEIL @ ORCL01 > CREATE TABLE interval_table (id NUMBER, created_date DATE)
10:51:55   2             PARTITION BY RANGE (created_date) INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
10:51:55   3           ( PARTITION part_01 values LESS THAN (TO_DATE('01-JAN-2015','DD-MON-YYYY')))
10:51:55   4  /
Table created.

10:51:55 NEIL @ ORCL01 > select table_name, partition_name,high_value from user_tab_partitions order by 1,2;
TABLE_NAME                     PARTITION_NAME                 HIGH_VALUE
------------------------------ ------------------------------ --------------------------------------------------------------------------------
INTERVAL_TABLE                 PART_01                        TO_DATE(' 2015-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA


Use a shared lock to generate a new partition:

10:51:55 NEIL @ ORCL01 > lock table interval_table partition for (to_date('01-JAN-2016','DD-MON-YYYY')) in share mode;
Table(s) Locked.

10:51:55 NEIL @ ORCL01 > select table_name, partition_name,high_value from user_tab_partitions order by 1,2;
TABLE_NAME                     PARTITION_NAME                 HIGH_VALUE
------------------------------ ------------------------------ --------------------------------------------------------------------------------
INTERVAL_TABLE                 PART_01                        TO_DATE(' 2015-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
INTERVAL_TABLE                 SYS_P647                       TO_DATE(' 2016-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

10:51:55 NEIL @ ORCL01 > lock table interval_table partition for (to_date('01-FEB-2016','DD-MON-YYYY')) in share mode;
Table(s) Locked.
10:51:55 NEIL @ ORCL01 > lock table interval_table partition for (to_date('01-MAR-2016','DD-MON-YYYY')) in share mode;
Table(s) Locked.
10:51:55 NEIL @ ORCL01 > lock table interval_table partition for (to_date('01-APR-2016','DD-MON-YYYY')) in share mode;
Table(s) Locked.
10:51:55 NEIL @ ORCL01 > select table_name, partition_name,high_value from user_tab_partitions order by 1,2;
TABLE_NAME                     PARTITION_NAME                 HIGH_VALUE
------------------------------ ------------------------------ --------------------------------------------------------------------------------
INTERVAL_TABLE                 PART_01                        TO_DATE(' 2015-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
INTERVAL_TABLE                 SYS_P647                       TO_DATE(' 2016-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
INTERVAL_TABLE                 SYS_P648                       TO_DATE(' 2016-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
INTERVAL_TABLE                 SYS_P649                       TO_DATE(' 2016-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
INTERVAL_TABLE                 SYS_P650                       TO_DATE(' 2016-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

10:51:55 NEIL @ ORCL01 > -- and release the locks... the rollback doesn't rollback the creation of the new partitions.
10:51:55 NEIL @ ORCL01 > rollback;
Rollback complete.

So now what? To get the stats right I follow this rule set:

Firstly, lock the table stats when you create the table and have a dedicated, focused stats job. This stops the general stats job from getting in the way of this important partitioned table.
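
A minimal sketch of the lock, assuming the INTERVAL_TABLE from the example above (the owner is illustrative):

exec dbms_stats.lock_table_stats(ownname => 'NEIL', tabname => 'INTERVAL_TABLE')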

  • Create a new partition
  • Copy stats using DBMS_STATS.COPY_TABLE_STATS from your fullest partition (with FORCE=TRUE to override the locked stats option). Always pretend you have more data than you really have when faking it – it’s safer, as a plan optimized for a larger expected data set that then processes a smaller data set tends to work much better than stats assuming a small data set and being surprised by lots of data. Consider using SCALE_FACTOR if you have a growing dataset. Don’t rely upon Optimizer magic, such as Adaptive Execution Plans, to dig you out of a hole. (A sketch of the copy follows this list.)
  • Once a partition becomes “old” (like a no-longer-used date-range partition), gather actual stats to make the statistics for that partition accurate. That may even become your new baseline partition for the stats copy. You will possibly never need to gather stats again for that partition.
  • Be careful if you are copying stats when you have an object-level change. For example, if you add a new index, you need to re-copy stats post-change to any pre-created partitions. Empty pre-created partitions will have accurate (i.e. empty) stats, and that’s really not what you want.
  • Repeat each time you pre-create a new partition.
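
Here’s a minimal sketch of the copy, again using the INTERVAL_TABLE example; the source and destination partition names are illustrative:

BEGIN
  DBMS_STATS.COPY_TABLE_STATS(
    ownname      => 'NEIL',
    tabname      => 'INTERVAL_TABLE',
    srcpartname  => 'SYS_P647',   -- fullest existing partition
    dstpartname  => 'SYS_P650',   -- pre-created, empty partition
    scale_factor => 1,            -- increase this if the data set is growing
    force        => TRUE);        -- override the locked table stats
END;
/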

Primary Keys and their supporting indexes

Sometimes things just happen which make you realise that stuff you thought all experienced DBAs knew about isn’t common knowledge. I suppose it’s a side effect of working with the same evolving product for over 25 years at dozens of clients, when the colleague you are talking to has a mere decade of experience at fewer companies (and therefore less diversity of challenge).

Today I noticed that a release was creating a table, then an index, and then adding a Primary Key based upon the index. Pretty normal, but the indexes were non-unique. Interesting. Is that valid? (Yes.) Is it sensible? (Under some circumstances, also yes.) Is it wrong but nobody noticed until it got to me? Probably.

However, there are more variables at play here than first meets the eye. The Primary Key is being enforced by a pre-existing non-unique index, rather than creating its own index “dynamically” (which would be unique under most circumstances). Therefore we have a few questions to answer:

  1. Is the index being used optimal?
  2. How do I know it’s using a pre-created index and the constraint did not create its own index?
  3. What happens if I drop the PK constraint?

 

Scenario 1: create table, create constraint

You get a nice neat fast index enforcing the constraint. If you drop the constraint, the index will also get removed. This may or may not be what you want. You may need to put a new index back on the table…

Scenario 2: create table, create unique index, create constraint

You get a nice neat fast index enforcing the constraint. If you drop the constraint, the index does not get removed. This may or may not be what you want. The index which was left behind might cause a problem if you miss the fact it’s still around…

Scenario 3: create table, create non-unique index (same as constraint columns), create constraint

The index enforcing the constraint is slightly less efficient, as the optimizer has to use an index range scan rather than a unique scan. However, if you drop the constraint you still have a non-unique index. This may be a very useful ability for bulk data loads with questionable data, although the use cases are limited and tend to be Warehouse-centric.

If you have a deferrable constraint, you must have a non-unique index to enforce the constraint as there may temporarily be duplicate keys in the table part-way through the transaction.
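
As a minimal sketch of that point (the table and constraint names are illustrative), a deferrable primary key is policed by a non-unique index:

create table neil_def (col_id number, col_txt varchar2(10 char));
alter table neil_def add constraint neil_def_pk primary key (col_id)
  deferrable initially deferred;
-- The supporting index is non-unique, as duplicates may exist part-way
-- through a transaction and are only checked at commit time.
select index_name, uniqueness from user_indexes where table_name = 'NEIL_DEF';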

Scenario 4: create table, create non-unique composite index, create constraint which uses leading columns of that index but not all of the columns

The index may be somewhat less efficient at enforcing the constraint, having fewer rows per block and possibly more levels, but this inefficiency may help overall system efficiency by leaving fewer indexes to maintain – maintaining 2 indexes instead of 1 is a 100% overhead in index maintenance. The composite index cannot be a unique index.

 

Some worked examples showing the above scenarios (run on Oracle 12.1.0.2.0 in a PDB):

NEIL @ ORCL01 > -- Scenario 1
NEIL @ ORCL01 > drop table neil;
Table dropped.
NEIL @ ORCL01 > create table neil (col_id number, col_txt varchar2(10 char));
Table created.
NEIL @ ORCL01 > alter table neil add constraint neil_pk primary key (col_id) ;
Table altered.
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_PK              UNIQUE
NEIL @ ORCL01 > insert into neil values (1,'A');
1 row created.
NEIL @ ORCL01 > insert into neil values (1,'B');
insert into neil values (1,'B')
*
ERROR at line 1:
ORA-00001: unique constraint (NEIL.NEIL_PK) violated
NEIL @ ORCL01 > commit;
Commit complete.
NEIL @ ORCL01 > alter table neil drop constraint neil_pk;
Table altered.
NEIL @ ORCL01 > -- And the index is gone...
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
no rows selected
NEIL @ ORCL01 > -- Scenario 2
NEIL @ ORCL01 > drop table neil;
Table dropped.
NEIL @ ORCL01 > create table neil (col_id number, col_txt varchar2(10 char));
Table created.
NEIL @ ORCL01 > create unique index neil_i on neil(col_id);
Index created.
NEIL @ ORCL01 > alter table neil add constraint neil_pk primary key (col_id) ;
Table altered.
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_I               UNIQUE
NEIL @ ORCL01 > insert into neil values (1,'A');
1 row created.
NEIL @ ORCL01 > insert into neil values (1,'B');
insert into neil values (1,'B')
*
ERROR at line 1:
ORA-00001: unique constraint (NEIL.NEIL_PK) violated
NEIL @ ORCL01 > commit;
Commit complete.
NEIL @ ORCL01 > alter table neil drop constraint neil_pk;
Table altered.
NEIL @ ORCL01 > -- And the index survives
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_I               UNIQUE
NEIL @ ORCL01 > -- Scenario 3
NEIL @ ORCL01 > drop table neil;
Table dropped.
NEIL @ ORCL01 > create table neil (col_id number, col_txt varchar2(10 char));
Table created.
NEIL @ ORCL01 > create index neil_i on neil(col_id);
Index created.
NEIL @ ORCL01 > alter table neil add constraint neil_pk primary key (col_id) using index neil_i;
Table altered.
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_I               NONUNIQUE
NEIL @ ORCL01 > insert into neil values (1,'A');
1 row created.
NEIL @ ORCL01 > insert into neil values (1,'B');
insert into neil values (1,'B')
*
ERROR at line 1:
ORA-00001: unique constraint (NEIL.NEIL_PK) violated
NEIL @ ORCL01 > commit;
Commit complete.
NEIL @ ORCL01 > alter table neil drop constraint neil_pk;
Table altered.
NEIL @ ORCL01 > -- And the index survives
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_I               NONUNIQUE
NEIL @ ORCL01 > -- Scenario 4
NEIL @ ORCL01 > drop table neil;
Table dropped.
NEIL @ ORCL01 > create table neil (col_id number, col_txt varchar2(10 char));
Table created.
NEIL @ ORCL01 > create index neil_i_composite on neil(col_id,col_txt);
Index created.
NEIL @ ORCL01 > alter table neil add constraint neil_pk primary key (col_id) using index neil_i_composite;
Table altered.
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_I_COMPOSITE     NONUNIQUE
NEIL @ ORCL01 > insert into neil values (1,'A');
1 row created.
NEIL @ ORCL01 > insert into neil values (1,'B');
insert into neil values (1,'B')
*
ERROR at line 1:
ORA-00001: unique constraint (NEIL.NEIL_PK) violated
NEIL @ ORCL01 > commit;
Commit complete.
NEIL @ ORCL01 > alter table neil drop constraint neil_pk;
Table altered.
NEIL @ ORCL01 > -- And the index survives
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name = 'NEIL';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_I_COMPOSITE     NONUNIQUE

So how do I tell if, when I drop my constraint, I will also drop the index? This does not appear to be exposed in any DBA_ view of which I am aware, so I have had to rely upon this post from Jonathan Lewis to help me decipher the information. You need to see if certain bits are set in the column sys.ind$.property. Here’s my worked example to show the information we seek:

NEIL @ ORCL01 > drop table neil_cons_create;
Table dropped.
NEIL @ ORCL01 > create table neil_cons_create (col_id number, col_txt varchar2(10 char));
Table created.
NEIL @ ORCL01 > alter table neil_cons_create add constraint neil_cons_create_pk primary key (col_id);
Table altered.
NEIL @ ORCL01 > drop table neil_pre_create;
Table dropped.
NEIL @ ORCL01 > create table neil_pre_create (col_id number, col_txt varchar2(10 char));
Table created.
NEIL @ ORCL01 > create unique index neil_pre_create_i on neil_pre_create(col_id);
Index created.
NEIL @ ORCL01 > alter table neil_pre_create add constraint neil_pre_create_pk primary key (col_id);
Table altered.
NEIL @ ORCL01 > select index_name,uniqueness from user_indexes where table_name like 'NEIL%';
INDEX_NAME           UNIQUENESS
-------------------- --------------------
NEIL_CONS_CREATE_PK  UNIQUE
NEIL_PRE_CREATE_I    UNIQUE

select do.owner,do.object_name,do.object_id,ind.property, 'Keep Index' action
  from sys.ind$ ind
       ,dba_objects do
where do.object_name like 'NEIL%'
  and do.object_id = ind.obj#
  and bitand(ind.property,4097) = 4097
union all
select do.owner,do.object_name,do.object_id,ind.property, 'Drop Index' action
  from sys.ind$ ind
      ,dba_objects do
where do.object_name like 'NEIL%'
  and do.object_id = ind.obj#
  and bitand(ind.property,4097) <> 4097
order by 1,2,3
/
OWNER        OBJECT_NAME         OBJECT_ID  PROPERTY   ACTION
------------ ------------------- ---------- ---------- --------------------
NEIL         NEIL_CONS_CREATE_PK      93814       4097 Keep Index
NEIL         NEIL_PRE_CREATE_I        93816          1 Drop Index

When did I update that row?

I had a requirement the other day to understand when some data had been changed, but there was no column on the table which showed this.

So how do I find out? Well, I could go off mining redo and doing lots of other time-consuming and exotic things, but you can use the Oracle pseudocolumn ORA_ROWSCN. This gives the SCN associated with each row. Well, actually it usually doesn’t. It does not show when the individual row of data was changed but, by default, gives the last-changed SCN for the block where the row of data lives.

If you want to know with accuracy the SCN for the row change, you need to create your table with the extension “rowdependencies”. This adds a hidden column to each row, taking 6 bytes and storing the SCN on a row-by-row basis with your data. NOTE: This is a CREATE TABLE option only. You can’t modify a table to add rowdependencies, and there are a few restrictions for tables where this is enabled. Check the documentation for your version.

So, we now have the SCN, whether for the BLOCK or the ROW. What good is that? Well, there’s a chance that Oracle will have remembered approximately when that SCN came out. I think you are guaranteed this for about 120 hours – nearly a week – but it depends upon a number of factors, including the flashback retention and undo retention times.

You can get a rough idea of the SCN time from V$LOG_HISTORY, a more accurate idea from SYS.SMON_SCN_TIME, or just use the SCN_TO_TIMESTAMP function to make your life easier! If you are within 120 hours and have rowdependencies enabled, it will be roughly accurate to the time of the commit, depending upon a couple of factors – please see comments.

Here’s a short worked example to show the sorting of SCN’s in both normal (block) and rowdependency-enabled tables. Note how the ORA_ROWSCN is the same for each row in the same block in the normal table.

 

00:29:34 NEIL @ ORCL01 > create table scn_block (col1 number, col2 date, c_scn number);
Table created.
00:29:34 NEIL @ ORCL01 > create table scn_row (col1 number, col2 date, c_scn number) rowdependencies;
Table created.
00:29:34 NEIL @ ORCL01 > insert into scn_block values (1,sysdate,userenv('commitscn') );
1 row created.
00:29:34 NEIL @ ORCL01 > commit;
Commit complete.
00:29:34 NEIL @ ORCL01 > host sleep 5
00:29:39 NEIL @ ORCL01 > insert into scn_row values (1,sysdate,userenv('commitscn') );
1 row created.
00:29:39 NEIL @ ORCL01 > commit;
Commit complete.
00:29:39 NEIL @ ORCL01 > host sleep 5
00:29:44 NEIL @ ORCL01 > insert into scn_block values (1,sysdate,userenv('commitscn') );
1 row created.
00:29:44 NEIL @ ORCL01 > commit;
Commit complete.
00:29:44 NEIL @ ORCL01 > host sleep 5
00:29:49 NEIL @ ORCL01 > insert into scn_row values (1,sysdate,userenv('commitscn') );
1 row created.
00:29:49 NEIL @ ORCL01 > commit;
Commit complete.
00:29:49 NEIL @ ORCL01 > column dt_1 format a30 truncate
00:29:49 NEIL @ ORCL01 > column dt_2 format a30 truncate
00:29:49 NEIL @ ORCL01 >
00:29:49 NEIL @ ORCL01 > select col1,col2,c_scn,ora_rowscn,scn_to_timestamp(c_scn) dt_1,scn_to_timestamp(ora_rowscn) dt_2 from scn_block;

COL1       COL2      C_SCN      ORA_ROWSCN DT_1                           DT_2
---------- --------- ---------- ---------- ------------------------------ ------------------------------
1          05-DEC-15 3670102    3670149    05-DEC-15 00.29.34.000000000   05-DEC-15 00.29.43.000000000
1          05-DEC-15 3670148    3670149    05-DEC-15 00.29.43.000000000   05-DEC-15 00.29.43.000000000

00:29:49 NEIL @ ORCL01 > select col1,col2,c_scn,ora_rowscn,scn_to_timestamp(c_scn) dt_1,scn_to_timestamp(ora_rowscn) dt_2 from scn_row;

COL1       COL2      C_SCN      ORA_ROWSCN DT_1                           DT_2
---------- --------- ---------- ---------- ------------------------------ ------------------------------
1          05-DEC-15 3670133    3670134    05-DEC-15 00.29.39.000000000   05-DEC-15 00.29.39.000000000
1          05-DEC-15 3670160    3670161    05-DEC-15 00.29.48.000000000   05-DEC-15 00.29.48.000000000

In an interesting convergence, whilst I was doing this, Martin Widlake was looking at the same thing in a slightly different way. How meta.

 

The Boat

Coming soon in March is one of the most popular global user group conferences, the Norwegian Oracle User Group spring conference, known fondly as “The Boat”. It takes place from the 10th to the 12th of March, sailing on board a cruise ship. The ship sails from Oslo on the 10th and arrives in Kiel, Germany the following day, where it turns around and sails back to Oslo, arriving at 10am on Saturday the 12th.


I will be riding The Boat for the first time this year, so this should be a great new experience for me – standing in front of an audience while the auditorium sways its way across the sea.

I’ll be giving 2 presentations:

  • An Introduction to RAC – hopefully explaining how it works and why apps get it wrong when it comes to horizontal RAC scaling.
  • Why has my plan changed? Top 7 plan stability pitfalls and how to avoid them.

The full Agenda should be available w/c 18.January.2016 (probably on 19th…)

See you on The Boat, hoping for calm weather!

 

UKOUG Tech 15

It’s nearly December, so it must be time for the UK Oracle User Group conferences – JDE15, Apps15 and Tech15.

There are some absolutely wonderful presentations, and there are the 2 I’m giving as well:

An Introduction to RAC (12c) at 9am on Monday 6th, and Troubleshooting Goldengate at 12:20 on Tuesday 7th.

I’m also helping out with a new-style database keynote this year. We are inviting questions from the audience and having them answered by senior Oracle technologists Dominic Giles, Maria Colgan and Penny Avril.

There’s also a Performance Panel Q&A with Jonathan Lewis, Christian Antognini, Maria Colgan and Nigel Bayliss with a similar format.

Feel free to send me your questions for these 2 events (to ukoug_questions@chandler.uk.com, marking them either Performance or Keynote) – but you’ll need to turn up to hear them answered!

 

Hope to see you there. Come over. Say Hi! Ask for a badge. Buy me a drink? Hey, you’re a bit forward, aren’t you. Here’s your badge.

 

 

Decoding DBA_TAB_COLUMNS high_value and low_value

When performance tuning, it can be important to understand the statistics in the database. It is worth noting that every column records its lowest (low_value) and highest (high_value) values in DBA_TAB_COLUMNS when you gather stats, like a mini-histogram. If the optimizer believes it knows the range of values within a column, plus the number of different values (cardinality), it can make guesses about the suitability and cost of index access when a predicate references that column.

This is especially important when looking at values outside of the low_value and high_value range, as Oracle crudely calculates the probability that such a value exists. It does this by identifying the total range of values (high_value – low_value) and lowering the calculated cost for predicates increasingly further outside of the known range until it decides that there’s no realistic chance of hitting the value you have asked for.

For example:

If low_value was 40 and high_value was 50, this gives us a range of 10. Therefore, any predicate selecting outside of the values 40-to-50 will have its cost lowered by a percentage of the range (10). If we assume an access cost of 1000, this is how the index access cost may change:

where X = 40 ; cost = 1000 - within low-to-high value
where X = 50 ; cost = 1000 - within low-to-high value
where X = 55 ; cost = 500  - outside of low-to-high value by 50% of the range
where X = 32 ; cost = 200  - outside of low-to-high value by 80% of the range
where X = 61 ; cost = 1    - outside of low-to-high value by more than 100% of the range
where X = 29 ; cost = 1    - outside of low-to-high value by more than 100% of the range

This is called statistical decay – the calculated lowering of the cost because Oracle doesn’t think it will find the value in the index.
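
If you want to see this behaviour for yourself, here’s a minimal sketch; the table name and data distribution are illustrative, and it is the optimizer’s row estimates (which drive the cost) that you will see decaying as the predicate moves further outside the known range:

-- X is evenly spread between 40 and 50
create table range_demo as
select mod(rownum,11)+40 as x, rpad('x',100,'x') as padding
  from dual connect by level <= 10000;

exec dbms_stats.gather_table_stats(user,'RANGE_DEMO')

explain plan for select * from range_demo where x = 45;  -- inside the known range
select * from table(dbms_xplan.display(null,null,'BASIC ROWS'));

explain plan for select * from range_demo where x = 55;  -- 50% beyond the range
select * from table(dbms_xplan.display(null,null,'BASIC ROWS'));

explain plan for select * from range_demo where x = 61;  -- more than 100% beyond the range
select * from table(dbms_xplan.display(null,null,'BASIC ROWS'));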

Querying the LOW_VALUE and HIGH_VALUE

The low_value and high_value in DBA_TAB_COLUMNS are stored in an Oracle internal (RAW) format, so you need to use DBMS_STATS.CONVERT_RAW_VALUE to turn them into a human-readable format. Here’s an example:

Table Definition:

08:08:24 SQL> desc nc1
Name    Null?    Type
-----   -----    ----
C1               DATE
C2               TIMESTAMP(6)
C3               VARCHAR2(10)
C4               NUMBER
C5               BINARY_FLOAT
C6               BINARY_DOUBLE

Select all of the data out of the table so we can see what we have

C1                  C2                          C3                 C4         C5         C6
------------------- ----------------------------- ---------- ---------- ---------- ----------
21/07/0001 01:02:03 21/07/0001 01:02:03.456789    Lower               1   1.0E+000   1.0E+000
21/07/9999 01:02:03 21/07/9999 01:02:03.456789    Upper              99   9.9E+001   9.9E+001
31/12/2015 00:00:00 12/11/2015 08:08:24.586342    Now                50   5.0E+001   5.0E+001
31/12/2015 00:00:00 12/11/2015 08:08:24.585173    Now                50   5.0E+001   5.0E+001
31/12/2015 00:00:00 12/11/2015 08:08:24.579977    Now                50   5.0E+001   5.0E+001
31/12/2015 00:00:00 12/11/2015 08:08:24.585896    Now                50   5.0E+001   5.0E+001

Stats have been gathered, so we can look at DBA_TAB_COLUMNS (or USER_TAB_COLUMNS), and decode the high and low values

TABLE COLUM DATA_TYPE       LOW_VALUE                      HIGH_VALUE                     LOW_DECODE                     HIGH_DECODE
----- ----- --------------- ------------------------------ ------------------------------ ------------------------------ ------------------------------
NC1   C1    DATE            64650715020304                 C7C70715020304                 0001-07-21 01:02:03            9999-07-21 01:02:03
NC1   C2    TIMESTAMP(6)    646507150203041B3A0C08         C7C707150203041B3A0C08         0001-07-21 01:02:03            9999-07-21 01:02:03
NC1   C3    VARCHAR2        4C6F776572                     5570706572                     Lower                          Upper
NC1   C4    NUMBER          C102                           C164                           1                              99
NC1   C5    BINARY_FLOAT    BF800000                       C2C60000                       1.0E+000                       9.9E+001
NC1   C6    BINARY_DOUBLE   BFF0000000000000               C058C00000000000               1.0E+000                       9.9E+001

So how did I decode all of that? I used the 12c capability of putting in-line PL/SQL functions in a SQL statement via the WITH clause (subquery factoring). Pre-12c, you could create the functions as standalone objects.

with
  FUNCTION raw_to_date(i_var in raw) return date as
    o_var date;
  begin
    dbms_stats.convert_raw_value(i_var, o_var);
    return o_var;
  end;
  FUNCTION raw_to_number(i_var in raw) return number as
    o_var number;
  begin
    dbms_stats.convert_raw_value(i_var, o_var);
    return o_var;
  end;
  FUNCTION raw_to_varchar2(i_var in raw) return varchar2 as
    o_var varchar2(32767);
  begin
    dbms_stats.convert_raw_value(i_var, o_var);
    return o_var;
  end;
  FUNCTION raw_to_float(i_var in raw) return binary_float as
    o_var binary_float;
  begin
    dbms_stats.convert_raw_value(i_var, o_var);
    return o_var;
  end;
  FUNCTION raw_to_double(i_var in raw) return binary_double as
    o_var binary_double;
  begin
    dbms_stats.convert_raw_value(i_var, o_var);
    return o_var;
  end;
select table_name,column_name,data_type,low_value,high_value,
decode(substr(data_type,1,9),
'DATE'     , to_char(raw_to_date(low_value),'YYYY-MM-DD HH24:MI:SS') ,
'TIMESTAMP', to_char(raw_to_date(low_value),'YYYY-MM-DD HH24:MI:SS') ,
'NUMBER',    to_char(raw_to_number(low_value)),
'VARCHAR2',  to_char(raw_to_varchar2(low_value)),
'BINARY_FL', to_char(raw_to_float(low_value)),
'BINARY_DO', to_char(raw_to_double(low_value)),'Unknown') low_decode,
decode(substr(data_type,1,9),
'DATE'     , to_char(raw_to_date(high_value),'YYYY-MM-DD HH24:MI:SS') ,
'TIMESTAMP', to_char(raw_to_date(high_value),'YYYY-MM-DD HH24:MI:SS') ,
'NUMBER',    to_char(raw_to_number(high_value)),
'VARCHAR2',  to_char(raw_to_varchar2(high_value)),
'BINARY_FL', to_char(raw_to_float(high_value)),
'BINARY_DO', to_char(raw_to_double(high_value)),'Unknown') high_decode
from user_tab_columns
where table_name = 'NC1'
/

NOTE: There is no function to decode TIMESTAMPs. You can use them as input to the DATE convert (as above) but you will lose precision, or you can convert them manually:

select
      column_name,
      low_value,
          lpad(to_char(100*to_number(substr(low_value,1,2),'XX')
                          +to_number(substr(low_value,3,2),'XX')) - 10100,4,'0') ||'-'
       ||lpad( to_number(substr(low_value, 5,2),'XX'),2,'0')||'-'
       ||lpad( to_number(substr(low_value, 7,2),'XX'),2,'0')||' '
       ||lpad((to_number(substr(low_value, 9,2),'XX')-1),2,'0')||':'
       ||lpad((to_number(substr(low_value,11,2),'XX')-1),2,'0')||':'
       ||lpad((to_number(substr(low_value,13,2),'XX')-1),2,'0')||'.'
       ||lpad((to_number(substr(low_value,15,8),'XXXXXXXX')-1),8,'0')
       low_decode,
      high_value,
          lpad(to_char(100*to_number(substr(high_value,1,2),'XX')
                          +to_number(substr(high_value,3,2),'XX')) - 10100,4,'0') ||'-'
       ||lpad( to_number(substr(high_value, 5,2),'XX'),2,'0')||'-'
       ||lpad( to_number(substr(high_value, 7,2),'XX'),2,'0')||' '
       ||lpad((to_number(substr(high_value, 9,2),'XX')-1),2,'0')||':'
       ||lpad((to_number(substr(high_value,11,2),'XX')-1),2,'0')||':'
       ||lpad((to_number(substr(high_value,13,2),'XX')-1),2,'0')||'.'
       ||lpad((to_number(substr(high_value,15,8),'XXXXXXXX')-1),8,'0')
       high_decode
from user_tab_columns
where data_type like 'TIMESTAMP%'
/
COLUM LOW_VALUE                      LOW_DECODE                     HIGH_VALUE                     HIGH_DECODE
----- ------------------------------ ------------------------------ ------------------------------ ------------------------------
C2    646507150203041B3A0C08         0001-07-21 01:02:03.45678899   C7C707150203041B3A0C08         9999-07-21 01:02:03.45678899

OEM and monitoring the -MGMTDB GIMR Database

When you create Grid Infrastructure in 12.1.0.2, you are presented with a new (annoyingly named) database: “-MGMTDB”.

This database is a standard 12c CDB/PDB storing transient performance and other information (the Grid Infrastructure Management Repository). If it is lost, no biggie – just re-create it in the diskgroup holding your voting disk. (Aside: you could create a new MGMTDB_DG, move your voting disk there, re-create -MGMTDB, then move your voting disk back out to the proper multiple voting volumes.)

However, Oracle Enterprise Manager Cloud Control 12.1.0.5.0 and earlier sees this Oracle database, PDB, listener and all, and decides to discover it. This is incorrect and should not happen: it is supposed to be “masked off” from OEM. Monitoring this database will only lead to false positives and problems where none really exist, and all of its targets should be ignored (as per the attached picture).

[Screenshot: the -MGMTDB targets as discovered by OEM]

In a future release of Oracle Enterprise Manager Cloud Control, these targets will no longer be discovered and will automatically remain hidden from view within OEM, once the team have fixed the bug which – I was very reliably informed – they discovered the root cause of today.
