PC Pro Oracle DBA Article

I was interviewed recently by UK publication PC Pro magazine for their regular series on IT careers, talking about database administration (surprise!). The (somewhat abridged!) interview has just appeared in the August 2015 edition of the magazine. I cannot link to the online version of the article, so here’s my hopefully readable cropped photograph of it.

PC Pro also has an online sister publication, Alphr, which is worth a look.


Cloud Control 12c Discovery Problem

You know those times when things just don’t quite work as they should, and you spend far longer than you should sorting them out? Well, this was one of them. I burned 20 minutes on this!

I’m running Oracle Enterprise Linux 6.6 with Oracle Enterprise Manager Cloud Control 12.1.0.4, and I’m trying to discover some lovely new targets. Discovery from within OEM is simplicity itself: delegate privileges from the OEM server, allow unfettered sudo access from the installation account (oracle) to root, and autodiscover by IP range.
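For reference, the sudo delegation amounts to a sudoers entry along these lines. This is a lab-grade sketch (edit with visudo); you would want to tighten it considerably in production:

# hypothetical /etc/sudoers entry: the oracle installation account
# may run anything as root without a password
oracle ALL=(root) NOPASSWD: ALL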

Except that in OEL 6.6 it doesn’t work. The discovery uses nmap, and nmap is looking for an older version of libpcap than the one shipped with 6.6, so the discovery fails:

/u01/app/oracle/agent12c/agent_inst/discovery/nmap/bin/nmap: error while loading shared libraries: libpcap.so.0.9.4: cannot open shared object file: No such file or directory

But I have a newer version of libpcap installed!

-bash-4.1$ rpm -qa | grep libpcap
libpcap-1.4.0-1.20130826git2dbcaa1.el6.x86_64

So, what to do? You can try to get your hands on an old version of libpcap, or you can cheat and symbolically link the new version to the old version name!

# as root: point the old soname at the newer installed library
ln -s /usr/lib64/libpcap.so.1.4.0 /usr/lib64/libpcap.so.0.9.4
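You can then confirm that nmap resolves the library (the agent path is from my install; yours may differ):

# check that the dynamic linker now finds libpcap for nmap
ldd /u01/app/oracle/agent12c/agent_inst/discovery/nmap/bin/nmap | grep libpcap
# expect something like: libpcap.so.0.9.4 => /usr/lib64/libpcap.so.0.9.4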

And all is well with the world again. And I’ve discovered all of my targets from within OEM. Nice.

P.S. Ensure you have nmap on your system!

yum install nmap
Setting up Install Process
Package 2:nmap-5.51-4.0.1.el6.x86_64 already installed and latest version

Oracle ACE Award

I have just learned that I have been nominated and accepted as an Oracle ACE. I would like to thank Jonathan Lewis for the nomination and Oracle Corp. for the recognition.


It’s a tremendous honour for me to receive this award. I just hope I can live up to it by continuing to serve the community: writing more blog posts (here) that people find useful, organising the UKOUG RAC, Cloud, Availability and Infrastructure SIG, helping with UKOUG Tech15, going along to the London Oracle Beer [no link, just beer] and #ClubOracle, and generally promoting the Oracle community.

I’d also like to thank Martin Widlake for persuading me to get up on stage a few years ago and start giving presentations too. I think your beer debt is finally paid :-)

Finally, I’d like to thank my wife and children for putting up with me sitting in my office at home working on those presentations for hours, and for letting me go off and enjoy myself at conferences, SIGs and all the other events I attend when I could be spending time with them. I hope I get the work/life/community balance right.

Whilst all of that community stuff takes time and some effort, it’s rewarding and fulfilling, and remarkably social. Why don’t you try some or all of the above? I’m happy to help you get started. I’m short a talk for the next RAC SIG in July… fancy filling in? Send me an email. Let’s talk.

GoldenGate Data Manipulation – When Inserts & Updates Differ

One very useful aspect of GoldenGate is that it allows you to manipulate data between the source and the destination.

One recent problem I encountered was needing to alter the data differently for inserts than for updates. The receiving system needed default values in columns which may or may not be supplied by the insert or update statements. This is slightly more complex than it first appears:

  • If we have an INSERT and the column value IS NULL, or the column value IS NOT SUPPLIED by the insert statement, we should set the default for that column.
  • If we have an UPDATE and the column value IS NULL, then we should set the default for that column.
    We must NOT set the default if the UPDATE does not supply the column, otherwise we may incorrectly overwrite data in the target system.

The first problem is that, by default, you are not allowed to have more than one table mapping per table. To get around this, you need to use the “ALLOWDUPTARGETMAP” parameter. You can then add multiple mappings.

You need to be aware that each mapping will fire for each transaction action. If you have 2 active table mappings for the same table, you will end up with 2 inserts/updates/deletes. Get this mapping wrong and your data integrity will be destroyed, and you will get a lot of constraint errors. In this case we have 2 mappings, one for inserts and one for updates and deletes. I need to use the get/ignore commands to indicate which actions each mapping should use.

The following example is from a data pump, but it is valid to do this in extracts and replicats too.


-- ggsci: add extract p_neil, exttrailsource /u02/gg/bin12/dirdat/NE, BEGIN NOW
-- ggsci: add rmttrail ./dirdat/NP extract p_neil megabytes 100

EXTRACT p_neil
USERID owner_goldengate@DB_LOCAL PASSWORD password
NOPASSTHRU
-- (the COLMAP transformations below cannot be performed in PASSTHRU mode)
TARGETDEFS ./dirdef/defgen.neil.def
RMTHOST remote.server.world mgrport 7809
RMTTRAIL ./dirdat/NP

-- So we can have multiple mappings for a single table. This is a dangerous parameter!
ALLOWDUPTARGETMAP

-- FOR INSERTS - REPLACE MISSING COLUMNS
getinserts
ignoreupdates
ignoredeletes

TABLE NCHA.NEIL, TARGET NCHA.NEIL
COLMAP (usedefaults, &
C2 = @IF (@COLTEST (C2, NULL, MISSING), '1900-01-01:00:00:00.000000', C2), &
C3 = @IF (@COLTEST (C3, NULL, MISSING), '1900-01-01:00:00:00.000000', C3));

-- FOR UPDATES - IGNORE MISSING COLUMNS
-- We will do the deletes here too. If they are supplied as NULL they should be modified
-- You may need to do a separate section for deletes depending upon your rules.
ignoreinserts
getupdates
getdeletes

TABLE NCHA.NEIL, TARGET NCHA.NEIL
COLMAP (usedefaults, &
C2 = @IF (@COLTEST (C2, NULL), '1900-01-01:00:00:00.000000', C2), &
C3 = @IF (@COLTEST (C3, NULL), '1900-01-01:00:00:00.000000', C3));

-- And back to normal for subsequent table mappings
getinserts
getupdates
getdeletes
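
To sanity-check the mappings end to end, you can run the three interesting cases against the source table and watch what arrives at the target. A minimal sketch (the table structure is assumed: C1 is the key, C2 and C3 are the timestamp columns from the mapping above):

#!/bin/bash
# hypothetical smoke test for the insert/update mappings above
sqlplus -s ncha/password@DB_LOCAL <<'EOF'
-- 1) insert without C2: the target row should arrive with C2 defaulted to 1900-01-01
INSERT INTO ncha.neil (c1, c3) VALUES (1, SYSTIMESTAMP);
-- 2) update that does not mention C2: the target C2 must be left untouched
UPDATE ncha.neil SET c3 = SYSTIMESTAMP WHERE c1 = 1;
-- 3) update that explicitly sets C3 to NULL: the target C3 should be defaulted
UPDATE ncha.neil SET c3 = NULL WHERE c1 = 1;
COMMIT;
EOF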

UKOUG Tech15 Call for Papers

The #ukoug_tech15 call for papers has gone out, and will be open until midnight on 10th May 2015.


Last year was a fabulous event in Liverpool and we hope and expect it to be even better in Birmingham this year.


GoldenGate Log Rotation

GoldenGate 12 has some excellent commands to keep your log files in check, plus one glaring omission (scheduled to be addressed in a future enhancement).

Each extract, data pump and replicat will be writing to report (.rpt) and discard (.dsc) files in the dirrpt directory (if you aren’t specifying a discard file, you should; they are very useful for troubleshooting).
If your system is up for a long time, these files are going to get large. Oracle has realised this and provides some lovely in-built log rotation commands. To keep my parameter (.prm) files nice, neat and consistent, I use include files, and this is a perfect case for a standard include.

report.prm:

-- Standard include commands for ALL extracts and replicats to ensure they are aligned
-- Write the day's stats out to the file at the end of every day.
-- Roll the file over every week.
-- Report just how much throughput we have every 15 minutes.
-- History: 14.04.2015 N Chandler
-- Place the command INCLUDE ./dirprm/report.prm in your parameter files

STATOPTIONS REPORTDETAIL, RESETREPORTSTATS
REPORT AT 23:59
REPORTROLLOVER AT 00:01 ON MONDAY
REPORTCOUNT EVERY 15 MINUTES, RATE
-- or if you would rather by volume...
--REPORTCOUNT EVERY 10000000 RECORDS, RATE
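
Each extract and replicat parameter file then just pulls the standard settings in, as the comment above describes. A minimal sketch (the replicat name and mapping are hypothetical):

REPLICAT r_neil
USERID owner_goldengate@DB_REMOTE PASSWORD password
INCLUDE ./dirprm/report.prm
MAP NCHA.*, TARGET NCHA.*;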

I think it’s worth pointing out here what the two bracketed throughput numbers output by the REPORTCOUNT command mean, as you’ll struggle to find this in the documentation:

Rate = the total number of records processed since startup, divided by the total time since startup of the extract/replicat (i.e. the average records per second over the life of the process)
Delta = the number of records processed since the last report, divided by the time since the last report (in this case, 15 minutes). For example, 9,000,000 records in a 15-minute interval is a Delta of 10,000 records per second.

There is one notable growing file which you cannot rotate using GoldenGate commands: ggserr.log

This is a significant oversight by Oracle and will be rectified in a future release, but as of 12.1 you have to sort it out manually. You have two main options:

1. Stop the manager, rename the file, and restart the manager.
2. Copy the file to a new name, then empty the in-place file by catting /dev/null into it. (I’m sure there’s a Windows equivalent of this; PowerShell’s Clear-Content should do the same job, but I mainly work on Unix.)

* DO NOT simply delete the file while the manager is running. All future error output will drop into a “black hole” until the manager is restarted.

Option 2 tends to be preferable, so here’s part of a bash script I use to perform this action:

#!/bin/bash
# rotate_ggserr_log.sh - copies the logfile to one side with a date suffix and empties the current file
# but leaves it in place so we can continue to write to it with the manager.
# Neil Chandler 14.04.2015 created
#
today=`date +%Y%m%d`

# Check to see if we have already rolled over today
if [ -e /u99/gg/bin/ggserr.log.${today} ]
then
 echo "File /u99/gg/bin/ggserr.log.${today} exist already. Stopping."
else
 # copy the log file preserving attributes
 /bin/cp -pnv /u99/gg/bin/ggserr.log /u99/gg/bin/ggserr.log.${today}

 # See if there is a difference - did you copy it successfully?
 diff /u99/gg/bin/ggserr.log /u99/gg/bin/ggserr.log.${today}
 RC=$?

 # If there is no difference, wipe the ggserr.log file out
 # otherwise stop!
 if [ ${RC} -eq 0 ]
 then
  echo "clear the file /u99/gg/bin/ggserr.log"
  cat /dev/null > /u99/gg/bin/ggserr.log
  exit ${RC}
 else
  echo "Error - cannot clear file /u99/gg/bin/ggserr.log as it's not the same as the copied version. Stopping."
  exit ${RC}
 fi
fi
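
To make the rotation hands-off, a daily cron entry does the job (the script location here is an assumption; put it wherever you keep your admin scripts):

# hypothetical crontab entry: rotate ggserr.log at 00:05 every morning
5 0 * * * /u99/gg/scripts/rotate_ggserr_log.sh >> /tmp/rotate_ggserr_log.out 2>&1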

Developers

Just a small Sunday night anecdote with a wider point. I, or maybe a colleague, recently received an update statement from a developer. Now, this developer is long in the tooth and well versed in the ways of Oracle data manipulation.

The aforementioned update statement contained an interesting hint: BYPASS_UJVC. You may not have heard of this hint. It’s not commonly used, although it’s been around since Oracle 8.1.5, mainly because it is both undocumented and unsupported by Oracle. In the right hands, it’s a very neat way around the problem of doing an update through a join where you would otherwise be restricted by the potential of having transient keys (i.e. multiple updates via the join giving random results). There are a bunch of other blogs around about how/why/why-not to use it, so I won’t waffle on here.

However, the Dev was disappointed when we [the DBAs] told him to, erm, rewrite his code (probably as a MERGE, which tends to let you get round the same problem). He has been using this hint for as long as it’s been around (a long time!), but as it’s NOT supported we wouldn’t allow it. I’m not about to update millions of rows in a multi-billion row database with an unsupported function unless I have a MAJOR problem.
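For illustration, a supported MERGE version of an update-through-a-join might look something like this (table and column names are entirely hypothetical):

#!/bin/bash
# hypothetical sketch: rewriting an update-through-a-join as a supported MERGE
sqlplus -s user/password@DB <<'EOF'
MERGE INTO target_table t
USING (SELECT key_col, new_val FROM source_table) s
ON (t.key_col = s.key_col)
WHEN MATCHED THEN
  UPDATE SET t.val = s.new_val;
COMMIT;
EOF

Usefully, if the USING query returns more than one row per key, Oracle raises ORA-30926 ("unable to get a stable set of rows") rather than silently applying arbitrary values, which is exactly the transient-key situation the hint lets you bypass.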

The point of the story is that, that evening, we met up for drinks with another developer (Dev2) whom we have both known for a couple of decades. Dev says to Dev2, “DBAs: they are never happy, going around being all DBA-ish and no-you-can’t today”, and (unprompted) Dev2 says “BYPASS_UJVC?”

I like Developers (I used to be one), but the roles of Developer and DBA should be symbiotic, not adversarial as they can become on occasion. We should work together; all of the best systems I have delivered have had a great relationship between Dev and DBA. But don’t ask me to bend my database out of shape just so you can take a shortcut. Talk about physicalities, data access, data life-cycle, volume and performance before a line of code has been written, and we will all work much better together. If all parties feel they have some ownership of the system, there is a greater chance of success and joy.
