Neil Chandler's DBA Blog

A resource for Database Professionals

Archive for the ‘Programming’ Category

Developers Killing Sessions

with 3 comments

When you end up spending a far greater percentage of your day than seems sensible killing off Java connections that Developers have carelessly left lying around, locking objects all over the place, you need a solution to get them to go away. The solution is to let them do it themselves!

I’m not advocating granting ALTER SYSTEM to Developers! That way madness lies, or certainly some unintended consequences. I’m all for Devs having a lot of freedom in the database, just not freedom with the database.

So, creating a stored procedure to kill sessions without allowing too much latitude to do anything else seems appropriate (in this example it is owned by sys, but any user with an explicit ALTER SYSTEM privilege granted will do). Here’s one I built earlier:

 

create or replace procedure sys.kill_session
  ( p_sid IN number, p_serial IN number, p_instance IN number ) as
  -- Neil Chandler. Grant the ability to kill session on a restricted basis. 21.07.2010
  l_username varchar2(30) := null;
  l_priv     number       := 1;
begin
  -- Who owns the session?
  select username into l_username
    from gv$session
   where sid = p_sid and serial# = p_serial and inst_id = p_instance;
  -- Check for DBA role
  select count(*) into l_priv
    from dba_role_privs
   where grantee = l_username and granted_role = 'DBA';
  -- If the user has the DBA priv, deny the kill request
  if l_priv > 0 or l_username is null
  then
    dbms_output.put_line
      ('User request to kill session '||p_sid||','||p_serial||',@'||p_instance||
       ' denied. Session is for privileged user '||l_username||'.');
  else
    dbms_output.put_line
      ('Killing user '||l_username||' - '||p_sid||','||p_serial||',@'||p_instance);
    execute immediate 'alter system disconnect session '''||
                       p_sid||','||p_serial||',@'||p_instance||
                      ''' immediate';
  end if;
end;
/

-- and let the proc be seen and used
create or replace public synonym kill_session for sys.kill_session; 
grant execute on kill_session to (whomever);

Then a nifty bit of sql to generate the kill commands for the Developers. 
Please include your favourite columns from gv$session:

select username,status,blocking_session,
 'exec kill_session ('|| sid||','||serial#||','||inst_id||')' Kill_Command 
 from gv$session
where username is not null
  and type <> 'BACKGROUND'
/

USERNAME  STATUS  BLOCKING_SESSION KILL_COMMAND
--------- ------- ---------------- ------------------------------------ 
SYS       ACTIVE                   exec kill_session (31,65,1)
SYSTEM    INACTIVE                 exec kill_session (58,207,1)
USER_1    INACTIVE                 exec kill_session (59,404,1)
USER_2    INACTIVE                 exec kill_session (72,138,1)
USER_2    INACTIVE                 exec kill_session (46,99,2)
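
For the Developer on the receiving end it is then just a case of running the generated call (with serveroutput on to see whether the kill was allowed or refused). A minimal sketch, assuming the public synonym and grant above are in place:

-- As a Developer, kill one of your own stray sessions (sid, serial#, inst_id)
set serveroutput on
exec kill_session (59,404,1)

-- The same attempt against a DBA-owned session is politely refused
exec kill_session (31,65,1)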


May the odds be forever in your favour. Let the killing commence…

Written by Neil Chandler

29th May 2014 at 11:13

Doing it properly?

with one comment

When giving a presentation last year about how much a DBA should do to get to the bottom of a problem, Martin Widlake, the audience and I amusingly concluded in the ensuing discussion that we probably shouldn’t be doing things quite right first time.

 

What do I mean by this? Well, we should be doing things right enough. We don’t want anything to fail. We want the project you are working on to succeed. BUT, where is the mileage in doing everything right first time? NOBODY got commended by their management for implementing a really efficient IOT in the original schema design over and above a Heap. How many people will ooh and aah, and give you a really big bonus, over that nifty single table hash cluster that makes the system just Zing? Or the really, really clever statistics design? Nobody ever notices when the implementation is just right.

 

That said, even when you have plenty of influence over the design, you can spend ages getting it right: ensuring all the right bits are partitioned in the right way, the right services are on the most appropriate nodes, and the required number of transactions per second is seriously exceeded because they gave you a much bigger set of servers and SAN resources than you asked for. And then the Developers go and use an ORM like Hibernate and mess the whole thing up:

- Can you just change the query to work like this…. No.
- Your query appears to be mixing ANSI-standard SQL with Oracle-syntax SQL. Can you please be consistent, as this isn’t a good idea… No.
- You want me to write a trigger to audit when a new record has a different value in a column because you don’t know how to write a join, and you have no idea what a Windowing function is in SQL. Oh, you don’t actually understand SQL at all!

Some days it’s just not worth chewing through the straps.

Written by Neil Chandler

21st May 2014 at 21:50

Oracle’s Locking Model – Multi Version Concurrency Control

leave a comment »

mvcc

Written by Neil Chandler

1st December 2013 at 15:42

SQL Developer insight

leave a comment »

When using SQL Developer, there is a feature called Completion Insight which helps you write SQL and PL/SQL in the SQL Worksheets. It will auto-pop helpful things, like object names when typing a SELECT statement, column names if it is aware of the object you are accessing, and things like that.

The pop-up seems to appear inconsistently, turning up when I don’t want it and not appearing when I do. I finally snapped and decided to work out (look up) exactly how this feature works. And it’s like this.

1. Make sure your SQL is correctly formed. If you have multiple bits of SQL in the Worksheet and they are not terminated with a “;” or “/”, then it can get confused about where you are contextually within a statement and not pop up.

2. The Auto-popup has a delay of either 0.3s (SQL) or 0.6s (PL/SQL). This can be changed in Tools -> Preferences -> Code Editor -> Completion Insight. I have now set a 1.0s delay so it’s not quite so eager to appear.

3. You want the pop-up to appear? Press CTRL + SPACE. If it still does not appear, then it cannot determine your SQL Context (see point 1) or there’s nothing to pop.

Hope this helps you. It’s certainly helped me.

Written by Neil Chandler

5th September 2013 at 07:43

Exposing the Oracle Alert Log to SQL

leave a comment »

I’ve been spending some time working in Apex recently, building a small app to draw together the monitoring of application and infrastructure components into a single easy-to-visualise tool. As part of that, I wanted to be able to read and report on the alert log. Traditionally, that would have meant creating an external table pointing at the alert log and reading it that way, with lots of string manipulation and regular expressions to try to pull out useful bits of information. However, Oracle 11g has made that a lot easier. Step forward X$DBGALERTEXT. This is the decoded version of the XML alert log, and as such it provides lots of lovely columns to filter by, rather than a single line of text to decode. Particularly useful (for me) is MESSAGE_LEVEL: is this line of text informational (16), critical (1), or something in between? Each “normal” line of text is, of course, still available in the MESSAGE_TEXT column.

SQL> desc x$dbgalertext;
 Name                           Type
 ------------------------------ --------------------------------------------------------
 ADDR                           RAW(4)
 INDX                           NUMBER
 INST_ID                        NUMBER
 ORIGINATING_TIMESTAMP          TIMESTAMP(3) WITH TIME ZONE
 NORMALIZED_TIMESTAMP           TIMESTAMP(3) WITH TIME ZONE
 ORGANIZATION_ID                VARCHAR2(64)
 COMPONENT_ID                   VARCHAR2(64)
 HOST_ID                        VARCHAR2(64)
 HOST_ADDRESS                   VARCHAR2(46)
 MESSAGE_TYPE                   NUMBER
 MESSAGE_LEVEL                  NUMBER
 MESSAGE_ID                     VARCHAR2(64)
 MESSAGE_GROUP                  VARCHAR2(64)
 CLIENT_ID                      VARCHAR2(64)
 MODULE_ID                      VARCHAR2(64)
 PROCESS_ID                     VARCHAR2(32)
 THREAD_ID                      VARCHAR2(64)
 USER_ID                        VARCHAR2(64)
 INSTANCE_ID                    VARCHAR2(64)
 DETAILED_LOCATION              VARCHAR2(160)
 PROBLEM_KEY                    VARCHAR2(64)
 UPSTREAM_COMP_ID               VARCHAR2(100)
 DOWNSTREAM_COMP_ID             VARCHAR2(100)
 EXECUTION_CONTEXT_ID           VARCHAR2(100)
 EXECUTION_CONTEXT_SEQUENCE     NUMBER
 ERROR_INSTANCE_ID              NUMBER
 ERROR_INSTANCE_SEQUENCE        NUMBER
 VERSION                        NUMBER
 MESSAGE_TEXT                   VARCHAR2(2048)
 MESSAGE_ARGUMENTS              VARCHAR2(128)
 SUPPLEMENTAL_ATTRIBUTES        VARCHAR2(128)
 SUPPLEMENTAL_DETAILS           VARCHAR2(128)
 PARTITION                      NUMBER
 RECORD_ID                      NUMBER

Very handy. Just add your own view, synonym and permissions to read the view, and you’re away…

create view v_$alert_log as select * from x$dbgalertext;
create public synonym v$alert_log for sys.v_$alert_log;
grant select on v$alert_log to whomever...

select message_text from v$alert_log where ...;

MESSAGE_TEXT
-----------------------------------------------------------------
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Shared memory segment for instance monitoring created
Picked latch-free SCN scheme 2
Autotune of undo retention is turned on.
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options.

Using parameter settings in server-side pfile /home/oracle/app/oracle/product/11.2.0/dbhome_2/dbs/initorcl.ora
System parameters with non-default values:
  processes                = 200
  sessions                 = 322
  sga_max_size             = 2G
  pre_page_sga             = TRUE
  nls_language             = "ENGLISH"
  nls_territory            = "UNITED KINGDOM"
  filesystemio_options     = "SetAll"
  sga_target               = 2G
  control_files            = "/u02/oradata/orcl/control01.ctl"
.
[snip]
.
  aq_tm_processes          = 1
  diagnostic_dest          = "/u20/apps/oracle"
PMON started with pid=2, OS id=2492
PSP0 started with pid=3, OS id=2494
VKTM started with pid=4, OS id=2512 at elevated priority
VKTM running at (1)millisec precision with DBRM quantum (100)ms
GEN0 started with pid=5, OS id=2520
DIAG started with pid=6, OS id=2522
...etc...
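
Once the view and synonym are in place, filtering is just ordinary SQL. A hedged example of the sort of query I mean (column names as per the describe above; the level cut-off of 16 is only illustrative):

-- Anything more interesting than purely informational messages in the last day
-- (message_level: 1 = critical ... 16 = informational)
select originating_timestamp, message_level, message_text
  from v$alert_log
 where message_level < 16
   and originating_timestamp > systimestamp - interval '1' day
 order by originating_timestamp;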

Written by Neil Chandler

26th March 2012 at 21:58

SYSTIMESTAMP INTERVAL Processing

with 4 comments

…or how to calculate dates in Oracle.

This is just a quick post to try to encourage the use of INTERVAL literals when adjusting (sys)timestamps (or dates). I thought this would be better expressed through a quick script with comments, showing how the traditional Oracle method of calculating fractions of a day can cause problems and make you think more than necessary.

There are 2 main issues with using the traditional Oracle method of calculating date/time changes. Firstly, it’s strange. You have to calculate fractions of a day. 1 second is 1/86400, 1 minute is 1/1440 [maybe 1/(24*60) expresses it better]. Secondly, with timestamps it causes an implicit conversion to a date type, with all of the unintended consequences that implicit conversion carries.

Let’s run the script and see what happens:

Script Output

> -- Lets just set the date format explicitly before we start.
> alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS'
session SET altered.

> -- Lets start simple and add a day.
> select systimestamp, systimestamp + 1 from dual

SYSTIMESTAMP                          SYSTIMESTAMP+1
------------------------------------- -------------------------
09-DEC-11 18.39.53.026000000 +00:00   10-DEC-2011 18:39:53

> -- Hang on, doing this the traditional Oracle way has implicitly cast
> -- the timestamp into a sysdate format. We need to convert it back.
> -- Lets try that again, but add 3 hours instead
> select systimestamp, to_timestamp(
>        systimestamp + (3/24),
>        'DD-MON-YYYY HH24:MI:SS' ) from dual

SYSTIMESTAMP                          TO_TIMESTAMP(SYSTIMESTAMP+(3/24),'DD-MON-YYYYHH24:MI:SS')
------------------------------------- ---------------------------------------------------------
09-DEC-11 18.39.53.026000000 +00:00   09-DEC-11 21.39.53.000000000

> -- So we have an implicit conversion, and we've lost the timestamp precision.
> -- Now lets add 3 hours, 5 minutes 10.5 seconds.
> select systimestamp, to_timestamp(
>        systimestamp + (3/24) + (5/(24*60)) + (10.5/24/60/60),
>        'DD-MON-YYYY HH24:MI:SS') from dual

SYSTIMESTAMP                          TO_TIMESTAMP(SYSTIMESTAMP+(3/24)+(5/(24*60))+(10.5/24/60/60),'DD-MON-YYYYHH24:MI:SS')
------------------------------------- --------------------------------------------------------------------------------------
09-DEC-11 18.39.53.026000000 +00:00   09-DEC-11 21.45.04.000000000

> -- And we completely lost the .5 in the 10.5 as we've converted to a sysdate.
> -- It was rounded up. You might also have noticed that I have used some
> -- different date processing to calculate fractions of a day.
> -- There are many different ways to calculate time in Oracle:
> -- 10 minutes can be 10/1440, 10/60/24, 10/(60*24), 0.00694, 600/86400 etc.
> -- It's all tricky to understand and standards vary from company to company,
> -- if the company actually has a standard.
> -- There is a better way, using INTERVAL literals. It goes like this:
> -- Lets start simple and add a day.
> select systimestamp, systimestamp + INTERVAL '1' day from dual

SYSTIMESTAMP                          SYSTIMESTAMP+INTERVAL'1'DAY
------------------------------------- -----------------------------------
09-DEC-11 18.39.53.042000000 +00:00   10-DEC-11 18.39.53.042000000 +00:00

> -- So far so good. And we haven't lost the data type either!
> -- No implicit conversion to break our code in the future.
> -- Now lets try to add 3 hours
> select systimestamp, systimestamp + INTERVAL '3' hour from dual

SYSTIMESTAMP                          SYSTIMESTAMP+INTERVAL'3'HOUR
------------------------------------- -----------------------------------
09-DEC-11 18.39.53.042000000 +00:00   09-DEC-11 21.39.53.042000000 +00:00

> -- Note the indicator is always singular.
> -- Now lets add 3 hours, 5 minutes 10.5 seconds
> select systimestamp,
>        systimestamp + INTERVAL '3' hour
>                     + INTERVAL '5' minute
>                     + INTERVAL '10.5' second
>   from dual

SYSTIMESTAMP                          SYSTIMESTAMP+INTERVAL'3'HOUR+INTERVAL'5'MINUTE+INTERVAL'10.5'SECOND
------------------------------------- -------------------------------------------------------------------
09-DEC-11 18.39.53.042000000 +00:00   09-DEC-11 21.45.03.542000000 +00:00

> -- Spot on - and the precision is correct too.
> -- My mam/mum/mom* could read it and work out what was going on.
> -- But she is an OCA** (* delete as applicable) (**not really)
> -- There you go. Get your company to use this nomenclature and you're home free.
> -- No more strange time-based calculations or implicit conversion errors.
> -- It also works with DATE types too.

Original Script

set echo on
-- Lets just set the date format explicitly before we start.
alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';
-- Lets start simple and add a day.
select systimestamp, systimestamp + 1 from dual;
-- Hang on, doing this the traditional Oracle way has implicitly cast
-- the timestamp into a sysdate format. We need to convert it back.
-- Lets try that again, but add 3 hours instead
select systimestamp, to_timestamp(
       systimestamp + (3/24),
       'DD-MON-YYYY HH24:MI:SS' ) from dual;
-- So we have an implicit conversion, and we've lost the timestamp precision.
-- Now lets add 3 hours, 5 minutes 10.5 seconds.
select systimestamp, to_timestamp(
       systimestamp + (3/24) + (5/(24*60)) + (10.5/24/60/60),
       'DD-MON-YYYY HH24:MI:SS') from dual;
-- And we completely lost the .5 in the 10.5 as we've converted to a sysdate.
-- It was rounded up. You might also have noticed that I have used some
-- different date processing to calculate fractions of a day.
-- There are many different ways to calculate time in Oracle:
-- 10 minutes can be 10/1440, 10/60/24, 10/(60*24), 0.00694, 600/86400 etc.
-- It's all tricky to understand and standards vary from company to company,
-- if the company actually has a standard.
-- There is a better way, using INTERVAL literals. It goes like this:
-- Lets start simple and add a day.
select systimestamp, systimestamp + INTERVAL '1' day from dual;
-- So far so good. And we haven't lost the data type either!
-- No implicit conversion to break our code in the future.
-- Now let's try to add 3 hours
select systimestamp, systimestamp + INTERVAL '3' hour from dual;
-- Note the indicator is always singular.
-- Now lets add 3 hours, 5 minutes 10.5 seconds
select systimestamp, systimestamp + INTERVAL '3' hour
                                  + INTERVAL '5' minute
                                  + INTERVAL '10.5' second
                     from dual;
-- Spot on - and the precision is correct too.
-- My mam/mum/mom* could read it and work out what was going on.
-- But she is an OCA** (* delete as applicable) (**not really)
-- There you go. Get your company to use this nomenclature and you're home free.
-- No more strange time-based calculations or implicit conversion errors.
-- It also works with DATE types too.
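
Two footnotes on that last comment, as a hedged sketch rather than part of the original script: INTERVAL arithmetic on a plain DATE returns a DATE, and when the amount lives in a variable or column (so you can't write a literal) NUMTODSINTERVAL builds the interval from a number:

-- A DATE plus an interval is still a DATE
select sysdate, sysdate + INTERVAL '90' minute from dual;

-- Building the interval from a number instead of a literal
-- (NUMTOYMINTERVAL does the same job for YEAR/MONTH amounts)
select systimestamp, systimestamp + numtodsinterval(10.5, 'SECOND') from dual;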

Written by Neil Chandler

12th December 2011 at 07:59

Implicit Conversion Errors

with 6 comments

A while ago, I failed over a database (as planned) to its Data Guard copy, and of course everything worked as expected. Everything, that is, except a couple of reports which get sent directly from the database server early in the morning. The report generation had failed. After some investigation, we discovered that the newly active Data Guard server did not have NLS_DATE_FORMAT set in the environment, and the 2 reports in question were coded something like:

select col1, col2, col3 from user_data where user_date >= '25-Nov-2011 00:00:00';

The select was failing with the error:
 
ORA-01821: date format not recognized
 

If the developer had coded an explicit conversion, then we would not have experienced a problem.

select col1, col2, col3 from user_data
where user_date >= to_date('25-Nov-2011 00:00:00','DD-MON-YYYY HH24:MI:SS');
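
Another way to remove the dependence on NLS settings, where the time portion is just midnight anyway, is an ANSI date literal. This is a hedged alternative, not what the reports in question used:

-- ANSI date literal: the format is fixed as YYYY-MM-DD and time defaults to midnight,
-- so NLS_DATE_FORMAT is not involved at all
select col1, col2, col3 from user_data
 where user_date >= DATE '2011-11-25';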
 

Coding with an implicit date mask is great and works successfully every time, as long as the NLS_DATE_FORMAT in your session matches the date mask you have supplied, which of course it always does. Until something changes and it doesn’t. In my experience, implicit conversion is probably the single greatest source of failure in systems and also one of the hardest to track down. It frequently occurs in a badly designed schema which doesn’t use the correct datatypes. I have seen schemas where everything is stored as VARCHAR2, including numeric fields. This works fine as Oracle will happily insert implicit TO_NUMBER functions into your code and return answers which seem correct, until you get some rogue data into the database and everything falls apart.

USER1 @ orcl > -- Create a table but allow generic data, rather than specifying numeric data
USER1 @ orcl > -- The client will take care of validation. Of course it will.
USER1 @ orcl > create table implicit_problem (col1 varchar2(10), col2 varchar2(10));

Table created.

USER1 @ orcl >
USER1 @ orcl > -- Lets fill the table with reasonable data
USER1 @ orcl > insert into implicit_problem values (1,1);

1 row created.

USER1 @ orcl > insert into implicit_problem values (2,10);

1 row created.

USER1 @ orcl > insert into implicit_problem values (3,66);

1 row created.

USER1 @ orcl >
USER1 @ orcl > -- Oracle is putting an implicit to_number around col1*col2 to allow the calculation
USER1 @ orcl > select col1,col2,col1*col2 from implicit_problem;

COL1       COL2        COL1*COL2
---------- ---------- ----------
1          1                   1
2          10                 20
3          66                198

USER1 @ orcl >
USER1 @ orcl > -- And now lets have some incorrectly validated data
USER1 @ orcl > insert into implicit_problem values (4,'A');

1 row created.

USER1 @ orcl >
USER1 @ orcl >
USER1 @ orcl > -- And now the implicit conversion fails
USER1 @ orcl > select col1,col2,col1*col2 from implicit_problem;
ERROR:
ORA-01722: invalid number

no rows selected

USER1 @ orcl >
USER1 @ orcl > -- Cleanup
USER1 @ orcl > drop table implicit_problem;

Table dropped.
 



It’s much easier (and quicker) to catch bad data going into a system than it is to perform problem resolution afterwards. Always code explicitly for your data types. Implicit conversion in your coding invariably leads to hard-to-find bugs.
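
To illustrate catching the bad data at the door: with the columns declared as the datatype they really hold, the rogue row is rejected at insert time rather than surfacing later as an ORA-01722 in somebody else’s query. A hedged re-run of the demo above (the table name is made up for illustration):

-- Same demo, but the columns are declared as the type they actually hold
create table explicit_fix (col1 number, col2 number);

insert into explicit_fix values (1,1);    -- fine
insert into explicit_fix values (4,'A');  -- fails immediately with ORA-01722: invalid number

drop table explicit_fix;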

Written by Neil Chandler

28th November 2011 at 09:31

Oracle Timestamp Processing – mildly annoying

with 3 comments

I was writing a small piece of SQL this morning which needed to account for daylight savings time correctly. All of my databases run in UTC, so a quick foray into AT TIME ZONE seemed the easiest way to accomplish this. So, I coded it up and wanted to test the code to ensure that the timestamp operates correctly on both sides of UK daylight savings. I figured that the easiest way to do this would be to use the old Oracle initialisation parameter FIXED_DATE. You can set this in the database on the fly and observe the results immediately. So I did, for testing (in a Dev database which only I was using). Guess what? FIXED_DATE works perfectly for SYSDATE. However, it is completely ignored for SYSTIMESTAMP! WHAT??? Who in Oracle missed this one? Let me show you how this (doesn’t) work, with a workaround for my testing included in the example to show how neatly AT TIME ZONE does work:

> alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS'

Session altered.

> select * from v$timezone_names where tzname like '%London%'

TZNAME                                                           TZABBREV
---------------------------------------------------------------- ----------------------------------------------------------------
Europe/London                                                    LMT
Europe/London                                                    GMT
Europe/London                                                    BST
Europe/London                                                    BDST

Before we start, check the current date
> select sysdate from dual

SYSDATE
--------------------
01-NOV-2011 11:50:15

Before Daylight Savings Time changes (on 29th October 02:00:00)
> alter system set fixed_date = '28-OCT-2011 08:00:00' scope=memory

System altered.

> select systimestamp at time zone 'Europe/London' from dual

SYSTIMESTAMPATTIMEZONE'EUROPE/LONDON'
---------------------------------------------------------------------------
01-NOV-11 11.50.15.666000 EUROPE/LONDON

The timestamp is unaffected by setting FIXED_DATE!!! Oracle! Grrrr!

> select sysdate from dual

SYSDATE
--------------------
28-OCT-2011 08:00:00

The UTC time is correct.

So I need to take the sysdate and transform it into a timestamp at timezone
London should be 1 hour ahead of UTC at this point
> select to_timestamp(sysdate) at time zone 'Europe/London' from dual

TO_TIMESTAMP(SYSDATE)ATTIMEZONE'EUROPE/LONDON'
---------------------------------------------------------------------------
28-OCT-11 09.00.00 EUROPE/LONDON

###########################################################################
Now to roll the time on and re-test
###########################################################################
After Daylight Savings Time changes (on 29th October 02:00:00)
> alter system set fixed_date = '30-OCT-2011 08:00:00' scope=memory

System altered.

> select sysdate from dual

SYSDATE
--------------------
30-OCT-2011 08:00:00

Now Daylight savings should be the same as UTC, not 1 hour ahead
> select to_timestamp(sysdate) at time zone 'Europe/London' from dual

TO_TIMESTAMP(SYSDATE)ATTIMEZONE'EUROPE/LONDON'
---------------------------------------------------------------------------
30-OCT-11 08.00.00 EUROPE/LONDON

Yey! Daylight savings is correct for London.

Remove the fixed_date setting
> alter system set fixed_date=NONE scope=memory

System altered.

And check the date
> select sysdate from dual

SYSDATE
--------------------
01-NOV-2011 11:50:16
------------------------------------------------------------

There you have it. How very mildly annoying. Can’t use TIME ZONE with SYSDATE, can’t use FIXED_DATE with SYSTIMESTAMP.
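
As an aside, a slightly more explicit version of the testing workaround above is to state that SYSDATE is UTC before converting, rather than relying on TO_TIMESTAMP(SYSDATE) and the session settings. A hedged sketch, not what the post itself used:

-- Treat the (UTC) SYSDATE as a UTC timestamp, then convert to London time
select from_tz(cast(sysdate as timestamp), 'UTC') at time zone 'Europe/London'
  from dual;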

Written by Neil Chandler

1st November 2011 at 12:18

Posted in Programming

Dennis Ritchie RIP

leave a comment »

There can be few scientists who have contributed so much to the world as Dennis Ritchie. Completely anonymous to the world at large, and to far too much of the computing fraternity too, his involvement in the development of C (the first truly portable programming language) and Unix cannot be overstated. Who uses Unix? Well, everybody. What is the O/S on embedded devices? What did Steve Jobs base his Mac and mobile operating systems on? Upon what did Bill Gates base the operating system for the new IBM PC in 1980? What do corporations across the world use to power their servers? What is it all built on? Unless you’re running some proprietary mainframe software, the building blocks of what you are using were set, to some degree, by Dennis Ritchie.

Whilst I never met him myself, I did work with his sister, Lynn, for several years at a software house in the North East of England, where she still resides. She told me some of the stories about him; how the invention of C and Unix was really aimed at making computing more portable, more standard, easier to use. How all that Ritchie, Ken Thompson, Brian Kernighan and the others got for this was their salary [although they did make a few quid on top out of their books :-) ], and the ability to distribute Unix, with C, free to all and sundry, because Bell Labs, a subsidiary of AT&T, had no financial interest in computing, being a regulated telephone monopoly at the time.

Thanks Dennis.

Written by Neil Chandler

16th October 2011 at 17:19

Posted in Programming, Unix

