Archive for the ‘Administration’ Category
I was looking at the contents of a tablespace recently – just the extents, not a block dump – with the intention of shrinking a couple of the associated datafiles (don’t ask why – the reason is both stupid and irrelevant). I needed to know just how much I could shrink immediately, and whether there were any quick wins in terms of moving objects from near the end of files so even more space could be reclaimed. I was a little surprised to discover that something I have been doing for over 20 years was not as common knowledge as I thought it was, so I thought I would try to let some more people know how to do this.
So, what’s in a tablespace? It’s made up of a number of datafiles (whether in ASM, RAW or on a journalled file-system is irrelevant for this conversation). Within the datafiles are extents. These are logical groupings of blocks – a multiple of the block size – which show where your objects are stored within the datafile.
I wrote this piece of SQL back in 1992 to show where the extents live. It still works today, regardless of whether your tablespaces are dictionary or locally managed:
SELECT tablespace_name, file_id, owner, segment_name,
       block_id begin, blocks, block_id+blocks-1 end,
       bytes/1024 KB, '' free
  FROM sys.dba_extents
 WHERE tablespace_name = 'USERS'
UNION
SELECT tablespace_name, file_id, '' owner, '' segment_name,
       block_id begin, blocks, block_id+blocks-1 end,
       bytes/1024 KB, 'FREE' free
  FROM sys.dba_free_space
 WHERE tablespace_name = 'USERS'
 ORDER BY 1,2,5;

TSPC   ID OWNER  SEGMENT_NAME  BEGIN BLOCKS   END     KB FREE
USERS   4 SCOTT  DEPT            128      8   135     64
USERS   4 SCOTT  PK_DEPT         136      8   143     64
USERS   4 SCOTT  EMP             144      8   151     64
USERS   4 SCOTT  PK_EMP          152      8   159     64
USERS   4 SCOTT  SALGRADE        160      8   167     64
USERS   4                        168    128   295   1024 FREE
USERS   4 SYS    TABLE_1         296      8   303     64
USERS   4 SYS    TABLE_1         304      8   311     64
USERS   4 SYS    TABLE_1         312      8   319     64
USERS   4 SYS    TABLE_1         320      8   327     64
USERS   4 SYS    TABLE_1         328      8   335     64
USERS   4 SYS    IND_1           336      8   343     64
USERS   4                        344     40   383    320 FREE
USERS   4                        384    128   511   1024 FREE
USERS   4                        512    128   639   1024 FREE
USERS   6 SYS    TABLE_2         128    128   255   1024
USERS   6                        256  12544 12799 100352 FREE
USERS   6 SYS    IND_2         12800    256 13055   2048
So, as we can see from the output above, the tablespace USERS has 2 datafiles; "4" and "6" (you can identify them fully using view DBA_DATA_FILES). We can shrink file 4 immediately by 1024K+1024K+320K = a bit over 2M, since all of that free space sits contiguously at the end of the file.
File 6 is a bit more of a problem. There’s an index IND_2 in the way, unhelpfully located at the end of the datafile, which stops us from shrinking the datafile. If you try to resize the file using ALTER DATABASE DATAFILE ‘+DATA/orcl/…..’ RESIZE 10M, it will fail with the error ORA-03297: file contains used data beyond requested RESIZE value. In this case, the remedy may be as simple as performing an ALTER INDEX IND_2 REBUILD TABLESPACE USERS ONLINE; This will (probably) relocate the index somewhere lower in the file and allow us to shrink the datafile. Moving tables with an ALTER TABLE … MOVE command may prove more difficult depending upon your throughput, due to the extra locks needed at the start and end of the transaction, and extra care needs to be taken with any LOB objects involved.
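To sketch the full sequence for file 6 (the filename and target size here are illustrative, not from the system above):

```sql
-- Fails while IND_2 occupies blocks near the end of the file:
ALTER DATABASE DATAFILE '/u02/oradata/orcl/users02.dbf' RESIZE 10M;
-- ORA-03297: file contains used data beyond requested RESIZE value

-- Rebuild the index; a locally managed tablespace will normally
-- allocate the new extents in the lowest available free space:
ALTER INDEX sys.ind_2 REBUILD TABLESPACE users ONLINE;

-- With the tail of the file now free, the resize should succeed:
ALTER DATABASE DATAFILE '/u02/oradata/orcl/users02.dbf' RESIZE 10M;
```

Re-running the extents query after the rebuild confirms where the index actually landed before you attempt the resize.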
Yesterday I had a problem… I noticed that one of our overnight jobs was failing. The job in question is an ad-hoc tidy-up job, run as needed when a performance metric for a specific query indicates that we’re going to be in trouble soon.
So, what was the problem? I have the stack trace captured in the job output:
Finished shrinks at 02-APR-2012 00:12
Creating job data pump export job LOG_EXPORT
User:....... NEW_SCHEMA
Tables:..... 'table-1','table-2'
DECLARE
*
ERROR at line 1:
ORA-31626: job does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 911
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4356
ORA-06512: at line 22
So, “job does not exist”! What does that mean, and what could be causing this to happen? What has changed? Let’s look at the code at line 22 and see what we’re running:
.
.
v_handle := DBMS_DATAPUMP.OPEN (
              operation => 'EXPORT',
              job_mode  => 'SCHEMA',
              job_name  => 'LOG_EXPORT');
.
.
And that’s it. We’re initiating a DBMS_DATAPUMP job (to extract the data to an external file before manipulation.)
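For context, a fuller version of such a job would go on to attach a dump file and start the export. This is only a sketch (the file name and table names are illustrative), but it shows where the directory object comes into play, which matters later in this story:

```sql
DECLARE
  v_handle NUMBER;
BEGIN
  -- Create the export job (this is the call at line 22 above)
  v_handle := DBMS_DATAPUMP.OPEN(
                operation => 'EXPORT',
                job_mode  => 'SCHEMA',
                job_name  => 'LOG_EXPORT');

  -- This is the step that needs READ/WRITE on a directory object:
  DBMS_DATAPUMP.ADD_FILE(
                handle    => v_handle,
                filename  => 'log_export.dmp',
                directory => 'DATA_PUMP_DIR');

  -- Restrict the export to the tables we care about
  DBMS_DATAPUMP.METADATA_FILTER(
                handle => v_handle,
                name   => 'NAME_EXPR',
                value  => 'IN (''TABLE_1'',''TABLE_2'')');

  DBMS_DATAPUMP.START_JOB(v_handle);
  DBMS_DATAPUMP.DETACH(v_handle);
END;
/
```

Note that the failure above comes from the OPEN call itself, before any of the later steps are reached.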
So, what has changed? Since the last invocation of this job, we have had a new software release, and for this product that means a whole new schema. We’ve missed something. We must have a permissions problem within the database. Fortunately, we keep the old schema in place, locked, following an upgrade, so all we need to do is compare the permissions for the old and new schemas and we’re done.
select granted_role from dba_role_privs where grantee = 'OLD_SCHEMA'
minus
select granted_role from dba_role_privs where grantee = 'NEW_SCHEMA'
/

no rows selected

select privilege from dba_sys_privs where grantee = 'OLD_SCHEMA'
minus
select privilege from dba_sys_privs where grantee = 'NEW_SCHEMA'
/

no rows selected

select table_name, privilege from dba_tab_privs where grantee = 'OLD_SCHEMA'
minus
select table_name, privilege from dba_tab_privs where grantee = 'NEW_SCHEMA'
/

TABLE_NAME                     PRIVILEGE
------------------------------ ----------------------------------------
DATA_PUMP_DIR                  READ
DATA_PUMP_DIR                  WRITE
Ah ha! There’s the difference! And it is definitely data pump related, so let’s run the grants into the database and run the job again. This will fix it.
grant READ on DIRECTORY data_pump_dir to NEW_SCHEMA;
grant WRITE on DIRECTORY data_pump_dir to NEW_SCHEMA;

... and run the job again ...

DECLARE
*
ERROR at line 1:
ORA-31626: job does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 911
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4356
ORA-06512: at line 22
What! Oh nuts. Why didn’t that work? [because you have not assigned a directory to DBMS_DATAPUMP yet! It'll be useful later though... Now stop jumping to conclusions! ed.] What are we going to do now? We know the permissions match. I’ve checked the O/S and there’s space in the filesystem. DBMS_DATAPUMP is not giving me enough information; what can I do to diagnose this problem further? I know! Why don’t I use datapump externally and see if that makes a difference…
expdp NEW_SCHEMA/..pwd..@..DB.. directory=data_pump_dir dumpfile=neilchandler.dmp job_name=neilchandler

Export: Release 10.2.0.3.0 - 64bit Production on Tuesday, 03 April, 2012 12:54:27

Copyright (c) 2003, 2005, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options

ORA-31626: job does not exist
ORA-31633: unable to create master table "NEW_SCHEMA.NEILCHANDLER"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 863
ORA-01536: space quota exceeded for tablespace 'USERS'
Oh heck, we can’t create the master table because we have been denied a space quota on the default tablespace. D’oh! How painful was that to locate?
alter user NEW_SCHEMA quota unlimited on USERS;
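A quicker way to spot this kind of problem up front is to compare the schemas' tablespace quotas too, alongside the role, system and object privilege checks. A sketch (MAX_BYTES of -1 means unlimited; no row at all means no quota):

```sql
select tablespace_name, bytes, max_bytes
  from dba_ts_quotas
 where username = 'NEW_SCHEMA';
```

Had this been part of the old-vs-new comparison, the missing quota would have shown up immediately.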
And it all worked.
My question is, why couldn’t DBMS_DATAPUMP tell me that?
I’ve been spending some time working in Apex recently, building a small app to draw together the monitoring of application and infrastructure components into a single easy-to-visualise tool. As part of that, I wanted to be able to read and report on the alert log. Traditionally, that would have meant creating an external table to point to the alert log and reading it that way, with lots of string manipulation and regular expressions to try to pull out useful bits of information. However, Oracle 11g has made that a lot easier. Step forward X$DBGALERTEXT. This is a decoded, relational presentation of the XML version of the alert log, and as such it provides lots of lovely columns to filter by, rather than a single line of text to decode. Particularly useful (for me) is MESSAGE_LEVEL: is this line of text informational (16), or critical (1), or something in between? Of course, each “normal” line of text is still available in the MESSAGE_TEXT column.
SQL> desc x$dbgalertext
 Name                           Type
 ------------------------------ --------------------------------
 ADDR                           RAW(4)
 INDX                           NUMBER
 INST_ID                        NUMBER
 ORIGINATING_TIMESTAMP          TIMESTAMP(3) WITH TIME ZONE
 NORMALIZED_TIMESTAMP           TIMESTAMP(3) WITH TIME ZONE
 ORGANIZATION_ID                VARCHAR2(64)
 COMPONENT_ID                   VARCHAR2(64)
 HOST_ID                        VARCHAR2(64)
 HOST_ADDRESS                   VARCHAR2(46)
 MESSAGE_TYPE                   NUMBER
 MESSAGE_LEVEL                  NUMBER
 MESSAGE_ID                     VARCHAR2(64)
 MESSAGE_GROUP                  VARCHAR2(64)
 CLIENT_ID                      VARCHAR2(64)
 MODULE_ID                      VARCHAR2(64)
 PROCESS_ID                     VARCHAR2(32)
 THREAD_ID                      VARCHAR2(64)
 USER_ID                        VARCHAR2(64)
 INSTANCE_ID                    VARCHAR2(64)
 DETAILED_LOCATION              VARCHAR2(160)
 PROBLEM_KEY                    VARCHAR2(64)
 UPSTREAM_COMP_ID               VARCHAR2(100)
 DOWNSTREAM_COMP_ID             VARCHAR2(100)
 EXECUTION_CONTEXT_ID           VARCHAR2(100)
 EXECUTION_CONTEXT_SEQUENCE     NUMBER
 ERROR_INSTANCE_ID              NUMBER
 ERROR_INSTANCE_SEQUENCE        NUMBER
 VERSION                        NUMBER
 MESSAGE_TEXT                   VARCHAR2(2048)
 MESSAGE_ARGUMENTS              VARCHAR2(128)
 SUPPLEMENTAL_ATTRIBUTES        VARCHAR2(128)
 SUPPLEMENTAL_DETAILS           VARCHAR2(128)
 PARTITION                      NUMBER
 RECORD_ID                      NUMBER
Very handy. Just add your own view, synonym and permissions to read the view, and you’re away…
create view v_$alert_log as select * from x$dbgalertext;
create public synonym v$alert_log for sys.v_$alert_log;
grant select on v$alert_log to whomever...

1* select message_text from v$alert_log where ...;

MESSAGE_TEXT
-----------------------------------------------------------------
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Shared memory segment for instance monitoring created
Picked latch-free SCN scheme 2
Autotune of undo retention is turned on.
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up:
Oracle Database 11g Enterprise Edition Release 11.2.0.x.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options.
Using parameter settings in server-side pfile /home/oracle/app/oracle/product/11.2.0/dbhome_2/dbs/initorcl.ora
System parameters with non-default values:
  processes            = 200
  sessions             = 322
  sga_max_size         = 2G
  pre_page_sga         = TRUE
  nls_language         = "ENGLISH"
  nls_territory        = "UNITED KINGDOM"
  filesystemio_options = "SetAll"
  sga_target           = 2G
  control_files        = "/u02/oradata/orcl/control01.ctl"
  .
  [snip]
  .
  aq_tm_processes      = 1
  diagnostic_dest      = "/u20/apps/oracle"
PMON started with pid=2, OS id=2492
PSP0 started with pid=3, OS id=2494
VKTM started with pid=4, OS id=2512 at elevated priority
VKTM running at (1)millisec precision with DBRM quantum (100)ms
GEN0 started with pid=5, OS id=2520
DIAG started with pid=6, OS id=2522
...etc...
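The MESSAGE_LEVEL column is what makes this worthwhile for a monitoring app. A sketch of the kind of filter it enables (the severity cut-off and 24-hour window are my own choices, not anything mandated by the view):

```sql
select originating_timestamp, message_level, message_text
  from v$alert_log
 where message_level < 16              -- exclude purely informational lines
   and originating_timestamp > systimestamp - interval '1' day
 order by originating_timestamp;
```

With the traditional external-table approach, pulling out "everything that isn't informational from the last day" would have needed a pile of regular expressions instead of two predicates.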
I have been DBA-ing for a while now, and today I used a “new” command which I have never used in the previous 20+ years I have worked with Oracle: DROP DATABASE. It’s amazing what you miss sometimes!
So, what does it do? As the name implies, it drops the database. That is, it deletes all of your database files. This is significantly safer than using traditional Operating System methods (assuming you even have access and are not using ASM), and for added safety you have to be running the database in RESTRICT MOUNT mode.
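The sequence, as a sketch (and to be absolutely clear: this permanently deletes your datafiles, online redo logs and controlfiles):

```sql
shutdown immediate;
startup mount exclusive restrict;
drop database;
```

If the instance is open, or mounted without RESTRICT, the DROP DATABASE command is refused, which is the safety net referred to above.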
What a lovely command. It goes to all the trouble of locating and removing those troublesome files that another DBA put in the wrong place, files you would probably have missed if cleaning up manually.
It certainly made the pre-production clone script (cloning from the Physical Standby) that I have just written quite a bit easier to code.
Does anybody know if there is any other reason for running the database in RESTRICT MOUNT mode, or was it designed just for this command?
EDIT: I was just informed via Twitter (@chandlerdba) by @pfierens that this also takes out your SPFILE. If you are using an SPFILE (you should be!) and want to keep your db parameters for a db rebuild, I would recommend creating a pfile from the spfile first…
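That is a one-liner before the drop (the destination path here is just an example):

```sql
create pfile='/tmp/orcl_init_backup.ora' from spfile;
```

The resulting text file is outside the database's own file set, so it survives the DROP DATABASE and can seed the rebuild.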
Has anyone ever done an RMAN “drop database including backups” ? You REALLY need to be sure you don’t want it back to run that one!
Well, not mine, but a recent post on the oracle-l mailing list which I thought was worth linking to and repeating. Firstly, because it’s interesting, and secondly, because it shows some good problem solving skills by both the poster and the wider Oracle community.
The poster in question was experiencing a problem whereby his alert log was not updating…
We have an alert.log that was last updated by the database on May 6th. Strangely enough, the log.xml in the alert directory of the diag destination is being updated normally, it is just the plain text alert.log in the trace directory that is not updated. We have bounced the database, changed the diag_destination parameter and I have even grepped all the file descriptors in /proc/*/fd for traces of a possibly opened alert.log - nothing, the alert.log is still not being updated. I tried dbms_system.ksdwrt to force a write to the alert.log - again, the log.xml is updated, the plain text is not.