Oracle Management Database in Grid Infrastructure: -MGMTDB

Oracle,

Why? Why is it now mandatory to have the cluster management database as an Oracle CDB, with a PDB having the same name as the cluster? It's not that I object to having another 1GB of memory lost to this DB, and up to 10GB of disk in the initial ASM Disk Group. It's this:

You have called it -MGMTDB

That means I now have directories all over my Linux / Unix structure called "-MGMTDB". Directories I may want to look in, to view a log to discover why the "-MGMTDB" failed to create on install.

That leading "-", that MINUS. That's the problem. THAT! WHY? I can't use normal commands any more! The shell thinks the "-" is a switch and fails, unless I neutralise it with a "--" prefix.

[oracle@vi-t5-oradev02 X]$ ls -l
total 4
drwxrwxr-x 2 oracle oracle 4096 Jul 20 12:39 -MGMTDB
[oracle@vi-t5-oradev02 X]$ cd -MGMTDB
bash: cd: -M: invalid option
cd: usage: cd [-L|-P] [dir]

[oracle@vi-t5-oradev02 X]$ cd "-MGMTDB"
bash: cd: -M: invalid option
cd: usage: cd [-L|-P] [dir]

[oracle@vi-t5-oradev02 X]$ cd *
bash: cd: -M: invalid option
cd: usage: cd [-L|-P] [dir]

[oracle@vi-t5-oradev02 X]$ cd -- -MGMTDB

Finally!
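For reference, "--" ends option parsing in most commands, but prefixing a relative path works too, and is handy for anything that doesn't honour "--":

# The ./ stops the leading "-" being parsed as a switch
ls -l ./-MGMTDB
cd ./-MGMTDB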

So why wasn't it a plus, like +ASM? That was OK.

Oracle on 4096 (4K) sector disks doesn't work (ish)

I recently came across 4K (4096-byte) sector drives. They are a fairly new thing and have come about so drives can exceed the 2TB limit imposed by having 512-byte sectors. The details behind this are documented elsewhere, in much greater detail than I need to understand.
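If you want to check what a given disk reports, blockdev (part of util-linux) will tell you; the device name here is illustrative:

# Logical and physical sector sizes: 512/512 = classic, 512/4096 = 512e, 4096/4096 = 4Kn
blockdev --getss --getpbsz /dev/sdc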

What I do understand is that Oracle doesn't deal with 4K sectors (4Kn) very well, and it shows up in a couple of ways. Don't get me wrong: from Oracle 11.2, 4Kn databases are supported, albeit with some restrictions. Here are two of them:
 
1. ACFS doesn't like 4K sectors. There's some fudging around identifying physical vs. logical 4K sectors, but you need to check out the ASMLib parameter "ORACLEASM_USE_LOGICAL_BLOCK_SIZE" to see if you can get it to work for you (there's a sketch of this just after point 2).
 
2. I was installing 12.1.0.2.0 Grid Infrastructure – pretty recent, I hear you all say! That only came out in July 2014. One important aspect of 12.1.0.2.0 is that the management database was migrated from being a Berkeley DB to an Oracle single-instance CDB with a single PDB. It's called "-MGMTDB". (This was optional prior to 12.1.0.2.)
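On point 1, a sketch of checking and flipping that ASMLib setting (this assumes you're using ASMLib; -b and -p are the documented switches for this parameter):

# Show the current ASMLib configuration, including ORACLEASM_USE_LOGICAL_BLOCK_SIZE
oracleasm configure
# Present the logical (512-byte) block size to ASM instead of the physical 4K one
oracleasm configure -b
# ...or revert to using the physical sector size
oracleasm configure -p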

However, when installing 12.1.0.2.0 Grid Infrastructure, when it got to the bit at the end, after it's all kind-of fully installed, it creates the -MGMTDB, and if you have 4K sector disks in ASM, it fails rather cryptically:

The key line is the ORA-15173 near the bottom: the spfile entry is missing. It isn't obviously the problem, but it is the cause.

CRS-2674: Start of 'ora.mgmtdb' on 'server01' failed
[Thread-102] [ 2015-07-01 15:35:29.079 BST ] [HADatabaseUtils.start:1240]
Error starting mgmt database in local node, PRCR-1013 : Failed to start resource ora.mgmtdb
PRCR-1064 : Failed to start resource ora.mgmtdb on node server01
CRS-5017: The resource action "ora.mgmtdb start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA_DG/_mgmtdb/spfile-MGMTDB.ora'
ORA-17503: ksfdopn:2 Failed to open file +DATA_DG/_mgmtdb/spfile-MGMTDB.ora
ORA-15056: additional error message
ORA-17503: ksfdopn:2 Failed to open file +DATA_DG/_mgmtdb/spfile-mgmtdb.ora
ORA-15173: entry 'spfile-mgmtdb.ora' does not exist in directory '_mgmtdb'
ORA-06512: at line 4
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/server01/crs/trace/crsd_oraagent_grid.trc".
KJHA:2phase clscrs_flag:840 instSid:
KJHA:2phase ctx 2 clscrs_flag:840 instSid:-MGMTDB
KJHA:2phase clscrs_flag:840 dbname:
KJHA:2phase ctx 2 clscrs_flag:840 dbname:_mgmtdb
KJHA:2phase WARNING!!! Instance:-MGMTDB of kspins type:1 does not support 2 phase CRS

The fundamental problem is that, if you have 4K sector disks and are using ASM, having your SPFILE in ASM doesn't work. This was spotted in 11.2.0.3 (Doc: 16870214.8) but wasn't fixed in 11.2.0.4.0 (it's fixed by 11.2.0.4.6, possibly earlier), and it's not fixed in the 12.1.0.2.0 base release, which means the -MGMTDB will always fail to create. It is fixed by patch set update 12.1.0.2.3 (patch 20485724).

However, you’ve then got a broken -MGMTDB, which you’ll need to recreate: [Doc ID 1589394.1]

## Stop and disable ora.crf resource.
## On each node, as root user:
crsctl stop res ora.crf -init
crsctl modify res ora.crf -attr ENABLED=0 -init
## Set the GI HOME
export GI_HOME=/u01/app/grid/12.1
## As the Grid user, locate the node the Management Database is running on:

$GI_HOME/bin/srvctl status mgmtdb
## On that node, issue the DBCA command to delete the management database
## (the delete command from the same MOS note):
dbca -silent -deleteDatabase -sourceDB -MGMTDB
## Now rebuild the management database.
## As Grid User on any node execute the following DBCA command with the desired <DG Name>:
dbca -silent -createDatabase -sid -MGMTDB -createAsContainerDatabase true -templateName MGMTSeed_Database.dbc -gdbName _mgmtdb -storageType ASM -diskGroupName DATA01 -datafileJarLocation $GI_HOME/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck
## Create a PDB within the MGMTDB using DBCA.
## As Grid User on any node execute the following DBCA command:
## NOTE: The CLUSTER_NAME needs to have any hyphens ("-") replaced with underscores ("_")

dbca -silent -createPluggableDatabase -sourceDB -MGMTDB -pdbName **MY_CLUSTER_NAME_HERE** -createPDBFrom RMANBACKUP -PDBBackUpfile $GI_HOME/assistants/dbca/templates/mgmtseed_pdb.dfb -PDBMetadataFile $GI_HOME/assistants/dbca/templates/mgmtseed_pdb.xml -createAsClone true -internalSkipGIHomeCheck
## Secure the Management Database credentials.
## As the Grid user, confirm the node on which the MGMTDB is running:
$GI_HOME/bin/srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node <NODE_NAME>
## and secure it on that node:
$GI_HOME/bin/mgmtca
## Enable and start ora.crf resource.
## On each node, as root user:

$GI_HOME/bin/crsctl modify res ora.crf -attr ENABLED=1 -init
$GI_HOME/bin/crsctl start res ora.crf -init
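
Finally, a quick sanity check that everything came back (same commands as above):

$GI_HOME/bin/srvctl status mgmtdb
$GI_HOME/bin/crsctl stat res ora.crf -init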

Good luck. And don't use 4K sector sizes. It probably means your spindles are too big anyway. If "disk is cheap", why do they have to keep buying such large-capacity spindles with such low IOPS-per-GB for such huge quantities of money?

 

Locking Privileges in Oracle

What permissions do you need to lock rows on an Oracle table?
What about to lock the whole table?

It’s not quite as much as you may think!

Let's have a couple of users: schema_owner and user1.

SQL> show user
USER is "SYS"
SQL> create user schema_owner identified by schema_owner;
User created.
SQL> grant connect,resource to schema_owner;
Grant succeeded.
SQL> grant unlimited tablespace to schema_owner;
Grant succeeded.
SQL> create user user1 identified by user1;
User created.
SQL> grant create session to user1;
Grant succeeded.

Now for a table and grants

SQL> conn schema_owner/schema_owner
Connected.
SQL> create table tab1 (col1 date, col2 number);
Table created.
SQL> insert into tab1 values (sysdate,1);
1 row created.
SQL> commit;
Commit complete.
SQL> select * from tab1;
COL1      COL2
--------- ----------
14-JUL-15          1
SQL> grant select on tab1 to user1;
Grant succeeded.

So, what can USER1 do with that table?

SQL> conn user1/user1
Connected.
SQL> select * from schema_owner.tab1;
COL1      COL2
--------- ----------
14-JUL-15          1

good

SQL> update schema_owner.tab1 set col2=2 where col2=1;
update schema_owner.tab1 set col2=2 where col2=1
*
ERROR at line 1:
ORA-01031: insufficient privileges

nice

SQL> insert into schema_owner.tab1 values (sysdate,2);
insert into schema_owner.tab1 values (sysdate,2)
*
ERROR at line 1:
ORA-01031: insufficient privileges

yeah

SQL> delete from schema_owner.tab1;
delete from schema_owner.tab1
*
ERROR at line 1:
ORA-01031: insufficient privileges

great

SQL> select * from schema_owner.tab1 for update;
COL1      COL2
--------- ----------
14-JUL-15          1

oh

SQL> lock table schema_owner.tab1 in exclusive mode;
Table(s) Locked.

What?!? Is this real? Has that REALLY locked the entire table with only SELECT permissions? Can I delete from that table from a different session and user which has permissions?

SQL> show user
USER is "SCHEMA_OWNER"
SQL> select * from schema_owner.tab1;
COL1      COL2
--------- ----------
14-JUL-15          1
SQL> delete from schema_owner.tab1;
(no return....)

A quick look in gv$session will show you that USER1 is indeed blocking SCHEMA_OWNER despite only having SELECT privileges on the table:

select .... from gv$session;
CON_ID SID USERNAME        SQL_ID        STATUS   BS_STAT    BL_SID EVENT
------ --- --------------- ------------- -------- ---------- ------ ---------------------------
     3  47 USER1                         INACTIVE NO HOLDER         SQL*Net message from client
     3  55 SCHEMA_OWNER    5n1hw77std3h5 ACTIVE   VALID          47 enq: TM - contention
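
The query itself was elided above; a sketch along these lines (the column aliases are mine, not necessarily the ones used) gives that sort of output:

select con_id, sid, username, sql_id, status,
       blocking_session_status bs_stat, blocking_session bl_sid, event
from   gv$session
where  username in ('USER1','SCHEMA_OWNER');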

SQL> select * from dba_blockers;

HOLDING_SESSION CON_ID
--------------- ------
             47      3

SQL> select * from dba_waiters;

WAITING_SESSION WAITING_CON_ID HOLDING_SESSION HOLDING_CON_ID LOCK_TYPE MODE_HELD  MODE_REQUESTED LOCK_ID1 LOCK_ID2
--------------- -------------- --------------- -------------- --------- ---------- -------------- -------- --------
             55              3              47              3 DML       Exclusive  Row-X (SX)        96178        0

This is a side effect of an Oracle philosophy: "don't do now what you may never need to do". If Oracle can defer an action, such as writing a dirty buffer to disk, or checking whether a session has permission to perform an update when all you have done is request a lock, then it will do it later, if possible.

When you request the lock, Oracle checks that you can access the object (SELECT); but you may never actually try to change the row or table, so it's not yet necessary to check whether you can modify the object…

This is a pretty problematic security hole. In Oracle 12c, a new table privilege has appeared: READ. If we re-run the above with GRANT READ instead of GRANT SELECT…

SQL> show user
USER is "USER1"
SQL> select grantee,privilege from user_tab_privs where table_name = 'TAB1';
GRANTEE              PRIVILEGE
-------------------- ----------
USER1                READ
SQL> select * from schema_owner.tab1;
COL1      COL2
--------- ----------
14-JUL-15          1

ok

SQL> select * from schema_owner.tab1 for update;
select * from schema_owner.tab1 for update
*
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> lock table schema_owner.tab1 in exclusive mode;
lock table schema_owner.tab1 in exclusive mode
*
ERROR at line 1:
ORA-01031: insufficient privileges

That's better!

So the next time someone says “it’s only SELECT permissions”, it’s not. You might want to check out using READ.

Oracle Cluster Health Monitor – changes in 12.1.0.2

From Oracle 12.1.0.2, the Oracle Cluster Health Monitor repository becomes an Oracle database by default [replacing the old Berkeley DB], and it's called "-MGMTDB" (note the leading "-").

cat /etc/oratab

+ASM1:/u01/app/grid:N # line added by Agent
-MGMTDB:/u01/app/grid:N # line added by Agent

It lives on one of the nodes in your RAC cluster and occupies space in the Disk Group provisioned during install.

The DB will take about 750MB of RAM [and 1GB on disk to start with], so even more memory taken on top of the +ASM instance memory. This makes a good case for running the -MGMTDB on one node but using Flex ASM on the other nodes, if you have more than a 2-node cluster. Given 80% of all the RAC installs globally are 2-node clusters… that's not going to help, so you're just losing memory.

It runs regularly, gathering O/S stats and storing them in the DB.

Check out the settings using:

srvctl config mgmtdb

Database unique name: _mgmtdb
Database name:
Oracle home: 
Oracle user: oracle
Spfile: +OCR_DG/_MGMTDB/PARAMETERFILE/spfile.268.884776109
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Type: Management
PDB name: rac12c_cluster
PDB service: rac12c_cluster
Cluster name: rac12c-cluster
Database instance: -MGMTDB

You can switch the CHM process on and off using:

crsctl stop res ora.crf -init
crsctl start res ora.crf -init
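
And you can pull the collected stats back out with oclumon; the time window here is just an example:

# Dump the last 5 minutes of CHM node metrics for all nodes
oclumon dumpnodeview -allnodes -last "00:05:00"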

Oracle 12.1.0.2.0 ACFS on Linux 7 doesn't work

I installed OEL 7.1…

uname -a
Linux rac12c01 3.8.13-55.1.6.el7uek.x86_64

Then I installed Grid Infrastructure and Database 12.1.0.2.0 and looked to configure ACFS for the database files, ready for DB creation. And it won't let me: in asmca, the tabs are greyed out.

Let's just investigate that:

acfsdriverstate supported
ACFS-9459: ADVM/ACFS is not supported on this OS version: 'unknown'

Unknown! Great. This means… patching. After a bit of searching, I need:

Patch 20485724 – Oracle Grid Infrastructure Patch Set Update 12.1.0.2.3 (Apr2015) [p20485724_121020_Linux-x86-64.zip]

So, patch downloaded and onto each node to run:

/u01/app/grid/OPatch/opatchauto apply /u01/sw/20485724 -ocmrf <ocm_response_file>

Oh. An OCM response file is now mandatory for patching, and that's new to me. Off to MOS Doc 966023.1 to find out how to create one!

First I need to download the latest OPatch, as the default install doesn't have the binary I need (emocmrsp). Download and install OPatch patch 6880880 (p6880880_121010_Linux-x86-64.zip) into the grid home.

Now, let's create an OCM response file:

ORACLE_HOME=/u01/app/grid
$ORACLE_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output /u01/sw/grid_ocm.rsp

So let's check the current releases before we start (note the versions and patching level):

/u01/app/grid/OPatch/opatch lsinventory  -oh /u01/app/grid
Oracle Interim Patch Installer version 12.1.0.1.8
Copyright (c) 2015, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/grid/oraInst.loc
OPatch version    : 12.1.0.1.8
OUI version       : 12.1.0.2.0
Log file location : /u01/app/grid/cfgtoollogs/opatch/opatch2015-07-11_17-19-01PM_1.log

Lsinventory Output file location : /u01/app/grid/cfgtoollogs/opatch/lsinv/lsinventory2015-07-11_17-19-01PM.txt

--------------------------------------------------------------------------------
Local Machine Information::
Hostname: rac12c02
ARU platform id: 226
ARU platform description:: Linux x86-64

Installed Top-level Products (1):

Oracle Grid Infrastructure 12c                                       12.1.0.2.0
There are 1 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.

Patch level status of Cluster nodes :

 Patching Level                  Nodes
 --------------                  -----
 0                               rac12c03,rac12c02,rac12c01
--------------------------------------------------------------------------------

And so let’s patch!

/u01/app/grid/OPatch/opatchauto apply /u01/sw/20485724 -ocmrf /u01/sw/grid_ocm.rsp
OPatch Automation Tool
Copyright (c)2014, Oracle Corporation. All rights reserved.

OPatchauto Version : 12.1.0.1.8
OUI Version        : 12.1.0.2.0
Running from       : /u01/app/grid

opatchauto log file: /u01/app/grid/cfgtoollogs/opatchauto/20485724/opatch_gi_2015-07-11_17-10-25_deploy.log

Parameter Validation: Successful

Configuration Validation: Successful

Patch Location: /u01/sw/20485724
Grid Infrastructure Patch(es): 19872484 20299018 20299022 20299023
DB Patch(es): 20299022 20299023

Patch Validation: Successful
Grid Infrastructure home:
/u01/app/grid


Performing prepatch operations on CRS Home... Successful

Applying patch(es) to "/u01/app/grid" ...
Patch "/u01/sw/20485724/19872484" successfully applied to "/u01/app/grid".
Patch "/u01/sw/20485724/20299018" successfully applied to "/u01/app/grid".
Patch "/u01/sw/20485724/20299022" successfully applied to "/u01/app/grid".
Patch "/u01/sw/20485724/20299023" successfully applied to "/u01/app/grid".

Performing postpatch operations on CRS Home... Successful

Apply Summary:
Following patch(es) are successfully installed:
GI Home: /u01/app/grid: 19872484,20299018,20299022,20299023

opatchauto succeeded.

On each node in turn… then a quick check to make sure they are all patched to the same level:

/u01/app/grid/OPatch/opatch lsinventory  -oh /u01/app/grid
Oracle Interim Patch Installer version 12.1.0.1.8
Copyright (c) 2015, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/grid/oraInst.loc
OPatch version    : 12.1.0.1.8
OUI version       : 12.1.0.2.0
Log file location : /u01/app/grid/cfgtoollogs/opatch/opatch2015-07-11_20-34-00PM_1.log

Lsinventory Output file location : /u01/app/grid/cfgtoollogs/opatch/lsinv/lsinventory2015-07-11_20-34-00PM.txt

--------------------------------------------------------------------------------
Local Machine Information::
Hostname: rac12c01
ARU platform id: 226
ARU platform description:: Linux x86-64

Installed Top-level Products (1):

Oracle Grid Infrastructure 12c                                       12.1.0.2.0
There are 1 products installed in this Oracle Home.


Interim patches (4) :

Patch  20299023     : applied on Sat Jul 11 17:33:30 BST 2015
Unique Patch ID:  18703022
Patch description:  "Database Patch Set Update : 12.1.0.2.3 (20299023)"
   Created on 16 Mar 2015, 22:21:54 hrs PST8PDT
Sub-patch  19769480; "Database Patch Set Update : 12.1.0.2.2 (19769480)"
   Bugs fixed:
     19189525, 19065556, 19075256, 19723336, 19077215, 19865345, 18845653
[snip]
     19885321, 19163887, 19820247, 18715868, 18852058, 19538241, 19804032

Patch  20299018     : applied on Sat Jul 11 17:26:39 BST 2015
Unique Patch ID:  18582442
Patch description:  "ACFS Patch Set Update : 12.1.0.2.3 (20299018)"
   Created on 4 Mar 2015, 23:52:42 hrs PST8PDT
   Bugs fixed:
     19452723, 19078259, 19919907, 18900953, 20010980, 19127216, 18934139
[snip]
     18510745, 18915417, 19134464, 19060056, 18955907

Patch  19872484     : applied on Sat Jul 11 17:21:04 BST 2015
Unique Patch ID:  18291456
Patch description:  "WLM Patch Set Update: 12.1.0.2.2 (19872484)"
   Created on 2 Dec 2014, 23:18:41 hrs PST8PDT
   Bugs fixed:
     19016964, 19582630



Patch level status of Cluster nodes :

 Patching Level                  Nodes
 --------------                  -----
 3467666221                      rac12c03,rac12c02,rac12c01

--------------------------------------------------------------------------------

OPatch succeeded.

Well that looks good. All patching is complete; can we use ACFS?

[oracle@rac12c01 ~]$ acfsdriverstate supported
ACFS-9200: Supported
[oracle@rac12c01 ~]$ acfsdriverstate version
ACFS-9325:     Driver OS kernel version = 3.8.13-35.3.1.el7uek.x86_64(x86_64).
ACFS-9326:     Driver Oracle version = 150210.
[oracle@rac12c01 ~]$ acfsdriverstate installed
ACFS-9203: true
[oracle@rac12c01 ~]$ acfsdriverstate loaded
ACFS-9203: true

Sweet.

After creating a DG, creating a volume in the DG and mounting it on all nodes (the commands are sketched after this df output):

df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/ol00-root   20G  1.3G   19G   7% /
devtmpfs               2.2G     0  2.2G   0% /dev
tmpfs                  2.2G  1.3G  992M  56% /dev/shm
tmpfs                  2.2G  8.6M  2.2G   1% /run
tmpfs                  2.2G     0  2.2G   0% /sys/fs/cgroup
/dev/sdb1               52G   30G   23G  58% /u01
/dev/mapper/ol00-home   20G   33M   20G   1% /home
/dev/sda1              497M  140M  357M  29% /boot
/dev/asm/acfs_u02-433   19G  153M   19G   1% /u02
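
For reference, the volume and filesystem steps look roughly like this (disk group name and size are illustrative, and the srvctl syntax is the 12.1 form — check it for your version):

# As the Grid user: create an ADVM volume in an existing disk group
asmcmd volcreate -G DATA -s 10G ACFS_U02
asmcmd volinfo -G DATA ACFS_U02        # note the volume device, e.g. /dev/asm/acfs_u02-433

# As root: create the ACFS filesystem on the volume device (once, from one node)
mkfs -t acfs /dev/asm/acfs_u02-433

# Register and start it with Clusterware so it mounts on all nodes
srvctl add filesystem -device /dev/asm/acfs_u02-433 -path /u02 -user oracle
srvctl start filesystem -device /dev/asm/acfs_u02-433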

Linux Annoying Defaults, and changes when moving to RH/OEL7

So why does Linux have an alias for "ls" which turns on colour by default, making some text impossible to read? Eh?

alias ls='ls --color=auto'

To stop this temporarily, you can “unalias ls”, but to stop it permanently for everyone:

vi /etc/profile.d/colorls.sh

comment out the line:

alias ll='ls -l --color=auto' 2>/dev/null
alias l.='ls -d .* --color=auto' 2>/dev/null
# alias ls='ls --color=auto' 2>/dev/null

And that’s it. Cured for life.
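If you'd rather leave the system default alone and just fix it for one user, an unalias in that user's ~/.bashrc (which runs after the profile scripts) does the same job:

# In ~/.bashrc: drop the distribution's colour alias for this user only
unalias ls 2>/dev/null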

While I'm on, why oh why has so much pointlessly changed between RH/OEL6 and RH/OEL7?

Switching off the firewall is now:

systemctl stop firewalld
systemctl disable firewalld
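
To confirm it's off now and will stay off across reboots:

# "inactive" and "disabled" are what you want to see
systemctl status firewalld
systemctl is-enabled firewalld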

And changing the hostname now has a dedicated command all to itself, instead of just amending /etc/sysconfig/network (which you can still do):

hostnamectl set-hostname new-host-name-here

And what does it do? It creates an /etc/hostname file (and sets the running hostname so you don't need to reboot, which is why you should use this method).
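
You can check the result straight away, no reboot needed:

# Shows the static hostname systemd is now using
hostnamectl status
cat /etc/hostname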

And another thing. Why has the ifconfig command vanished?

ifconfig
-bash: ifconfig: command not found

pifconfig
lo
          inet addr:127.0.0.1   Mask:255.0.0.0
          inet6 addr: ::1/128 Scope: host
          UP LOOPBACK RUNNING

enp0s3    HWaddr 08:00:27:75:c8:1e
          inet addr:192.168.56.200 Bcast:192.168.56.255   Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe75:c81e/64 Scope: link
          UP BROADCAST RUNNING MULTICAST

enp0s8    HWaddr 08:00:27:58:20:01
          inet addr:10.10.0.1 Bcast:10.255.255.255   Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe58:2001/64 Scope: link
          UP BROADCAST RUNNING MULTICAST

or more correctly:

ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 12:00:00:00:00:11 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.201/16 brd 192.168.255.255 scope global enp0s3
    inet 192.168.56.211/24 brd 192.168.56.255 scope global enp0s3:1
    inet6 fe80::1000:ff:fe00:11/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 12:00:00:00:00:21 brd ff:ff:ff:ff:ff:ff
    inet 10.10.0.1/24 brd 10.10.0.255 scope global enp0s8
    inet 169.254.249.91/16 brd 169.254.255.255 scope global enp0s8:1
    inet6 fe80::1000:ff:fe00:21/64 scope link
       valid_lft forever preferred_lft forever

The whole of (the unmaintained) net-tools has been deprecated. No more:

[root@rac12c01 ~]# netstat
-bash: netstat: command not found

We now need to learn to use the iproute2 suite of commands instead:

ifconfig -> ip addr (or ip link - e.g. ip link set arp on)
route    -> ip route
arp      -> ip neighbor (e.g. ip n show)
vconfig  -> ip link
iptunnel -> ip tunnel (add/change/del/show)
brctl    -> bridge
ipmaddr  -> ip maddr
netstat  -> ss (or a bunch of ip commands)
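
For example, the old netstat habit of listing listening TCP sockets translates as:

# netstat -tlnp becomes:
ss -tlnp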

(or you could just yum install net-tools to get the old tools back, but that’s just not the right thing to do, is it)

You might want to yum install lsof and yum install nmap, though. They aren't there by default in OEL7.

And another thing: tmpfs on /tmp being in memory by default. Why? It's too small to be any good for anything, really. To switch it back to being a real filesystem (and get your memory back):

systemctl mask tmp.mount
(outputs) ln -s '/dev/null' '/etc/systemd/system/tmp.mount'
... and reboot

And another thing: why is grep (and egrep and fgrep) now aliased to colourise your search results? To be honest, I happen to agree with this one. Nice new default feature:

alias
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'

cat /etc/passwd  | grep oracle
oracle:x:500:500::/home/oracle:/bin/bash

More mini-rants will appear in this blog post as I fall across/remember the issues.

PC Pro Oracle DBA Article

I was interviewed recently by UK publication PC Pro magazine for their regular series about IT careers, with regard to database administration (surprise!). The (somewhat abridged!) interview has just appeared in the August 2015 edition of the magazine. I cannot link to the on-line version of the article, so here's my hopefully readable cropped photograph of it.

PC Pro also has an on-line sister publication, Alphr, which is worth a look.

[Cropped photograph of the PC Pro article: Neil Chandler, DBA]
