Thursday 30 April 2015

SQL30082N Security processing failed with reason "24" ("USERNAME AND/OR PASSWORD INVALID")

One of the users reported SQL30082N, RC=24 on a DB2 LUW V10.1 database.

He was sure that the password supplied was correct, that it had not expired, and that the account had no failed login count either.

DB: DB2 LUW
version: 10.1, 9.7
OS: AIX

Users whose passwords contain special characters such as !, @, #, or $ may face RC=24.


Set the registry variable below to fix the issue.

db2set DB2AUTH=OSAUTHDB
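
Note that DB2 registry variable changes generally take effect only after the instance is recycled, so as the instance owner you would typically follow the db2set with a verify and a restart, roughly like this:

db2set -all
db2stop
db2start

db2set -all simply lists the registry variables so you can confirm DB2AUTH is set before bouncing the instance.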

Some interesting resources.

DB2 users receive SQL30082N, although OS authentication is possible.
http://www-01.ibm.com/support/docview.wss?uid=swg21651924

Password Validation Messages in db2diag.log
http://www-01.ibm.com/support/docview.wss?uid=swg21623221 

Thursday 23 April 2015

addNode.sh failed with PRCF-2023 while transferring files between RAC nodes

ORACLE GRID 11.2.0.4, Database 11.2.0.3

addNode.sh failed with the error below:
./addNode.sh "CLUSTER_NEW_NODES={d002}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={d002-vip}"

PRCF-2023 : The following contents are not transferred as they are non-readable.
Directories:

   1) /u01/app/11.2.0.4/grid/home_2/crs/utl

This directory is owned by root with permissions 740, whereas addNode.sh is being run as the 'grid' user. Hence I modified the permissions with chmod -R o+r utl.
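
For reference, the check and the fix looked roughly like this as root (path taken from the PRCF-2023 message above):

ls -ld /u01/app/11.2.0.4/grid/home_2/crs/utl
chmod -R o+r /u01/app/11.2.0.4/grid/home_2/crs/utl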

The utl directory contains four files that need 755 permissions. Make sure these permissions exist (a one-line fix follows the listing below).
-rwxr-xr-x 1 root root  4555 Apr 22 07:40 rootaddnode.sh
-rwxr-xr-x 1 root root  5190 Apr 22 07:40 rootdeinstall.sh
-rwxr-xr-x 1 root root  5992 Apr 22 07:40 rootdelete.sh
-rwxr-xr-x 1 root root  1954 Apr 22 07:40 rootdeletenode.sh
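
If the execute bits are missing, something along these lines as root should restore them (file names taken from the listing above):

chmod 755 /u01/app/11.2.0.4/grid/home_2/crs/utl/root*.sh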

After this change, addNode.sh completed successfully.
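
As a quick sanity check (not part of my original notes), Oracle's cluster verification utility can confirm the node addition; run it as the grid user from an existing node:

cluvfy stage -post nodeadd -n d002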

root.sh failed with "clscfg: Error in retrieving own node information" while adding a node to the RAC cluster

Oracle Grid 11.2.0.4, Database 11.2.0.3

All cluster-related files on this node were accidentally deleted, so I had to remove the node from the cluster and add it back.

While running root.sh on this node, the log showed the error message below.
---------------------------------------------------------------------
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0.4/grid/home_2
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.4/grid/home_2/crs/install/crsconfig_params
Installing Trace File Analyzer
Unable to retrieve local node number 1.
Internal Error Information:
  Category: 0
  Operation:
  Location:
  Other:
  Dep: 0
clscfg: Error in retrieving own node information
/u01/app/11.2.0.4/grid/home_2/bin/clscfg -add failed
/u01/app/11.2.0.4/grid/home_2/perl/bin/perl -I/u01/app/11.2.0.4/grid/home_2/perl/lib -I/u01/app/11.2.0.4/grid/home_2/crs/install /u01/app/11.2.0.4/grid/home_2/crs/install/rootcrs.pl execution failed


I tried a lot of options, but all of them failed. In the end I tried to deconfigure Clusterware on the node; however, this was not successful.
-------------------------------------------------------------------------
[root@ ~]# /u01/app/11.2.0.4/grid/home_2/crs/install/rootcrs.pl -deconfig
Using configuration parameter file: /u01/app/11.2.0.4/grid/home_2/crs/install/crsconfig_params
Oracle Clusterware stack is not active on this node
Restart the clusterware stack (use /u01/app/11.2.0.4/grid/home_2/bin/crsctl start crs) and retry
Failed to verify resources

This time the same command with the -force option did the trick.
------------------------------------------------------------------------------
[root@ ~]# /u01/app/11.2.0.4/grid/home_2/crs/install/rootcrs.pl -deconfig -force
Using configuration parameter file: /u01/app/11.2.0.4/grid/home_2/crs/install/crsconfig_params
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd

CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4544: Unable to connect to OHAS
CRS-4000: Command Stop failed, or completed with errors.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node

Then I executed orainstRoot.sh once again.
---------------------------------------------------------------------------------
[root ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

This time root.sh was successful.
----------------------------------------------------------------------------
[root ~]# /u01/app/11.2.0.4/grid/home_2/root.sh
Check /u01/app/11.2.0.4/grid/home_2/install/root_d002_2015-04-23_14-57-51.log for the output of root script

[root ~]# tail -100f /u01/app/11.2.0.4/grid/home_2/install/root_d002_2015-04-23_14-57-51.log
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0.4/grid/home_2
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.4/grid/home_2/crs/install/crsconfig_params
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node d001, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1

Configure Oracle Grid Infrastructure for a Cluster ... succeeded
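
To confirm the node rejoined the cluster cleanly, my usual follow-up (not shown in the log above) is to check the stack and resource status, for example:

/u01/app/11.2.0.4/grid/home_2/bin/crsctl check cluster -all
/u01/app/11.2.0.4/grid/home_2/bin/crsctl stat res -t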