Tuesday, March 31, 2020

How to Send an RMAN Backup to Oracle Cloud Object Storage from Windows


The goal of this exercise is to send an Oracle database backup to Oracle Cloud Object Storage.

The scenario is the following:
  • Windows Server 2012 R2 (hostname ORCL-SRV01)
  • Oracle Database EE 12c (db_name ORCL)
  • Oracle Cloud Object Storage (bucket name ORCL)

Configuring Oracle Cloud Object Storage

The first step is to create the bucket that will store the backup files. In a previous article we learned how to create this object.

For this exercise we will use the previously created bucket named ORCL with the following details:



Once our bucket is created, we will configure access from the server to it.

Configuring the API Keys

On Windows we can use any command-line emulator; for this exercise we used Git Bash for Windows.

Once the software is installed, we generate our keys as follows:


winpty openssl genrsa -out ORCL-SRV01_oci_api_key.pem 2048
winpty openssl rsa -pubout -in ORCL-SRV01_oci_api_key.pem -out ORCL-SRV01_oci_api_key_public.pem


Once the keys are generated, the next step is to register them in our Oracle Cloud configuration; we will also extract the OCIDs (Oracle Cloud IDs) of the objects needed to send our backup.

Select the Profile option in the top right corner and click the User Settings menu:


On the User Settings page, select the user you want to configure with the keys generated in the previous step; in my case I had already created a user called rman-service:



Inside the user, click the Add Public Key button and, in the screen displayed, copy and paste the public key generated before, which we wrote to the public .pem file.





Once this is done, take note of the following data:
  • OCID of the rman-service user
  • Fingerprint generated after adding the public key (it can also be checked locally, as shown below)
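
To verify that the fingerprint shown in the console matches our key pair, a minimal local check (assuming the private key file generated earlier) is:

openssl rsa -pubout -outform DER -in ORCL-SRV01_oci_api_key.pem | openssl md5 -c

The colon-separated MD5 digest printed should match the fingerprint displayed next to the key in User Settings.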



We will also need the OCID of our Oracle Cloud tenancy; we can get it by going to the top right corner and clicking Tenancy: <nombre>




So far we have:
  • OCID of the user
  • Fingerprint of the user
  • OCID of the tenancy

Installing the Oracle Database Cloud Backup Module for OCI on Windows

For this configuration step we will need a Java runtime and the Oracle Database Cloud Backup Module installer (oci_install.jar).
Once the prerequisites are ready, we run the following command using the data gathered in the previous steps:

java -jar oci_install.jar -host https://objectstorage.eu-frankfurt-1.oraclecloud.com ^
-pvtKeyFile D:\keys\OCI\APIKEY\ORCL-SRV01_oci_api_key.pem ^
-pubFingerPrint 25:a4:fa:98:19:e1:49:2c:64:a7:56:88:53:85:67:46 ^
-uOCID ocid1.user.oc1..aaaaaaaakukwjzadgha22j2hqyp7xw66stbnusjjol7suevp7msa ^
-tOCID ocid1.tenancy.oc1..aaaaaaaa4oeek44i4wef5kuqzcluxwucjyirkuljnss5a ^
-walletDir D:\app\oracle\product\12.0.0\database\opc_wallet ^
-libDir D:\app\oracle\product\12.0.0\database\lib ^
-bucket ORCL ^
-configFile D:\app\oracle\product\12.0.0\db\config.ora


The parameters are the following:
  • host: the Object Storage endpoint URL; it depends on the region where our Oracle Cloud tenancy is located.
  • pvtKeyFile: path to the private key we generated earlier.
  • pubFingerPrint: the fingerprint we took from the rman-service user.
  • uOCID: OCID of the rman-service user.
  • tOCID: OCID of our tenancy.
  • walletDir: directory where an auto-login wallet will be created to store the security configuration for accessing the bucket.
  • libDir: directory where the library needed to establish the connection from RMAN (libopc.dll on Windows, libopc.so on Linux) will be downloaded.
  • configFile: configuration file that stores the paths to the wallet and the downloaded library (a sample is shown below).
  • bucket: name of our bucket.
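
As a reference, after a successful installation the generated config.ora typically contains entries along these lines (a sketch; the exact values depend on your tenancy and paths, and the credential alias is installer-generated):

OPC_HOST=https://objectstorage.eu-frankfurt-1.oraclecloud.com/v1/<tenancy-namespace>
OPC_WALLET='LOCATION=file:D:\app\oracle\product\12.0.0\database\opc_wallet CREDENTIAL_ALIAS=alias_oci'
OPC_CONTAINER=ORCL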

The output of this command should complete without errors:




Configuring RMAN

Note: in this workflow the RMAN backup does not go straight to the bucket in the cloud; it must first be written to physical storage on our server, and from there we can back up the previously created backupsets to the cloud.

We are almost done; the first step is to change the RMAN configuration as follows:

rman target /
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=D:\app\oracle\product\12.0.0\database\lib\libopc.dll, SBT_PARMS=(OPC_PFILE=D:\app\oracle\product\12.0.0\db\config.ora)';
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F';
CONFIGURE ENCRYPTION FOR DATABASE ON;
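
Before sending real backups, a quick way to validate the new channel is to push a small backup through the SBT device; a minimal sketch, reusing the same paths as above:

run {
allocate channel t1 device type sbt PARMS 'SBT_LIBRARY=D:\app\oracle\product\12.0.0\database\lib\libopc.dll, SBT_PARMS=(OPC_PFILE=D:\app\oracle\product\12.0.0\db\config.ora)';
backup current controlfile;
}

If the channel cannot be allocated (for example, ORA-27211: Failed to load Media Management Library), recheck the library and config.ora paths.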

Sending a Backup to the Cloud

A good practice is to create our backups encrypted. Although this is not a required step, remember that by default the data inside an Object Storage bucket is encrypted at rest.
The first step is to create a backup on physical storage within our server:

rman target /
SET ENCRYPTION ON IDENTIFIED BY "dsk45opT4dihg_0f_XXXXXXds7" ONLY;
BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
BACKUP INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG;
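
To preview which backupsets the BACKUP BACKUPSET command in the next step will pick up, we can list what finished in the last day:

LIST BACKUP COMPLETED AFTER 'SYSDATE-1';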


Finally, we send our backup to the bucket as follows:

rman target /
SET ENCRYPTION ON IDENTIFIED BY "dsk45opT4dihg_0f_XXXXXXds7" ONLY;
run {
allocate channel sbt1 device type sbt PARMS 'SBT_LIBRARY=D:\app\oracle\product\12.0.0\database\lib\libopc.dll, SBT_PARMS=(OPC_PFILE=D:\app\oracle\product\12.0.0\db\config.ora)';
backup backupset completed after 'sysdate-1';
}
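
When the run block completes, we can confirm from RMAN which pieces now exist (the ones written through the sbt1 channel are the copies sent to the bucket):

LIST BACKUP SUMMARY;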

When the operation finishes, the files in our bucket will look similar to this:




Monday, March 30, 2020

How to Store Files in Oracle Cloud Object Storage

Object Storage is a cloud storage service for a wide spectrum of applications.

Among its features we find:
  • Access security through Oracle Cloud Infrastructure Identity and Access Management.
  • Self-healing: corrupt data is detected automatically and restored from redundant copies.
  • Server-side encryption: all stored data is encrypted on the server side using the AES-256 encryption algorithm by default.
In addition, being an object within Cloud Infrastructure, it inherits these characteristics:
  • Auto scaling: we can increase the storage space on demand and without limits.
  • Convenient options for accessing data over the internet, through a set of APIs available for all platforms.
  • Immediate access once the bucket is created and the data is uploaded.
Next, we will see how to create an Object Storage bucket.

As a first step, we must sign in to Oracle Cloud and go to the Infrastructure Dashboard.


From the menu in the top left corner we select the option Object Storage -> Object Storage:



We click the Create Bucket button. To be able to do this, we must previously have created a Compartment. A compartment is the container for all our objects in Oracle Cloud; by default, we can create a new compartment with the name of our login.




The form for creating a bucket is fairly simple. These are the fields requested:




  • BUCKET NAME: the name of our bucket; it cannot be modified once set.
  • STORAGE TIER: how the data will be stored. Standard: for data that is accessed regularly. Archive: for data that will not be accessed regularly; retrieving it is slower, since this tier is meant for historical, long-term storage.
  • OBJECT EVENTS: enables the Oracle Cloud Events Service, which notifies us of any change to our objects.
  • ENCRYPTION: by default Oracle Cloud managed keys are used; we can choose to use our own keys.
  • TAGS: tags are labels that help us identify our objects.

With these simple steps we have created an empty bucket that we can start filling with any type of object from our applications:
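
Besides the web console, the bucket can also be filled from the command line; a minimal sketch, assuming the OCI CLI is installed and configured with valid credentials:

oci os object put --bucket-name ORCL --file test-file.txt
oci os object list --bucket-name ORCL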




In a following article we will learn how to establish a connection to store backups in our bucket.

Note: as I mentioned at the beginning, this is not the only use for Object Storage, but it is a very good one.

Monday, March 23, 2020

Client-Side Load Balancing vs SCAN

Some days ago I was reviewing a problem in a 2-node Oracle RAC 12c, where one instance had many more sessions than instance 2. Clearly, there was a problem with the load balancing.

Bad load balancing can be caused mainly by:

  • Not using SCAN
  • Using Client-Side Load Balancing 
  • Bad configuration of the local_listener, remote_listener or listener_networks in each instance
  • Bad configuration of the SCAN and its listeners/VIPs
  • Configuration of the "Connection Load Balancing Goal" and "Runtime Load Balancing Goal" parameters in the Service
  • Configuration of the Service (running on some nodes only)
  • High Load in one (or some) node(s).


In another article I will discuss each of these causes, but in this blog post I will limit the content to comparing Client-Side Load Balancing and SCAN. Client-Side Load Balancing is not actually a true balancing method; it just picks the address for the session at random. The Oracle documentation defines Client-Side Load Balancing as:

"Client-side load balancing is defined in your client connection definition (tnsnames.ora file, for example) by setting the parameter LOAD_BALANCE=ON. When you set this parameter to ON, Oracle Database randomly selects an address in the address list, and connects to that node's listener."
In this example I configured tnsnames.ora to use Client-Side Load Balancing:

orcl_cslb =
   (DESCRIPTION =
    (LOAD_BALANCE=ON)
     (ADDRESS = (PROTOCOL = TCP)(HOST =rac1-vip)(PORT = 1521))
     (ADDRESS = (PROTOCOL = TCP)(HOST =rac2-vip)(PORT = 1521))
    (CONNECT_DATA =
       (SERVER = DEDICATED)
       (SERVICE_NAME = orcl)
     ))
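
To generate the test load I opened a batch of concurrent sessions through this alias. A minimal sketch of such a driver, assuming the DGOMEZ user exists and has EXECUTE on DBMS_LOCK so each session stays alive long enough to be counted:

for i in $(seq 1 300); do
  echo "exec dbms_lock.sleep(60);" | sqlplus -s dgomez/dgomez@orcl_cslb &
done
wait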

The sessions will be created randomly between rac1-vip and rac2-vip; the results are the following:

Test 1:
select inst_id, count(*) from gv$session where username='DGOMEZ' group by inst_id

   INST_ID   COUNT(*)
---------- ----------
         1        143
         2        157
Test 2:
select inst_id, count(*) from gv$session where username='DGOMEZ' group by inst_id

   INST_ID   COUNT(*)
---------- ----------
         1        162
         2        138
Test 3:
select inst_id, count(*) from gv$session where username='DGOMEZ' group by inst_id

   INST_ID   COUNT(*)
---------- ----------
         1        158
         2        142
Test 4:
SQL> select inst_id, count(*) from gv$session where username='DGOMEZ' group by inst_id

   INST_ID   COUNT(*)
---------- ----------
         1        155
         2        145

Result: I used only 300 sessions for each test. It is clear that Client-Side Load Balancing (random selection) creates sessions reasonably close to an even distribution, but there is still some margin of error.


Now I will replicate a curious configuration I saw at a customer, where the address list contained the 2 VIPs of the SCAN plus the VIP of each node; this seems to be a combination of Client-Side Load Balancing + SCAN.

Entry in tnsnames.ora:

orcl_cslb =
   (DESCRIPTION =
    (LOAD_BALANCE=ON)
     (ADDRESS = (PROTOCOL = TCP)(HOST =192.178.5.35)(PORT = 1522))
     (ADDRESS = (PROTOCOL = TCP)(HOST =192.178.5.36)(PORT = 1522))
     (ADDRESS = (PROTOCOL = TCP)(HOST =rac1-vip)(PORT = 1521))
     (ADDRESS = (PROTOCOL = TCP)(HOST =rac2-vip)(PORT = 1521))
    (CONNECT_DATA =
       (SERVER = DEDICATED)
       (SERVICE_NAME = orcl)
     ))

The result is this:
Test 1:  
SQL> select inst_id, count(*) from gv$session where username='DGOMEZ' group by inst_id

   INST_ID   COUNT(*)
---------- ----------
         1        186
         2        114
Test 2:
SQL> select inst_id, count(*) from gv$session where username='DGOMEZ' group by inst_id

   INST_ID   COUNT(*)
---------- ----------
         1        149
         2        151
Test 3:
SQL> select inst_id, count(*) from gv$session where username='DGOMEZ' group by inst_id

   INST_ID   COUNT(*)
---------- ----------
         1        193
         2        107
Test 4:
SQL> select inst_id, count(*) from gv$session where username='DGOMEZ' group by inst_id

   INST_ID   COUNT(*)
---------- ----------
         1        176
         2        124

Result: The result is even worse than using Client-Side Load Balancing alone. Sometimes less is more; perhaps whoever configured this was trying to build a super load-balancing algorithm :)


Finally, let's test how SCAN works.

Entry in tnsnames.ora:

orcl_cslb =
   (DESCRIPTION =
     (ADDRESS = (PROTOCOL = TCP)(HOST =rac-scan)(PORT = 1521))
    (CONNECT_DATA =
       (SERVER = DEDICATED)
       (SERVICE_NAME = orcl)
     ))

 Test 1:
SQL> select inst_id, count(*) from gv$session where username='DGOMEZ' group by inst_id

   INST_ID   COUNT(*)
---------- ----------
         1        148
         2        152
 Test 2:
SQL> select inst_id, count(*) from gv$session where username='DGOMEZ' group by inst_id

   INST_ID   COUNT(*)
---------- ----------
         1        150
         2        150
 Test 3:  
SQL> select inst_id, count(*) from gv$session where username='DGOMEZ' group by inst_id

   INST_ID   COUNT(*)
---------- ----------
         1        148
         2        152

Result: The result was almost perfect, with a very small margin of error, and in one case it was even exact (150-150).


Sunday, March 22, 2020

Error SLOS depend-msg [No such file or directory] when installing RAC in LXC Containers

I was testing the installation of Oracle 19c in LXC Containers. A while ago I was able to install Oracle RAC 12.1.0.2 and RAC 12.2.0.1 in LXC Containers, so I thought Oracle RAC 19c would be the same story, but...


During the execution of "root.sh" I was getting the following errors:

2019-11-27 10:19:59.203 :    CSSD:2389941504: [     INFO] clssscInitGlobalCTX: PERF_TIME started CLSFA for Flex
2019-11-27 10:19:59.203 :    CSSD:2389941504: [     INFO] Starting CSS daemon in exclusive mode with a role of hub
2019-11-27 10:19:59.205 :    GPNP:2389941504: clsgpnp_Init: [at clsgpnp0.c:708] '/u01/app/19.3/grid' in effect as GPnP home base.
2019-11-27 10:19:59.205 :    GPNP:2389941504: clsgpnp_Init: [at clsgpnp0.c:774] GPnP pid=37618, cli=clsuGpnpg GPNP comp tracelevel=1, depcomp tracelevel=0, tlsrc:init, apitl:0, tstenv:0, devenv:0, envopt:0, flags=2003
2019-11-27 10:19:59.214 :    GPNP:2389941504: clsgpnpkwf_initwfloc: [at clsgpnpkwf.c:404] Using FS Wallet Location : /u01/app/19.3/grid/gpnp/rac235test8/wallets/peer/

2019-11-27 10:19:59.214 :    GPNP:2389941504: clsgpnpkwf_initwfloc: [at clsgpnpkwf.c:416] Wallet readable. Path: /u01/app/19.3/grid/gpnp/rac235test8/wallets/peer/

2019-11-27 10:19:59.287 :    CSSD:2389941504: [     INFO] clssscInitGlobalCTX: Environment is production
2019-11-27 10:19:59.287 :    CSSD:2389941504: [     INFO] (:CLSN00143:)clssscInitGlobalCTX: CSSD process cannot get real-timepriority
2019-11-27 10:19:59.288 :    CSSD:2389941504: [     INFO] clsssc_logose: slos [-2], SLOS depend-msg [No such file or directory], SLOS error-msg [2]
2019-11-27 10:19:59.288 :    CSSD:2389941504: [     INFO] clsssc_logose: SLOS other info is [process is not running in real-time. rc = 0].

2019-11-27 10:19:59.288 :    CSSD:2389941504: [     INFO] (:CLSN00143:)clssscInitGlobalCTX: set priority system call had failed
2019-11-27 10:19:59.288 :    CSSD:2389941504: [     INFO] (:CLSN00143:)clssscInitGlobalCTX: set priority system call had failed calling clssscExit
2019-11-27 10:19:59.288 :    CSSD:2389941504: [     INFO] (:CSSSC00011:)clssscExit: A fatal error occurred during initialization
2019-11-27 10:19:59.288 :    CSSD:2389941504: [     INFO] clssscagSendNLSToAgent: Sending msg id 1730, size 56, product CRS, facility CRS, to agent
2019-11-27 10:19:59.773 :    CSSD:2230044416: [     INFO] clssscagSelect: endpoint(0x290) authenticated with user(root)
2019-11-27 10:19:59.773 :    CSSD:2230044416: [     INFO] clssscagProcessInitialMsg: Handshake successful with agent 1


Some other errors, from ocss.log:

 CRS-4000: Command Start failed, or completed with errors.
>End Command output
2019-11-20 15:54:03: The exlusive mode cluster start failed, see Clusterware alert log for more information
2019-11-20 15:54:03: Executing cmd: /u01/app/19.3/grid/bin/clsecho -p has -f clsrsc -m 119
2019-11-20 15:54:03: Executing cmd: /u01/app/19.3/grid/bin/clsecho -p has -f clsrsc -m 119
2019-11-20 15:54:03: Command output:
>  CLSRSC-119: Start of the exclusive mode cluster failed
>End Command output
2019-11-20 15:54:03: CLSRSC-119: Start of the exclusive mode cluster failed
2019-11-20 15:54:03: ###### Begin DIE Stack Trace ######
2019-11-20 15:54:03:     Package         File                 Line Calling
2019-11-20 15:54:03:     --------------- -------------------- ---- ----------
2019-11-20 15:54:03:  1: main            rootcrs.pl            355 crsutils::dietrap
2019-11-20 15:54:03:  2: crsinstall      crsinstall.pm        2439 main::__ANON__
2019-11-20 15:54:03:  3: crsinstall      crsinstall.pm        2334 crsinstall::perform_initial_config
2019-11-20 15:54:03:  4: crsinstall      crsinstall.pm        1026 crsinstall::perform_init_config
2019-11-20 15:54:03:  5: crsinstall      crsinstall.pm        1184 crsinstall::init_config
2019-11-20 15:54:03:  6: crsinstall      crsinstall.pm         446 crsinstall::CRSInstall
2019-11-20 15:54:03:  7: main            rootcrs.pl            552 crsinstall::new
2019-11-20 15:54:03: ####### End DIE Stack Trace #######

2019-11-20 15:54:03: ROOTCRS_BOOTCFG checkpoint has failed
2019-11-20 15:54:03:      ckpt: -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_BOOTCFG
2019-11-20 15:54:03: Invoking "/u01/app/19.3/grid/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_BOOTCFG"
2019-11-20 15:54:03: trace file=/u01/app/grid/crsdata/rac235test8/crsconfig/cluutil10.log
2019-11-20 15:54:03: Running as user oracle: /u01/app/19.3/grid/bin/cluutil -ckpt -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_BOOTCFG
2019-11-20 15:54:03: Removing file /tmp/2Kp4_6j0kW
2019-11-20 15:54:03: Successfully removed file: /tmp/2Kp4_6j0kW
2019-11-20 15:54:03: pipe exit code: 0

2019-11-20 15:54:03: /bin/su successfully executed


I remember that when I was installing Oracle 12.1.0.2 I had exactly the same errors. Fortunately, at that time Oracle registered it as a bug and fixed the problem in PSU 12.1.0.2.160719.
The fix was also included in Oracle RAC 12.2.0.1, which is why I didn't hit the problem with that version. However, it seems Oracle forgot to include this fix in 19c (I didn't test 18c).

The problem is related to the container configuration itself, and after some investigation it seems the following parameters helped to fix the problem:


# In the container's LXC configuration file: give the container a real-time
# CPU budget and drop only these capabilities (notably keeping sys_nice,
# which is needed to set real-time priorities)
lxc.cgroup.cpu.rt_runtime_us = 950000
lxc.cap.drop = mac_admin mac_override sys_module sys_rawio

# Inside the container: install and enable time synchronization
yum install chrony
systemctl enable chronyd.service
systemctl start chronyd.service
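
After restarting the container, a quick way to confirm that processes inside it can obtain real-time priority (exactly what CSSD failed to do above) is:

# Try to run a command under the SCHED_FIFO real-time class;
# "Operation not permitted" means the container still has no RT budget
chrt -f 10 true && echo "real-time scheduling OK"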



How to Create a Second SCAN in Oracle RAC

In this article I will show how to create a second SCAN that can be used for a dedicated and isolated purpose.

The initial configuration of the RAC that I am using for this example is the following:

[root@rac1 ~]# srvctl config network
Network 1 exists
Subnet IPv4: 192.168.1.0/255.255.255.0/, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
[root@rac1 ~]#

[root@rac1 ~]# srvctl config scan -all
SCAN name: rac-scan, Network: 1
Subnet IPv4: 192.168.1.0/255.255.255.0/, static
Subnet IPv6:
SCAN 1 IPv4 VIP: 192.168.1.15
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
[root@rac1 ~]#

[root@rac1 ~]# srvctl config scan_listener -all
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
[root@rac1 ~]#

[root@rac1 ~]# srvctl config listener -all
Name: LISTENER
Type: Database Listener
Network: 1, Owner: oracle
Home: <CRS home>
  /u01/app/12.2.0.1/grid on node(s) rac2,rac1
End points: TCP:1521
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:
[root@rac1 ~]#

For this example I am using /etc/hosts for all IPs, but you can adapt it to use DNS. The /etc/hosts file already contains the IPs for the second SCAN:

[root@rac1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.16 rac2
192.168.1.14 rac2-vip
192.178.2.15 rac2-priv
192.168.1.15 rac-scan
192.178.5.35 rac-scan2
192.178.5.36 rac-scan2

192.168.1.17 rac1
192.168.1.19 rac1-vip
192.178.2.18 rac1-priv


The first step is to create a new network:

[root@rac1 ~]# srvctl add network -netnum 2 -subnet 192.178.0.0/255.255.0.0
[root@rac1 ~]#  srvctl config network
Network 1 exists
Subnet IPv4: 192.168.1.0/255.255.255.0/, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
Network 2 exists
Subnet IPv4: 192.178.0.0/255.255.0.0/, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
[root@rac1 ~]#

Adding a second SCAN:

[root@rac1 ~]#  srvctl add scan -scanname rac-scan2 -netnum 2
[root@rac1 ~]# srvctl config scan -all
SCAN name: rac-scan, Network: 1
Subnet IPv4: 192.168.1.0/255.255.255.0/, static
Subnet IPv6:
SCAN 1 IPv4 VIP: 192.168.1.15
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN name: rac-scan2, Network: 2
Subnet IPv4: 192.178.0.0/255.255.0.0/, static
Subnet IPv6:
SCAN 1 IPv4 VIP: 192.178.5.35
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 2 IPv4 VIP: 192.178.5.36
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
[root@rac1 ~]#

Adding scan_listener:

[root@rac1 ~]# srvctl add scan_listener -listener LISTENER2 -netnum 2 -endpoints TCP:1522
[root@rac1 ~]# srvctl config scan_listener -all
SCAN Listener LISTENER2_SCAN1_NET2 exists. Port: TCP:1522
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
SCAN Listener LISTENER2_SCAN2_NET2 exists. Port: TCP:1522
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
[root@rac1 ~]#

Starting the SCAN VIPs and the SCAN listener:

[root@rac1 ~]# srvctl start scan -netnum 2
[root@rac1 ~]# srvctl start scan_listener -netnum 2
[root@rac1 ~]# srvctl status scan -all
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node rac1
SCAN VIP scan1_net2 is enabled
SCAN VIP scan1_net2 is running on node rac2
SCAN VIP scan2_net2 is enabled
SCAN VIP scan2_net2 is running on node rac1
[root@rac1 ~]#
[root@rac1 ~]# srvctl status scan_listener -all
SCAN Listener LISTENER2_SCAN1_NET2 is enabled
SCAN listener LISTENER2_SCAN1_NET2 is running on node rac1
SCAN Listener LISTENER2_SCAN2_NET2 is enabled
SCAN listener LISTENER2_SCAN2_NET2 is running on node rac1
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node rac2
[root@rac1 ~]#

The second SCAN has been created. Now I will create a new local listener so that the database can balance connections between network 1 (SCAN 1) and network 2 (SCAN 2) across the two RAC instances. To do so, we have to create two more VIPs, one per node, and then create the new listener:


[root@rac1 ~]#  srvctl add vip -node rac1 -address 192.178.5.38/255.255.0.0 -netnum 2
[root@rac1 ~]# srvctl add vip -node rac2 -address 192.178.5.39/255.255.0.0 -netnum 2
[root@rac1 ~]# srvctl add listener -listener LISTENER2 -netnum 2 -user oracle -endpoints TCP:1522
[root@rac1 ~]# srvctl start listener -listener LISTENER2
[root@rac1 ~]#


[root@rac1 ~]# srvctl config listener -all
Name: LISTENER
Type: Database Listener
Network: 1, Owner: oracle
Home: <CRS home>
  /u01/app/12.2.0.1/grid on node(s) rac2,rac1
End points: TCP:1521
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:
Name: LISTENER2
Type: Database Listener
Network: 2, Owner: oracle
Home: <CRS home>
  /u01/app/12.2.0.1/grid on node(s) rac2,rac1
End points: TCP:1522
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:

[root@rac1 ~]# srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rac1,rac2
Listener LISTENER2 is enabled
Listener LISTENER2 is running on node(s): rac1,rac2
[root@rac1 ~]#


Finally, the database instances have to be configured to listen for connections on SCAN1 and SCAN2 and to register with the proper local listeners:

ALTER SYSTEM SET local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=rac1-vip)(PORT=1521))' SCOPE=BOTH SID='orcl1';

ALTER SYSTEM SET local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=rac2-vip)(PORT=1521))' SCOPE=BOTH SID='orcl2';

ALTER SYSTEM SET remote_listener='rac-scan:1521' SCOPE=BOTH SID='*';
ALTER SYSTEM SET listener_networks='((NAME=network1)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.19)(PORT=1521))))(REMOTE_LISTENER=rac-scan:1521))','((NAME=network2)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.178.5.38)(PORT=1522))))(REMOTE_LISTENER=rac-scan2:1522))' SCOPE=BOTH SID='orcl1';

ALTER SYSTEM SET listener_networks='((NAME=network1)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.14)(PORT=1521))))(REMOTE_LISTENER=rac-scan:1521))','((NAME=network2)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.178.5.39)(PORT=1522))))(REMOTE_LISTENER=rac-scan2:1522))' SCOPE=BOTH SID='orcl2';
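
A quick check that the settings took effect on both instances:

select inst_id, name, value from gv$parameter
where name in ('local_listener','remote_listener','listener_networks');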

If you receive an error saying that the string is too long, you can create entries in the tnsnames.ora file in the ORACLE_HOME of the database:

In server 1:

LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-net1-vip)(PORT = 1521))
    )

LISTENER_SCAN2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-net2-vip)(PORT = 1522))
    )

In server 2:

LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-net1-vip)(PORT = 1521))
    )

LISTENER_SCAN2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-net2-vip)(PORT = 1522))
    )

And then:

ALTER SYSTEM SET local_listener='LISTENER_SCAN1' SCOPE=BOTH SID='*';

ALTER SYSTEM SET remote_listener='rac-scan:1521' SCOPE=BOTH SID='*';

ALTER SYSTEM SET listener_networks='((NAME=network1)(LOCAL_LISTENER=LISTENER_SCAN1)(REMOTE_LISTENER=rac-scan:1521))','((NAME=network2)(LOCAL_LISTENER=LISTENER_SCAN2)(REMOTE_LISTENER=rac-scan2:1522))' SCOPE=BOTH SID='*';

Finally, let's test the configuration.

I will create some sessions using SCAN1; all the sessions should be balanced across the two instances and handled by the proper local listener (LISTENER for SCAN1 and LISTENER2 for SCAN2). The two terminals at the top are for SCAN 1: the left one shows the listener on instance 1, and the right one the listener on instance 2:

Creating some sessions:
[oracle@rac1 ~]$ sqlplus dgomez/dgomez@rac-scan:1521/orcl


As you can see, only the listener "LISTENER" received these sessions, and the sessions were correctly balanced.

Now let's test SCAN 2. The two terminals at the bottom are for listener "LISTENER2", which is linked to SCAN2: the left one shows LISTENER2 on instance 1, and the right one LISTENER2 on instance 2.
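
The sessions for this test are created the same way, only through the second SCAN and its port, for example:

[oracle@rac1 ~]$ sqlplus dgomez/dgomez@rac-scan2:1522/orcl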



