One of the new features included in the Oracle software is the ability to create database clusters (also called Real Application Clusters – RAC). According to Oracle, this feature provides high availability, performance and scalability. A transparent failover feature (Transparent Application Failover – TAF) is also included, which deployed applications use to route their requests through the Oracle cluster without being aware that any of the cluster nodes has been disconnected.

How does it work?

Conceptually, the Oracle RAC architecture is shown in the following diagram:

Pic: Oracle RAC Conceptual Diagram

Fig.1 Conceptual diagram of the inner workings of Oracle RAC (failover configuration), including, from top to bottom: the application server, the high-speed Ethernet to which everything is connected, the Oracle RAC composed of Node 1 and Node 2, a pair of optical fiber connections, a SAN switch connecting the database servers to the storage, and the disk array.

The database requests are generated by the application (for instance, from a database connection pool configured on the Application Server), and the Oracle RAC is in charge of redirecting these requests to the working server. Note that in this configuration there is no load balancing, so the configuration shown is plain failover – that is, all incoming requests will reach Node 1, and only if it fails or is disconnected will all requests be redirected to Node 2.
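In Oracle Net terms, this plain-failover behavior can be sketched as a tnsnames.ora entry (the host and service names here are illustrative assumptions): with FAILOVER on and LOAD_BALANCE off, the client always tries the first address (Node 1) and falls through to Node 2 only when that connection fails.

```
MYDB_FAILOVER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node_1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node_2)(PORT = 1521))
      (FAILOVER = on)
      (LOAD_BALANCE = off)
    )
    (CONNECT_DATA = (SERVICE_NAME = myservice))
  )
```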

However, we need to look at how Oracle RAC really works. This is better explained with a UML component diagram:

Pic: Oracle RAC component diagram

Fig.2 Oracle RAC component diagram.

The Oracle client or service has Node 1 configured as the primary connection; if the request cannot be resolved on that server, the service redirects it to the backup server (in this case, Node 2). It is also important to note that what runs on nodes 1 and 2 is the database engine listener, not the database itself: the information (that is, the files that make up the database) is located on a disk array with a mirrored configuration to provide redundancy – and therefore, high availability. In short:

Oracle RAC is a software component that allows the creation of multiple independent database engine instances sharing the same storage.

However, RAC has two very important limitations that must be taken into account:

  1. There is no load balancing between the database nodes that form part of the Oracle RAC: the default configuration only provides failover for incoming requests, and if the architecture design calls for resource optimization, we are certainly wasting half the processing power of the back-end tier.
  2. Said failover is not as transparent as Oracle claims (see next section).

Implementing TAF

Once an Oracle RAC node is disconnected – whether due to a hardware or network failure or to resource over-demand – all transactions must be automatically redirected to the backup node. The point of such a scheme is not to lose the requests that were in flight at the time of the disconnection.

The problem is that, according to Oracle (see here), to use the TAF features we must implement the Oracle Call Interface (OCI) API. In short, we need to install an Oracle client on the Application Server; a simple database connection driver – such as the Java Thin Client – is not enough.
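With the OCI client installed, TAF is enabled from the client side in tnsnames.ora. A minimal sketch of such an entry (the node and service names are illustrative assumptions); TYPE = SELECT lets in-flight queries resume on the backup node:

```
MYDB_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node_1)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node_2)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = myservice)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )
```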

Also, at least for the Oracle 10g version, TAF performs transparent failover only for SELECT queries. All other operation types automatically return an error:

  • Transactional or data manipulation statements: INSERT, UPDATE and DELETE.
  • Session management operations: ALTER SESSION and SQL*Plus settings.
  • Temporary objects (those that reside in the TEMP workspace).
  • State resulting from stored procedure (PL/SQL) executions.
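Since TAF does not replay these operations, the application must catch the failover error and retry the work itself. A minimal retry sketch of that idea (the error simulation, class name and policy are illustrative assumptions, not an Oracle API):

```java
import java.util.concurrent.Callable;

// Hypothetical retry helper: TAF does not replay INSERT/UPDATE/DELETE, so the
// application re-executes the failed unit of work against the surviving node.
// In practice the caught exception would be a SQLException with an ORA- code.
public class RetryOnFailover {
    public static <T> T retry(Callable<T> work, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return work.call();
            } catch (Exception e) {
                last = e; // node dropped mid-call: remember the error and retry
            }
        }
        throw last; // all attempts failed: surface the last error
    }

    public static void main(String[] args) throws Exception {
        // Simulated workload: fails once (as if the node dropped), then succeeds.
        final int[] calls = {0};
        String result = retry(() -> {
            if (calls[0]++ == 0) {
                throw new RuntimeException("ORA-25408: can not safely replay call");
            }
            return "committed";
        }, 3);
        System.out.println(result); // prints "committed"
    }
}
```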

Therefore, we reach the following conclusion:

Oracle RAC, by itself, does not guarantee a transparent failover; it only guarantees database availability. The failover is implemented by the Oracle OCI client, with some restrictions.

This limits Oracle RAC's viability from a cost-benefit standpoint, since it is priced separately from the database engine (see here).

Improving Oracle RAC

However, not all is lost. Both issues (load balancing and transparent failover) can be solved with a software or hardware load-balancing cluster. The scalability of such a solution depends mostly on the available budget, but it supports the decision to implement Oracle RAC for database high availability.

The conceptual diagram is shown in the following image:

Pic: Oracle RAC with load balancing

Fig.3 Oracle RAC with load balancing

The balancer is in charge of the load balancing; it can be either a software component (for instance, a web server with round-robin balancing, like Apache) or a hardware component (like an F5 switch), configured so as to allow:

  • No dependency on an Oracle client installed on the application server. This is especially important when it is not possible to install such a component, or when it is necessary to increase the portability of the platform.
  • Resources are optimized by distributing the load among the two or more servers that form part of the solution.
  • The load balancing component is in charge of detecting failure/disconnection events on the back-end tier nodes. How such disconnections are detected depends on the deployed component – for instance, by pinging each node’s heartbeat – and can be configured as follows:
    • Redirecting at the first error. Whenever a disconnection error occurs, the component redirects the remaining requests to the live nodes of the platform until the affected node’s heartbeat is regained. The problem is that the in-flight requests return an error state. This should be resolved with a retry mechanism in the application itself or in the application container (provided the Application Server and the database driver support such a feature). For this particular case, Oracle Cache Fusion can be used, a component that synchronizes data blocks between the cluster nodes; however, we lose the load balancing feature with this solution.
    • Lossless redirection. This is accomplished by combining the OCI client, the load balancer, and a special TAF configuration which specifies that requests must be sent to both nodes of the platform, resolving whichever response arrives first and discarding the other. This allows for a completely transparent failover without requiring modifications to the application, but has one drawback: traffic between the middle and back-end tiers is effectively doubled.
    • The Sun Cluster option. According to Sun Microsystems, the Sun Cluster software allows request redirection without loss and without additional configuration of the platform: Sun Cluster and the agents installed on each of the cluster nodes synchronize the requests, avoiding transaction loss in case of a disconnection. This is the equivalent of a web application failover where user sessions migrate between the active nodes of the cluster when one fails. (Modified on [12/12/2007]: We already have an implementation of Sun Cluster with Oracle RAC 10g R2: see here for the concept and here for how to configure it.)
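The failure-detection step described above can be sketched as a simple TCP check against each node’s listener port. This is a hypothetical helper, not how any particular balancer is implemented; real balancers (Apache, F5) use richer probes, but the principle is the same:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Sketch of a balancer-style health check: a node is considered alive if a
// TCP connection to its listener port succeeds within the given timeout.
public class ListenerCheck {
    public static boolean isAlive(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;   // listener accepted the connection
        } catch (IOException e) {
            return false;  // refused, unreachable or timed out: mark node down
        }
    }

    public static void main(String[] args) {
        // Illustrative node names; in the diagrams these are Node 1 and Node 2.
        for (String node : new String[] {"node_1", "node_2"}) {
            System.out.println(node + " alive: " + isAlive(node, 1521, 500));
        }
    }
}
```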


Oracle RAC is a component that provides high availability to our back end by allowing the deployment of multiple instances of a database listener over a single storage unit. This, in turn – with the help of a balancing component – gives us high availability and scalability for the services offered by the database.

Update on (12/12/2007)

The latest version of Oracle RAC (for Oracle 10g Release 2) includes a major technological overhaul. As a result, we now have two features that were sorely absent in the previous version:

  • As part of the Oracle RAC architecture, we now have load balancing between the nodes that make up the cluster. That balancing can be done either through round-robin or based on resource consumption (mainly CPU and memory).
  • The load balancing configuration can be done from either the client or the server:

    • Thin clients – such as JDBC – can be configured to make use of load balancing by changing the connection URL so that they can connect to any of the N nodes that make up the cluster:

      Conventional URL

      “jdbc:oracle:thin:@unique_node:1521”, “username”, “password”

      URL for an Oracle RAC (nodes node_1 and node_2)

      “jdbc:oracle:thin:@(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = node_1)(PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)(HOST = node_2)(PORT = 1521)) (LOAD_BALANCE = yes) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = myservice) (FAILOVER_MODE = (TYPE = SELECT) (METHOD = BASIC) (RETRIES = 180) (DELAY = 5))))”, “username”, “password”

      As can be seen, the connection string has practically all the contents of a tnsnames.ora file, and it is not necessary to install additional components.

    • Thick clients, as well as OCI clients, can still use load balancing via the tnsnames.ora file.
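For those clients, the equivalent tnsnames.ora entry mirrors the thin-client URL shown earlier (the node and service names are the same illustrative ones):

```
MYSERVICE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node_1)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node_2)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = myservice)
      (FAILOVER_MODE =
        (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5)
      )
    )
  )
```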

As a side note, load balancing is handled by the RAC itself only as long as the database is the sole content on the RAC nodes – i.e., the RAC does not work properly if the filesystems of those nodes contain something else, such as external log files or additional information that must be stored and synchronized. In that case, we must fall back on an additional balancing component.