As a seasoned professional, you are probably aware of the importance of understanding the technical details behind the RAC stack. This book provides a deep understanding of RAC concepts and implementation details that you can apply to your day-to-day operational practices. You will be guided in troubleshooting and avoiding trouble in your installation.
Successful RAC operation hinges upon a fast-performing network interconnect, and this book dedicates a chapter solely to that very important and easily overlooked topic. In addition, all root scripts must have run successfully on the node from which you are extending your cluster database.
This option adds to the flexibility that Oracle offers for database consolidation while reducing management overhead by providing a standard deployment for Oracle databases in the enterprise.
With Oracle RAC One Node, there is no limit to server scalability and, if applications grow to require more resources than a single node can supply, then you can upgrade your applications online to Oracle RAC. If the node that is running Oracle RAC One Node becomes overloaded, then you can relocate the instance to another node in the cluster. Alternatively, you can limit the CPU consumption of individual database instances per server within the cluster using Resource Manager Instance Caging and dynamically change this limit, if necessary, depending on the demand scenario.
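Instance caging, as described above, is configured with two initialization parameters. A minimal sketch follows; the CPU limit and plan name are illustrative, not prescriptive:

```sql
-- Limit this instance to 4 CPUs (illustrative value; can be changed dynamically).
ALTER SYSTEM SET CPU_COUNT = 4;

-- Instance caging is enforced only while a Resource Manager plan is active.
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'DEFAULT_PLAN';
```

Because CPU_COUNT is dynamic, the limit can be raised or lowered to match the demand scenario without restarting the instance.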
Relocating an Oracle RAC One Node instance is therefore mostly transparent to the client, depending on the client connection. Oracle recommends using either Application Continuity and Oracle Fast Application Notification, or Transparent Application Failover, to minimize the impact of a relocation on the client.
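An online relocation of this kind is typically initiated with the srvctl utility. A sketch, assuming a database named ron1 and a target node node2 (both names are hypothetical):

```shell
# Relocate the Oracle RAC One Node database "ron1" to node "node2",
# allowing up to 30 minutes for existing sessions to migrate.
srvctl relocate database -db ron1 -node node2 -timeout 30
```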
For administrator-managed Oracle RAC One Node databases, you must monitor the candidate node list and make sure a server is always available for failover, if possible. Candidate servers reside in the Generic server pool and the database and its services will fail over to one of those servers. For policy-managed Oracle RAC One Node databases, you must ensure that the server pools are configured such that a server will be available for the database to fail over to in case its current node becomes unavailable.
In this case, the destination node for online database relocation must be located in the server pool in which the database is located. Alternatively, you can use a server pool of size 1 (one server in the server pool), setting the minimum size to 1 and the importance high enough relative to all other server pools used in the cluster. This ensures that, upon failure of the one server in the server pool, a new server from another server pool or the Free server pool is moved into the server pool, as required.
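A server pool of size 1 as described above might be created as follows; the pool name and importance value are hypothetical:

```shell
# One-server pool with high importance: if its server fails, Oracle Clusterware
# moves a server in from a lower-importance pool or the Free pool.
srvctl add srvpool -serverpool ronpool -min 1 -max 1 -importance 10
```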
Oracle Clusterware provides a complete, integrated clusterware management solution on all Oracle Database platforms. This clusterware functionality provides all of the features required to manage your cluster database including node membership, group services, global resource management, and high availability functions. Oracle Database features, such as services, use the underlying Oracle Clusterware mechanisms to provide advanced capabilities.
Oracle Database also continues to support select third-party clusterware products on specified platforms. You can use Oracle Clusterware to manage high-availability operations in a cluster.
These resources are automatically started when the node starts and automatically restart if they fail. The Oracle Clusterware daemons run on each node. Oracle Clusterware provides the framework that enables you to create CRS resources to manage any process running on servers in the cluster that is not predefined by Oracle. Oracle Clusterware stores the information that describes the configuration of these components in the Oracle Cluster Registry (OCR), which you can administer.
Overview of Oracle Flex Clusters. Overview of Reader Nodes. Overview of Local Temporary Tablespaces. Oracle Flex Clusters provide a platform for a variety of applications, including Oracle RAC databases with large numbers of nodes. Oracle Flex Clusters also provide a platform for other service deployments that require coordination and automation for high availability.
This architecture centralizes policy decisions for deployment of resources based on application needs, to account for various service levels, loads, failure responses, and recovery. Reader nodes are instances of an Oracle RAC database that provide read-only access, primarily for reporting and analytical purposes. You can create services to direct queries to read-only instances running on reader nodes.
These services can use parallel query to further speed up performance. Oracle recommends that you size the memory in these reader nodes as high as possible so that parallel queries can use the memory for best performance. While it is possible for a reader node to host a writable database instance, Oracle recommends that reader nodes be dedicated to hosting read-only instances to achieve the best performance. Oracle writes spill-overs to local temporary tablespaces, which are non-shared and created on local disks on the reader nodes.
It is still possible for SQL operations, such as hash aggregation, sort, hash join, creation of cursor-duration temporary tables for the WITH clause, and star transformation to spill over to disk specifically to the global temporary tablespace on shared disks. Management of the local temporary tablespace is similar to that of the existing temporary tablespace.
Local Temporary Tablespace Organization. Temporary Tablespace Hierarchy. Local Temporary Tablespace Features. Metadata Management of Local Temporary Files. Local Temporary Tablespaces for Users. Atomicity Requirement for Commands. Local Temporary Tablespace and Dictionary Views. The temporary tablespaces created for the WITH clause and star transformation exist in the temporary tablespace on the shared disk. A set of parallel query child processes load intermediate query results into these temporary tablespaces, which are then read later by a different set of child processes.
There is no restriction on how these child processes reading these results are allocated, as any parallel query child process on any instance can read the temporary tablespaces residing on the shared disk.
In an architecture with both read-write and read-only instances, parallel query child processes load intermediate results into the local temporary tablespaces of their own instances. Because those intermediate results have affinity to the instance that stored them, only parallel query child processes belonging to that instance can read them. Creation of a local temporary tablespace results in the creation of local temporary files on every instance, rather than a single file, as is currently the case for shared temporary tablespaces.
You can create local temporary tablespaces for both read-only and read-write instances. When you define a local temporary tablespace alongside an existing shared temporary tablespace, there is a hierarchy in which they are used. To understand the hierarchy, remember that there can be multiple shared temporary tablespaces in a database, such as the default shared temporary tablespace for the database and multiple temporary tablespaces assigned to individual users.
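Creating local temporary tablespaces for the two instance types can be sketched as follows; the tablespace names and file paths are illustrative:

```sql
-- FOR LEAF creates local temporary files only on read-only (reader node) instances.
CREATE LOCAL TEMPORARY TABLESPACE FOR LEAF temp_ts_for_leaf
  TEMPFILE '/u01/app/oracle/temp_file_leaf'
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M AUTOEXTEND ON;

-- FOR ALL creates local temporary files on both read-write and read-only instances.
CREATE LOCAL TEMPORARY TABLESPACE FOR ALL temp_ts_for_all
  TEMPFILE '/u01/app/oracle/temp_file_all'
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M AUTOEXTEND ON;
```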
If a user has a shared temporary tablespace assigned, then that tablespace is used first, otherwise the database default temporary tablespace is used. Once a tablespace has been selected for spilling during query processing, there is no switching to another tablespace.
For example, if a user has a shared temporary tablespace assigned and during spilling it runs out of space, then there is no switching to an alternative tablespace. The spilling, in that case, will result in an error. Additionally, remember that shared temporary tablespaces are shared among instances. The allocation of temporary space for spilling to a local temporary tablespace differs between read-only and read-write instances.
For read-only instances, local temporary tablespaces are given priority when selecting which temporary location to use for spills. For read-write instances, the priority of allocation differs from the preceding allocation order, as shared temporary tablespaces are given priority.
Instances cannot share local temporary tablespace; hence, one instance cannot take local temporary tablespace from another. If an instance runs out of temporary tablespace during spilling, then the statement results in an error. To address contention issues arising from having only one BIGFILE-based local temporary tablespace, multiple local temporary tablespaces can be assigned to different users as their defaults.
One local temporary tablespace to be used when the user is connected to a read-only instance running on a reader node. One shared temporary tablespace to be used when the same user is connected to a read-write instance running on a Hub Node. Currently, temporary file information, such as the file name, creation size, creation SCN, temporary block size, and file status, is stored in the control file, along with the initial and maximum file sizes and the autoextend attributes.
However, the information about local temporary files in the control file is common to all applicable instances. Instance-specific information, such as bitmap for allocation, current size for a temporary file, and the file status, is stored in the SGA on instances and not in the control file because this information can be different for different instances.
When an instance starts up, it reads the information in the control file and creates the temporary files that constitute the local temporary tablespace for that instance. If there are two or more instances running on a node, then each instance will have its own local temporary files. For local temporary tablespaces, there is a separate file for each involved instance.
The local temporary file names follow a naming convention such that the instance numbers are appended to the temporary file names specified while creating the local temporary tablespace. For example, assume that a read-only node, N1, runs two Oracle read-only database instances with numbers 3 and 4. All DDL commands related to local temporary tablespace management and creation are run from the read-write instances.
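Continuing the example of node N1 with instances 3 and 4, and assuming the convention of appending the instance number to the specified tempfile name (the file path is hypothetical), the result can be sketched as:

```sql
-- Run from a read-write instance:
CREATE LOCAL TEMPORARY TABLESPACE FOR LEAF temp_ts_for_leaf
  TEMPFILE '/temp/temp_file_leaf'
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M AUTOEXTEND ON;

-- Each read-only instance on node N1 then gets its own local temporary file:
--   /temp/temp_file_leaf_3  (instance 3)
--   /temp/temp_file_leaf_4  (instance 4)
```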
Running all other DDL commands will affect all instances in a homogeneous manner. For local temporary tablespaces, Oracle supports the allocation options and their restrictions currently active for temporary files.
To run a DDL command on a local temporary tablespace for read-only instances, there must be at least one read-only instance in the cluster. A database administrator can specify a default temporary tablespace when creating the database. When you create a database, its default local temporary tablespace points to the default shared temporary tablespace.
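A sketch of the relevant CREATE DATABASE clause, with all unrelated clauses elided and the database and tablespace names hypothetical:

```sql
CREATE DATABASE mydb
  -- ... other clauses elided ...
  DEFAULT TEMPORARY TABLESPACE temp_ts_for_db;
```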
Local Temporary Tablespace for Users. When you create a user without explicitly specifying a shared or local temporary tablespace, the user inherits the shared and local temporary tablespaces from the corresponding database defaults. You can specify a default local temporary tablespace for a user. As previously mentioned, the default user local temporary tablespace can be a shared temporary tablespace. You can change the user default local temporary tablespace to any existing local temporary tablespace.
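For example, assuming a user named scott and existing local temporary tablespaces temp_ts_for_leaf and temp_ts_for_all (all names hypothetical):

```sql
-- Assign a local temporary tablespace at user creation time.
CREATE USER scott IDENTIFIED BY tiger
  LOCAL TEMPORARY TABLESPACE temp_ts_for_leaf;

-- Change the user's default local temporary tablespace later.
ALTER USER scott LOCAL TEMPORARY TABLESPACE temp_ts_for_all;
```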
If you want to set the user default local temporary tablespace to a shared temporary tablespace, T, then T must be the same as the default shared temporary tablespace. If a default user local temporary tablespace points to a shared temporary tablespace, then, when you change the default shared temporary tablespace of the user, you also change the default local temporary tablespace to that tablespace. Some read-only instances may be down when you run any of the preceding commands.
This does not prevent the commands from succeeding because, when a read-only instance starts up later, it creates the temporary files based on information in the control file.
Creation is fast because Oracle reformats only the header block of the temporary file, recording information about the file size, among other things. If you cannot create any of the temporary files, then the read-only instance stays down. Commands that were submitted from a read-write instance are replayed, immediately, on all open read-only instances. All the commands that you run from the read-write instances are performed in an atomic manner, which means the command succeeds only when it succeeds on all live instances.
Oracle extended the dictionary views to display information about local temporary tablespaces. All the diagnosability information related to temporary tablespaces and temporary files that is exposed through AWR, SQL monitor, and other utilities is also available for local temporary tablespaces and local temporary files. For local temporary files, the views contain information about temporary files per instance, such as the size of the file in bytes (the BYTES column).
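For instance, the SHARED column that Oracle added to DBA_TABLESPACES for this feature (with values such as SHARED, LOCAL_ON_ALL, and LOCAL_ON_LEAF) distinguishes shared from local temporary tablespaces; a sketch:

```sql
-- List temporary tablespaces and whether each is shared or local.
SELECT tablespace_name, shared
  FROM dba_tablespaces
 WHERE contents = 'TEMPORARY';
```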
At a minimum, Oracle RAC requires Oracle Clusterware software infrastructure to provide concurrent access to the same storage and the same set of data files from all nodes in the cluster, a communications protocol enabling interprocess communication (IPC) across the nodes in the cluster, the ability for multiple database instances to process data as if the data resided on a logically combined, single cache, and a mechanism for monitoring and communicating the status of the nodes in the cluster.
Understanding Cluster-Aware Storage Solutions. An Oracle RAC database is a shared everything database. All data files, control files, SPFILEs, and redo log files in Oracle RAC environments must reside on cluster-aware shared disks, so that all of the cluster database instances can access these storage components. In Oracle RAC, the Oracle Database software manages disk access and is certified for use on a variety of storage architectures.
It is your choice how to configure your storage, but you must use a supported cluster-aware storage solution, for example, a third-party cluster file system on a cluster-aware volume manager that is certified for Oracle RAC. All nodes in an Oracle RAC environment must connect to at least one local area network (LAN), commonly referred to as the public network, to enable users and applications to access the database. In addition to the public network, Oracle RAC requires private network connectivity used exclusively for communication between the nodes and the database instances running on those nodes.
This network is commonly referred to as the interconnect. The interconnect network is a private network that connects all of the servers in the cluster. The interconnect network must use at least one switch and a Gigabit Ethernet adapter. Oracle supports interfaces with higher bandwidth but does not support using crossover cables with the interconnect. Do not use the interconnect (the private network) for user communication, because Cache Fusion uses the interconnect for interinstance communication.
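The classification of a network interface as the interconnect is managed with the oifcfg utility; a sketch, with the interface name and subnet hypothetical:

```shell
# Classify eth1 on subnet 192.168.10.0 as the cluster interconnect.
oifcfg setif -global eth1/192.168.10.0:cluster_interconnect

# Verify the public and private interface assignments.
oifcfg getif
```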
This additional network communication channel should be independent of the other communication channels used by Oracle RAC the public and private network communication. If the storage network communication must be converged with one of the other network communication channels, then you must ensure that storage-related communication gets first priority.
Applications should use the Dynamic Database Services feature to connect to an Oracle database over the public network. Dynamic Database Services enable you to define rules and characteristics to control how users and applications connect to database instances. These characteristics include a unique name, workload balancing and failover options, and high availability characteristics.
A typical connect attempt from a database client to an Oracle RAC database instance can be summarized as follows: the client connects using the SCAN, and the SCAN listener determines which database instance hosts the requested service and routes the client to the local or node listener on the respective node.
The node listener, listening on a node VIP and a given port, retrieves the connection request and connects the client to an instance on the local node. If multiple public networks are used on the cluster to support client connectivity through multiple subnets, then the preceding operation is performed within a given subnet. Clients that attempt to connect to a VIP address not residing on its home node receive a rapid connection refused error instead of waiting for TCP connect timeout messages.
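From the client side, such a connect attempt commonly uses the SCAN in an Easy Connect string; the SCAN name, port, and service name below are hypothetical:

```shell
# Connect through the SCAN; the SCAN and node listeners route the session
# to an instance offering the "sales" service.
sqlplus hr@//myclu-scan.example.com:1521/sales.example.com
```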
When the network on which the VIP is configured comes back online, Oracle Clusterware fails back the VIP to its home node, where connections are accepted. Generally, VIP addresses fail over when the node on which a VIP resides fails, or when all interfaces for the relevant network on that node fail.
Oracle RAC 12c supports multiple public networks to enable access to the cluster through different subnets. Each network resource represents its own subnet and each database service uses a particular network to access the Oracle RAC database.
Each network resource is a resource managed by Oracle Clusterware, which enables the VIP behavior previously described. Incoming connections are load balanced across the active instances providing the requested service through the three SCAN listeners. With SCAN, you do not have to change the client connection even if the configuration of the cluster changes (nodes added or removed).
The valid node checking feature provides the ability to configure and dynamically update a set of IP addresses or subnets from which registration requests are allowed by the listener.
Database instance registration with a listener succeeds only when the request originates from a valid node. The network administrator can specify a list of valid nodes, excluded nodes, or disable valid node checking altogether. The list of valid nodes explicitly lists the nodes and subnets that can register with the database.
The list of excluded nodes explicitly lists the nodes that cannot register with the database. The control of dynamic registration results in increased manageability and security of Oracle RAC deployments.
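Valid node checking is controlled through listener.ora parameters suffixed with the listener name; a sketch for a listener named LISTENER, with hypothetical node names (the invited and excluded lists are alternatives, not used together):

```
VALID_NODE_CHECKING_REGISTRATION_LISTENER = ON
REGISTRATION_INVITED_NODES_LISTENER = (node1-priv, node2-priv)
```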
The SCAN listener accepts registration requests only from the private network. Oracle RAC databases generally have two or more database instances that each contain memory structures and background processes. An Oracle RAC database has the same processes and memory structures as a noncluster Oracle database and additional processes and memory structures that are specific to Oracle RAC.
Any one instance's database view is nearly identical to any other instance's view in the same Oracle RAC database; the view is a single system image of the environment. Using Cache Fusion, Oracle RAC environments logically combine each instance's buffer cache to enable the instances to process data as if the data resided on a logically combined, single cache.
After one instance caches data, any other instance within the same cluster database can acquire a block image from another instance in the same database faster than by reading the block from disk. Therefore, Cache Fusion moves current blocks between instances rather than re-reading the blocks from disk.
When a consistent block is needed or a changed block is required on another instance, Cache Fusion transfers the block image directly between the affected instances. Oracle RAC uses the private interconnect for interinstance communication and block transfers.
Cache Fusion monitors the latency on the private networks and the service time on the disks, and automatically chooses the best path. If shared disks include low-latency SSDs, then Oracle automatically chooses the best path. The Oracle RAC processes and their identifiers are as follows: in an Oracle RAC environment, the ACMS per-instance process is an agent that contributes to ensuring that a distributed SGA memory update is either globally committed on success or globally aborted if a failure occurs.
Real Application Clusters, commonly abbreviated as RAC, is Oracle's industry-leading architecture for scalable and fault-tolerant databases.
RAC allows you to scale up and down by simply adding and subtracting inexpensive Linux servers. Redundancy provided by those multiple, inexpensive servers is the basis for the failover and other fault-tolerance features that RAC provides.
You'll learn to troubleshoot performance and other problems. You'll even learn how to correctly deploy RAC in a virtual-machine environment based upon Oracle VM, which is the only virtualization solution supported by Oracle Corporation.