How you manage Oracle Database Quality of Service (QoS) can make a huge difference in how your databases perform. To make the right decisions for your database, you need to learn the key concepts of QoS and understand the various methods of managing it.

This is the first article in a series that I’m writing about the Oracle Database Quality of Service (QoS) Management feature. In this series of articles, I’m going to cover what QoS management is, how it works, its basic architecture, and then show you how to use it.

Quality of service management

In previous versions of Oracle Database, you could use services to manage workloads and isolation. For example, one group of servers might be assigned to the data warehouse, another to the sales application, a third to ERP processing, and a fourth to the user application. Services allow the database administrator to allocate resources to specific workloads by manually changing the number of servers on which the database service may run. Workloads are isolated from each other so that peaks in demand, outages and other problems in one workload do not affect other workloads. The problem with this type of provisioning is that each workload must be provisioned separately for peak load, since resources are not shared.

You can also define services that share resources by overlapping their server allocations. But even with this capability, server allocation had to be managed manually, and each service was tied to a fixed group of servers.

Starting with Oracle Database 11g, you can use server pools to logically partition the cluster and provide workload isolation. Server pools are a more dynamic and business-oriented method of allocating resources, because resources are no longer tied to specific named servers. Instead, server pool allocations are dynamically adjusted as servers enter or leave the cluster, to best match the priorities defined in the server pool policy definitions.

Overview of QoS Management

Many companies are consolidating and standardizing their IT systems in the data center. At the same time, the move of applications to the Internet has created the challenge of managing peaks in demand that cannot be fully anticipated. In such an environment, you need to pool resources and have management tools that allow you to identify and resolve bottlenecks in real time. Policy-based server pools are the basis for dynamic workload management. On their own, however, they adjust resource allocation only in response to changes in server availability.

Quality of service management is a policy-based, automated workload management (WLM) system that monitors and adjusts the environment to meet enterprise-level performance objectives. Based on resource availability and workload requirements, QoS management identifies resource bottlenecks and makes recommendations to address them. It may advise the system administrator to move a server from one server pool to another, or to adjust access to CPU resources through Database Resource Manager, to meet current performance goals. With QoS management, the administrator can ensure the following:

  • When resources are sufficient to meet demand, enterprise-level performance targets are met for each workload, even as workloads change.
  • When there are not enough resources to meet all demands, QoS management tries to meet the most important business objectives at the expense of the less important ones.

QoS management and the Exadata Database Machine

The original incarnation of QoS management is a feature of the Oracle Database product family in conjunction with Oracle Real Application Clusters (RAC) software. It was first introduced in Oracle Database 11g Release 2.

QoS Management software can also work in non-Exadata environments where Oracle Database 11g Release 2 is available. In this release, a subset of the QoS management functionality allows non-Exadata users to monitor performance classes, but not to generate and implement changes in response to observed workloads. In its current form, QoS Management provides a powerful database-centric capability that is the first step toward a more comprehensive workload management solution.

The focus of QoS management

QoS management monitors the performance of each work request on the target system. By accurately measuring the two components of performance, resource use time and resource wait time, it can quickly identify bottlenecks and reallocate resources to maintain or restore service levels. Changing or improving use time usually requires modifying the application source code, so QoS management observes and manages only wait times.

QoS management bases its decisions on observed wait times for resources. Hardware resources such as CPU cycles, disk I/O queues, and Global Cache blocks are examples of resources for which work requests can wait.

Other waits, such as those for locks and latches, can also occur in the database. Although these waits are measured by QoS management, they are not broken down by type and are not managed. Minimizing unmanaged waits requires changes that QoS management cannot make, such as application code changes or database schema optimization. QoS management is still useful in these cases, because the measured and reported unmanaged wait time can serve as a gauge of the impact of application tuning efforts.

Benefits of QoS Management

The benefits of QoS management include:

  • By categorizing and measuring database performance, QoS management can help administrators determine where additional resources are needed.
  • QoS management builds on Oracle RAC's scale-out architecture and uses this basic capability to determine whether adding hardware can help maintain acceptable performance.
  • QoS management helps reduce critical outages. By reallocating resources at run time to the business-critical applications that need them most, those applications are less likely to fail.
  • QoS management reduces the time it takes to resolve performance target violations. Instead of requiring administrators to understand and respond to performance changes, much of the work can be automated. Administrators have a simple interface to view and make recommended changes.
  • Performance stress can often lead to system instability. By deploying resources where they are needed most, QoS management reduces the likelihood that systems will suffer from performance overloads and resulting instability.
  • QoS management allows the administrator to set performance targets that ensure Service Level Agreements (SLAs) are met. Once QoS management objectives are established, performance is monitored and changes are recommended when SLAs are not met.
  • When the need for resources changes, QoS management can reallocate hardware resources so that applications use those resources more efficiently. Resources can be removed from applications that no longer need them and added to an application that is suffering from performance stress.

Functional overview of QoS management

QoS Management works with Oracle RAC, Oracle Clusterware, and Cluster Health Monitor (CHM) to manage database resources to meet service levels and to manage memory stress on managed servers.

In general, database services are used to group related work requests and to measure and manage database performance. For example, a query started by a user might use a different service than a reporting application. To manage the resources used by a service, some services can run on multiple Oracle RAC instances simultaneously, while others can run on only a single instance. In an Oracle RAC database, QoS management controls the nodes on which user-created database services are offered. Services are created in a particular server pool, and the service runs on all servers in that pool. If only one instance of a service is needed, because the application cannot scale efficiently across multiple RAC servers, the service can be hosted in a server pool with a maximum size of one.

QoS Management periodically evaluates the CPU wait times of database servers to identify workloads that are not meeting their performance objectives. If necessary, QoS Management recommends adjusting the size of server pools or changing consumer group allocations in Database Resource Manager (DBRM). In more recent releases, QoS Management also supports moving CPUs between databases in the same server pool.

DBRM is an example of a resource allocation mechanism: it can allocate CPU shares to a set of resource consumer groups based on a resource plan defined by the administrator. The resource plan allocates each group a percentage of CPU capacity. QoS management does not adjust the DBRM plans themselves. It activates the full multi-level resource plan and then, when a recommendation is implemented, moves workloads into specific consumer groups to meet the performance objectives of the various workloads.
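To make the share-based allocation concrete, here is a minimal sketch of how proportional CPU shares map to percentages of CPU capacity, in the spirit of a DBRM resource plan. The consumer group names and share values are invented for illustration.

```python
# Illustrative only: how proportional CPU shares translate into
# percentages of CPU capacity, in the spirit of a DBRM resource plan.
# Group names and share counts are hypothetical.
def cpu_percentages(shares):
    """Map consumer-group shares to the percentage of CPU each receives."""
    total = sum(shares.values())
    return {group: round(100 * s / total, 1) for group, s in shares.items()}

plan = {"SALES_PC": 4, "REPORTING_PC": 2, "BATCH_PC": 1}
print(cpu_percentages(plan))
# {'SALES_PC': 57.1, 'REPORTING_PC': 28.6, 'BATCH_PC': 14.3}
```

Moving a workload to a consumer group with more shares is how a recommendation gives it a larger slice of CPU without editing the plan itself.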

Enterprise database servers can run out of available memory when too many sessions are open or workloads are too high. Memory exhaustion can result in failed transactions or, in extreme cases, a server restart and the loss of that server's resources to your applications. QoS management relieves memory stress by temporarily stopping the services on database instances running on the affected server. New sessions are then directed to less loaded servers, which protects the existing workloads and the availability of the memory-stressed server.

When QoS Management is enabled and managing an Oracle Clusterware server pool, it receives a metric stream from Cluster Health Monitor that provides real-time information about each server's memory, including the amount of memory available, the amount in use, and the amount swapped to disk. When QoS Management determines that a node is under memory stress, the database services managed by Oracle Clusterware on that node are stopped so that no new connections can be made. Once the memory stress is relieved, the services are automatically restarted and the listener can again send new connections to the server. Memory pressure can be relieved in several ways (for example, by existing sessions closing or by user intervention).

QoS management policy sets

The core concept of QoS management is the policy set. A policy set lets you specify the resources, the performance classes (workloads), and a set of performance policies that define a performance objective for each performance class and set limits on resource availability. QoS management thus uses a set of system-wide policies that define performance objectives in terms of performance classes and resource availability. Specific performance policies can be activated based on schedules, maintenance windows, events, and so on. Only one performance policy can be in effect at any given time.

To maintain the current performance objectives, QoS management makes recommendations for resource reallocation and predicts the effect of each recommendation. Recommendations can then be implemented with the click of a button.

The policy set consists of the following elements:

  • Server pools managed by QoS Management
  • Performance classes representing work requirements with similar performance objectives
  • Performance policies that describe how resources are allocated to performance classes, using performance objectives and server pool directive overrides. A performance policy ranks performance objectives according to their importance to the business, allowing QoS management to focus on the right objectives when the policy is active.

Server pools

A server pool is a logical subdivision of a cluster. Server pools allow workloads to be isolated within the cluster while maintaining flexibility and preserving the other benefits of consolidation. Administrators can define server pools, typically associated with different applications and workloads. QoS management can help manage the size of each server pool and the allocation of resources within it.

When you first install Oracle Grid Infrastructure, a default pool of servers, called a free pool, is created. All servers are initially placed in this server pool. Specific server pools can then be created for each workload to be managed. When a new server pool is created, the servers assigned to that server pool are automatically moved from the free pool to the newly created server pool.

Once the server pool is created, the database can be configured to run on the server pool and cluster-managed services can be created to connect applications to the database. For the Oracle RAC database to benefit from the flexibility of server pools, the database must be created with a policy-based deployment option that places the database in one or more server pools.

One of the key features of policy-based management is the allocation of resources to server pools based on their cardinality and importance. When starting a cluster or adding servers, all server pools are filled with the minimum values in order of importance. After the minimum values are reached, the server pools are filled in descending order until the maximum values are reached. If there are any free servers left, they are allocated to the free pool.

If servers leave the cluster for any reason, the remaining servers can be redistributed. If there are servers in the free pool and another server pool falls below its maximum size, a free server is assigned to that pool. If no free servers are available, servers are redistributed only when a pool falls below its minimum size. When this happens, a server is taken from one of the following places, in the order listed below:

  1. The least important server pool that has more than its minimum number of servers.
  2. The least important server pool that contains at least one server and is less important than the affected server pool.

These mechanisms allow server pools to maintain an optimal level of resources based on the number of servers currently available. Consider the following example. If one of the servers in the Online server pool fails, the server currently in the free pool is automatically moved to the Online server pool.

Now, if one of the servers in the BackOffice server pool fails, no server can be assigned from the free pool. In this case, a server serving the Batch server pool is dynamically reassigned to the BackOffice server pool, because the BackOffice pool has dropped below its minimum size due to the failure and is more important than the Batch pool.

When a node subsequently rejoins the cluster, it is allocated to the Batch pool to meet that pool's minimum. Any nodes added to the cluster after that point are placed in the free pool, because all other pools are filled to capacity.
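The fill rules described above (minimums first in importance order, then maximums, then the free pool) can be sketched as a toy simulation. The pool names, min/max sizes, and importance values are assumptions mirroring the example, not output from any Oracle tool.

```python
# Toy simulation of policy-based server pool allocation. Pool names and
# min/max/importance values are hypothetical, in the spirit of the
# Online/BackOffice/Batch example above.
class Pool:
    def __init__(self, name, min_, max_, importance):
        self.name, self.min, self.max, self.importance = name, min_, max_, importance
        self.servers = 0

def allocate(pools, total_servers):
    """Fill pools to min in importance order, then to max; return free count."""
    free = total_servers
    # Pass 1: satisfy minimums, most important pool first.
    for p in sorted(pools, key=lambda p: -p.importance):
        take = min(p.min, free)
        p.servers, free = take, free - take
    # Pass 2: fill toward maximums, again in importance order.
    for p in sorted(pools, key=lambda p: -p.importance):
        take = min(p.max - p.servers, free)
        p.servers, free = p.servers + take, free - take
    return free  # servers left over for the free pool

pools = [Pool("Online", 2, 3, 10), Pool("BackOffice", 1, 2, 7), Pool("Batch", 1, 2, 5)]
left = allocate(pools, 6)
print({p.name: p.servers for p in pools}, "Free:", left)
# {'Online': 3, 'BackOffice': 2, 'Batch': 1} Free: 0
```

With six servers, every pool reaches its minimum, the more important pools fill toward their maximums first, and nothing is left for the free pool.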

Performance classes

Performance classes are used to classify workloads with similar performance requirements. A set of classification rules is evaluated against work requests as they arrive at the system boundary. These rules match values against characteristics of the work requests; if a work request matches the criteria for inclusion in a performance class, it is placed in that performance class.

This classification of work requests uses a user-defined name, or tag, that identifies the performance class (PC) to which the work request belongs. All work requests grouped into a given PC share the same performance objectives. In essence, a tag links a work request to its performance objective. Tags travel with each work request, so each component of the system can measure the work and provide data that QoS management evaluates against the applicable performance objectives.

QoS Management supports user-defined combinations of connection parameters, called classifiers, to match performance classes to the actual workloads running in the database. These connection parameters fall into two general categories and can be combined to create finer-grained Boolean expressions:

  • Configuration parameters: The supported configuration parameters are SERVICE_NAME and USERNAME. Each classifier in a performance class must specify one or more database services managed by the cluster. Additional granularity can be achieved by identifying the Oracle Database user connecting from the client or middle tier. The advantage of these classifiers is that they do not require application code changes to define the performance classes.
  • Application parameters: The supported application parameters are MODULE, ACTION, and PROGRAM. These are optional parameters set by the application, as follows:
    • ODP.NET : Specify the ModuleName and ActionName properties of the OracleConnection object.

The PROGRAM parameter is set or registered differently for each database driver and platform. For more information and examples, see the appropriate Oracle Database Developer's Guide.

To manage an application's workload, the application code directs its database connections to a specific service. That service name is specified in the classifier, so all work requests that use the service are tagged as belonging to the performance class created for the application. If you want to control the workload generated by different parts of the application more precisely, you can create additional performance classes and use classifiers such as MODULE, ACTION, or PROGRAM in addition to SERVICE_NAME or USERNAME.
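The classifier-matching idea can be sketched as follows: each classifier is a set of session-parameter conditions that must all hold, and the first matching classifier wins. The class names, service names, and rule values here are hypothetical, not Oracle-defined.

```python
# Sketch of performance-class classification. Each classifier is a set of
# session-parameter conditions that must all match; the first match wins.
# Class names, services, and values are hypothetical.
CLASSIFIERS = [
    ("checkout_pc", {"SERVICE_NAME": "sales", "MODULE": "checkout"}),
    ("sales_pc",    {"SERVICE_NAME": "sales"}),
    ("default_pc",  {}),  # catch-all: empty condition set matches everything
]

def classify(session):
    """Return the first performance class whose conditions all match."""
    for pc_name, conditions in CLASSIFIERS:
        if all(session.get(k) == v for k, v in conditions.items()):
            return pc_name

print(classify({"SERVICE_NAME": "sales", "MODULE": "checkout"}))  # checkout_pc
print(classify({"SERVICE_NAME": "sales", "MODULE": "browse"}))    # sales_pc
```

Note that ordering matters: the more specific checkout_pc classifier must be evaluated before the broader sales_pc classifier.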

The performance classes used in an environment may change over time. A common scenario is that one performance objective is replaced by several more specific ones, splitting the work requests into additional performance classes. For example, application developers can propose performance classes to take advantage of QoS management. In particular, a developer can use the MODULE and ACTION parameters to define a collection of database classifiers and then separate them into distinct performance classes, so that each type of work request is managed individually.

Classification and marking

To enable QoS management, work requests must be classified and tagged.

When a database session is established, the parameters of the session are checked against the performance classifiers to determine the classification. Work associated with a session is then tagged according to the session classification until the session ends or the session settings change. If the parameters of the session change, the classification is re-evaluated. Thus, the overhead associated with classification is very small because classification is only evaluated when a session is established or when session parameters are changed.

Tags are attached to each work request so that all measurements associated with the request can be recorded against the corresponding performance class. Essentially, a tag links a work request to a performance class and its associated performance objective.

Performance policies

The QoS administrator defines one or more performance policies to manage various performance objectives. For example, an administrator may define a performance policy for regular work hours, another for non-work hours during the week, another for weekend work, and another for use during quarterly close processing. Note that there is only one performance policy in effect at any given time.

A performance policy has a set of valid performance objectives, one or more for each application managed on the system. Some performance objectives are always more important to an organization than others, while other performance objectives may be more important at some times and less important at others. The ability to define multiple performance policies within a policy set gives QoS management the flexibility to implement different priority schemes as needed.

Performance class ranks

Each performance class can also be ranked within a performance policy. The rank assigns a relative degree of business criticality to each performance objective. If there are insufficient resources to meet the performance objectives of all performance classes, the objectives of the most critical classes are met at the expense of the less critical ones. The available rank settings are Highest, Medium, and Lowest. Note that if more than one performance class is assigned a given rank (for example, Medium), the classes are ordered alphabetically within that rank.

Performance targets

For each performance class, you create a performance objective that defines the desired level of performance for that class. The performance objective defines both the business requirement and the work to which it applies (the performance class). For example, a performance objective could state that database requests from users of the SALES service should have an average response time of less than 60 milliseconds.

Each performance policy must include a performance objective for each performance class, unless the performance class is marked as measure-only. In this release, QoS management supports only one type of performance objective: average response time.

Response time is measured per database client call, from the time the database server receives the request over the network until the time the response leaves the server. Response time does not include the time required to transfer the information across the network to or from the client. The response times of all database client calls in a performance class are averaged and presented as the average response time.
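The averaging step is simple enough to show directly. The call timings below are made-up values; real figures would come from the server-side measurements described above.

```python
# Average response time for a performance class, measured server-side
# (network transfer time to and from the client is not included).
# The call timings are invented for illustration.
def average_response_time(call_times_ms):
    """Average the per-call response times for one performance class."""
    return sum(call_times_ms) / len(call_times_ms)

calls_ms = [42.0, 55.0, 61.0, 38.0]
print(average_response_time(calls_ms))  # 49.0
```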

Performance satisfaction metric

Different performance objectives are used to measure the performance of different workloads. QoS management currently supports only OLTP workloads and uses only average response time. When you configure QoS management, you can set completely different performance objectives for each performance class. For example, one objective might specify that a checkout call should complete in 1 millisecond, while another specifies that a browse call should complete in 1 second. With many such objectives in the system, it can be difficult to compare them quickly.

In this context, it is useful to have a common, consistent numeric indicator of how well the current workload of a performance class meets its performance objective. This indicator is called the performance satisfaction metric: a normalized numeric value (+100% to -100%) that shows how well a given performance objective is being met, allowing QoS management to compare system performance across very different performance objectives.
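Oracle does not publish the exact formula here, but one plausible normalization onto the +100% to -100% range, assumed purely for illustration, looks like this:

```python
# One plausible normalization of "how well is the target met" onto the
# +100%..-100% range. This is an assumption for illustration, not
# Oracle's published formula.
def satisfaction(target_ms, observed_ms):
    """Positive when under target, negative when over, clamped to +/-100."""
    pct = 100.0 * (target_ms - observed_ms) / target_ms
    return max(-100.0, min(100.0, pct))

print(satisfaction(60.0, 30.0))   # 50.0  : well under target
print(satisfaction(60.0, 60.0))   # 0.0   : exactly on target
print(satisfaction(60.0, 240.0))  # -100.0: far over target, clamped
```

Whatever the internal formula, the point is the same: two classes with very different objectives (1 ms vs. 1 s) become directly comparable on one scale.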

Server pool directive overrides

A performance policy can also include server pool directive overrides. A server pool directive override defines the minimum number of servers, the maximum number of servers, and the importance attribute for a server pool while the performance policy is in effect. Server pool directive overrides act as constraints on QoS management recommendations, because they are honored for as long as the performance policy is active. For example, QoS Management will never recommend removing a server from a server pool if doing so would drop the pool below its minimum number of servers.

Server pool directive overrides let you specify the normal state of your server pools at different times. Under normal circumstances, these server pool settings should be able to handle the existing workload. If the workload of a performance class suddenly increases, the associated server pool may require resources beyond those specified in the performance policy.

Metrics overview

QoS management uses a standardized set of metrics collected from all servers in the system. There are two types: performance metrics and resource metrics. These metrics provide a direct view of the workload, and of the wait times work requests in each performance class experience for each requested resource, as the requests flow through the servers, networks, and storage devices that make up the system.

Performance measurements are captured at the entry point of each server in the system. They give you an idea of what the system is spending time on and allow you to compare wait times across the system. Data is collected periodically and transferred to a central location for analysis, decision-making and historical recording.

A key performance metric is response time: the difference between the time a request is received and the time the response is sent. The response times of all database calls in a performance class are averaged and presented as the average response time. Another important performance metric is the rate at which work requests arrive, which makes it possible to estimate the demand associated with each performance class.

Resource metrics exist for the following resources: CPU, Disk I/O, Global Cache, and Other (database waits). Two resource metrics are provided for each resource:

  • Resource use time: measures the time spent using the resource.
  • Resource wait time: measures the time spent waiting to acquire the resource.

QoS management metrics provide the information needed to systematically identify bottlenecks in the system, broken down by performance class. When a performance class fails to meet its objective, the bottleneck for that class is the resource that contributes the highest average wait time per work request in that class.
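That bottleneck rule reduces to a simple selection: pick the resource with the highest average wait per work request. The wait figures below are invented for illustration.

```python
# Bottleneck detection as described above: for a performance class that is
# missing its objective, the bottleneck is the resource with the highest
# average wait time per work request. Wait values are hypothetical.
def bottleneck(avg_wait_ms):
    """Return the resource contributing the most wait per work request."""
    return max(avg_wait_ms, key=avg_wait_ms.get)

waits = {"CPU": 12.0, "Disk I/O": 31.0, "Global Cache": 4.0, "Other": 2.5}
print(bottleneck(waits))  # Disk I/O
```

Note that "Other" waits can be measured and reported this way, but as explained earlier, QoS management cannot act on them.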

QoS management architecture

QoS Management retrieves metric data from each database instance running in the managed server pools and correlates the data by performance class every 5 seconds. The data includes many metrics, such as the call arrival rate and the use of, and wait times for, CPU, I/O, and Global Cache. This data is combined with the current cluster topology and server state in the policy engine to determine the system's overall performance profile relative to the performance objectives defined by the active performance policy.

Performance is evaluated once a minute, producing a recommendation if a performance class is not meeting its objective. The recommendation identifies which resource is the bottleneck. Where possible, specific corrective actions are listed, together with the projected impact on every performance class in the system. The QoS Management data connectors collect data from several sources:

  • Oracle RAC 11.2 communicates with the data connector via JDBC.
  • Oracle Clusterware 11.2 communicates with the data connector via the Oracle Clusterware SRVM component.
  • The server operating system communicates with the data connector via the Cluster Health Monitor (CHM).

Enterprise Manager displays this information in several places, for example, on the Dashboard, Policy Wizard, Performance History, and Alerts and Actions pages.

QoS management policy

If your business experiences periodic spikes in demand, you could purchase additional servers to maintain application performance, available when you need them and idle when you don't. Instead of letting those extra servers sit unused most of the time, you can use them to run other application workloads. However, if the servers are busy with other applications, your core business applications may not perform as expected when demand increases. QoS management is designed to handle these situations.

When you implement a performance policy, QoS Management continuously monitors and manages the system in an iterative process. If one or more performance objectives are not being met, each iteration attempts to improve one of them: the highest-ranked objective that is currently not met. If all performance objectives are met, QoS management makes no recommendations.
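The per-iteration target selection can be sketched directly: among the classes missing their objectives, pick the one with the highest rank. The class names, ranks, and timings are hypothetical (here, a lower rank number means more business-critical).

```python
# Each iteration targets the highest-ranked performance class that is
# missing its objective. Names, ranks, and timings are hypothetical;
# a lower rank number means more business-critical.
def next_target(classes):
    """Return the name of the highest-ranked unmet class, or None."""
    unmet = [c for c in classes if c["observed_ms"] > c["target_ms"]]
    return min(unmet, key=lambda c: c["rank"])["name"] if unmet else None

classes = [
    {"name": "checkout_pc", "rank": 1, "target_ms": 60,  "observed_ms": 85},
    {"name": "browse_pc",   "rank": 2, "target_ms": 200, "observed_ms": 150},
    {"name": "batch_pc",    "rank": 3, "target_ms": 500, "observed_ms": 900},
]
print(next_target(classes))  # checkout_pc
```

Even though batch_pc is missing its objective by a wider margin, checkout_pc is addressed first because of its higher rank.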

Recommendations can take the form of moving servers between server pools, changing consumer group assignments, or moving CPUs between databases in a server pool. Changing consumer group assignments may involve promoting a particular workload to give it a larger share of resources, or demoting a competing workload to free additional resources for the targeted performance class. In both cases, the workloads are reprioritized within the existing resources.

Moving servers between server pools is another approach available to QoS management: the distribution of servers is adjusted to match workload requirements. In more recent releases, QoS Management can also move CPU resources between databases in the same server pool. This changes the allocation of CPU resources between database instances through instance caging and provides additional control for environments where multiple databases are consolidated on a single Exadata Database Machine.

Implementation of recommendations

If QoS management aims to improve the performance of a particular performance class, it recommends adding more bottleneck resources (such as CPU time) for that performance class or making the bottleneck resource more readily available to work requests in the performance class.

Implementing the recommendation makes the resource less available to other performance classes. The negative impact on the performance classes from which the resource is taken may be much smaller than the positive impact on the class that receives it, resulting in a net gain for the system as a whole. It may also be that the penalized performance class is less important to the business than the one being helped.

When making recommendations, QoS management assesses the impact on overall system performance. If the projected improvement for one performance class is small while the negative effect on another is large, QoS management may report that the performance gain is too small and the action is not recommended. If there is more than one way to resolve the bottleneck, QoS Management offers the best overall recommendation, taking into account variables such as the projected impact on all performance classes and the expected disruption and settling time associated with the action. With Oracle Enterprise Manager, you can view both the current recommendation and the alternatives.

Performance data is sent to Oracle Enterprise Manager for display on the QoS management dashboard and performance history pages. Alerts are generated to send notifications that one or more performance targets are not met or that a problem is preventing the management of one or more server pools. After these notifications, the administrator can implement the recommendation.

In this version, QoS Management does not make recommendations automatically. It provides a performance improvement option, which must then be implemented by the administrator by clicking the Implement button. Once the recommendation is implemented, the system is given the opportunity to stabilize before new recommendations are made. This is necessary to ensure that stable data are used for subsequent evaluations and to avoid recommendations that lead to variability in measurements.

QoS support for administrator-managed RAC databases

Starting with Oracle Database 12.2, you can use Oracle Database QoS Management with Oracle RAC in full management mode in both policy-managed and administrator-managed deployments. Oracle Database QoS Management also fully supports multitenant databases in both deployment types. Earlier versions supported only Measure and Monitor modes for multitenant and administrator-managed Oracle RAC installations.

Because administrator-managed databases do not run in server pools, the ability to grow or shrink the number of instances by resizing a server pool, which is supported in policy-managed deployments, is not available for administrator-managed databases. Support for these deployments is integrated into the Oracle Database QoS Management pages in Oracle Enterprise Manager Cloud Control.

Oracle supports schema consolidation in administrator-managed Oracle RAC databases by adjusting the CPU shares of the performance classes running in the database. It also supports database consolidation by adjusting the CPU counts of databases hosted on the same physical servers.
