Day One of Learning Oracle Streams

One. Concepts

1. Oracle Streams provides a dedicated information-sharing solution.

2. It can share information between databases, between applications, and between applications and databases, across different versions of the database, across databases on different operating systems, and with non-Oracle databases such as DB2 and SQL Server.

3. In Oracle Streams, the smallest unit of shared information is called a message. A message can be a change captured in the database or an event generated by an application. Captured changes include DML and DDL (such as adding a table or an index); however, some operations fall outside the sharing scope, such as adding a data file or bringing a tablespace online or offline.

4. Oracle Streams lets you control how messages are captured, how the information flows, and how messages are used or applied when they reach the target.

5. Oracle Streams has many uses, such as message queuing, data protection, data warehouse loading, and data replication; here we mainly study data replication.

6. Oracle Streams is a standard, fully functional feature of Oracle Database 11g, but in Oracle Database 11g Standard Edition the only way to capture database changes is synchronous capture. Synchronous capture means a change is captured at the moment the data is modified, rather than mined from the redo log after the modification completes. Enterprise Edition also supports asynchronous capture.

Two. The Stream in Oracle Streams

1. The information flow of Oracle Streams

capture ---- staging and propagation ---- consumption

First, the Oracle Streams capture process captures a message. The capture process then reformats the message and places (enqueues) it into a staging area, a data structure in memory commonly referred to as a Streams queue. The capture process publishes messages to the Streams queue. Processes that read messages from the queue are called consumer processes. These processes register their preferences with the Streams queue and choose which messages they want to receive; such processes are called subscribers. A consumer or subscriber can be a Streams process (such as an apply process) or an external application process.

For example, a local consumer can read (dequeue) messages from a Streams queue and then process them; messages can also be propagated to a Streams queue on another system and then be processed there by other consumers or apply processes.

2. The three main components responsible for the information flow of Oracle Streams

The capture component.

The staging and propagation component.

The consumption (apply) component.

In the Oracle Streams architecture, these three components comprise a number of sub-components, processes, and configuration requirements. In Oracle Streams replication, the components collaborate automatically to deliver messages to the appropriate targets.

Three. Oracle Streams Architecture Overview

3.1. Capture Component

3.1.1. The capture process is a database background process used to create messages. There are two kinds:

Asynchronous capture process: extracts the changes made to the database from the redo log files and then creates messages.

Synchronous capture process: creates messages by capturing DML changes in real time.

The messages created are called logical change records (LCRs) ---- messages created in a special data format.

LCRs are divided into implicit and explicit LCRs. When messages are captured automatically, the corresponding process is called an implicit capture process, and the database containing the information in the messages is called the source database. When a message is created by a user application, the corresponding process is called an explicit capture process.

Messages created by the asynchronous capture process are called implicit LCRs; these messages are enqueued into a buffered (in-memory) queue.

Messages created by the synchronous capture process are also called implicit LCRs, but they are enqueued into a persistent queue stored on disk.

Messages created by user applications are called explicit LCRs; the book does not explain them further, so for now it is enough to understand that their storage location is a buffered queue.


3.1.2. What is captured

If you do not want to capture every change, you can specify which changes should be captured. In Streams, these instructions are called rules; in other words, the rules associated with a capture process determine which modifications it captures. Rules can be created manually or automatically, and you can modify existing rules or define custom ones.
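As a minimal sketch, the snippet below adds capture rules for a single table with the DBMS_STREAMS_ADM package; the table hr.employees, the capture name capture_emp, and the queue strmadmin.streams_queue are placeholder names for illustration, not anything prescribed by the book.

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'hr.employees',            -- capture only changes to this table
    streams_type   => 'capture',
    streams_name   => 'capture_emp',             -- capture process (created if absent)
    queue_name     => 'strmadmin.streams_queue', -- staging queue for the LCRs
    include_dml    => TRUE,                      -- capture DML changes
    include_ddl    => TRUE,                      -- capture DDL changes too
    inclusion_rule => TRUE);                     -- add to the positive rule set
END;
/
```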


3.1.3. Capture methods

3.1.3.1. Log-based capture

The capture process uses LogMiner functionality to mine the database redo logs for changes. Advantage: because the information is captured from the redo log, changes can be captured as long as the archived logs exist, which guarantees recoverability after a database crash or an intermediate error.

3.1.3.2. Local capture

The capture process usually runs as a background process on the source database, in which case it is called a local capture process because it is local to the source database. A local capture process can seamlessly scan or mine the in-memory redo log buffer, the online redo logs, and even the archived logs (when needed) to capture changes to the local database. When a change matches the selection criteria defined by the capture rules, it is converted into an LCR and placed in a memory pool called the Streams staging area. This is the default behavior of local capture.

3.1.3.3. Downstream Capture

Changes made to the source database are captured by a capture process running on another database server. In this case the log files are written not only on the source database server but also to a remote database server. Oracle Streams uses the log transport services of Oracle Data Guard to write the logs to the remote server. The capture process on the remote server mines the logs coming from the source database and stages the LCRs locally. If an apply process on the remote database subscribes to these changes, they can be applied to the remote database; otherwise, they are propagated to another staging area for processing.

Advantages: first, the work of capturing changes is offloaded from the production database to another database. Second, since Data Guard protection modes can be used for writing redo logs remotely, you can choose the mode that suits a specific environment; you can also use one remote database to capture changes from multiple source databases.

3.1.3.4. Synchronous capture ---- captures only DML operations.

Synchronous capture is a new feature of Oracle Database 11g. Instead of mining the redo logs, synchronous capture captures the changes that DML statements make to a table. As soon as the table data changes, the synchronous capture process captures the modification in real time and converts it into an LCR; the LCR is then written not to the in-memory staging area but to a persistent queue on disk. In Oracle Database 11g, synchronous capture cannot capture the changes made by DDL statements. Synchronous capture is better suited to replicating low-volume DML activity on a few tables, as in the sketch below.
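A hedged sketch of configuring synchronous capture: a synchronous capture process is created through rule-creation procedures such as DBMS_STREAMS_ADM.ADD_TABLE_RULES with streams_type => 'sync_capture'. The table, capture, and queue names below are placeholders.

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.employees',            -- low-volume table to replicate
    streams_type => 'sync_capture',            -- synchronous capture (DML only)
    streams_name => 'sync_capture_emp',
    queue_name   => 'strmadmin.streams_queue', -- LCRs go to the persistent queue
    include_dml  => TRUE);                     -- DDL cannot be captured this way
END;
/
```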


3.2. Staging and propagation component

Subscribers and consumers need the captured LCRs; Oracle Streams delivers them by propagating the LCRs.

3.2.1. Staging

All captured messages are stored in a staging area. The staging area is a memory buffer that is part of the database instance's System Global Area (SGA). Messages created by user applications are also stored in the staging area. LCRs created by a synchronous capture process, however, are not stored in the in-memory queue but in a persistent queue backed by disk tables.

A message does not disappear from the staging area until all subscribers have processed it. Subscribers can read the contents of the staging area and then select the messages that meet their requirements. A subscriber can be an application, another staging area, or an apply process in a different database. The staging area lets applications explicitly enqueue messages or read messages in order to process them. If the subscriber of a staging area is an apply process, the apply process can dequeue the message and apply it.

3.2.2. Propagation

Using database links over Oracle Net, messages staged in one area can be propagated to a staging area in another database. Streams is very flexible in how messages are routed: propagation rules select which messages are propagated to another staging area, and you can modify existing propagation rules or define custom ones, for example as sketched below.
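For illustration, a propagation between two ANYDATA queues can be created with the rule-based procedures; a minimal sketch, assuming a database link trgt.example.com and placeholder queue and database names:

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'hr.employees',
    streams_name           => 'prop_src_to_trgt',
    source_queue_name      => 'strmadmin.streams_queue',
    destination_queue_name => 'strmadmin.streams_queue@trgt.example.com',
    include_dml            => TRUE,
    include_ddl            => FALSE,
    source_database        => 'src.example.com');  -- global name of the source
END;
/
```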

In some cases propagation is not required. When the capture process creates a message, a consumer on the same staging area can simply dequeue it. In this case the publisher process and the consumer process run on the same database.

Directed networks: Oracle Streams can control how messages are propagated across the network. Before a message captured in one database reaches its intended subscribers or targets, it can either be published directly to other databases on the network or routed through any other databases on the network. This capability is called a directed network.

Even if the source and target databases have no direct network connection, messages can still be relayed through an intermediate database that can communicate with both the source and the target.

[Figure omitted: a directed network in which messages travel from database A through database B to database C.]

As shown above, messages sent by database A pass through database B on their way to database C.

There may or may not be intermediate databases between databases A and C; database C is the target database, not just another intermediate database.

When a message is sent to multiple targets, the source database sends it only to the intermediate database, and the intermediate database then sends the same message on to all the other targets, instead of the source database sending it multiple times.

An intermediate database that simply passes the contents of one Streams queue to another Streams queue is performing what is called queue forwarding.

Messages can also be applied at the intermediate database and then captured again by a capture process on the intermediate database and propagated to other databases; this is called apply forwarding.

3.3. Consumption (apply) component

Messages are dequeued from the staging area when they are processed. The apply process implicitly dequeues messages from the staging area. If a message processed by an apply process is applied to objects in a database, that database is called the target database. The apply process runs locally on the target database.

An application or user process can also explicitly dequeue messages from the staging area; the staging area may be local or remote with respect to the user application.

Apply rules determine which messages are dequeued from the staging area and applied by the apply process at the target database. By default, the apply process directly applies the captured LCRs, but a user-defined PL/SQL stored procedure can also be used to intercept and process the LCRs. The apply process can also dequeue messages from the buffered queue and insert them into a persistent queue so that other applications can process them.

3.3.1. The default apply process

The default apply process is configured as a set of database background processes on the target database. By default, if a captured LCR represents a DML or DDL change made on the source database, the apply process applies it automatically. While applying changes to the target database, the apply process can detect any data conflicts.

In a heterogeneous environment, Oracle uses the appropriate Oracle Transparent Gateway service to send messages to remote non-Oracle databases.

3.3.2. Custom apply process

A custom apply process is functionally similar to the default apply process, but customization gives you complete control over how the apply process handles LCRs. The customization is done with PL/SQL stored procedures; in Oracle Streams, such user-created stored procedures are called apply handlers.

An apply handler may apply all LCRs as-is, apply only selected LCRs, or use different handlers for different kinds of DML changes. For example, separate handler procedures can be defined for inserts, deletes, and updates on the same table. That way, you could choose to ignore all DELETE statements against selected tables in the target database. A DELETE LCR can even be modified into an UPDATE, so that a delete against the source table becomes an update of, say, a status column in the target table. It is precisely this flexibility that lets you implement custom database replication to meet unique business and legal requirements. A sketch follows.
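As a sketch of this flexibility, the following hypothetical handler discards every DELETE arriving for one table: it is registered only for the DELETE operation, and by simply not executing the LCR it drops the change. The names (hr.employees, strmadmin.skip_deletes) are illustrative, not from the book.

```sql
CREATE OR REPLACE PROCEDURE strmadmin.skip_deletes (in_any IN ANYDATA) IS
  lcr SYS.LCR$_ROW_RECORD;  -- row LCR carried inside the ANYDATA wrapper
  rc  PLS_INTEGER;
BEGIN
  rc := in_any.GETOBJECT(lcr);  -- extract the row LCR (not used further)
  NULL;                         -- deliberately do nothing: the DELETE is discarded
  -- calling lcr.EXECUTE(TRUE) here would apply the change instead
END;
/

BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'hr.employees',
    object_type    => 'TABLE',
    operation_name => 'DELETE',                  -- only deletes reach this handler
    user_procedure => 'strmadmin.skip_deletes');
END;
/
```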

3.3.3. Conflict Detection and Resolution

By design, the apply process detects data conflicts while applying changes to the database. When the apply process attempts to update or delete a row, a conflict occurs if the identifying data in the LCR does not match the corresponding row in the target table. An LCR contains the values of the modified columns of the source row both before and after the change, and the expected before-change values of the target row. If the before-change values cannot be matched, Oracle considers a data conflict to have occurred. In that case, a conflict-resolution stored procedure can be invoked as needed. Oracle supplies a number of prebuilt conflict-resolution stored procedures; you can use these supplied procedures, or custom stored procedures, to resolve conflicts in a way that meets business needs.
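Oracle's prebuilt update-conflict handlers (OVERWRITE, DISCARD, MAXIMUM, MINIMUM) can be attached without writing code; a sketch with placeholder names, resolving salary conflicts by keeping the larger value:

```sql
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'salary';
  cols(2) := 'commission_pct';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.employees',
    method_name       => 'MAXIMUM',  -- keep the row version with the larger value
    resolution_column => 'salary',   -- column used to decide which version wins
    column_list       => cols);      -- columns covered by this handler
END;
/
```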

If a conflict cannot be resolved, or the conflict-resolution procedure raises an exception, the apply process moves the entire transaction into a persistent error queue. After the conflicting data is corrected, the transaction can be re-executed. If the conflict is resolved by other means and the transaction does not need to be re-executed, the erroneous transaction can also be deleted as required.

3.4. Queue

A queue can be viewed as a storage location for messages. Applications can put messages into a queue or extract messages from it. When an application needs to communicate with another application or process, it can place a message into a queue, and the other application can then extract the message from that queue.

Oracle Streams components exchange messages using queues. Message queuing provides trouble-free asynchronous communication between different processes or applications. Queues support enqueuing messages, dequeuing messages, and propagating messages to other queues or systems.

A message comprises two parts: control information and content, the payload.

The content of a message can be a primitive data type or a user-defined data type. Oracle Streams has a generic data type called ANYDATA, and all LCRs must be staged in ANYDATA queues.

The Oracle Streams Advanced Queuing feature supports messages of ANYDATA or other abstract types. The main advantage of ANYDATA is that it allows applications to enqueue messages of different types in the same queue. A queue is stored in the database using one or more tables, and can be created as sketched below.
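A minimal sketch of creating an ANYDATA queue for Streams with the supplied procedure (the queue table, queue, and user names are placeholders):

```sql
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.streams_queue_table',  -- backing table(s) for the queue
    queue_name  => 'strmadmin.streams_queue',        -- ANYDATA queue used by Streams
    queue_user  => 'strmadmin');                     -- user granted enqueue/dequeue
END;
/
```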

3.5. Oracle Streams tags

The redo information for every database change includes a tag field. By default the value of this tag is null, and a null tag takes up no space in the redo log. The data type of the tag field is RAW, with a size limit of 2000 bytes. When the capture process converts redo information into an LCR, the tag field becomes part of the LCR.

If a particular LCR can be traced to the unique database that generated it, Oracle Streams can use this feature to identify and track the originating database. In environments configured for bidirectional or multidirectional replication, such identification is mandatory; without it, messages would loop back to their original source database.

By default, the rules used by the capture, propagation, and apply processes check the value of the tag field: they process only LCRs whose tag field is null and discard LCRs whose tag value is not null. When the apply process applies changes to the target database, it sets the tag field to the hexadecimal value 00 ('0'). As a result, the redo generated by the transactions the apply process executes carries a non-null tag, so in a bidirectional replication setup the local capture process (if any) ignores the changes made by the apply process, which prevents changes from cycling.

You can also use this feature to temporarily suspend replication of certain operations. By changing the tag value for a session, you make the capture process ignore all LCRs generated from that session's redo records. The target database then needs the same operation performed separately to stay synchronized with the source database. To resume normal replication, reset the session's tag to its original value or simply end the session; a sketch follows.
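For example, a session can set a non-null tag before running maintenance DML so that the default capture rules (which accept only null-tagged redo) skip it, and then reset the tag. The tag value 1D is arbitrary.

```sql
BEGIN
  DBMS_STREAMS.SET_TAG(tag => HEXTORAW('1D'));  -- redo from this session is now tagged
END;
/

-- maintenance DML executed here is ignored by the default capture rules;
-- repeat it manually at the target to keep the databases in sync

BEGIN
  DBMS_STREAMS.SET_TAG(tag => NULL);            -- resume normal replication
END;
/
```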

3.6. Rules and Rule Sets

Oracle Streams uses rules to control message capture, processing, and propagation. A rule is a database object, just like a table or an index. When you configure a Streams component, you supply rule conditions. A rule condition is similar to the WHERE clause of a SQL statement.

A rule consists of:

Rule condition: one or more expressions combined together, returning a Boolean value.

Evaluation context: defines the external data that the rule condition may reference while it is being evaluated. The external data can be variables, table data, or both.

Action context: optional information that is interpreted by the client of the rules engine when the rule condition is evaluated. The capture, propagation, and apply processes are clients of the rules engine.

Related rules are grouped together into rule sets, and a rule set is associated with a Streams component.

Oracle Streams supports two types of rule sets:

Positive rule set: if a rule in the positive rule set evaluates to TRUE, Streams includes the LCR in processing.

Negative rule set: if a rule in the negative rule set evaluates to TRUE, Streams discards the LCR.

A Streams component may have both a positive and a negative rule set; in that case the negative rule set is evaluated first. If it evaluates to TRUE, the positive rule set is ignored because the message is discarded.

*** A synchronous capture process has only a positive rule set.

If you do not create rules and rule sets when configuring Streams replication, Oracle generates them automatically; these are called system-generated rules and rule sets. For most simple replication environments, the system-generated rules and rule sets are sufficient. You can create custom rules and rule sets, or modify the system-generated ones, to meet your needs; a negative-rule-set example is sketched below.
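The same rule-creation procedures place rules into the negative rule set when inclusion_rule is FALSE; a sketch that makes a capture process discard all changes to one hypothetical audit table while its other rules keep capturing the rest:

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'hr.audit_log',            -- changes to this table are discarded
    streams_type   => 'capture',
    streams_name   => 'capture_emp',
    queue_name     => 'strmadmin.streams_queue',
    include_dml    => TRUE,
    include_ddl    => TRUE,
    inclusion_rule => FALSE);                    -- FALSE = add to the negative rule set
END;
/
```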

3.7. Instantiation

When a table is replicated from a source database to a target database, the target database must contain a corresponding copy of the table. If the target database does not contain the table, you must create it, that is, instantiate the table from the source database. There are many ways to instantiate a table; depending on the particular environment and requirements, you can use CTAS (CREATE TABLE AS SELECT), Data Pump, import/export, transportable tablespaces, or Recovery Manager (RMAN) split-mirror copies.

First, the table must be prepared for instantiation at the source database. During replication configuration, Oracle automatically prepares tables for instantiation; you can also use the supplied stored procedures to prepare a table for instantiation. In doing so, Oracle records the database system change number (SCN) and populates the internal Streams data dictionary with the database global name, the table name and object number, and the column names and column numbers. Then, if the table does not exist in the target database, you must create it from the contents of the source database. Finally, the instantiation SCN of the target table is set to the SCN recorded at the source database. The Data Pump and import/export tools set the instantiation SCN at the target database when the data is imported. You can also use a supplied stored procedure to set the instantiation SCN of the target table.

The instantiation SCN controls which database changes (LCRs) the apply process applies to the target database and which LCRs it ignores. If the commit SCN of an LCR for the source table is greater than the instantiation SCN of the table at the target database, the apply process applies the change to the table; otherwise it ignores the LCR. When ignoring an LCR, Oracle does not report any warning or error message.
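A sketch of the supplied procedures mentioned above, with placeholder table and database-link names: prepare the table at the source, then (after creating and loading the table at the target) record the instantiation SCN there.

```sql
-- On the source database: prepare the table for instantiation
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name => 'hr.employees');
END;
/

-- On the target database: set the instantiation SCN, reading the
-- current SCN from the source over a database link (src.example.com)
DECLARE
  iscn NUMBER;
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@src.example.com;
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'hr.employees',
    source_database_name => 'src.example.com',
    instantiation_scn    => iscn);  -- LCRs with commit SCN <= iscn are ignored
END;
/
```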

3.8. LogMiner data dictionary

Oracle database processes use the database data dictionary to map object numbers, object version information, and column numbers to the corresponding table names, column names, and object data. The data dictionary is always kept synchronized with the current database configuration.

Although the capture process can read redo log files or archived log files, the current information in the database data dictionary might not match the information in those logs: the dictionary may have changed after the logs were generated but before the capture process scans them. For this reason, a Streams capture process needs a separate data dictionary.

The data dictionary used by the capture process is known as the LogMiner data dictionary. When the first capture process is created in a database, Oracle extracts the data dictionary information into the redo log. When that capture process starts for the first time, it reads this information from the redo log and creates the LogMiner data dictionary. The contents of the LogMiner data dictionary are stored in internal LogMiner tables.
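The dictionary extraction can also be triggered explicitly with DBMS_CAPTURE_ADM.BUILD, which writes the current data dictionary to the redo log and returns the first SCN usable by a new capture process; a sketch:

```sql
SET SERVEROUTPUT ON
DECLARE
  scn NUMBER;
BEGIN
  DBMS_CAPTURE_ADM.BUILD(first_scn => scn);  -- dictionary goes into the redo log
  DBMS_OUTPUT.PUT_LINE('first_scn for a new capture process: ' || scn);
END;
/
```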

A source database can have multiple LogMiner data dictionaries. Multiple capture processes can share a common LogMiner data dictionary, or each capture process can have its own.

Note: a synchronous capture process does not use the LogMiner data dictionary.

3.9. Streams data dictionary

Like the capture process, the propagation and apply processes need a separate data dictionary to track the object names and object numbers of the source database. When an object is prepared for instantiation at the source database, information about the object and related details are written to the redo log together. The capture process reads this information and adds it, along with database runtime information, to the Streams data dictionary. A local capture process populates the Streams data dictionary at the source database, while a downstream capture process populates the Streams data dictionary at the downstream database. The Streams data dictionary is updated whenever an object is prepared for instantiation.

To evaluate rules while processing captured LCRs, a propagation process needs the object-mapping information of the source database held in the Streams data dictionary. Oracle automatically adds a multi-versioned local Streams data dictionary to every database where a propagation process is configured.

Similarly, an apply process also needs the source database's object-mapping information from the Streams data dictionary in order to evaluate rules while processing captured LCRs. Oracle automatically adds a multi-versioned local Streams data dictionary to every database where an apply process is configured.

3.10. NOLOGGING and UNRECOVERABLE operations

Oracle Streams captures database changes from the redo log. When a DML operation is performed in NOLOGGING mode, no redo is logged. When SQL*Loader performs a direct-path load in UNRECOVERABLE mode, redo log generation is disabled. In these cases, because there are no redo records, the DML changes are not captured. For such changes to be replicated successfully, NOLOGGING and UNRECOVERABLE operations must be avoided.

To ensure that changes to a table are properly logged, you can set FORCE LOGGING at the tablespace or database level, as sketched below. Once it is set, Oracle automatically generates redo log information for all NOLOGGING and UNRECOVERABLE operations.
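The statements involved are standard Oracle DDL; shown here on the standard USERS tablespace as an example:

```sql
-- Force redo generation for the whole database ...
ALTER DATABASE FORCE LOGGING;

-- ... or only for one tablespace
ALTER TABLESPACE users FORCE LOGGING;

-- Verify the database-level setting
SELECT force_logging FROM v$database;
```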

If, for performance reasons, you must perform NOLOGGING or UNRECOVERABLE operations on the source database, you need to perform the same operations on the target database to keep the data synchronized. Otherwise, mismatched data on the target database will cause errors when the apply process applies subsequent DML operations.

3.11. Supplemental logging

The information needed to recover database changes is recorded in the redo log; it is used to recover the database after an instance failure or media error. Oracle Streams uses the same redo log to create the messages (LCRs) that are applied to the target database. The redo information contains the modified column values from the source database, but sometimes this information is not sufficient to correctly identify the row in the target table to which the same change should be applied.

Supplemental logging is the process of recording additional column data in the redo log. The capture process places this additional information into the LCR, and the apply process uses it to correctly identify the rows that need to be modified.

If data integrity must be maintained by an application outside the Oracle database, or if the table has no primary key or unique constraint, you must configure sufficient supplemental logging. Sometimes supplemental logging is needed for all the columns of a table.

**** Supplemental logging is always configured at the source database, regardless of whether a local or a downstream capture process is used.

Supplemental logging can be configured at the database level or at the table level. At the database level, you can configure the redo log to record additional row-identifying information, such as the before and after values of particular types of columns: primary key columns, unique index columns, foreign key columns, or all the columns of a table.

If a table has no primary key constraint or unique index, you must configure supplemental logging for a sufficient set of columns.

At the table level, supplemental logging is configured by creating named log groups that list the columns for each table. A log group can be conditional or unconditional. With conditional supplemental logging, the before images of all the columns in the group are recorded in the redo log only when one of the specified columns is updated. With unconditional supplemental logging, the before images of the columns are recorded whenever the row changes, regardless of whether the specified columns were updated; this is sometimes called "always logging". Unconditional supplemental logging must be used when the original data of all the columns is needed to identify rows. Typical statements are sketched below.

**** A synchronous capture process does not require supplemental logging.
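Typical supplemental-logging DDL at both levels (the table, column, and group names are placeholders):

```sql
-- Database level: log primary-key columns of every changed row
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

-- Table level, unconditional ("always") log group:
-- before images of these columns are logged on every change to the row
ALTER TABLE hr.employees
  ADD SUPPLEMENTAL LOG GROUP emp_always_grp (employee_id, last_name) ALWAYS;

-- Table level, conditional log group:
-- before images are logged only when one of the listed columns is updated
ALTER TABLE hr.employees
  ADD SUPPLEMENTAL LOG GROUP emp_cond_grp (salary, commission_pct);
```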

3.12. Logical change records revisited

After the capture process captures change information from the log files, it converts the format. The format-converted messages are called logical change records (LCRs) and represent changes to the database.

The capture process can create two kinds of LCRs:

Row LCR (sometimes called a DML LCR): represents a change to a single row. Note that for some data types, such as a single column of LONG, LONG RAW, CLOB, or XMLType, one change may produce multiple LCRs; and a single DML statement can affect multiple rows, causing multiple LCRs to be created.

DDL LCR: represents a change resulting from a DDL command.

An LCR contains enough information to apply the change to the target database. In addition, extra information can be included in an LCR for auditing and tracking. The LCR data format is used internally by Oracle Streams, but stored procedures can access and modify the information contained in an LCR.

3.13. Comparing table data

Oracle Database 11g includes stored procedures for comparing and converging the data in tables shared in a distributed or replication environment. The stored procedures in the DBMS_COMPARISON package can compare table data without interfering with other applications. They can compare the data of an entire table, a subset of the data, or a range of data, and the comparison can be run periodically or at any time. You can check data consistency at the row level or the table level, and any differences found can be viewed. Such discrepancies may be caused by incomplete transactions or unrecoverable errors. Once discrepancies are found, the supplied stored procedures can converge the differences and confirm that the mismatches in the tables have been resolved; see the sketch below.
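A sketch of the DBMS_COMPARISON workflow with placeholder names (table hr.employees, database link trgt.example.com): create a comparison, scan it, and converge any divergent rows in favor of the local site.

```sql
DECLARE
  consistent    BOOLEAN;
  scan_info     DBMS_COMPARISON.COMPARISON_TYPE;
  converge_info DBMS_COMPARISON.COMPARISON_TYPE;
BEGIN
  DBMS_COMPARISON.CREATE_COMPARISON(
    comparison_name => 'cmp_employees',
    schema_name     => 'hr',
    object_name     => 'employees',
    dblink_name     => 'trgt.example.com');  -- remote copy of the table

  consistent := DBMS_COMPARISON.COMPARE(
    comparison_name => 'cmp_employees',
    scan_info       => scan_info,
    perform_row_dif => TRUE);                -- record row-level differences

  IF NOT consistent THEN
    DBMS_COMPARISON.CONVERGE(                -- replay local rows to the remote site
      comparison_name  => 'cmp_employees',
      scan_id          => scan_info.scan_id,
      scan_info        => converge_info,
      converge_options => DBMS_COMPARISON.CMP_CONVERGE_LOCAL_WINS);
  END IF;
END;
/
```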


Summary: Oracle Streams is an information-sharing solution that provides a robust and flexible infrastructure for managing the flow of information between Oracle databases and non-Oracle databases. Working from redo information, Oracle Streams can seamlessly capture changes and easily replicate them to secondary databases across the network. The rule-driven customization of the capture, propagation, and apply processes lets you control the selection, routing, and processing of messages according to business needs. With the synchronous capture feature, you can replicate data even with Oracle Database 11g Standard Edition. A downstream capture configuration can offload log mining to another database server to reduce the load on the production system. Oracle Streams is a feature integrated into the Oracle database software.



Origin www.cnblogs.com/liang-ning/p/11896837.html