Oracle ANALYZE - collect statistics about a database

The ANALYZE command serves three purposes:

① collecting or deleting statistics for tables, indexes, and clusters
② validating the structure of tables, indexes, and clusters
③ identifying migrated and chained rows in tables and clusters

For collecting and deleting optimizer statistics, Oracle recommends the DBMS_STATS package instead of ANALYZE: DBMS_STATS can gather statistics in parallel and can gather global statistics for partitioned tables, and the cost-based optimizer (CBO) will eventually use only statistics gathered by DBMS_STATS. (Source: https://blog.csdn.net/iteye_14608/article/details/82447870)
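As a sketch of the recommended approach, a typical DBMS_STATS call might look like the following (the schema, table, and parameter values are illustrative, not recommendations):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',                      -- schema (illustrative)
    tabname          => 'EMP',                        -- table  (illustrative)
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- let Oracle choose the sample size
    degree           => 4,                            -- gather in parallel
    cascade          => TRUE);                        -- also gather index statistics
END;
/
```

cascade => TRUE covers the table's indexes in the same call, which is one of the conveniences ANALYZE lacks.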

1. Gather statistics for a table, all of its columns, and all of its indexes (compute mode):

analyze table table_name compute statistics;

2. Gather statistics for the table only (compute mode):

analyze table table_name compute statistics for table;

3. Gather statistics for an index (compute mode):

analyze index index_name compute statistics;

4. Gather statistics for specific columns of a table (compute mode):

analyze table table_name compute statistics for table for columns col1,col2;

5. Delete all statistics for a table, its columns, and its indexes:

analyze table table_name delete statistics;

 

Do not use ANALYZE with COMPUTE or ESTIMATE to gather optimizer statistics: the ANALYZE command is outdated for that purpose. Use DBMS_STATS instead, which can run in parallel, can gather global statistics for partitioned objects, and can fine-tune statistics gathering in other ways. The cost-based optimizer depends on these statistics.

The ANALYZE command remains useful for gathering information unrelated to the cost-based optimizer's statistics; in the following cases ANALYZE is the better choice over DBMS_STATS.

2. When to use the ANALYZE command

(1) Collecting or deleting statistics for an index or index partition, a table or table partition, or a cluster.

Table statistics gathered by ANALYZE appear in ALL_TABLES, DBA_TABLES, and USER_TABLES; here "table statistics" means the NUM_ROWS, BLOCKS, EMPTY_BLOCKS, AVG_SPACE, CHAIN_CNT, and AVG_ROW_LEN columns.

ANALYZE TABLE has the following limitations:

- it cannot collect statistics on data dictionary tables;
- it cannot collect statistics on external tables (use DBMS_STATS for those);
- by default it cannot collect statistics on temporary tables;
- it cannot compute or estimate statistics for REF columns, VARRAYs, nested tables, or LOB columns.

Index statistics gathered by ANALYZE appear in USER_INDEXES, DBA_INDEXES, and ALL_INDEXES; here "index statistics" means the BLEVEL, LEAF_BLOCKS, DISTINCT_KEYS, AVG_LEAF_BLOCKS_PER_KEY, AVG_DATA_BLOCKS_PER_KEY, and CLUSTERING_FACTOR columns.

Cluster statistics are stored in ALL_CLUSTERS, USER_CLUSTERS, and DBA_CLUSTERS.

(2) Validating the structure of an index or index partition, a table or table partition, an index-organized table, or a cluster.

(3) Identifying migrated and chained rows in a table or cluster.

3. When to use the DBMS_STATS package

DBMS_STATS is the standard way to gather optimizer statistics: it can gather statistics in parallel, gather global statistics for partitioned objects, and fine-tune statistics gathering in other ways. The cost-based optimizer depends on these statistics. Its main gathering procedures are:

GATHER_INDEX_STATS

GATHER_TABLE_STATS

GATHER_SCHEMA_STATS

GATHER_DATABASE_STATS
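Each of these procedures can be invoked as in the following sketch (schema and object names are illustrative):

```sql
BEGIN
  DBMS_STATS.GATHER_INDEX_STATS(ownname => 'SCOTT', indname => 'EMP_IDX');  -- one index
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP',
                                cascade => TRUE);        -- one table plus its indexes
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT');    -- every object in a schema
  DBMS_STATS.GATHER_DATABASE_STATS;                      -- the whole database
END;
/
```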

 

4. ANALYZE syntax

ANALYZE
{ { TABLE [ schema. ] table
| INDEX [ schema. ] index
} [ partition_extension_clause ]
| CLUSTER [ schema. ] cluster
}
{ validation_clauses
| LIST CHAINED ROWS [ into_clause ]
| DELETE [ SYSTEM ] STATISTICS
} ;
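The partition_extension_clause restricts the statement to a single partition; for example (the table and partition names are illustrative):

```sql
-- Validate the structure of just one partition
ANALYZE TABLE sales PARTITION (sales_q1) VALIDATE STRUCTURE;

-- List chained rows for just one partition
ANALYZE TABLE sales PARTITION (sales_q1) LIST CHAINED ROWS;
```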

If statistics are gathered with COMPUTE (or with a large ESTIMATE sample), a full scan of the table or index is performed, so on an object with a lot of data, or from which many rows have been deleted, the statement can take a long time.

To validate the structural integrity of a table, index, cluster, or materialized view, add the VALIDATE STRUCTURE option to the ANALYZE statement. If the object is valid, no error is returned; if there is a structural problem, an error is returned.

Specify VALIDATE REF UPDATE to verify the REF values in the table being analyzed: the database checks the rowid portion of each REF and, if necessary, corrects it. This clause can be used only when analyzing a table.

If the owner of the table does not have SELECT privilege on the object a REF points to, Oracle considers that REF invalid and sets it to NULL; subsequent queries through the REF then return nothing, even if the querying session does have the appropriate privileges on the referenced object.

ANALYZE TABLE emp VALIDATE STRUCTURE;
VALIDATE STRUCTURE verifies the structural integrity of the object; it does not collect statistics for the optimizer.

For a table, the database verifies the integrity of each data block and row. For an index-organized table, it also generates statistics for the primary key index.

For a cluster, the database automatically validates the structure of every table in the cluster.

For a partitioned table, the database additionally verifies that each row belongs to the correct partition; if a row is misplaced, its rowid is inserted into the INVALID_ROWS table.

For a temporary table, the database validates the structure of the table and its indexes in the current session.

For an index, the database verifies the integrity of each data block in the index and checks for corruption. This does not, by itself, confirm that every table row has a matching index entry and vice versa; use the CASCADE option to verify that.

For a normal index, Oracle also computes compression statistics and stores them, together with the other index validation statistics, in INDEX_STATS and INDEX_HISTOGRAM.

ANALYZE TABLE emp VALIDATE STRUCTURE CASCADE;
CASCADE also validates the indexes associated with the table or cluster. A full CASCADE performs a complete validation and therefore consumes more resources.

ANALYZE TABLE emp VALIDATE STRUCTURE CASCADE FAST;
FAST only detects whether corruption is present; it does not report the details of the specific corruption. A common approach is to use CASCADE FAST to determine whether any corruption exists, then CASCADE without FAST to obtain the details. If this validation reports errors on an enabled function-based index, you must rebuild the index.

 

ANALYZE TABLE emp VALIDATE STRUCTURE CASCADE ONLINE;
The ONLINE option validates the object while DML operations are running against it. To preserve concurrency, online validation is slower than offline validation, and it does not collect any statistics; statistics are collected only during offline validation. ONLINE cannot be used when analyzing a cluster.

OFFLINE is the default. Offline validation is faster, but it blocks concurrent INSERT, UPDATE, and DELETE statements against the object; SELECT statements are not affected.

The INTO clause is valid only for partitioned tables: the database inserts the rowids of rows that do not belong to their partition into the specified table. If the schema is omitted, the table is assumed to be in the current user's schema; if the INTO clause is omitted entirely, the table is assumed to be named INVALID_ROWS. The script that creates this table is $ORACLE_HOME/rdbms/admin/utlvalid.sql:


create table INVALID_ROWS (
  owner_name         varchar2(30),
  table_name         varchar2(30),
  partition_name     varchar2(30),
  subpartition_name  varchar2(30),
  head_rowid         rowid,
  analyze_timestamp  date
);

 

ANALYZE CLUSTER emp_dept LIST CHAINED ROWS INTO CHAINED_ROWS;
 

LIST CHAINED ROWS lets you identify migrated and chained rows in the table or cluster being analyzed. It cannot be used when analyzing an index.

The INTO clause names the table that receives the rowids of the migrated and chained rows. If the schema is omitted, the table is assumed to be in the current user's schema; if the INTO clause is omitted entirely, the table is assumed to be named CHAINED_ROWS. This table must be in your local database; the script that creates it is $ORACLE_HOME/rdbms/admin/utlchain.sql.
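For reference, the CHAINED_ROWS table created by utlchain.sql has roughly the following shape (column widths vary by version; the script itself is authoritative):

```sql
create table CHAINED_ROWS (
  owner_name         varchar2(30),
  table_name         varchar2(30),
  cluster_name       varchar2(30),
  partition_name     varchar2(30),
  subpartition_name  varchar2(30),
  head_rowid         rowid,
  analyze_timestamp  date
);
```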

 

ANALYZE TABLE orders DELETE STATISTICS;
DELETE STATISTICS deletes the statistics for the analyzed object from the data dictionary.

Deleting a table's statistics with this statement also automatically deletes the statistics for the indexes defined on the table.

To delete only the system-gathered statistics while keeping user-defined statistics, specify DELETE SYSTEM STATISTICS; if SYSTEM is omitted, user-defined statistics on the columns and indexes are deleted as well.
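For example (the table name is illustrative):

```sql
-- Delete only system-gathered statistics, keeping user-defined statistics
ANALYZE TABLE orders DELETE SYSTEM STATISTICS;

-- Delete all statistics, including user-defined column and index statistics
ANALYZE TABLE orders DELETE STATISTICS;
```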

5. Examples

SQL> create table t1 as select * from emp;

Table created.

SQL> select OWNER,TABLE_NAME,STATUS,NUM_ROWS,BLOCKS,EMPTY_BLOCKS,AVG_SPACE,CHAIN_CNT,AVG_ROW_LEN from dba_tables where table_name='T1';

OWNER      TABLE STATUS     NUM_ROWS     BLOCKS EMPTY_BLOCKS  AVG_SPACE  CHAIN_CNT AVG_ROW_LEN
---------- ----- -------- ---------- ---------- ------------ ---------- ---------- -----------
SCOTT      T1    VALID

 

SQL> ANALYZE TABLE T1 COMPUTE STATISTICS;

Table analyzed.

SQL> select OWNER,TABLE_NAME,STATUS,NUM_ROWS,BLOCKS,EMPTY_BLOCKS,AVG_SPACE,CHAIN_CNT,AVG_ROW_LEN from dba_tables where table_name='T1';

OWNER      TABLE STATUS     NUM_ROWS     BLOCKS EMPTY_BLOCKS  AVG_SPACE  CHAIN_CNT AVG_ROW_LEN
---------- ----- -------- ---------- ---------- ------------ ---------- ---------- -----------
SCOTT      T1    VALID            12          4            4       7533          0          41

SQL> ANALYZE TABLE T1 DELETE STATISTICS;

Table analyzed.

SQL> select OWNER,TABLE_NAME,STATUS,NUM_ROWS,BLOCKS,EMPTY_BLOCKS,AVG_SPACE,CHAIN_CNT,AVG_ROW_LEN from dba_tables where table_name='T1';

OWNER      TABLE STATUS     NUM_ROWS     BLOCKS EMPTY_BLOCKS  AVG_SPACE  CHAIN_CNT AVG_ROW_LEN
---------- ----- -------- ---------- ---------- ------------ ---------- ---------- -----------
SCOTT      T1    VALID

 

SQL> exec dbms_stats.gather_table_stats('SCOTT','T1');

PL/SQL procedure successfully completed.

SQL> select OWNER,TABLE_NAME,STATUS,NUM_ROWS,BLOCKS,EMPTY_BLOCKS,AVG_SPACE,CHAIN_CNT,AVG_ROW_LEN from dba_tables where table_name='T1';

OWNER      TABLE STATUS     NUM_ROWS     BLOCKS EMPTY_BLOCKS  AVG_SPACE  CHAIN_CNT AVG_ROW_LEN
---------- ----- -------- ---------- ---------- ------------ ---------- ---------- -----------
SCOTT      T1    VALID            12          4            0          0          0          39

Conclusion: ANALYZE fills in all of the statistics columns in dba_tables, while dbms_stats.gather_table_stats meaningfully populates only NUM_ROWS, BLOCKS, and AVG_ROW_LEN (EMPTY_BLOCKS, AVG_SPACE, and CHAIN_CNT are left at 0).

 

SQL> create unique index ii on t1(empno);

Index created.

 

SQL> select OWNER,INDEX_NAME,INDEX_TYPE,COMPRESSION,BLEVEL,STATUS,NUM_ROWS,DISTINCT_KEYS,LEAF_BLOCKS,DEGREE from dba_indexes where table_name='T1';

OWNER      INDEX_NAME                     INDEX_TYPE                  COMPRESS     BLEVEL STATUS     NUM_ROWS DISTINCT_KEYS LEAF_BLOCKS DEGREE
---------- ------------------------------ --------------------------- -------- ---------- -------- ---------- ------------- ----------- ----------
SCOTT      II                             NORMAL                      DISABLED          0 VALID         12               12           1 1

SQL> ANALYZE TABLE T1 DELETE STATISTICS;

Table analyzed.

SQL> select OWNER,TABLE_NAME,STATUS,NUM_ROWS,BLOCKS,EMPTY_BLOCKS,AVG_SPACE,CHAIN_CNT,AVG_ROW_LEN from dba_tables where table_name='T1';

OWNER      TABLE STATUS     NUM_ROWS     BLOCKS EMPTY_BLOCKS  AVG_SPACE  CHAIN_CNT AVG_ROW_LEN
---------- ----- -------- ---------- ---------- ------------ ---------- ---------- -----------
SCOTT      T1    VALID

 

Conclusion: deleting a table's statistics also deletes the statistics for the indexes defined on that table.

SQL> analyze table t1 validate structure cascade;

Table analyzed.

SQL> select OWNER,TABLE_NAME,STATUS,NUM_ROWS,BLOCKS,EMPTY_BLOCKS,AVG_SPACE,CHAIN_CNT,AVG_ROW_LEN from dba_tables where table_name='T1';

OWNER      TABLE STATUS     NUM_ROWS     BLOCKS EMPTY_BLOCKS  AVG_SPACE  CHAIN_CNT AVG_ROW_LEN
---------- ----- -------- ---------- ---------- ------------ ---------- ---------- -----------
SCOTT      T1    VALID

 

Conclusion: VALIDATE STRUCTURE analyzes the structure only; it does not gather statistics.

 

SQL> analyze table t1 list chained rows;
analyze table t1 list chained rows
*
ERROR at line 1:
ORA-01495: specified chain row table not found


SQL> @?/rdbms/admin/utlchain.sql

Table created.

SQL> Analyze table t1 list chained rows;

Table analyzed.

SQL> select * from CHAINED_ROWS;

no rows selected

 

Conclusion: if no table name is given in an INTO clause, the default table CHAINED_ROWS must already exist; create it manually by running utlchain.sql.

 

SQL> drop table t1 purge;

Table dropped.

SQL> create table t1 as select * from emp;

Table created.

SQL> create index ii on t1(empno);

Index created.

SQL> select * from V$OBJECT_USAGE;

no rows selected

SQL> alter index ii monitoring usage;

Index altered.

SQL> analyze table t1 compute statistics;

Table analyzed.

SQL> alter index ii monitoring usage;

Index altered.

SQL> select * from V$OBJECT_USAGE;

INDEX_NAME                     TABLE MON USE START_MONITORING    END_MONITORING
------------------------------ ----- --- --- ------------------- -------------------
II                             T1    YES NO  03/15/2013 15:31:13


SQL> exec dbms_stats.gather_table_stats('SCOTT','T1');

PL/SQL procedure successfully completed.

SQL> select * from V$OBJECT_USAGE;

INDEX_NAME                     TABLE MON USE START_MONITORING    END_MONITORING
------------------------------ ----- --- --- ------------------- -------------------
II                             T1    YES NO  03/15/2013 15:31:13

Conclusion: after both ANALYZE and dbms_stats.gather_table_stats, V$OBJECT_USAGE still shows USED = NO, so gathering statistics does not count as using the index. Remember that statistics gathered with dbms_stats are what the cost-based optimizer uses, although ANALYZE does record statistics in dba_tables; and only the ANALYZE command can validate structure (VALIDATE STRUCTURE) and detect migrated and chained rows (LIST CHAINED ROWS).

 

PostgreSQL ANALYZE - collect statistics about a database

 

SYNOPSIS

 

ANALYZE [ VERBOSE ] [ table [ (column [, ...] ) ] ]

DESCRIPTION

ANALYZE collects statistics about the contents of PostgreSQL tables and stores the results in the pg_statistic system catalog. The query planner then uses these statistics to help determine the most efficient execution plans for queries.


With no parameter, ANALYZE examines every table in the current database. With a parameter, ANALYZE examines only that table. It is also possible to give a list of column names, in which case statistics are collected only for those columns.
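For example (the table and column names are illustrative):

```sql
ANALYZE;                                  -- every table in the current database
ANALYZE VERBOSE orders;                   -- one table, with progress messages
ANALYZE orders (customer_id, order_date); -- only two columns of that table
```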

PARAMETERS

VERBOSE

 Enables display of progress messages.

table

 The name (possibly schema-qualified) of a specific table to analyze. Defaults to all tables in the current database.

column

 The name of a specific column to analyze. Defaults to all columns.

OUTPUTS


When VERBOSE is specified, ANALYZE emits progress messages to indicate which table is currently being processed, along with various statistics about the tables.

NOTES


 It is a good habit to run ANALYZE periodically, or just after making major changes to a table's contents. Accurate statistics help the planner choose the most appropriate query plan and thereby improve the speed of query processing. A common strategy is to run VACUUM (see VACUUM(7)) and ANALYZE once a day during a low-usage period.


 Unlike VACUUM FULL, ANALYZE requires only a read lock on the target table, so it can run in parallel with other activity on the table.


 The statistics collected for each column usually include a list of the most common values and a histogram showing the approximate distribution of the data in the column. Either or both may be omitted if ANALYZE deems them uninteresting (for example, there are no common values in a unique-key column) or if the column data type does not support the relevant operators. Chapter 21, "Routine Database Maintenance", has more information about the statistics.


 For large tables, ANALYZE takes a random sample of the table contents rather than examining every row. This ensures that even very large tables can be analyzed in a small amount of time. Note, however, that the statistics are then only approximate and will change slightly each time ANALYZE is run, even if the actual table contents did not change; this produces small variations in the planner's estimated costs shown by EXPLAIN. In rare cases this non-determinism can cause the query optimizer to choose a different query plan between runs of ANALYZE. To avoid this problem, raise the amount of statistics collected by ANALYZE, as described below.


 The extent of the analysis can be controlled by adjusting the default_statistics_target configuration parameter, or on a column-by-column basis by setting the per-column statistics target with ALTER TABLE ... ALTER COLUMN ... SET STATISTICS (see ALTER TABLE(7)). The target value sets the maximum number of entries in the most-common-value list and the maximum number of bins in the histogram. The default target value is 10, but it can be adjusted to trade off the accuracy of planner estimates against the time taken by ANALYZE and the amount of space occupied in pg_statistic. In particular, setting the statistics target to zero disables collection of statistics for that column. This may be useful for columns that are never used in the WHERE, GROUP BY, or ORDER BY clauses of queries, since the planner will have no use for statistics on such columns.
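A sketch of both approaches (table and column names are illustrative):

```sql
-- Raise the per-column target for a heavily filtered column, then re-analyze
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
ANALYZE orders (customer_id);

-- Disable statistics collection for a column never used in WHERE/GROUP BY/ORDER BY
ALTER TABLE orders ALTER COLUMN internal_note SET STATISTICS 0;
```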


 The largest statistics target among the columns being analyzed determines the number of table rows sampled to prepare the statistics. Increasing the target causes a proportional increase in the time and space needed for ANALYZE.


Source: https://blog.csdn.net/wll_1017/article/details/8672227


Origin: www.cnblogs.com/klb561/p/12064409.html