A customer runs an 11g RAC environment. Some time ago the memory on node db1 failed; after the faulty memory was replaced, the SGA and PGA on db1 needed to be adjusted back up. The operation was as follows:
SQL> alter system set sga_target=25G scope=spfile sid='*';
System altered.
SQL> alter system set sga_max_size=25G scope=spfile sid='*';
System altered.
SQL> alter system set pga_aggregate_target=15G scope=spfile sid='*';
System altered.
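In hindsight, the problem could have been caught before restarting by querying v$spparameter, which lists the SPFILE contents with one row per SID entry (this check was not run at the time; output depends on the environment):

```sql
-- v$spparameter shows the SPFILE entries: SID is '*' for wildcard
-- entries and the instance name (e.g. 'orcl1') for instance-specific
-- ones. An instance-specific entry overrides the '*' entry.
SELECT sid, name, display_value
FROM   v$spparameter
WHERE  name IN ('sga_target', 'sga_max_size', 'pga_aggregate_target')
ORDER  BY name, sid;
```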
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup;
ORACLE instance started.
Total System Global Area 8017100800 bytes
Fixed Size 2269072 bytes
Variable Size 1795162224 bytes
Database Buffers 6207569920 bytes
Redo Buffers 12099584 bytes
Database mounted.
Database opened.
SQL> show parameter sga
NAME TYPE VALUE
------------------------ ----------- --------
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 7680M
sga_target big integer 7680M
SQL> show parameter pga
NAME TYPE VALUE
-------------------- ----------- -------
pga_aggregate_target big integer 2560M
After the restart, the SGA and PGA were the same as before, which was strange. Checking the spfile setting on both hosts showed that they point to the same shared file in ASM, so that was not the problem:
SYS@orcl2> show parameter spfile
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string +DATA/orcl/spfileorcl.ora
SYS@orcl2>
SYS@orcl1> show parameter spfile
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string +DATA/orcl/spfileorcl.ora
SYS@orcl1>
Exporting a pfile from the spfile (create pfile='/tmp/orcl1.pfile' from spfile;) confirmed that the SGA and PGA entries for the two instances really are different:
orcl1:/home/oracle@db1> more /tmp/orcl1.pfile
orcl2.__db_cache_size=22213033984
orcl1.__db_cache_size=6476005376
orcl2.__java_pool_size=402653184
orcl1.__java_pool_size=100663296
orcl2.__large_pool_size=469762048
orcl1.__large_pool_size=117440512
orcl1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
orcl2.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
orcl2.__pga_aggregate_target=16106127360
orcl1.__pga_aggregate_target=2684354560
orcl2.__sga_target=26843545600
orcl1.__sga_target=8053063680
orcl2.__shared_io_pool_size=0
orcl1.__shared_io_pool_size=0
orcl2.__shared_pool_size=3489660928
orcl1.__shared_pool_size=1275068416
orcl2.__streams_pool_size=134217728
orcl1.__streams_pool_size=33554432
*.audit_file_dest='/u01/app/oracle/admin/orcl/adump'
*.audit_sys_operations=TRUE
*.audit_trail='DB','EXTENDED'
*.cluster_database=TRUE
*.compatible='11.2.0.4.0'
*.control_file_record_keep_time=14
*.control_files='+DATA/orcl/controlfile/current.260.1021284275','+FRA/orcl/controlfile/current.256.1021284275'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_create_online_log_dest_1='+DATA'
*.db_create_online_log_dest_2='+FRA'
*.db_domain=''
*.db_name='orcl'
*.deferred_segment_creation=FALSE
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'
orcl2.instance_number=2
orcl1.instance_number=1
*.java_jit_enabled=TRUE
*.log_archive_dest_1='LOCATION=+FRA'
*.log_archive_format='%t_%s_%r.dbf'
*.open_cursors=300
orcl1.pga_aggregate_target=2684354560
*.pga_aggregate_target=16106127360
orcl1.processes=1000
*.processes=5120
*.recyclebin='ON'
*.remote_listener='db-scan:1521'
*.remote_login_passwordfile='exclusive'
*.sessions=1105
orcl1.sga_max_size=8053063680
*.sga_max_size=26843545600
orcl1.sga_target=8053063680
*.sga_target=26843545600
orcl2.thread=2
orcl1.thread=1
*.undo_retention=14400
orcl2.undo_tablespace='UNDOTBS2'
orcl1.undo_tablespace='UNDOTBS1'
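The same mismatch can also be seen without exporting a pfile: gv$parameter reports the effective in-memory value per instance (inst_id), so both nodes can be compared from a single session (a cross-check, not a step the original troubleshooting used):

```sql
-- gv$parameter returns one row per instance; inst_id identifies the
-- node, so differing values across instances show up side by side.
SELECT inst_id, name, value
FROM   gv$parameter
WHERE  name IN ('sga_target', 'sga_max_size', 'pga_aggregate_target')
ORDER  BY name, inst_id;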
This is confusing at first. Thinking it through calmly: when db1's memory failed, its parameters had been reduced using sid='orcl1', which left instance-specific entries (orcl1.sga_target and so on) in the spfile. An instance-specific entry takes precedence over the '*' entry, so the later changes made with sid='*' only updated the wildcard entries and never took effect on instance 1. So try adjusting with sid='orcl1' instead:
SQL> alter system set sga_target=25G scope=spfile sid='orcl1';
System altered.
SQL> alter system set sga_max_size=25G scope=spfile sid='orcl1';
System altered.
SQL> alter system set pga_aggregate_target=15G scope=spfile sid='orcl1';
System altered.
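An equivalent fix (not the one used here) is to remove the instance-specific entries instead, so that the '*' values apply to both nodes again:

```sql
-- ALTER SYSTEM RESET deletes the orcl1.* entries from the SPFILE;
-- after the next restart, instance 1 falls back to the wildcard (*)
-- values, which were already set to 25G/15G above.
ALTER SYSTEM RESET sga_target SCOPE=SPFILE SID='orcl1';
ALTER SYSTEM RESET sga_max_size SCOPE=SPFILE SID='orcl1';
ALTER SYSTEM RESET pga_aggregate_target SCOPE=SPFILE SID='orcl1';
```

This keeps the spfile cleaner when both instances are meant to share one setting, whereas setting sid='orcl1' explicitly is the right choice when the nodes should intentionally differ.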
After this adjustment, another restart showed the SGA and PGA at the expected values.
This is the first time I have encountered this kind of problem; for the record, the April 2019 patch has been applied to this environment.