library cache lock on BUILD$ object

 

I was testing application performance on 12c, and one job was consistently running slower than it did on 11g. This post details the troubleshooting steps; I hope they are useful if you encounter a similar issue.

Problem

In a one-hour period, over 90% of the DB time was spent on ‘library cache lock’ waits. Upon investigation, one statement was suffering from excessive waits on this event. We recreated the problem and investigated further to understand the issue.
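As a rough way to quantify that share of DB time, one could count ASH samples over the period (a minimal sketch, not the exact query used here; querying v$active_session_history requires the Diagnostics Pack license):

  -- Approximate share of active session time spent on 'library cache lock'
  -- over the last hour; each ASH sample represents roughly one second.
  select round(100 * sum(case when event = 'library cache lock' then 1 else 0 end)
               / count(*), 1) as pct_of_db_time
  from   v$active_session_history
  where  sample_time > sysdate - 1/24;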

The following is the output of the wait_details_rac.sql script (which I will upload here); many PX query servers are waiting on the ‘library cache lock’ wait event.

   SID PID        EVENT                          USERNAME   OSUSER     STATE               WAIT_TIME   WIS P1_P2_P3_TEXT
------ ---------- ------------------------------ ---------- ---------- ------------------- --------- ----- ----------------------------------------
   276  12445     library cache lock             TST_USR    test       WAITING                     0     1 handle address 399021346904-lock address
                                                                                                            2163
   288  12449     library cache lock             TST_USR    test       WAITING                     0     4 handle address 399021346904-lock address
                                                                                                            2147
   303  12453     library cache lock             TST_USR    test       WAITING                     0     4 handle address 399021346904-lock address
                                                                                                            2136
   315  12457     library cache lock             TST_USR    test       WAITING                     0     4 handle address 399021346904-lock address
                                                                                                            2136
 ...Snipped..                                                                                                           2152
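
For reference, a similar picture can be pulled straight from gv$session (a minimal sketch; the column layout of wait_details_rac.sql is different):

  -- Sessions waiting on 'library cache lock', with the raw wait parameters
  -- (for this event, p1 is the handle address and p2 the lock address).
  select inst_id, sid, event, state, seconds_in_wait,
         p1text, p1, p2text, p2, p3text, p3
  from   gv$session
  where  event = 'library cache lock'
  order  by inst_id, sid;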


Lock address to object

For all the waiting sessions, the library cache handle address is the same, meaning the sessions are all waiting on a single library cache object. Querying x$kgllk confirms this, and we can also identify the object name.

  select ses.sid, ses.serial#, lck.kgllkcnt, lck.kgllkmod, lck.kgllkreq, lck.kglnaobj
  from   x$kgllk lck, v$session ses
  where  lck.kgllkhdl in
         (select kgllkhdl from x$kgllk where kgllkreq > 0)
  and    lck.kgllkuse = ses.saddr
 /
   SID    SERIAL#   KGLLKCNT   KGLLKMOD   KGLLKREQ KGLNAOBJ
------ ---------- ---------- ---------- ---------- ------------------------------------------------------------
   276       6518          0          0          2 5ce7869058
    84       6429          0          0          2 5ce7869058
   329      19358          0          0          2 5ce7869058
   342      37088          0          0          2 5ce7869058
   407      17734          0          0          2 5ce7869058
    74      26333          0          0          2 5ce7869058
...snipped..
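
The same x$kgllk view also shows whether any session actually holds a lock on that handle (kgllkmod > 0) rather than merely requesting one; a minimal sketch along the lines of the query above:

  -- Sessions holding (not just requesting) a lock on the contended handle.
  select ses.sid, ses.serial#, lck.kgllkmod, lck.kgllkreq, lck.kglnaobj
  from   x$kgllk lck, v$session ses
  where  lck.kgllkhdl in (select kgllkhdl from x$kgllk where kgllkreq > 0)
  and    lck.kgllkmod > 0
  and    lck.kgllkuse = ses.saddr;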

The object name 5ce7869058 is not a typical schema object name and appears to be an internal object. We need to identify the owner of the object; that might lead to some clues.

 Two ways to find the row in x$kglob:
1. Convert the handle address to hex and pad it (a single-query version of this is sketched after these two approaches).
SQL>  select to_char(399021346904,'xxxxxxxxxxxxxx') from dual;   
5ce7869058

SQL> select kglnaown, kglnaobj from x$kglob where kglhdpar = hextoraw('0000005CE7869058');
KGLNAOWN KGLNAOBJ
-------- -------------------------
$BUILD$  5ce7869058

2. Join to x$kgllk.
SQL> select kglnaown, kglnaobj from x$kglob where kglhdadr in 
        ( select kgllkhdl from x$kgllk where kgllkreq >0);
KGLNAOWN KGLNAOBJ
-------- -------------------------
$BUILD$  5ce7869058
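
For completeness, approach 1 can be collapsed into a single query that converts and zero-pads the handle address on the fly (a sketch; the padding assumes a 64-bit, 16-hex-digit handle address):

  -- Convert the decimal handle address to hex, zero-pad to 16 digits,
  -- and look up the owning library cache object in one step.
  select kglnaown, kglnaobj
  from   x$kglob
  where  kglhdpar = hextoraw(lpad(trim(to_char(399021346904, 'XXXXXXXXXXXXXXX')), 16, '0'));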

Interesting. This object is owned by the user $BUILD$, but there is no such database user, so this must be an internal object.
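
Out of curiosity, one could also list every library cache object owned by this internal user (a sketch using only the x$kglob columns already seen above):

  -- All library cache objects whose owner is the internal user $BUILD$.
  select kglhdadr, kglnaown, kglnaobj
  from   x$kglob
  where  kglnaown = '$BUILD$';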

At this point, I took a few pstack samples of the process to learn a bit more about the problem. Nothing too interesting: the function call kglLockWait indicates that we are waiting for a library cache lock.

# pstack 12485
#0  0x00007f42737d100a in semtimedop () from /lib64/libc.so.6
#1  0x000000000cda832d in sskgpwwait ()
#2  0x000000000cda5c98 in skgpwwait ()
#3  0x000000000c8f2bab in ksliwat ()
#4  0x000000000c8f1fb1 in kslwaitctx ()
#5  0x0000000000c00810 in ksfwaitctx ()
#6  0x0000000003b6765d in kglLockWait ()
#7  0x000000000cdcd441 in kgllkal ()
#8  0x000000000cdc53e4 in kglLock ()
#9  0x000000000cdbeb32 in kglget ()
#10 0x000000000cb7650c in kksfbc ()
#11 0x000000000cb166ec in opiexe ()
#12 0x00000000021ee529 in kpoal8 ()
#13 0x000000000cb1254d in opiodr ()
#14 0x0000000003117d7e in kpoodr ()
#15 0x000000000cd41b4a in upirtrc ()
#16 0x000000000cd2cde6 in kpurcsc ()
#17 0x000000000cd28014 in kpuexec ()
#18 0x000000000cd420e9 in OCIStmtExecute ()
#19 0x0000000002073a10 in kxfxsStmtExecute ()
#20 0x0000000002073387 in kxfxsExecute ()
#21 0x000000000206df42 in kxfxsp ()
#22 0x000000000206c489 in kxfxmai ()
#23 0x00000000020a91b3 in kxfprdp ()
#24 0x00000000031014a6 in opirip ()
#25 0x0000000001bb0a08 in opidrv ()
#26 0x00000000026c0f71 in sou2o ()
#27 0x0000000000bbd85e in opimai_real ()
#28 0x00000000026cb6bc in ssthrdmain ()
#29 0x0000000000bbd72c in main ()

Conclusion

With this information in hand, I searched for ‘library cache lock $BUILD$’ on support.oracle.com and, of course, found a bug matching my symptoms. Further investigation by BDE confirmed my analysis. I am not listing the bug numbers here, as a similar symptom may have a different root cause; you will need to work with Oracle Support.


Reposted from blog.csdn.net/wdnq1022/article/details/80065762