Latch, mutex and beyond

July 9, 2011

Mutex waits. Part 1. “Cursor: Pin S” in Oracle 10.2-11.1. Invisible and aggressive.

Filed under: Contention,History,Instrumentation,Mutex,Patches,Spinlock — andreynikolaev @ 1:18 pm

Oracle KGX mutexes appeared more than 7 years ago. However, mutex waits are still obscure. The Oracle Documentation provides only a brief description of the mutex wait events, without any information about wait durations and timeouts.
Look at the following timeline:

  • In December 2010, Oracle created another patch, Patch 10411618: Add different wait schemes for mutex waits, for 11.2. This patch completely changed the mutex waits and introduced exponential backoff. With this enhancement, mutex waits work like Oracle 8i latch waits. The patch tunables are “_mutex_spin_count”, “_mutex_wait_time” and “_mutex_wait_scheme”. This patch was later included in the 11.2.0.2.2 PSU.
  • In May 2011, the new MOS note Waitevent: “cursor: pin S” Reference Note [ID 1310764.1] shed some light on “cursor: pin S” waits. I will refer to this note throughout the post.
  • On 14 May 2011 Oracle released a new version of Patch 10411618 and recommended reapplying it. Note 10411618.8 now states that Patch 6904068 is superseded by Patch 10411618.
  • On 17 May 2011 Oracle urgently recommended applying Patch 12431716: Unexpected Change in Mutex Wait Behavior on top of the 11.2.0.2.2 PSU.

The above seems arcane. Why could a wait cause CPU consumption? What is the meaning of mutex waits? What did all these patches do? How should we interpret mutex waits? How should we tune them?

This series of posts will investigate these questions. The first post will explore the “cursor: pin S” waits in the now most commonly used Oracle 10g Release 2 and 11g Release 1.

I. The Experimental Environment

I would like to explore the “cursor: pin S” waits in a way similar to my investigation of latch waits. In previous posts, I acquired latches artificially using oradebug call kslgetl. Unfortunately, a KGX mutex is more complex to acquire than a latch. I encountered two major difficulties:

We do not know the mutex address.

The “Cursor Pin” mutex is a dynamic structure created within a library cache object. The only place where Oracle externalizes the mutex address is the x$mutex_sleep_history fixed table. We need to have contention for the mutex to obtain its address. Fortunately, I already have a corresponding testcase.
The testcase uses the ‘select 1 from dual where 1=2’ SQL statement with hash value 3222383532. All my experiments in this post will be done on the Oracle for SPARC 64-bit platform. At first I will experiment with a fresh 10.2.0.4:

sqlplus /nolog @library_cache_mutex_contention 2
...
SQL> SELECT MUTEX_ADDR, mutex_type, MAX(GETS), MAX(SLEEPS)
       FROM x$mutex_sleep_history
      WHERE MUTEX_IDENTIFIER = 3222383532 AND MUTEX_TYPE='Cursor Pin'
   GROUP BY mutex_addr, mutex_type;

MUTEX_ADDR       MUTEX_TYPE                        MAX(GETS) MAX(SLEEPS)
---------------- -------------------------------- ---------- -----------
00000003A9371338 Cursor Pin                          3988556        4253
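
By the way, the hash value 3222383532 used above can be found in v$sql while the cursor is still in the shared pool. An illustrative query:

SQL> select hash_value, sql_id from v$sql
  2   where sql_text like 'select 1 from dual where 1=2%';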

Now we have the mutex address. The other problem was:

There is no single Oracle routine to acquire a mutex.

According to note 1310764.1:
“A session waits for “cursor: pin S” when it wants a specific mutex in S (share) mode on a specific cursor and there is no concurrent X holder but it could not acquire that mutex immediately. … The reason for the wait is that in order to acquire the mutex in S mode (or release it) the session has to increment (or decrement) the mutex reference count and this requires an exclusive atomic update to the mutex structure itself… ”

In other words, a session waits on the “cursor: pin S” event when the mutex is in some transient state, neither X nor S. Systemstate dumps and DTrace also confirm that, unlike the latch, a busy mutex can be held in three different modes:

  • “Shared” (“SHRD” in systemstate dumps) mode allows the mutex to be held by several sessions simultaneously. It allows read (execute) access to the structure protected by the mutex. In shared mode the lower 4 bytes (2 bytes on 32-bit) of the mutex value contain the number of sessions holding the mutex. The upper bytes are all zero.
  • “eXclusive” (“EXCL”) mode is incompatible with all other modes. Only one session can hold the mutex in exclusive mode. It gives the session exclusive access to the structure protected by the mutex. In X mode the upper bytes of the mutex value are equal to the holder SID. The lower bytes are zero.
  • “Examine” (“SHRD_EXAM” in dumps) mode indicates that the mutex or the structure it protects is in transition. In E mode the upper bytes of the mutex value are equal to the holder SID. The lower bytes contain the number of sessions simultaneously holding the mutex in S mode. A session can acquire the mutex in E mode, or upgrade it to E mode, even if other sessions are holding the mutex in S mode. No other session can change the mutex at that time.

A session that wants to acquire the mutex in S mode waits on the “cursor: pin S” event when another session is holding the mutex in E mode.
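
For example, the value 0x100000001, which I will use below to simulate a busy mutex, decodes under this layout into holder SID 1 in the upper 4 bytes and a reference count of 1 in the lower 4 bytes. A quick illustrative check of the arithmetic (this is plain SQL arithmetic, not an Oracle API):

SQL> select trunc(to_number('100000001','XXXXXXXXXXXX')/4294967296) holder_sid,
  2         mod(to_number('100000001','XXXXXXXXXXXX'),4294967296) ref_count
  3    from dual;

HOLDER_SID  REF_COUNT
---------- ----------
         1          1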

An Oracle session changes the mutex state through another dynamic structure called the Atomic Operation Log (AOL). The AOL is an analog of the latch recovery area. To operate on a mutex, a session first creates an AOL, fills it with mutex data, and calls one of the mutex acquisition routines. I will discuss mutex modes and routines in one of my next posts.

It is clear that artificial mutex acquisition is too complex to do using an oradebug call. In the future, I will use DTrace “destructive actions” for this purpose.

However, in this post, I will only simulate a busy mutex in a platform-independent way using oradebug poke. This sqlplus command allows me to change any location in the Oracle SGA. To simulate a mutex held in E mode I will change the 8-byte mutex value to 0x100000001 (the upper 4 bytes are 1 and the lower 4 bytes are 1):


SQL>oradebug setmypid
SQL>oradebug poke 0x3A9371338 8 0x100000001
BEFORE: [3A9371338, 3A9371340) = 00000000 00000000
AFTER:  [3A9371338, 3A9371340) = 00000001 00000001

This looks exactly as if the session with SID 1 were holding the mutex in E mode. At this moment any concurrent ‘select 1 from dual where 1=2’ hangs on the “cursor: pin S” wait:

SQL> select 1 from dual where 1=2;
...

Oracle indeed thinks that the waiter is “blocked” by session 1:


$sqlplus "/ as sysdba" @mutex_waits
  SID SQL_ID        EVENT                      IDN  BLKS RFC LOC MUTEX_OBJECT
----- ------------- ------------------- ---------- ----- --- --- ---------------
  159 47z9gc7013axc cursor: pin S       3222383532     1   1   5 select 1 from dual

To “release” the mutex, I will change its value back to zero. I wrote the script mutex_e_mode.sql that “holds” the mutex in E mode for 50 seconds.
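
A minimal sketch of such a script (a hypothetical reconstruction, not the actual mutex_e_mode.sql; it simply pokes the mutex value found above and then restores it, with dbms_lock.sleep providing the delay):

oradebug setmypid
oradebug poke 0x3A9371338 8 0x100000001
-- keep the simulated E mode holder for 50 seconds
exec dbms_lock.sleep(50)
-- "release" the mutex
oradebug poke 0x3A9371338 8 0x0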

I combined all the previous SQL commands into a single script, cursor_pin_s_waits.sql. I will use it here and in the next posts to investigate “cursor: pin S” waits in different Oracle versions.

II. “Cursor: Pin S” wait event in Oracle 10.2 – 11.1

When the ‘select 1 from dual where 1=2’ hangs, we have caged a beast that had never been seen before. This is a session statically waiting on the “cursor: pin S” event for 49 (50-1) seconds. Now we can explore this phenomenon. Surprisingly, everything in this description turned out to be incorrect:

  • Did we observe a static wait? Obviously not. One of my CPUs goes crazy. Its utilization jumps to 100%.
    mpstat 1
    ...
    CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
    ...
     16    0   0    0     1    0    0    0    0    0    0     0    0   0   0 100
     17    0   0   55   180  138  134   48   16    0    0 31969   89  11   0   0
     18    0   0    0     2    0    2    0    1    3    0     1    0   0   0 100
    ...

    This is a busy wait. The waiting process consumes one of the CPUs completely. Frequent syscalls cause high sys (kernel mode) CPU time consumption.

  • Was this a single wait? No. Look at the corresponding AWR report:
    Top 5 Timed Foreground Events

    Event                             Waits  Time(s)  Avg wait (ms)  % Total Call Time  Wait Class
    CPU time                                      59                              93.5
    control file parallel write         384       11             28               17.0  System I/O
    db file parallel write              436        6             13                9.0  System I/O
    log file parallel write             161        4             24                6.2  System I/O
    cursor: pin S                 2,865,013        3              0                4.7  Other

    It shows 2.8 million mutex waits. The 10046 trace contains many waits of microsecond duration:

    ...
    WAIT #2: nam='cursor: pin S' ela= 1 idn=3222383532 value=4294967297 where|sleeps=21474885649 obj#=57980 tim=2121493718879
    WAIT #2: nam='cursor: pin S' ela= 4 idn=3222383532 value=4294967297 where|sleeps=21474885649 obj#=57980 tim=2121493718951
    WAIT #2: nam='cursor: pin S' ela= 3 idn=3222383532 value=4294967297 where|sleeps=21474885649 obj#=57980 tim=2121493719001
    ...
  • As observers, we know that the session really waited for 49 seconds. However, the Oracle Wait Interface registered only 3 seconds of mutex wait time out of the actual 49-second wait. 94% of the “wait” time was accounted as CPU. It was not considered a “wait” in pre-11.2 Oracle. Background AWR housekeeping almost hid the mutex problem behind the standard waits (a query sketch to cross-check this accounting follows this list).
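
A rough way to cross-check this accounting for the hanging session is to compare its DB CPU time with its registered mutex wait time. An illustrative query against the standard views (&waiter_sid is a substitution placeholder for the waiter's SID, 159 in the example above):

SQL> select s.value/1000000 db_cpu_seconds,
  2         e.time_waited_micro/1000000 pin_s_wait_seconds
  3    from v$sess_time_model s, v$session_event e
  4   where s.sid = e.sid
  5     and s.stat_name = 'DB CPU'
  6     and e.event = 'cursor: pin S'
  7     and s.sid = &waiter_sid;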

Only one row of x$mutex_sleep_history continuously increased during the wait.

SQL> select to_char(SLEEP_TIMESTAMP,'hh:mi:ss.ff') SLEEP_TIMESTAMP,MUTEX_ADDR,MUTEX_IDENTIFIER,MUTEX_TYPE,GETS,SLEEPS,
  2               REQUESTING_SESSION,BLOCKING_SESSION,LOCATION,MUTEX_VALUE from x$mutex_sleep_history where mutex_identifier=3222383532;
SLEEP_TIMESTAMP  MUTEX_ADDR MUTEX_IDENTIFIER MUTEX_TYPE    GETS  SLEEPS REQUESTING_SESSION BLOCKING_SESSION LOCATION             MUTEX_VALUE
12:05:18.326511   3A9371338       3222383532 Cursor Pin 4000000 2871014                159                1 kkslce [KKSCHLPIN2]    100000001

This confirmed my previous conclusion that x$mutex_sleep_history is an array hashed by MUTEX_ADDR and BLOCKING_SESSION.
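
If the fixed table really is keyed by these two columns, a simple duplicate check (an illustrative query) should return no rows:

SQL> select mutex_addr, blocking_session, count(*)
  2    from x$mutex_sleep_history
  3   group by mutex_addr, blocking_session
  4  having count(*) > 1;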

Obviously, the next step is to trace the operating system calls made by the process waiting for “cursor: pin S” using the Solaris truss utility. The results are interesting:

truss -p 15750
...
/1:	yield()						= 0
/1:	yield()						= 0
/1:	yield()						= 0
/1:	yield()						= 0
/1:	yield()						= 0
...

Here we observe only yield() syscalls in the truss output for the waiting shadow process. There are no real “wait” syscalls like pollsys(), semop() or nanosleep(). The yield man page states:
“The yield() function causes the current lightweight process to yield its execution in favor of another lightweight process with the same or greater priority.”

Craig Shallahamer first described this behaviour in the post “How mutexes sleep”. MOS note 1310764.1 confirms this:
“Typically each wait is just a yield of the CPU in releases up to (and including) 11.2.0.2. A session will wait over and over on the same wait event until it acquires the mutex in S mode. If the wait is just a yield of the CPU then this can result in quite a high amount CPU, especially if there is very heavy concurrency for a particular cursor…”

Let me discuss the meaning of such “waits.” After an unsuccessful spin, the Oracle session registers the beginning of the wait and yields the CPU. If the process is descheduled from the CPU, the session has to wait until the process runs again. At that moment the session registers the end of the wait and computes the time waited.

The session waiting for “cursor: pin S” just allows other processes to run. If the system has free CPU power, the session will not be descheduled, and Oracle thinks it is not waiting at all. My experiments demonstrated that Oracle versions 10.2 to 11.1 used this wait pattern.

Therefore, in Oracle 10.2-11.1 the “cursor: pin S” wait time is a pure wait for CPU.
If you observe long “cursor: pin S” waits in these Oracle versions, this means only contention for the CPU, not for the mutex.

Conversely, as long as you have free CPU resources, 10.2-11.1 mutex contention is almost invisible to the Oracle Wait Interface and AWR/ADDM.

The diagnostic signature of 10.2-11.1 “cursor: pin S” mutex contention is a large number of mutex waits, not the wait time. Your system may suffer from huge mutex contention without any sign of it in the “Top 5 Events.”
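
One way to look for this signature is to compare the wait counts with the accumulated wait times and sleeps, for example with illustrative queries against the standard views v$system_event and v$mutex_sleep:

SQL> select event, total_waits, time_waited_micro/1000000 time_waited_s
  2    from v$system_event
  3   where event like 'cursor:%'
  4   order by total_waits desc;

SQL> select mutex_type, location, sleeps
  2    from v$mutex_sleep
  3   order by sleeps desc;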

Therefore, unlike the latch, the 10.2-11.1 mutex is a “classic” spinlock without sleeps. If the mutex holding time is always small, this algorithm minimizes the elapsed time to acquire the mutex. A spinning session acquires the mutex immediately after its release. All the waiting time is “CPU time”.

Such spinlocks are vulnerable to variability of the holding time. If some sessions hold the “Cursor Pin” mutex in E mode for long, pure spinning wastes CPU. Spinning sessions can aggressively consume all the CPUs and affect performance through priority inversion and CPU starvation. Tanel Poder described this phenomenon in detail.

These “waits” are not eco-friendly. We registered abnormally high electric power consumption and overheating of the server room during my experiments.

On the other hand, when sessions do not hold mutexes for a long time and you have spare CPU power, this algorithm achieves the best possible performance.

III. “Cursor: Pin S” wait in 10.2.0.4 with Patch 6904068

If you continuously observe long “cursor: pin S” waits, then your system does not have enough spare CPU for busy waiting. For such a case, Oracle provides the possibility to convert the “busy” mutex wait into a “standard” wait. This is the legendary Patch 6904068: High CPU usage when there are “cursor: pin S” waits.

On Solaris SPARC 10.2.0.4 this patch is tuned by “_second_spare_parameter”. On other platforms and versions, the patch uses “_first_spare_parameter”.

I installed the patch, set “_second_spare_parameter”=1 and reran the testcase. The high CPU usage magically disappeared:

Top 5 Timed Foreground Events


Event                          Waits  Time(s)  Avg wait (ms)  % Total Call Time  Wait Class
cursor: pin S                    977       48             10               84.3  Other
CPU time                                    9                              15.7
db file sequential read          386        1              2                1.3  User I/O
log file parallel write           24        0             21                 .9  System I/O
control file parallel write       19        0             16                 .5  System I/O

With this patch and the parameter set to 1, “cursor: pin S” became an ordinary wait event with a 10 ms timeout. The Oracle Wait Interface now registers repeated waits and low CPU consumption. Look at the truss of the waiting process:

...
/1:     semtimedop(99, 0xFFFFFFFF7FFF7F64, 1, 0xFFFFFFFF7FFF7F50) Err#11 EAGAIN
/1:             semnum=22    semop=-1    semflg=0
/1:             timeout: 0.010000000 sec
/1:     semtimedop(99, 0xFFFFFFFF7FFF7F64, 1, 0xFFFFFFFF7FFF7F50) Err#11 EAGAIN
/1:             semnum=22    semop=-1    semflg=0
/1:             timeout: 0.050000000 sec
/1:     semtimedop(99, 0xFFFFFFFF7FFF7F64, 1, 0xFFFFFFFF7FFF7F50) Err#11 EAGAIN
/1:             semnum=22    semop=-1    semflg=0
/1:             timeout: 0.010000000 sec

The parameter value defines the wait timeout in centiseconds.

The semtimedop() is a “normal” OS wait. The waiting process can be awakened by a post. However, I have not found any case where Oracle uses the post/wait mechanism for mutexes.

The patch significantly decreases the CPU consumed by spinning. Its drawback is a larger elapsed time to obtain the mutex. Ten milliseconds is a long wait on the Oracle timescale. Usually the average mutex holding time is around 1 us, so most of the time the sessions are waiting for nothing.

MOS note Database Hang due to Mutex Contention in 11g [ID 1292249.1] recommends “_first_spare_parameter”=10. This results in a huge 0.1 sec timeout between spins. However, for all workloads I have seen, the value 1 performs better. I will discuss the effectiveness of the timeout choice in a separate post. Look also at these results.

The “_first_spare_parameter” is dynamic. If you omit it or set it to 0, the instance behaves exactly as without the patch. Therefore, I recommend installing patch 6904068 on OLTP systems for the possibility of reacting dynamically to “cursor: pin S” contention without downtime.
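
For example, with the patch installed one could switch the behaviour at runtime. An illustrative sketch only (underscore parameters should be changed solely under Oracle Support guidance, and on Solaris SPARC 10.2.0.4 the parameter is “_second_spare_parameter” instead):

SQL> alter system set "_first_spare_parameter"=1;
...
SQL> alter system set "_first_spare_parameter"=0;

The first command enables the timed 1 cs waits; the second reverts to the default yield-based busy wait.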

Summary

I discussed how the “cursor: pin S” mutex wait works in the popular Oracle versions 10.2 to 11.1.
This is a CPU-consuming busy wait, almost invisible to the Oracle Wait Interface and standard diagnostic tools. Patch 6904068 converts it into a “normal” wait.

Stay tuned to this blog to see how radically the mutex waits changed in Oracle 11g Release 2.

21 Comments »

  1. Latches are visible to the Oracle wait interface only when sleeping. Since mutexes never sleep in 10.2 – 11.1 (before applying 6904068), they are visible only while waiting in the run queue for a processor. Excellent 🙂 And the hack with oradebug poke is really amazing. How did you know the address and the right value to acquire the mutex (gdb breakpoints, or…)?

    We are hitting the “cursor: pin S” problem hard on one HP-UX 32-core machine. Because HP-UX does not perform well with such a high number of cores (because of intra-CELL “communication”), the problem is getting even worse (the system is loaded at 85%, we are unable to use the rest of the cores, and the % of waits on “cursor: pin S” is between 10 – 20%).

    I’m really upset, because 6904068 is not part of the 10.2.0.5.3 PSU and, believe it or not, is also not part of 10.2.0.5.4, which was released a few days ago.
    Unfortunately we have a patch installation conflict and have been waiting more than one month for Oracle Development to deliver a merge patch 😦

    Comment by Pavol Babel — July 22, 2011 @ 5:30 pm | Reply

    • Hello!

      >How did you know the address and the right value to acquire the mutex (gdb breakpoints, or…)?

      Actually I used the Solaris DTrace. I will demonstrate this in the next posts on this topic.

      > We are hitting the “cursor: pin S” problem hard on one HP-UX 32-core machine.
      > the system is loaded at 85%, we are unable to use the rest of the cores and the % of waits on “cursor: pin S” is between 10 – 20%

      This is interesting. Since the average CPU consumption is less than 100%, maybe you observe the “priority inversion” described by Tanel Poder.
      HPUX_SCHED_NOAGE=178 since 10.2.0.4. Does the oracle user have the RTSCHED and RTPRIO privileges?

      The other reason may be a very “bursty” workload.
      Are the “cursor: pin S” waits caused by some specific SQL or library cache object?
      Is there a possibility to tune the most CPU-consuming SQL?

      >waiting more than one month for Oracle Development to deliver a merge patch

      Nowadays Development is slow for 10.2.0.5 on HP-UX. Did you consider “_kks_use_mutex_pin”=false?

      Comment by andreynikolaev — July 23, 2011 @ 8:31 pm | Reply

      • Hello,

        thanks for the answer and the advice

        – good old dtrace. Unfortunately, we do not have any Oracle installation on Solaris (and well, SPARCs are very, very slow for a database server)
        – yes, I know the HP-UX scheduling problem and we have the RTSCHED and RTPRIO privileges for the oracle user.
        – yes, there are 2 heavily used library cache objects (2 SQLs called more than 30 million times during a night job)
        – we are just the system integrator. The code could be changed (at least to create more versions of the cursor in PL/SQL), but the vendor is complaining of an Oracle bug and the customer does not want to pay for a vendor change request to change the code 🙂 unfortunately, there is no dbms_shared_pool.markhot backport to 10g.

        To be honest, I do not like HP-UX very much. Our customer had one application which ran faster on 16 Itanium cores than on 24, because of slow intra-cell CPU communication (when cores are placed on different cards, the bus communication is very slow). The new Itanium is slightly better, but not good enough.
        The 85% utilization could be misleading because of hyper-threading, too (32 cores, 64 threads).

        – I have to try it one more time, but it seems _kks_use_mutex_pin is ignored since 10.2.0.4 on HP-UX. Probably the best way to get rid of the cursor pinning problem would be cursor_space_for_time=true, but it has been deprecated in 10.2.0.5 and 11.1.0.7.

        Comment by Pavol Babel — July 24, 2011 @ 12:16 am

      • You are right. Just checked that both cursor_space_for_time and _kks_use_mutex_pin are ignored in 10.2.0.5.
        I did not know this before.
        Can you post the top locations from v$mutex_sleep?

        Comment by andreynikolaev — July 24, 2011 @ 8:16 pm

      • “The new Itanium is” … should be “The new Super Dome 2 with new Itanium is slightly better…”

        Comment by Pavol Babel — July 24, 2011 @ 12:18 am

  2. […] Oracle Support confirmed that, yes, four billion mutex waits in a span of an hour appeared to be the cause of our pain.  Luckily for us, this is not undiscovered country.  A quick Google search later revealed that 11g is notorious for this particular type of pain.  The fix, of course, was a patch.  Specifically, 10411618: Add different wait schemes for mutex waits. […]

    Pingback by “Wait… is that billions?” | Gray Matter Boundaries — August 20, 2011 @ 2:44 am | Reply

  3. Please explain “_first_spare_parameter” in a bit more detail.

    Best Regards,
    Prashanth Subbakrishna
    Oracle DBA
    Bangalore

    Comment by Prashanth Subbakrishna — September 12, 2011 @ 6:09 pm | Reply

  4. Great post, indeed. Thanks a lot for sharing your research with the Oracle community.

    I’m looking forward to the next post regarding mutexes in Oracle 11g Release 2.

    Best Regards,

    Comment by Thiago Maciel — September 15, 2011 @ 4:39 am | Reply

  5. Oracle has finally released a merge patch for HP-UX: Patch 12740366: MERGE REQUEST ON TOP OF 10.2.0.5.0 FOR BUGS 10294761 6904068. The bug 10294761 was filed by our company, too (and it is a very interesting bug), so I suggest installing this merge patch instead of 6904068. We are happy we are prepared for further aggressive mutex attacks, at least I hope so.

    Comment by Pavol Babel — October 7, 2011 @ 11:51 am | Reply

  6. Andrey,

    I have one more question. I am able to simulate a mutex held in SHRD_EXAM mode. But only once 🙂

    I perform
    oradebug poke 0xC000000028B14AB0 8 0x100000001

    and then the second session hangs (because of the “cursor: pin S” wait event) when I run the SQL protected by the mutex.

    But when I perform “oradebug poke 0xC000000028B14AB0 8 0x0” and then run “oradebug poke 0xC000000028B14AB0 8 0x100000001” again, it no longer blocks the second session (I can run the SQL as many times as I need).

    Comment by Pavol Babel — October 7, 2011 @ 2:38 pm | Reply

    • It seems I have found the answer to my own question. My testcase to simulate “cursor: pin S” did not use EXECUTE IMMEDIATE in PL/SQL. Then I used oradebug poke and ran the SQL directly from sqlplus. The first time the sqlplus session hung, because it was trying to create a new child cursor (because of TOP_LEVEL_RPI_CURSOR in v$sql_shared_cursor, I keep forgetting this fact). However, after setting the mutex to 0x0 and then to 0x100000001 the sqlplus session didn’t have to create a child cursor, just execute it, and that’s why it was not blocked by the mutex again.

      Now it makes sense, because each child cursor is protected by a different mutex.

      Comment by Pavol Babel — October 8, 2011 @ 12:34 am | Reply

  7. […] previous post demonstrated that before version […]

    Pingback by Mutex waits. Part II. “Cursor: Pin S” in Oracle 11.2 _mutex_wait_scheme=0. Steps out of shadow. « Latch, mutex and beyond — October 25, 2011 @ 4:23 pm | Reply

  8. […] pin S wait on X, cursor: pin S are both mutext waits, which can burn CPU at a very high rate (see here for details). This gives us a good idea of how the CPU is wasted (and if one looks in ASH, one can […]

    Pingback by AWR reports: interpreting CPU usage « Oracle Diagnostician — April 6, 2012 @ 5:06 am | Reply

    • Interesting post!
      More precisely, the “cursor: pin S” in 10.2-11.1 was a wait for CPU, and was a consequence of someone burning CPU. This may be invisible mutex contention. Or maybe not.

      On the other hand, in 10.2 the “cursor: pin S wait on X” was an ordinary wait with a 10 ms timeout. I discussed this event in my Hotsos 2012 presentation and will blog about it.

      Comment by andreynikolaev — April 6, 2012 @ 5:39 am | Reply

  9. […] time to discuss how contemporary Oracle waits for mutexes. My previous posts described evolution of “invisible and aggressive” 10.2-11.1 mutex waits into fully accounted and less aggressive 11gR2 mutexes. Surprisingly Oracle […]

    Pingback by Mutex waits. Part III. Contemporary Oracle wait schemes diversity. « Latch, mutex and beyond — July 30, 2012 @ 12:22 pm | Reply

  10. Hi,
    I tried to reproduce the test case on ‘cursor: pin S’ using your example but somehow I could not manage it. Not sure what I did wrong.

    From session 1:
    ==============
    SQL> select hash_value from v$sql where sql_text like 'select 1 from dual where 1=2%';

    HASH_VALUE
    ----------
    3222383532

    SQL> SELECT MUTEX_ADDR,mutex_type,MAX (GETS),MAX (SLEEPS) FROM x$mutex_sleep_history
      2  WHERE MUTEX_IDENTIFIER = &hash AND MUTEX_TYPE='Cursor Pin'
      3  GROUP BY mutex_addr, mutex_type;
    Enter value for hash: 3222383532
    old 2: WHERE MUTEX_IDENTIFIER = &hash AND MUTEX_TYPE='Cursor Pin'
    new 2: WHERE MUTEX_IDENTIFIER = 3222383532 AND MUTEX_TYPE='Cursor Pin'

    MUTEX_ADDR       MUTEX_TYPE                        MAX(GETS) MAX(SLEEPS)
    ---------------- -------------------------------- ---------- -----------
    00000004089363E8 Cursor Pin                         99632158       14568

    SQL> select vsize(saddr) word_size,
      2         decode(vsize(saddr),4,'0x10001','0x100000001') mutex_value
      3    from v$session where rownum=1;

     WORD_SIZE MUTEX_VALUE
    ---------- -----------
             8 0x100000001

    SQL> oradebug setmypid
    Statement processed.
    SQL> oradebug poke 0x00000004089363E8 8 0x100000001
    BEFORE: [4089363E8, 4089363F0) = 00000000 00000000
    AFTER: [4089363E8, 4089363F0) = 00000001 00000001

    SQL> select to_char(sysdate, 'dd/mm/yyyy hh24:mi:ss') from dual;

    TO_CHAR(SYSDATE,'DD
    -------------------
    16/08/2012 09:04:35

    From Session 2:
    ===============
    SQL> select to_char(sysdate, 'dd/mm/yyyy hh24:mi:ss') from dual;

    TO_CHAR(SYSDATE,'DD
    -------------------
    16/08/2012 09:05:04

    SQL> select 1 from dual where 1=2;

    no rows selected

    SQL> select 1 from dual where 1=2;

    no rows selected

    SQL> select 1 from dual where 1=2;

    no rows selected

    SQL>

    The query in session 2 is not blocked.

    Regards

    Comment by Thiinh Tran — August 15, 2012 @ 11:07 pm | Reply

    • Hello!
      There may be two reasons why your query was not blocked.
      Either:
      1. The original cursor for “select 1 from dual where 1=2” was aged out of the shared pool.
      You should run the blocking script immediately after cursor_pin_s_contention.sql

      or:
      2. You had several child cursors for “select 1 from dual where 1=2” and blocked the wrong one.
      This is why my demo scripts include “restart force” of the instance.

      Actually, a direct “select 1 from dual where 1=2” from sqlplus is blocked by the mutex only during the first execution/parse.
      This execution creates another child cursor with another mutex. The child cursors differ by the TOP_LEVEL_RPI_CURSOR reason.
      All the subsequent executions from sqlplus will use the second child. Look at Pavol Babel’s comments above.

      Try instead the following code:

      variable a number;
      execute :a :=dbms_sql.open_cursor;
      execute dbms_sql.parse(:a,'select 1 from dual where 1=2',dbms_sql.native);
      

      It should choose the correct child cursor and hang.

      Comment by andreynikolaev — September 2, 2012 @ 6:29 pm | Reply

  11. Thank you! Wonderful article, so feasible and helpful! Have you had an opportunity to test similar waits in Oracle versions 11.2.0.3 and above? Eager to read such testcases and your comments.

    Comment by Yekaterina — October 17, 2013 @ 5:41 am | Reply

  12. […] pin S wait on X, cursor: pin S are both mutext waits, which can burn CPU at a very high rate (see here for details). This gives us a good idea of how the CPU is wasted (and if one looks in ASH, one can […]

    Pingback by AWR reports: interpreting CPU usage | Oracle Diagnostician — February 3, 2015 @ 11:17 am | Reply

  13. […] S wait on X, cursor: pin S are both mutext waits, which can burn CPU at a very high rate (see here for details). This gives us a good idea of how the CPU is wasted (and if one looks in ASH, one can […]

    Pingback by AWR reports: interpreting CPU usage – Oracle World — May 29, 2019 @ 5:28 pm | Reply

