enq: TM - contention

Every now and then my coworkers ask me to diagnose locking issues, a.k.a. "the application is frozen". A quick glance at gv$session or gv$active_session_history shows a lot of sessions waiting for the event "enq: TM - contention".

select inst_id, sid, serial#, event
from gv$session

select inst_id, sample_time, session_id, session_serial#, event
from gv$active_session_history

The documentation tells us that this kind of wait event can be caused by missing indexes on foreign key constraints. Scripts exist to find those constraints, but it certainly doesn't make sense to blindly create indexes for all foreign keys. For example: you have a code table for currencies (USD, EUR, ...) and a table for item prices with a foreign key constraint pointing to the currency table. You rarely delete a currency or query item prices by currency, so there really is no point in indexing the currency column. The index would consume additional space, and every insert/update/delete on item prices would do additional work just to maintain it.
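
One such script, as a rough sketch: it uses the user_* dictionary views and only checks whether the leading column of the foreign key is also the leading column of some index, so composite keys may need a closer look.

-- sketch: foreign key constraints whose leading column is not the leading
-- column of any index on the same table
select c.table_name, c.constraint_name, cc.column_name
from user_constraints c
join user_cons_columns cc on cc.constraint_name = c.constraint_name
where c.constraint_type = 'R'
and cc.position = 1
and not exists (select 1
                from user_ind_columns ic
                where ic.table_name = cc.table_name
                and ic.column_name = cc.column_name
                and ic.column_position = 1)
order by c.table_name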

When locking takes place, you basically want to know two things:
  • which index on which table is missing and
  • which sql statement caused the locking.

Step # 1 - find the table


First, you query ASH (active session history) to find the object id of the table that was locked because of the missing index. The object id is reported in the p2 column.

select inst_id, sample_time, session_id, session_serial#, p2 object_id
from gv$active_session_history
where event = 'enq: TM - contention'

INST_ID  SAMPLE_TIME            SESSION_ID  SESSION_SERIAL#  OBJECT_ID
      1  9.7.2009 19:31:30,419          28               38      15705
      1  9.7.2009 19:31:29,419          28               38      15705
      1  9.7.2009 19:31:28,419          28               38      15705
      1  9.7.2009 19:31:27,410          28               38      15705
      1  9.7.2009 19:31:26,409          28               38      15705
      1  9.7.2009 19:31:25,409          28               38      15705
      1  9.7.2009 19:31:24,409          28               38      15705

You get the table name from the all_objects view.

select object_name
from all_objects
where object_id = 15705

Now that you know the table, you can focus on its foreign key constraints. The ones that don't have indexes are candidates for the next step - finding the sql that started the locking.
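
A sketch of how to list those constraints together with the parent tables they reference (ITEM_PRICES is a hypothetical name - use the table found in step 1):

-- sketch: foreign keys of the locked child table and the parent tables they point to
select c.constraint_name,
       c.table_name child_table,
       r.table_name referenced_table
from all_constraints c
join all_constraints r on r.owner = c.r_owner
                      and r.constraint_name = c.r_constraint_name
where c.constraint_type = 'R'
and c.table_name = 'ITEM_PRICES'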

Step # 2 - find the sql


All the sql statements that haven't aged out of the SGA are in the gv$sql view, so chances are you will find the statement that caused the lock. Searching for delete operations on the referenced (parent) tables usually turns it up quickly.

select inst_id, sql_text, sql_id, first_load_time, parsing_user_id, module, action, program_id
from gv$sql
where upper(sql_text) like 'DELETE%[REFERENCED TABLE]%'
order by first_load_time desc

Since the statement must have been run before the locking took place, its first_load_time must be earlier than the first sample_time from ASH. In my experience the difference is usually only about a second:

  • first_load_time in gv$sql is 9.7.2009 19:31:23
  • earliest sample_time in v$active_session_history is 9.7.2009 19:31:24,409

Of course, this is not always the case. The delete could have happened minutes or even hours earlier, with the transaction still active and therefore still holding the lock on the table; by then the statement may already have been flushed from the SGA. Even so, I have been able to track down the sql statement and the missing index with this method.
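
In that situation it can still help to see who is holding the lock right now. A sketch joining gv$locked_object to gv$transaction, using the object id found in step 1 (15705 in this example):

-- sketch: sessions whose open transactions hold a lock on object 15705
select lo.inst_id, lo.session_id, lo.locked_mode, t.start_time
from gv$locked_object lo
join gv$transaction t on t.inst_id = lo.inst_id
                     and t.xidusn = lo.xidusn
                     and t.xidslot = lo.xidslot
                     and t.xidsqn = lo.xidsqn
where lo.object_id = 15705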


Once you know which sql is causing the problem, it's up to the developers to create the index or to prevent users from issuing the delete statement. One solution is to flag the row as deleted and physically delete it during the night.
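
A minimal sketch of that soft-delete idea, assuming a hypothetical CURRENCIES table with a DELETED flag column:

-- during the day, only flag the row (hypothetical table and column names)
update currencies set deleted = 'Y' where currency_code = 'XYZ';
commit;

-- an off-peak batch job later removes the flagged rows for real
delete from currencies where deleted = 'Y';
commit;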

# by Unknown - October 13, 2014 at 1:23 PM

Nice tip regarding the p2 field. Nevertheless, the delete cannot always be found if there is a delete cascade.