MS SQL Server: Blocking Locks
Here is a somewhat older article of mine that I had already published elsewhere, but I think the topic is still interesting. It is a technical article dealing with one aspect of RDBMS performance. I originally wrote it in English, so I am contributing it here unchanged.
Enjoy:
No reasonable strategy for resolving a concrete blocking problem can be found until one has determined the precise nature of the blocking lock. The most obvious criterion for distinguishing between blocking locks is their average duration, which yields the following categories:
- Short-time blocking locks:
average duration less than three seconds
- Mid-time blocking locks:
average duration from three seconds to a maximum of ten minutes
- Long-time blocking locks:
average duration more than ten minutes
- Deadlocks:
in theory, there is no limit to the duration
These four types of blocking locks are discussed below in terms of their possible causes and appropriate resolution strategies.
Short-time blocking locks
These situations are inevitable in multiuser databases; normally they pose no problem for the operation of an application. Nevertheless, they should be investigated if they already occur with only a few users or if they occur very frequently.
If these blocking locks occur very frequently, they can lead to snowball effects and eventually paralyze the entire system.
Even in cases where it is likely that the problem cannot be completely solved, one should nevertheless at least attempt to defuse it.
Causes:
Short-time blocking locks which occur very frequently can usually be traced back to one basic problem: the total performance capacity of the system is not sufficient for the current user load.
Resolution strategies:
Such problems can best be met by carrying out an extensive performance tuning of the server and of the application.
If this situation does not occur until there is a very high user load, then one should begin the investigation by examining the server and the network.
However, if this situation already occurs at low user load levels, then one should investigate whether certain use cases of the client application can conceptually block one another or even themselves. If this turns out to be the case, one should examine whether the probability of this unwanted situation can be reduced through atomic transactions, "row level locking" or similar strategies.
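On current SQL Server versions, one way to defuse such reader/writer conflicts is row versioning. A minimal sketch (the database name MyAppDb is a placeholder):

```sql
-- Sketch: enable row versioning so readers no longer block writers.
-- "MyAppDb" is a placeholder; the ALTER only completes when no other
-- connections are active in the database.
ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON;
```

With this option on, plain SELECTs under read committed isolation read a consistent row version instead of waiting for writers' locks.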
Mid-time blocking locks
This type of blocking lock is the most difficult to find. It lasts long enough to bring the operation of an application to a halt, but is generally still too short for manual analysis.
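Because these blocks are gone before one can analyze them by hand, it helps to let the server sample itself. A sketch using the dynamic management views available since SQL Server 2005 (run it repeatedly, e.g. from a scheduled job, and log the output):

```sql
-- Sketch: snapshot all currently blocked requests and the SQL they run.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time AS wait_time_ms,
       t.text      AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```

Collected over a day, such snapshots show which statements block and which are blocked, even when each individual block lasts only seconds.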
Causes:
The possible causes for this type of block are the following:
- Overly complex transactions
- Extremely slow queries within a transaction
- Faulty queries within transactions (e.g. unintended cross joins)
- User interaction within transactions (e.g. a message box "Should the data really be deleted?")
- Unnecessarily time-consuming processing within transactions
- Unnecessarily distributed processing with high data transport volumes
- Snowball effects of short-time blocking locks
Resolution strategies:
Once one has established which transactions are blocking other processes, the following questions arise in cases of complex transactions:
- Can the transaction be simplified?
- Can the transaction be divided up into smaller parts?
- Can the transaction be greatly accelerated through performance optimization?
- Can the transaction be carried out at a later time (e.g. a nighttime job)?
- Could the transaction work on separate resources (e.g. temporary tables) and, on success, apply the actual modifications in one short, concentrated step?
- Can the data transport within the transaction be reduced (e.g. by moving processing into stored procedures)?
- Does the transaction block resources unnecessarily (e.g. select with locks on look-up tables)?
These questions offer approaches for solving the problem. If all of them have to be answered in the negative, one is faced with the choice of either accepting the situation as it is or rethinking the business processes in order to replace the problematic transactions with others.
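The temporary-table question above can be sketched as follows; all table and column names (Orders, #staged, Status) are illustrative placeholders:

```sql
-- Sketch: do the long-running preparation outside the transaction,
-- then apply the actual modifications in one short transaction.
SELECT OrderId, NewStatus = 'Archived'
INTO #staged
FROM dbo.Orders
WHERE Status = 'Closed';          -- slow part, holds no exclusive locks

BEGIN TRANSACTION;
    UPDATE o
    SET    o.Status = s.NewStatus
    FROM   dbo.Orders AS o
    JOIN   #staged    AS s ON s.OrderId = o.OrderId;
COMMIT TRANSACTION;

DROP TABLE #staged;
```

The expensive work runs against the session-private temporary table; exclusive locks on the shared table are held only for the brief final UPDATE.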
Additional resolution strategies:
- If the transaction contains extremely slow queries, replace them with faster ones.
- If user interactions – such as message boxes – take place during database transactions, they must be eliminated! User interactions have no place inside database transactions. Make absolutely sure that your client programmers are aware of this as well. Programmers with a background in desktop database development in particular often have trouble distinguishing between business transactions and database transactions.
- For unnecessarily time-consuming processing within a transaction, the same applies as for overly complex transactions. You should, however, also ask yourself: does this processing really have to take place within the transaction?
- In cases where a large amount of data is transported from the server to the client within a transaction, this should be minimized where possible. Processing a given amount of data is fundamentally just as fast on the client as it is on the server; the bottleneck is, as a rule, the transport through the network. Therefore, always carry out the processing at the place where most of the necessary information is located. For example: if one only needs to know the total number of certain rows (and not the rows themselves), then a "select count(*)" will always be faster than even the most wonderful algorithm on the client (because the client would first have to fetch all the data over the network). If, on the other hand, a stored procedure requires kilobytes of parameters, then the client may be considerably faster.
- If a snowball effect is present, follow the strategies used against short-time blocking locks.
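The count example from above, as a sketch (dbo.Orders and Status are illustrative names) – only a single integer travels over the network, however many rows match:

```sql
-- Sketch: let the server do the counting instead of fetching the rows.
SELECT COUNT(*) AS open_orders
FROM dbo.Orders
WHERE Status = 'Open';
```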
Long-time blocking locks
These blocks are very similar to mid-time blocking locks and, as a general principle, can have the same causes.
In addition, they can have the following causes:
- Transactions which are unintentionally left open and
- Endless loops within transactions
The following applies to both causes: their durations can fall within the mid-time range if, for example, transaction timeouts have been defined or the end user shuts down his client.
Resolution strategy:
The resolution strategy for both of the additional causes is clear: eliminate the error(s).
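Unintentionally open transactions can at least be found quickly. A sketch using two built-in views (available since SQL Server 2005); alternatively, DBCC OPENTRAN reports the oldest active transaction of a database:

```sql
-- Sketch: list sessions that currently hold an open transaction.
SELECT s.session_id,
       s.login_name,
       s.last_request_end_time   -- idle since then, yet transaction open?
FROM sys.dm_tran_session_transactions AS t
JOIN sys.dm_exec_sessions AS s
  ON s.session_id = t.session_id;
```

A session whose last request ended long ago but which still appears here is a strong candidate for a forgotten COMMIT or ROLLBACK.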
The special case of deadlocks
Since a great deal of information concerning deadlocks is readily available in the relevant literature, they are treated only briefly here: deadlocks are a special case among blocking locks because there is no unambiguously "guilty party" in such situations.
MS SQL Server (like other RDBMS servers) is also very adept at recognizing and handling deadlocks, as described in the MS documentation. Essentially, it terminates one of the processes involved: since SQL Server cannot identify a guilty party in a deadlock, it simply chooses a "victim".
Deadlocks can never be completely excluded from larger systems. The strategies used to reduce their number are the same as those for avoiding blocking locks in general.
In addition, one can take care to ensure that no transactions acquire the same resources in reverse order.
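The classic pattern looks like this (TableA and TableB are placeholders); the fix is simply to agree on one fixed order in which all transactions touch shared resources:

```sql
-- Sketch: two transactions taking the same resources in opposite order.
-- Session 1:                        -- Session 2:
BEGIN TRAN;                          -- BEGIN TRAN;
UPDATE dbo.TableA SET x = 1;         -- UPDATE dbo.TableB SET y = 1;
UPDATE dbo.TableB SET y = 2;         -- UPDATE dbo.TableA SET x = 2;
COMMIT;                              -- COMMIT;
-- If both sessions reach their second UPDATE, each waits for the other:
-- a deadlock. With a fixed order (always TableA before TableB),
-- session 2 would simply wait until session 1 commits.
```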
General strategies and guidelines for making blocking locks as infrequent as possible
- User interactions are forbidden in transactions.
- Always keep transactions as limited in size and as brief as possible.
- Use no unnecessary resources in transactions.
- Carry out as few external processings as possible during a transaction.
- Always select with the "with (nolock)" hint except in cases where you have an important reason for doing otherwise.
- Use row level locking (the default in current MSSQL versions).
- Use "dirty reads" wherever possible (never make this option global, because it is acceptable for only very few queries).
- Make your system as performant as possible.
- Carry out necessary complex transactions such as invoicing at times when the user load is as low as possible.
- Schedule maintenance tasks which put extra loads on the server (e.g. Full Backup, defragmentation of indexes) at times when the user load is minimal (nights, weekends)
- Always select as little as possible and only as much as is necessary.
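To illustrate the nolock guideline above: a dirty read on a look-up table can be requested per statement with a table hint, without changing the session's isolation level (dbo.Countries is an illustrative name):

```sql
-- Sketch: read a look-up table without taking or honoring shared locks.
-- Only acceptable where stale or uncommitted data does no harm.
SELECT CountryId, Name
FROM dbo.Countries WITH (NOLOCK);
```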