Deadlocks on SQL Server
A deadlock occurs when two transactions block each other permanently because each is waiting for a lock held by the other. SQL Server periodically checks for deadlocks and resolves them by choosing one of the transactions as the deadlock victim, rolling it back and returning an error to the client. Certain practices and settings can reduce the chances of deadlocks occurring.
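On SQL Server, the victim transaction receives error 1205, with a message that typically resembles the following (the process ID will vary):
Transaction (Process ID 52) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.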
Access Tables in Same Order
If your application accesses tables in the same order, deadlocks are less likely to occur. For example, if transactions always lock the User table followed by the Group table, then a second transaction that needs both tables will simply wait for the lock on the User table. After the first transaction completes, the second one continues, and no circular wait can form.
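As a minimal sketch, every transaction below touches the example User and Group tables in the same order (the column names are hypothetical):
-- Each transaction locks User first, then Group
BEGIN TRANSACTION;
UPDATE [User] SET last_login = GETDATE() WHERE user_id = 1;
UPDATE [Group] SET member_count = member_count + 1 WHERE group_id = 5;
COMMIT TRANSACTION;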
Tables in Same Channel
Isolate tables with high contention into their own channel. Channels are processed independently, so placing tables that often deadlock with each other into the same channel causes them to be accessed in the same order regardless of the status of other channels. Tables referenced by foreign keys should also be included in the channel, since locks are often extended to them. Your application also needs to access the tables in the same order for this to help.
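A configuration sketch using the SYM_CHANNEL and SYM_TRIGGER tables (the channel name and table names are placeholders):
INSERT INTO SYM_CHANNEL (CHANNEL_ID, PROCESSING_ORDER, MAX_BATCH_SIZE, ENABLED, DESCRIPTION)
VALUES ('hot_tables', 10, 1000, 1, 'Tables with high contention');
INSERT INTO SYM_TRIGGER (TRIGGER_ID, SOURCE_TABLE_NAME, CHANNEL_ID, LAST_UPDATE_TIME, CREATE_TIME)
VALUES ('user', 'user', 'hot_tables', CURRENT_TIMESTAMP, CURRENT_TIMESTAMP);
INSERT INTO SYM_TRIGGER (TRIGGER_ID, SOURCE_TABLE_NAME, CHANNEL_ID, LAST_UPDATE_TIME, CREATE_TIME)
VALUES ('group', 'group', 'hot_tables', CURRENT_TIMESTAMP, CURRENT_TIMESTAMP);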
Smaller Batches
Smaller batches load quicker, which releases locks sooner and decreases the chances of deadlocks. The default batch size is 1000 rows. Adjust the channel's max batch size to a lower number of rows.
UPDATE SYM_CHANNEL SET MAX_BATCH_SIZE = 250 WHERE CHANNEL_ID = ?
Table Maintenance
Optimized runtime tables result in batches that load quicker, which releases locks sooner and decreases the chances of deadlocks. Make sure the Purge Job is running, which removes old rows from runtime tables. Reduce the retention period of old rows to the minimum amount needed. Run periodic maintenance on tables to reorganize or rebuild their indexes. See Maintenance and Tuning.
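As an illustration, the retention period can be lowered with the purge.retention.minutes parameter (the 1440 minutes shown here is only an example value):
purge.retention.minutes=1440
Index maintenance on the busiest runtime tables can then be run during a maintenance window, for example:
ALTER INDEX ALL ON SYM_DATA REBUILD;
ALTER INDEX ALL ON SYM_DATA_EVENT REBUILD;
ALTER INDEX ALL ON SYM_OUTGOING_BATCH REBUILD;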
Route Data Less Often
Decrease the frequency of the Routing Job to reduce how often it contends for the data capture table. The Routing Job runs periodically to query the data table and create batches. The default period is 10000 milliseconds (10 seconds). Adjust the period so that it runs less frequently, but often enough to handle outstanding data.
job.route.period.time.ms=90000
Concurrent Workers
When a node is receiving synchronization from multiple nodes at the same time, there is a greater chance of deadlocks. The default number of workers is 20. Reduce the number of concurrent workers allowed with a SymmetricDS property:
http.concurrent.workers.max=10
Enable Row Level Locking
Row level locking locks only the rows affected by the transaction. The smaller granularity decreases the chance of overlapping with another transaction. Row locks are enabled on the indexes of a table, and they can be enabled for both application tables and SymmetricDS runtime tables where contention is occurring.
When a transaction updates a large number of rows in a table, the lock can be escalated to a page-level or even a table-level lock. Starting with SQL Server 2008, there is a table-level option to prevent most lock escalations. Prevent lock escalation on application tables and SymmetricDS runtime tables where contention is occurring.
ALTER INDEX ALL ON SYM_DATA SET (ALLOW_ROW_LOCKS = ON);
ALTER INDEX ALL ON SYM_DATA SET (ALLOW_PAGE_LOCKS = OFF);
ALTER TABLE SYM_DATA SET (LOCK_ESCALATION = DISABLE);
ALTER INDEX ALL ON SYM_DATA_EVENT SET (ALLOW_ROW_LOCKS = ON);
ALTER INDEX ALL ON SYM_DATA_EVENT SET (ALLOW_PAGE_LOCKS = OFF);
ALTER TABLE SYM_DATA_EVENT SET (LOCK_ESCALATION = DISABLE);
ALTER INDEX ALL ON SYM_OUTGOING_BATCH SET (ALLOW_ROW_LOCKS = ON);
ALTER INDEX ALL ON SYM_OUTGOING_BATCH SET (ALLOW_PAGE_LOCKS = OFF);
ALTER TABLE SYM_OUTGOING_BATCH SET (LOCK_ESCALATION = DISABLE);
ALTER INDEX ALL ON SYM_INCOMING_BATCH SET (ALLOW_ROW_LOCKS = ON);
ALTER INDEX ALL ON SYM_INCOMING_BATCH SET (ALLOW_PAGE_LOCKS = OFF);
ALTER TABLE SYM_INCOMING_BATCH SET (LOCK_ESCALATION = DISABLE);
As of SymmetricDS 3.6, the above statements are applied automatically by default.