Parallel Apply v5

What is Parallel Apply?

Improved in PGD 5.2

This feature has been enhanced in PGD 5.2 and is recommended for more scenarios.

Parallel Apply is a feature of PGD that allows a PGD node to use multiple writers per subscription. This behavior generally increases the throughput of a subscription and improves replication performance.

When Parallel Apply is operating, the transactional changes from the subscription are written by multiple writers. However, each writer ensures that the final commit of its transaction doesn't violate the commit order as executed on the origin node. If there's a violation, an error is generated and the transaction can be rolled back.

Previously, this mechanism could cause problems. If a transaction pending commit modified a row that another transaction needed to change, and that other transaction had executed on the origin node before the pending one, the resulting error could manifest as a deadlock when the pending transaction had taken out a lock request.

Additionally, handling the error could increase replication lag, due to a combination of the time taken to detect the deadlock, the time taken for the client to roll back its transaction, the time taken for indirect garbage collection of the changes that were already applied, and the time taken to redo the work.

This is where Parallel Apply’s deadlock mitigation, introduced in PGD 5.2, comes into play. For any transaction, Parallel Apply now looks at upcoming writes, and where there is a potential conflict, places the transaction on hold until the conflict has passed.

Configuring Parallel Apply

Two configuration variables control Parallel Apply in PGD 5: bdr.max_writers_per_subscription (defaults to 8) and bdr.writers_per_subscription (defaults to 2).

bdr.max_writers_per_subscription = 8
bdr.writers_per_subscription = 2

This configuration gives each subscription two writers. However, in some circumstances, the system might allocate up to 8 writers for a subscription.

You can only change bdr.max_writers_per_subscription with a server restart.
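
As a minimal sketch, you could raise the ceiling like this. It assumes bdr.max_writers_per_subscription behaves like any other server-level setting that ALTER SYSTEM can write, and the value 16 is purely illustrative:

ALTER SYSTEM SET bdr.max_writers_per_subscription = 16;  -- illustrative value; assumes ALTER SYSTEM applies to this parameter
-- The new value takes effect only after a server restart, for example: pg_ctl restart -D <data directory>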

You can change bdr.writers_per_subscription for a specific subscription without a restart by:

  1. Halting the subscription using bdr.alter_subscription_disable.
  2. Setting the new value.
  3. Resuming the subscription using bdr.alter_subscription_enable.

First, though, establish the name of the subscription using SELECT * FROM bdr.subscription. For this example, the subscription name is bdr_bdrdb_bdrgroup_node2_node1.
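
If you only want the names, a query along these lines lists them (sub_name is the same column used in the UPDATE statement below):

SELECT sub_name
FROM bdr.subscription;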

SELECT bdr.alter_subscription_disable ('bdr_bdrdb_bdrgroup_node2_node1');

UPDATE bdr.subscription
SET num_writers = 4
WHERE sub_name = 'bdr_bdrdb_bdrgroup_node2_node1';

SELECT bdr.alter_subscription_enable ('bdr_bdrdb_bdrgroup_node2_node1');
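
After re-enabling the subscription, you can check that the change took effect with a query like this sketch, which assumes num_writers can be read back from the same bdr.subscription catalog used in the UPDATE above:

SELECT sub_name, num_writers
FROM bdr.subscription
WHERE sub_name = 'bdr_bdrdb_bdrgroup_node2_node1';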

When to use Parallel Apply

Parallel Apply is on by default and, for most operations, we recommend leaving it on.

From PGD 5.2, Parallel Apply works with CAMO. It isn't compatible with Group Commit or Eager Replication, so disable Parallel Apply if either of those features is in use.

Up to and including PGD 5.1, Parallel Apply should not be used with Group Commit, CAMO, or Eager Replication, and you should disable it in those scenarios. If you're using PGD 5.1 or earlier and experiencing a large number of deadlocks, you may also want to disable Parallel Apply or consider upgrading.

Monitoring Parallel Apply

To support Parallel Apply's deadlock mitigation, PGD 5.2 adds columns to bdr.stat_subscription: nprovisional_waits, ntuple_waits, and ncommit_waits. The nprovisional_waits value reflects the number of operations on the same tuples being performed by concurrent apply transactions. These are provisional waits: the operations aren't actually waiting yet, but they could end up waiting.

Apply operations that are affected by the deadlock mitigation measures wait until they can be safely applied, and they're counted in ntuple_waits. Fully applied transactions that waited before being committed are counted in ncommit_waits.
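
As an illustrative sketch for watching these counters, you could run a query like the following. It assumes bdr.stat_subscription exposes a sub_name column alongside the wait counters described above:

SELECT sub_name, nprovisional_waits, ntuple_waits, ncommit_waits
FROM bdr.stat_subscription;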

Disabling Parallel Apply

To disable Parallel Apply, set bdr.writers_per_subscription to 1.
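
To do this for a single subscription without a restart, you can reuse the disable/update/enable pattern shown earlier, setting the writer count to 1:

-- Same example subscription name as above; substitute your own
SELECT bdr.alter_subscription_disable ('bdr_bdrdb_bdrgroup_node2_node1');

UPDATE bdr.subscription
SET num_writers = 1
WHERE sub_name = 'bdr_bdrdb_bdrgroup_node2_node1';

SELECT bdr.alter_subscription_enable ('bdr_bdrdb_bdrgroup_node2_node1');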