Looking for best practices: synchronizing repeated parallel tasks with similar runtimes while avoiding OptimisticLockingExceptions


Hi All,
I need some advice on handling parallel tasks. Currently I am thinking of using signals in some way, or implementing some pessimistic locking mechanism. I would prefer to use the least amount of Java code possible.

Here is a short description of my problem:
In one process I want to start around 100 REST calls to around 5 workers with different parameters.
(Not relevant for the actual problem: at the end I need to collect all worker results and pass them on with a final REST call.)
The whole process shouldn't take longer than a couple of minutes.

My approach:
Decide all parameter sets with a DMN and start the workers within parallel sub-processes. The time-consuming part occurs when combining all the results.

This Error occurs:
org.camunda.bpm.engine.OptimisticLockingException: ENGINE-03005 Execution of ‘UPDATE VariableInstanceEntity[eaeeb2a7-e29b-11e8-a46c-54e1adf459c3]’ failed. Entity was updated by another transaction concurrently.

Setup / Settings:
Camunda 7.9.0
Spring starter 2.1.1
I am running MS SQL Server. I increased maxJobsPerAcquisition to 50.

Example processes with same behaviour:

parallelLog.bpmn (2.6 KB)
RestCallProcess.bpmn (2.4 KB)


@Julius when you say you are combining the results: this means you are trying to “combine” all of the results from your sub-processes into a single variable in the single parent process?

Edit: and does combine mean “add up” or more like create a large array of results?


@StephenOTT Thanks for the quick reply! We just loop through the results via JavaScript in the listeners and create a large array. But that is not related to my actual problem.


Well… based on the description you provided (if I am understanding correctly), you are basically updating the same variable multiple times? As in, you have a script somewhere that is performing an update to a process variable. So if you have multiple sub-processes completing at the same time, that means you have multiple attempts to update your process variable in the parent process, and thus the locking exception.
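For context on why this produces an exception rather than, say, a lost update: the engine uses optimistic concurrency control, where every entity row carries a revision counter and an UPDATE only succeeds if the revision is still the one the transaction originally read. Here is a minimal, engine-independent Java sketch of that mechanism (the class and method names are illustrative, not actual engine code):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticLockDemo {
    // Simulated "VariableInstanceEntity" row: the engine stores a revision
    // number and only applies an UPDATE if the revision is unchanged.
    static final AtomicInteger revision = new AtomicInteger(1);

    // Returns true if the update succeeded, false if another transaction
    // bumped the revision first (the point at which the engine would throw
    // an OptimisticLockingException and retry the job).
    static boolean tryUpdate(int revisionReadAtStart) {
        return revision.compareAndSet(revisionReadAtStart, revisionReadAtStart + 1);
    }

    public static void main(String[] args) {
        int readByTx1 = revision.get(); // both transactions read revision 1
        int readByTx2 = revision.get();

        System.out.println("tx1 update: " + tryUpdate(readByTx1)); // true
        System.out.println("tx2 update: " + tryUpdate(readByTx2)); // false: stale revision
    }
}
```

With five sub-processes completing at roughly the same time, several "transactions" hold a stale revision, so all but the first update fails in exactly this way.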


The problem occurs in the provided models as well. There is no custom implementation besides inside the subprocess.


Where in your process are you updating a variable? Your provided BPMNs have no “setVariable” usage.


As another note, when you get to needing to aggregate the values from your multi-instance:

I just posted this from our internal code samples: https://github.com/StephenOTT/camunda-concurrency-helpers-process-engine-plugin

Basically it provides you with an in-memory ConcurrentMap that you can use to aggregate data outside of the execution, and thus should not (not tested in all scenarios at the moment) cause the typical concurrency exceptions.

There are of course risks to using such an approach: what occurs on rollbacks, engine failures, etc. It is up to you to decide how to implement these rules.
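As a rough illustration of the idea (not the plugin's actual API; the names here are made up), aggregating worker results into a shared ConcurrentMap keyed by process-instance id could look like this:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.stream.IntStream;

public class ResultAggregator {
    // One thread-safe result list per process instance; workers append here
    // instead of touching engine-managed process variables, so no engine
    // entity is updated concurrently.
    static final Map<String, List<String>> results = new ConcurrentHashMap<>();

    static void addResult(String processInstanceId, String workerResult) {
        results.computeIfAbsent(processInstanceId, k -> new CopyOnWriteArrayList<>())
               .add(workerResult);
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate 5 workers completing concurrently for the same instance.
        Thread[] workers = IntStream.range(0, 5)
                .mapToObj(i -> new Thread(() -> addResult("proc-1", "result-" + i)))
                .toArray(Thread[]::new);
        for (Thread t : workers) t.start();
        for (Thread t : workers) t.join();

        System.out.println(results.get("proc-1").size()); // prints 5
    }
}
```

Because `ConcurrentHashMap.computeIfAbsent` and `CopyOnWriteArrayList` handle their own synchronization, all five appends succeed without any revision conflict.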


Thanks a lot @StephenOTT, I was actually hoping there was a pure modeling solution for that kind of problem. I like your approach a lot, but sadly it does not directly solve my issue: the optimistic locking exceptions are thrown even without any variables (as in the example process, where there is no “setVariable”).


If you remove your logger script, do you still get the same issue?

The BPMN files you uploaded are the exact processes? You have not removed anything?


I changed the “log” script to the “JavaScript” format and changed it to this:
print("any log");
The sub-process is called 5 times.
The output looks like this:
any log
any log
any log
any log
followed by 15 OptimisticLockingExceptions.
I want exactly this process to run without optimistic locking exceptions. Since I want to run it with multiple Camunda instances, I probably cannot use your solution for synchronizing.

ParallelSubprocessCallInOneModel.bpmn (4.4 KB)


You seem to have an issue in your configuration somewhere.

see this unit test:

import org.camunda.bpm.engine.runtime.Job
import org.camunda.bpm.engine.test.ProcessEngineRule
import org.junit.ClassRule
import spock.lang.Shared
import spock.lang.Specification
import static org.camunda.bpm.engine.test.assertions.ProcessEngineAssertions.assertThat
import static org.camunda.bpm.engine.test.assertions.ProcessEngineTests.*

class MultiInstanceFeatureSpec extends Specification {

  @Shared @ClassRule ProcessEngineRule processEngineRule = new ProcessEngineRule('camunda_config/camunda.cfg.xml')
  @Shared String deploymentId

  def setupSpec() {
    // deploy the BPMN uploaded earlier in this thread
    def deployment = repositoryService().createDeployment()
        .addClasspathResource('ParallelSubprocessCallInOneModel.bpmn')
        .deploy()
    deploymentId = deployment.getId()
    println "Deployment ID: '${deploymentId}' has been created"
  }

  def 'Manage multi-instance'() {
    when: 'Starting a process instance'
    def process1 = runtimeService().startProcessInstanceByKey('multiinstance')

    then: 'one async job exists per sub-process instance; execute them all'
    List<Job> jobs = managementService().createJobQuery().list()
    assert jobs.size() == 5
    jobs.each { execute(it) } // camunda-bpm-assert helper to run each job

  }

  def cleanupSpec() {
    repositoryService().deleteDeployment(deploymentId,
                                         true, // cascade
                                         true, // skipCustomListeners
                                         true) // skipIoMappings
    println "Deployment ID: '${deploymentId}' has been deleted"
  }
}
Using the BPMN that you provided previously, the output is:

Deployment ID: '1' has been created
any log
any log
any log
any log
any log
Deployment ID: '1' has been deleted

No errors, no optimistic locking exceptions.


Thank you @StephenOTT, since you could run it perfectly, the problem had to be in the modeling/configuration:

Enabling the options “Asynchronous Before” and “Asynchronous After” for calling the sub process did the trick.

&lt;bpmn:subProcess id="SubProcess_ParallelTask" name="parallelTask" camunda:asyncBefore="true" camunda:asyncAfter="true" camunda:exclusive="false"&gt;

Before, we just had those options on the StartEvent and EndEvent inside the sub-processes.