How to increase speed of a multi-instance process

Hi,

Working on Camunda 7.12

I have a multi-instance process which works well when looping over 10 elements, but when I tried it with 6000 elements the process was very slow. The multi-instance body contains only simple script tasks, embedded in call activities for clarity; there are no external tasks.

It completes ~60 instances per minute, which is pretty slow. How can I increase the speed?

Thanks for your help

bn_notification_post_new.bpmn (9.0 KB)

I would suggest looking at the job executor settings. You're already using an async before on the multi-instance, but you should also pay attention to the number of threads available to the job executor. You should also consider clustering the engine.
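For an embedded engine configured in Java, a rough sketch of the knobs I mean (the pool sizes below are only illustrative, and if you're on the Spring Boot starter the same settings are exposed as job-execution properties instead):

```java
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.impl.cfg.StandaloneInMemProcessEngineConfiguration;
import org.camunda.bpm.engine.impl.jobexecutor.DefaultJobExecutor;

public class JobExecutorTuningSketch {

    public static ProcessEngine buildEngine() {
        // Illustrative values: more worker threads and larger acquisition
        // batches let more multi-instance jobs run in parallel. Tune these
        // against what your database can actually handle.
        DefaultJobExecutor jobExecutor = new DefaultJobExecutor();
        jobExecutor.setCorePoolSize(10);
        jobExecutor.setMaxPoolSize(20);
        jobExecutor.setQueueSize(50);
        jobExecutor.setMaxJobsPerAcquisition(10);

        // In-memory config used here only to keep the sketch self-contained;
        // point this at your real datasource in practice.
        StandaloneInMemProcessEngineConfiguration config =
                new StandaloneInMemProcessEngineConfiguration();
        config.setJobExecutor(jobExecutor);
        config.setJobExecutorActivate(true);
        return config.buildProcessEngine();
    }
}
```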

Hi @Niall,

Thanks for your reply. We already have 10 threads available for the executor and we have 4 nodes for the engine. Unfortunately we run into optimistic locking issues; we tried to make as many variables as possible local, but with no success. What would be the best practice? Is having a “send signal” task inside the loop the source of the problem?

Best,

Ch.

Have you ever experienced that with more than 6000 elements in your collection @StephenOTT?

You need to set async before on your multi-instance to ensure all of your instances are created as jobs. Otherwise they are executed under effectively a single thread (afaik).

If you are hitting locking issues then you need to fix the way you are storing data for each instance.
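As a rough sketch (the delegate class and variable names are made up for illustration), each instance should write its results into its own local scope rather than into shared parent-scope variables:

```java
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

// Hypothetical delegate running once per element of the multi-instance collection.
public class BuildNotificationDelegate implements JavaDelegate {

    @Override
    public void execute(DelegateExecution execution) {
        // Each parallel instance reads its own loop element...
        String member = (String) execution.getVariable("member");
        String body = "Hello " + member;

        // ...and writes the result into its own local scope. Concurrent writes
        // to a shared parent-scope variable from parallel instances are a
        // common source of OptimisticLockingException.
        execution.setVariableLocal("notificationBody", body);
    }
}
```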

You may also want to consider doing batches: rather than each instance being a single process, you may want a process/sub-process to handle batches of the work so you don’t pay the lock, execute, and complete overhead for each individual task.

If you are running this often, I would also offload the work to a dedicated Camunda instance and turn off history, so you only have your runtime tables.

Thanks for the tips @StephenOTT! I did set the async before, but the improvement in processing time is rather small. I would love to do batches; however, since I’m creating a notification body that is personalized for each member, I can’t batch the notifications themselves. I’ll probably need to divide the input collection into N sub-collections and start a new instance for each of them. Will try that
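Roughly something like this for the splitting part (the process key and variable names below are just assumptions based on my model):

```java
import java.util.ArrayList;
import java.util.List;

import org.camunda.bpm.engine.RuntimeService;
import org.camunda.bpm.engine.variable.VariableMap;
import org.camunda.bpm.engine.variable.Variables;

// Hypothetical starter: split the member list into chunks and start one
// process instance per chunk, so each multi-instance only loops over a
// few hundred elements instead of 6000.
public class BatchedNotificationStarter {

    public static void startBatches(RuntimeService runtimeService,
                                    List<String> members, int batchSize) {
        for (int i = 0; i < members.size(); i += batchSize) {
            // Copy the sublist: the subList view itself is not serializable.
            List<String> chunk = new ArrayList<>(
                    members.subList(i, Math.min(i + batchSize, members.size())));

            VariableMap vars = Variables.createVariables().putValue("members", chunk);

            // Process definition key assumed from the attached model's file name.
            runtimeService.startProcessInstanceByKey("bn_notification_post_new", vars);
        }
    }
}
```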

When you do the async before, you are still going to pay the price for all of the job creations. So if you have 6k jobs, that means 6k job inserts into the DB plus all of the other associated row creations (activity instances, etc.). All of this slows things down because there is a lot of work going on.

Something that may give you an idea of what’s going on: load up the Metabase Docker image, connect it to your DB, and run a histogram on the historic activity instance table to look at the durations of the activity executions.

I see. Mmmh, so it means that I can’t really improve processing time unless I increase the number of threads. For my 6000 notifications it takes ~1 hr, which isn’t that bad, but I expected more efficiency

Try disabling history and enabling debug logging in the job executor; take a look at how long it takes for all of the jobs to be created.
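If your engine is configured in Java, switching history off is a one-liner on the engine configuration (sketch below, using an in-memory config just to keep it self-contained); for the logging side, raising the org.camunda.bpm.engine.jobexecutor category to DEBUG in your logging configuration should show job acquisition and execution activity.

```java
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngineConfiguration;
import org.camunda.bpm.engine.impl.cfg.StandaloneInMemProcessEngineConfiguration;

// Sketch: run the load test against an engine with history switched off,
// so completing a job only touches the ACT_RU_* runtime tables.
public class NoHistoryEngineSketch {

    public static ProcessEngine build() {
        StandaloneInMemProcessEngineConfiguration config =
                new StandaloneInMemProcessEngineConfiguration();
        config.setHistory(ProcessEngineConfiguration.HISTORY_NONE);
        config.setJobExecutorActivate(true);
        return config.buildProcessEngine();
    }
}
```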

Then take a look at the graph in Metabase to see what the distribution of durations is for your multi-instance activity.

If you have multiple activities executing for each instance, look at using transient variables so the variables are not saved to the DB.
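Rough sketch (delegate and variable names made up): wrap the value in a transient typed value so it never hits ACT_RU_VARIABLE. Keep in mind that transient values don’t survive a transaction boundary (e.g. the next async continuation), so they only help between activities executed within the same job.

```java
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;
import org.camunda.bpm.engine.variable.Variables;

// Hypothetical delegate passing an intermediate value to the next activity
// of the same instance without persisting it.
public class RenderBodyDelegate implements JavaDelegate {

    @Override
    public void execute(DelegateExecution execution) {
        String body = "Hello " + execution.getVariable("member");

        // The second argument marks the typed value as transient: it is kept
        // in memory for the current command only and never written to
        // ACT_RU_VARIABLE or the history tables.
        execution.setVariable("renderedBody", Variables.stringValue(body, true));
    }
}
```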

thanks, I’ll take a look at that. I already set variables as local when possible, it reduced the processing time significantly.

Local or transient? Local is still a database row