Multi-instance optimization (parallel execution)

Hello there.

I have a call activity element (multi-instance) that I’m trying to execute in parallel. I need to run 30,000 instances. For now it’s too slow: starting all process instances takes about 3 hours.

On my call activity I’ve set the following properties:
Collection: ${productList}
ElementVariable: product

Multi Instance Asynchronous Before: checked
Multi Instance Asynchronous After: not checked
Multi Instance Exclusive: checked

Asynchronous Continuation

  • Asynchronous Before: checked
  • Asynchronous After: not checked
  • Exclusive: not checked

NOTE: Inside call activity I check target days and on each day I’m calling different activity (new call activity element - sub process).

Does anyone know anything about this?

You should consider changing the settings of the Job Executor in order to increase the number of jobs that can be processed.
I’m not sure if you’re using a cluster or not, but if you’re only using a single node you should start up a few nodes to help with processing.


Hi @Niall, thank you for your answer. I’m using a single node. I will try to run in cluster.

@Niall is there any other way to speed up the processes?
Here is my job-executor configuration:

job-execution:
  core-pool-size: 10
  max-jobs-per-acquisition: 10
  max-pool-size: 10
  queue-capacity: 10

and I’m using the audit history level.
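For comparison, a pool-size of 10 across the board is quite small for this load. A sketch of a larger configuration, assuming the Camunda Spring Boot starter is in use — the numbers below are illustrative, not tuned recommendations, and need load-testing against your database:

```yaml
job-execution:
  core-pool-size: 20          # threads kept alive even when idle
  max-jobs-per-acquisition: 30 # jobs fetched per acquisition cycle
  max-pool-size: 50           # upper bound on executor threads
  queue-capacity: 50          # jobs buffered before new threads spawn
```

Note that `queue-capacity` interacts with `max-pool-size`: extra threads are only created once the queue is full, so a large queue with a small pool can still serialize work.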

What exactly are you trying to achieve - are you specifically testing performance or do you have this specific load in a production system?

@Niall I have this specific load in production. It’s too slow for now… I’m running ~30,000 process instances every morning. It takes about 3 hours to finish, but I only have a 40-minute window to complete all processes.

I’m using MySQL, Spring Boot and Centos7 server with 16GB RAM and 2 CPUs.

It’s not entirely clear what is causing the delay — it could be the delegate code, it could be slow network calls. The engine itself shouldn’t have too much trouble with that load, so I’m not sure the engine is the bottleneck.

Either way, have you tried adding more nodes and creating a cluster?

Here is part of the main process:
[screenshot: main-process-part]

In this call activity I pass about 30 000 instances (products). Then the following BPMN is called for each instance.

For now I don’t have the possibility to add more nodes; I’m limited to a single node. I can only increase the number of cores to 8 and the RAM to 32GB.

The product variable is about 8KB.

The conditions for the conditional start events of the event subprocesses will be evaluated immediately when starting the process IIRC. If checking those conditions is processing intensive, that might be an issue because there are three at the root level.

@tiesebarrell those conditions only check the properties of the passed instance (product). For example, a condition is: product.getDueDays() == 3 && product.getProductCatalogType() == 'loan'

Also, there are about 20 sub-processes with conditions. The image shows only three because it’s a screenshot of just one part of the process.

OK, then it’s unlikely that they take up so much time. For the instances that do get started, those 20 (!) conditions will be re-evaluated on every variable update. So if you’re updating variables a lot, there’s potentially a lot of work going on there, especially if you didn’t specify and restrict which variable updates should trigger re-evaluation.


@tiesebarrell So, is it a better idea to put those conditions, maybe in DMN, or in service delegate? What you suggest?

There’s no easy way of saying in a generic way, because it depends massively on the functional goals for the process. If those conditions are only relevant at a certain point in the process, then a decision at that point in the process makes more sense. But if the process is truly monitoring 20 conditions all at once all the time, then what you have is possibly correct. It seems like an uncommon case though. You could also actively trigger re-evaluation of the conditions in a single event subprocess with a message start event, or check them from time to time in one with a timer start event. At least then you could bundle a lot of them.

Again, it depends very much on the functional requirements. But looking back at your original question, the first step would be to determine whether the evaluation of the conditions is actually slowing things down. You could experiment with a test where you strip them all from the process and start the same number of instances. If that makes a big difference, that’s your pointer to look at the conditions.

Hey @hedza06

Did you find any solution?

From my tests I saw that the larger the collection, the more time it takes to initialize the multi-instance activity.
It seems that the collection is passed to each instance.
So if you have 30k values, you end up with 30k instances, each carrying a collection of 30k elements.

I wonder if there is the possibility to avoid passing the collection along.
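One hedged workaround, assuming the collection is set as an ordinary process variable: put only lightweight IDs in the multi-instance collection and let each child instance load its own product, so the ~8KB product objects aren’t copied into every child. The `Product` record and `toIdCollection` helper below are hypothetical illustrations, not engine API:

```java
import java.util.List;
import java.util.stream.Collectors;

public class LightweightCollection {

    // Hypothetical product with a large payload; only the id is needed
    // to re-load it inside the child process instance.
    record Product(long id, String payload) {}

    // Build the multi-instance collection from ids only, so each child
    // instance carries a few bytes instead of the whole product object.
    static List<Long> toIdCollection(List<Product> products) {
        return products.stream().map(Product::id).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Product> products = List.of(
                new Product(1L, "blob"), new Product(2L, "blob"));
        System.out.println(toIdCollection(products)); // prints [1, 2]
    }
}
```

Each called process would then fetch the full product by id in its first service task, trading a small extra read for much smaller variable payloads.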

Best regards,
Cosmin

Hi @Cosmin,

I have removed all those conditions from the BPMN and created one service task in which I use a Java parallel stream to start the processes manually (with the RuntimeService).
Also, I have implemented a custom history level to reduce the number of queries. It seems faster, but not by much.
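The parallel-stream approach described above could look roughly like the sketch below. The `startInstance` callback stands in for a call such as `runtimeService.startProcessInstanceByKey(...)` (assumed wiring, not shown), so the example stays self-contained:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelStarter {

    // Starts one process instance per product id using the common
    // fork-join pool; returns how many starts were attempted.
    static int startAll(List<String> productIds, Consumer<String> startInstance) {
        AtomicInteger started = new AtomicInteger();
        productIds.parallelStream().forEach(id -> {
            startInstance.accept(id); // e.g. pass only the id as a variable
            started.incrementAndGet();
        });
        return started.get();
    }

    public static void main(String[] args) {
        List<String> ids = IntStream.range(0, 1000)
                .mapToObj(i -> "product-" + i)
                .collect(Collectors.toList());
        int started = startAll(ids, id -> { /* start instance here */ });
        System.out.println(started); // prints 1000
    }
}
```

One caveat: the default fork-join pool sizes itself to the CPU count, so on a 2-CPU box the parallelism gain is limited, and each start still costs a database transaction.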

For those 30k instances the execution time is about 1 hour, which is still too slow.

Hey @hedza06,

My solution was to split the collection.
Now I have 20-30 multi-instance activities with 500-1000 entities each.

The process seems to move a lot faster in my case.
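The splitting described above can be sketched with a plain chunking helper — a minimal example, not tied to any engine API; the chunk size of 1000 matches the numbers mentioned in the thread:

```java
import java.util.ArrayList;
import java.util.List;

public class Partition {

    // Splits a list into consecutive chunks of at most 'size' elements,
    // e.g. 30,000 products into 30 chunks of 1,000.
    static <T> List<List<T>> chunk(List<T> source, int size) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < source.size(); i += size) {
            chunks.add(source.subList(i, Math.min(i + size, source.size())));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<Integer> all = new ArrayList<>();
        for (int i = 0; i < 30000; i++) all.add(i);
        List<List<Integer>> chunks = chunk(all, 1000);
        System.out.println(chunks.size());        // prints 30
        System.out.println(chunks.get(0).size()); // prints 1000
    }
}
```

Each chunk would then feed one multi-instance activity, so no single activity has to materialize the full 30k-element collection at once.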

Hi Cosmin, are you sure about this? As far as I can observe, only the particular item of the collection is passed to each instance, not the whole collection.