Best Practice For Populating Multi-Instance List Variable

I’ve got a workflow that starts with a multi-instance subprocess. When starting the workflow, we know that there will be N instances of the subprocess (each associated with an id) so we just set a “subprocessIds” variable and use that as the multi-instance collection. All good.

Each sub-process instance generates M artifacts (each with an artifactId). Once all subprocesses have completed, I need to kick off another multi-instance task to process the N*M artifacts that were produced, but I’m unclear on the best way to populate the Collection variable used by this task.

I have tried setting an “end” execution listener on the sub-process which queries the database and sets an “artifactIds” variable with the results. This works, but it appears to run once for each sub-process execution rather than once when all sub-process executions have completed. That’s OK in this case, since the last execution will get the full list from the database, but it means a bunch of needless querying when I only care about the final result.

It’s also simple enough to create a JavaDelegate-based Task whose sole job is to query the database and set the variable for consumption by the downstream multi-instance task. Having to explicitly model this in the workflow just to set a variable feels like overkill, though. Still, I’m currently leaning towards this solution, as it’s pretty explicit about what’s going on and doesn’t run a whole bunch of extra times.
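For what it’s worth, that delegate is little more than this (rough sketch; ArtifactRepository and the variable names are stand-ins for our actual code):

```java
import java.util.List;

import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

public class CollectArtifactIdsDelegate implements JavaDelegate {

  /** Stand-in for our actual data access code. */
  public interface ArtifactRepository {
    List<String> findArtifactIdsForWorkflow(String businessKey);
  }

  private final ArtifactRepository artifactRepository;

  // Registered as a bean and referenced via camunda:delegateExpression,
  // so constructor injection works here.
  public CollectArtifactIdsDelegate(ArtifactRepository artifactRepository) {
    this.artifactRepository = artifactRepository;
  }

  @Override
  public void execute(DelegateExecution execution) {
    // All sub-processes have completed by the time this task runs, so a
    // single query returns the full set of artifact ids.
    List<String> artifactIds =
        artifactRepository.findArtifactIdsForWorkflow(execution.getProcessBusinessKey());

    // Consumed as camunda:collection by the downstream multi-instance task.
    execution.setVariable("artifactIds", artifactIds);
  }
}
```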

Is there a better way to deal with multi-instance tasks where the “collection” needs to come from a database query?

@JPrice take a look at: Pattern Review: DMN Looping for Array Input

It’s an example of using DMN in a BPMN process, but the concept is the same. The example above uses a JSON array to hold the results: after the execution of each instance, it appends the value to the JSON array.
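Boiled down, the end listener in that pattern looks something like this (rough sketch; the artifactId / artifactIds variable names are placeholders):

```java
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.ExecutionListener;
import org.camunda.spin.json.SpinJsonNode;

import static org.camunda.spin.Spin.JSON;

public class AppendArtifactIdListener implements ExecutionListener {

  @Override
  public void notify(DelegateExecution execution) {
    // Shared JSON array variable; created on first use.
    SpinJsonNode artifactIds = (SpinJsonNode) execution.getVariable("artifactIds");
    if (artifactIds == null) {
      artifactIds = JSON("[]");
    }

    // "artifactId" is assumed to be set by the sub-process instance itself.
    artifactIds.append(execution.getVariable("artifactId"));

    // setVariable propagates up to the process instance when the variable
    // isn't defined in a lower scope, so all instances append to one array.
    execution.setVariable("artifactIds", artifactIds);
  }
}
```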

Thanks Stephen. Yes, I see how that could work.

Is there any concern around concurrent access when doing something like that? We have lots of tasks that tend to finish all at the same time, so multiple tasks could be trying to update that variable at the same time. I suppose it all eventually needs to hit the Camunda DB, which will take care of things, but what happens if an execution listener encounters an OptimisticLockingException?

Do your tasks need to run in parallel? Can they be sequential?

If parallel is needed, then what comes to mind is setting up a greater number of retries on a failed task. By default the engine only pulls 3 jobs at a time into the queue. So set your multi-instance to async and give it a try. You will likely get some locking, but I believe this should throw an error and cause Camunda to retry the task. It will attempt 3 retries by default and then create an incident. You can configure it to attempt more retries and also the delay between retries; see the job executor docs.
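For example, on the downstream multi-instance task (XML fragment only; “R5/PT5M” means up to 5 retries, 5 minutes apart, so adjust to taste):

```xml
<bpmn:serviceTask id="ProcessArtifact" name="Process Artifact"
                  camunda:asyncBefore="true"
                  camunda:delegateExpression="${processArtifactDelegate}">
  <bpmn:extensionElements>
    <!-- retry the job up to 5 times, 5 minutes apart, before creating an incident -->
    <camunda:failedJobRetryTimeCycle>R5/PT5M</camunda:failedJobRetryTimeCycle>
  </bpmn:extensionElements>
  <bpmn:multiInstanceLoopCharacteristics camunda:collection="artifactIds"
                                         camunda:elementVariable="artifactId" />
</bpmn:serviceTask>
```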

Another idea, if you need lots of parallel support, is to send your items into another DB or another table, such as a Redis DB: Storing process variables: Json (Spin) vs. String / what are the impacts?

I think the latter suggestion (store items in another DB) is what I’m doing right now. Some of the things you’ve touched on relate to other things I’ve been wondering about as well, though, so I’m hoping you don’t mind elaborating on a couple things.

I was probably being overly vague in my initial description, and some extra details will probably help. I’ve attached a simplified diagram of the relevant portion of our workflow.

One instance of the Transcode Workflow process is kicked off for each of a fixed number of source videos. Based on their formats, sizes, etc., the “Determine Required Transcodes” step populates two lists of transcode requests and passes them downstream as variables to the two Transcode tasks. The Transcode tasks then run as External Tasks, and on completion of each we write a record to our database with information about the completed transcode.

There’s some other stuff that happens, but later on in the pipeline we then need to perform some additional actions with all of the transcodes of “Type 1” in parallel, and it’s the Collection to pass into this multi-instance task that I’m trying to populate. I’m accomplishing this currently by introducing an extra Service Task ahead of the “Extra Processing…” task whose sole job is to query our database and populate the Collection variable to pass downstream. Is that what you had in mind when you suggested storing the items (transcode records, in this case) in another DB?

Sort of changing topics, but I have noticed that we’re hitting OptimisticLockingExceptions as parallel Transcode tasks complete at the same time, and while I’ve only started looking into it, some of your suggestions above mesh with other things I’ve read but maybe don’t fully understand yet:

Transcode jobs are executed by an external service that makes a callback to notify completion. If completing the task fails because of a concurrency issue, the whole callback fails. The external service will then retry (usually) successfully, so everything works out in the end, but I’m not entirely happy about leaking the initial failure out to an external service.
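Concretely, the completion path looks something like this (heavily simplified sketch, assuming a Spring-style callback endpoint; TranscodeDb, the route, and the names are stand-ins for our actual code, and error handling is omitted):

```java
import org.camunda.bpm.engine.ExternalTaskService;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TranscodeCallbackController {

  /** Must match the worker id used when the task was fetched and locked. */
  private static final String WORKER_ID = "transcode-worker";

  private final ExternalTaskService externalTaskService;
  private final TranscodeDb transcodeDb; // stand-in for our transcode-record DAO

  public TranscodeCallbackController(ExternalTaskService externalTaskService,
                                     TranscodeDb transcodeDb) {
    this.externalTaskService = externalTaskService;
    this.transcodeDb = transcodeDb;
  }

  @PostMapping("/callbacks/transcode/{externalTaskId}")
  public void onTranscodeComplete(@PathVariable String externalTaskId,
                                  @RequestBody TranscodeResult result) {
    // Record the completed transcode in our own database first.
    transcodeDb.save(result);

    // Completing the external task is what currently surfaces the
    // OptimisticLockingException when several transcodes finish at once.
    externalTaskService.complete(externalTaskId, WORKER_ID);
  }

  /** Minimal stand-ins so the sketch is self-contained. */
  public record TranscodeResult(String artifactId) {}

  public interface TranscodeDb {
    void save(TranscodeResult result);
  }
}
```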

In your comment you said setting the multi-instance task to async “should throw an error and retry the task”; does that mean it would retry the entire transcode task (not desirable), or just the act of marking it as complete and progressing the workflow (which is probably what I want, but seems unlikely)? I’ve been working through the Transactions documentation but I’m still a little fuzzy on some of the details, so my apologies for any misunderstandings.

Thanks again for your responses!

Hello @JPrice,

If you either mark the joining parallel gateway with “async before” or the two MI service tasks with “async after”, the optimistic locking exception will only repeat the execution of the sequence flow.

The completion of the external service tasks will be saved in the database before joining the execution paths.
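In the BPMN XML that looks roughly like this (fragment only; ids, topics and collection names are just placeholders):

```xml
<!-- Option 1: async after on each multi-instance transcode task ... -->
<bpmn:serviceTask id="TranscodeType1" name="Transcode (Type 1)"
                  camunda:type="external" camunda:topic="transcode-type-1"
                  camunda:asyncAfter="true">
  <bpmn:multiInstanceLoopCharacteristics camunda:collection="type1Requests"
                                         camunda:elementVariable="request" />
</bpmn:serviceTask>

<!-- Option 2: ... or async before on the joining parallel gateway -->
<bpmn:parallelGateway id="JoinTranscodes" camunda:asyncBefore="true" />
```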

Hope this helps, Ingo

Thanks Ingo. Yes, that helps a lot. Sounds like that’s exactly what we’d want to have happen, so I’ll put those changes in place and give it a shot.