How to Accomplish Multi-instance I/O Mapping

Say I’m running a process definition that allows a user to report the hours they work based on a list of projects.

(attached process diagram: harvest_detail_norepeat)

All works fine, but now I want the parent process to know which projects the user actually reported on. Basically, I want to retain the projects on that list that the user reports on, and to remove the ones they don’t report on.

I know I can store this information externally, but I have good reasons for wanting to keep all my process-related state on the process. It seems to me a pretty common use case.

The only way I can think of to solve this problem is to pass the parent process ID into the sub-process and then update the process variable in the “Send to Harvest” message task.

Is there another way?

Could you use a script, executed at the parent process level, that gets the reported time returned from the subprocess and adds that object/value to an array variable in the parent process? I would recommend using a SPIN JSON array for this.
See Pattern Review: DMN Looping for Array Input for a similar scenario where the output of a multi-instance DMN is stored in an array/JSON SPIN variable at the parent process level.
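For example, a minimal end-listener script on the multi-instance activity might look like the sketch below (the names reportedProjects and reportedHours are purely illustrative, and reportedProjects is assumed to have been initialized as a SPIN JSON array, e.g. with S("[]"), before the loop starts):

// Fetch the accumulator from the parent scope and the value produced by this instance
var reportedProjects = execution.getVariable("reportedProjects");
var reportedHours = execution.getVariable("reportedHours");

// Append this instance's result to the JSON array and write it back
reportedProjects.append(reportedHours);
execution.setVariable("reportedProjects", reportedProjects);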

Thanks for responding. I did not realize the execution listener has access to both the sub-process variables and the parent process (execution). If this is true, it might work perfectly.

You have to Map the Output Variables on the parent process. See the DMN example in the link.

I see. Is this all I have to do?

// Get the accumulating SPIN JSON variable and the result of the current DMN execution
var combinedResult = execution.getVariable("combinedResult");
var dmn_output = execution.getVariable("dmn_output")["user"];

// Append this result to the "values" array and write the variable back
combinedResult.prop("values").append(dmn_output);

execution.setVariable("combinedResult", combinedResult);

Yup. You are just getting, updating and setting the variable.

It’s actually not so simple. When you use a subprocess instead of a DMN task, all of the local subprocess variables are gone by the time you listen to the end event. I may yet have a workaround, but this is much more difficult than it ought to be.

@rmoskal, take a look at the following BPMN example, which is fully working.
If this is not your use case, then please provide more details of what you are trying to do.

parent-sub.bpmn (11.0 KB)

edit: FYI/reminder to run the multi-instance as sequential, or optimistic locking will kick in as multiple instances try to update the parent process’s variable at the same time.
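A minimal sketch of the relevant call-activity configuration in the BPMN XML (the called-element key and the collection/element-variable names are illustrative):

<bpmn:callActivity id="someSubProcess" name="Some Sub-Process" calledElement="my-sub-process">
  <bpmn:multiInstanceLoopCharacteristics isSequential="true"
      camunda:collection="projects" camunda:elementVariable="project" />
</bpmn:callActivity>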

Thanks for that! The trick was in mapping the variable on the CallActivity:

<camunda:out source="someCalculatedValue" target="MyResult" local="true" />

That step wasn’t obvious from the DMN example, though I think you mentioned it.

I mean this to be an operation that will be used across many processes, so I wound up implementing a delegate listener on the sub-process end event and then setting a variable on the DelegateExecution’s super execution.
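A rough sketch of that kind of end-event listener script, reusing the variable names from the earlier example and relying on Camunda’s internal execution API (getProcessInstance()/getSuperExecution() are not part of the public DelegateExecution interface):

// Inside the called sub-process: walk up to the call-activity execution in the
// parent process and set the result there
var superExecution = execution.getProcessInstance().getSuperExecution();
if (superExecution != null) {
  superExecution.setVariable("MyResult", execution.getVariable("someCalculatedValue"));
}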

Any reason to prefer your method?

A listener sounds like it would be a good use case.

@StephenOTT I have a question. Based on the Camunda spec, multi-instance doesn’t support output mappings. Since each instance outputs data to the same variable, the different instances may overwrite that variable.
Is it possible that the end listener on your “Some Sub-Process” task gets a variable which has already been overwritten by another instance? If so, the result is not what we expect.

Not sure I understand your question. Can you provide more info?

@StephenOTT I’m sorry, I didn’t make my question clear.

On the example parent-sub.bpmn model, an output mapping variable “MyResult” was set up for the multi-instance task “Some Sub-Process” as follows.

But the Camunda user guide says “No output mapping for multi-instance constructs” and “The engine does not support output mappings for multi-instance constructs. Every instance of the output mapping would overwrite the variables set by the previous instances and the final variable state would become hard to predict.”

See also: https://docs.camunda.org/manual/7.11/user-guide/process-engine/variables/#multi-instance-io-mapping

You are correct that the docs say that. BUT in my example you will see how we use the end listener with a script to append to a single variable. Every time a sequential instance completes, it appends to an existing variable (“adding to the existing list”).

It is outlined here: Pattern Review: DMN Looping for Array Input

where each time the DMN executes, the result is appended to a single variable in the parent scope using the end listener.

@cheppin This statement is valid for embedded sub-processes. If you change the call activity to an embedded sub-process, you will notice that the “Output Parameters” section in the “Input/Output” tab vanishes.

This brings me to my question. I am taking the liberty of asking it in the same thread rather than opening a new one, as the heading is “How to Accomplish Multi-instance I/O Mapping” and it’s not specific to call activities.

@StephenOTT Do you have any idea how we can achieve this when it’s a multi-instance embedded subprocess? For instance, in your pattern process array-input-dmn.bpmn, if you also want the output of the individual subprocesses, how would you achieve that? By setting super-execution variables?
BTW, your examples are winners!

Thanks and Regards
Chaitanya

@chaitanyajoshi if you want each output of an embedded sub-process, then you follow the same pattern as the DMN array loop thread. If you want all outputs merged into a single variable in the parent, then you need to use a sequential multi-instance. When you are saving your variable in the sub-process, you should be able to access the variables of the main process, so you just need to update the main process variable with your additional data.
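For example, a script inside the sequential embedded sub-process could do something like the following (names are illustrative; mergedResults is assumed to be a SPIN JSON array created on the process instance before the multi-instance starts):

// Read the parent-scope accumulator, append this instance's local output, write it back
var merged = execution.getProcessInstance().getVariable("mergedResults");
merged.append(execution.getVariable("instanceResult"));
execution.getProcessInstance().setVariable("mergedResults", merged);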

There is some weirdness to how embedded sub-processes are executed: executions, instance IDs, etc.

Assuming your multi-instance is not long-running, I would look at a solution such as https://github.com/StephenOTT/camunda-concurrency-helpers-process-engine-plugin so you don’t need to deal with the constant variable updates in the DB.

For sure. At present I solved the problem by adding a variable directly to the process instance from the activity in the subprocess (remember, for an embedded sub-process there is only the process instance and not a parent process instance, since there is no parent process per se; embedded sub-processes are just variable scopes with new activity IDs).

execution.getProcessInstance().setVariable("varInProcess", execution.getVariable("localVarInSubprocess"));

This is for sequential multi-instance, which is why the code is so straightforward. For a parallel multi-instance process, I would have had to create a container variable prior to starting the sub-processes and then add the local variable values to this container, as explained in Stephen’s template.
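For instance, a start listener or a preceding script task could initialize that container before the multi-instance begins (the variable name is illustrative, and S() is assumed to be available from the SPIN scripting environment):

// Create the container the sub-process instances will append their results to.
// Note: with parallel multi-instance, concurrent writes can still hit the
// optimistic-locking issue mentioned earlier in the thread.
execution.setVariable("collectedOutputs", S("[]"));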

One could argue that this variable will be overwritten in the process instance every time it is set. I am aware of this, and it is by design, as I am using the variable as a termination condition. If it were not a termination-condition variable, then again the approach would be to store the values in a container.

@StephenOTT Once again, thanks for your help and directions. And ParallelUpdateMap plugin for Camunda is a brilliant idea!

Thanks and Regards,
Chaitanya

BTW, in Zeebe this pattern of collecting output from a multi-instance activity is supported out of the box, without any need for listeners:

The output of a multi-instance activity (e.g. the result of a calculation) can be collected from the instances by defining the outputCollection and the outputElement variable.

outputCollection defines the name of the variable under which the collected output is stored (e.g. results). It is created as a local variable of the multi-instance body and gets updated when an instance is completed. When the multi-instance body is completed, the variable is propagated to its parent scope.

outputElement defines the variable the output of the instance is collected from (e.g. result). It is created as a local variable of the instance and should be updated with the output. When the instance is completed, the variable is inserted into the outputCollection at the same index as the inputElement of the inputCollection. So the order of the outputCollection is determined and matches the inputCollection, even for parallel multi-instance activities. If the outputElement variable is not updated then null is inserted instead.

If the inputCollection is empty then an empty array is propagated as outputCollection .
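For reference, a minimal sketch of what that configuration looks like in the BPMN XML for Zeebe (the task ID and the collection/element names are illustrative):

<bpmn:serviceTask id="calculate">
  <bpmn:multiInstanceLoopCharacteristics>
    <bpmn:extensionElements>
      <zeebe:loopCharacteristics
          inputCollection="= projects" inputElement="project"
          outputCollection="results" outputElement="= result" />
    </bpmn:extensionElements>
  </bpmn:multiInstanceLoopCharacteristics>
</bpmn:serviceTask>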

Very nice mechanism in Zeebe. I wonder if Camunda will follow?
