Can the process engine be too fast?

Camunda BPM: 7.13
Environment: Docker - Engine 19.03.08
Setup: Tomcat

We have added a plugin to the process engine which hooks into the BPMN task lifecycle (start, end, etc.) of user tasks and service tasks, as well as into events like signals and messages, and sends the triggered events to a message broker (RabbitMQ) using the AMQP protocol. Behind the message broker, several microservices consume the messages.

The main logic of the application we drive with the Camunda process engine consists of a workflow with parent tasks, where every parent task might have multiple instances. Every instance calls a sub-process, which looks like the enclosed one.

This sub-process contains two service tasks which run one after the other (“Vormontageschritt Ende”, “Cleanup”). The process logic makes it necessary to complete the “Cleanup” task immediately after the microservice receives the start event of this task from the message broker. The irritating phenomenon we see in our application is this: when the “Vormontageschritt Ende” task fires its end event, the microservice is informed by the process engine / message broker so quickly that a fetch-and-lock request for the “Cleanup” task via the Camunda REST API returns nothing, although the fact that the event was fired by the process engine should mean that the “Cleanup” task must exist. The only explanation we have is that the database transaction of the process engine is not yet committed when the microservice tries to fetch the task.

I’m not sure I understood this correctly. As far as I understand, your “Cleanup” task is an external task which you want to execute from outside the engine, and you are trying to do this after the end event of the preceding task (“Vormontageschritt Ende”)? If yes, then it is of course possible that fetch-and-lock is called before your task is there. I wonder why you don’t use the start event of the “Cleanup” task instead? As I remember, our team faced a similar issue. We weren’t using any plugins, just self-written listeners, and in our case the solution was a custom execution listener (something implementing ExecutionListener), with code inside like:

                        Context.getCommandContext().getTransactionContext()
                            .addTransactionListener(TransactionState.COMMITTED, commandContext -> doNotify(...));

Where doNotify is our code that sends the event via AMQP or whatever you want.
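To illustrate the idea behind that listener (this is not your actual plugin code): the listener should not publish to RabbitMQ directly, but buffer the event and only publish once the engine’s transaction has committed. A minimal, engine-agnostic sketch of that buffering pattern — the class name DeferredNotifier and the plain String events are made up for illustration; in Camunda you would call onCommit() / onRollback() from transaction listeners registered for TransactionState.COMMITTED and TransactionState.ROLLED_BACK:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

/**
 * Hypothetical sketch: buffers events raised during a unit of work and hands
 * them to the publisher (e.g. an AMQP channel) only after the transaction
 * commits. Names are illustrative, not Camunda API.
 */
class DeferredNotifier {
    private final List<String> pending = new ArrayList<>();
    private final Consumer<String> publisher;

    DeferredNotifier(Consumer<String> publisher) {
        this.publisher = publisher;
    }

    /** Called from the execution listener while the transaction is still open. */
    void notifyLater(String event) {
        pending.add(event);
    }

    /** Hook this into TransactionState.COMMITTED: the task row is now visible. */
    void onCommit() {
        pending.forEach(publisher);
        pending.clear();
    }

    /** Hook this into TransactionState.ROLLED_BACK: never announce a task that was never persisted. */
    void onRollback() {
        pending.clear();
    }
}
```

This way the microservice can only ever receive a start event for a task that is already committed and therefore fetchable.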

I don’t know exactly what your plugin does and how, so it’s difficult for me to help further.

You’re right, I have to be more precise. The exact problem is the following:

  1. The task “Vormontageschritt Ende” is started by the process engine when the previous user task has been finished through the REST API.

  2. The start event of this external task is broadcast to a queue of the message broker, and the appropriate back-end microservice listening to this queue is informed that the task has been started by the process engine.

  3. After having executed the functions belonging to the “Vormontageschritt Ende” step, the microservice ends this external task through the REST API.

  4. Now the “Cleanup” task is started by the process engine as designed in the workflow. The microservice is informed of the start event as intended. Up to here everything works fine. But now something queer happens.

Since the microservice is informed by the process engine that the “Cleanup” task was started, it must expect that a fetch-and-lock request issued through the REST API returns exactly that task, provided the variables used for the request are OK. But the REST request returns nothing and throws an error.
If we catch the error in the microservice and issue the request again 50 milliseconds later, the fetch-and-lock works. Strange, isn’t it?
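Until the listener side is fixed, that 50 ms retry can at least be made systematic instead of ad hoc. A hedged sketch of a small retry helper — the name fetchWithRetry and the generic fetch callback are mine, not Camunda API; the callback would wrap your REST fetch-and-lock call:

```java
import java.util.List;
import java.util.concurrent.Callable;

/** Retries a fetch callback until it yields tasks or the attempts run out. */
class FetchRetry {

    static <T> List<T> fetchWithRetry(Callable<List<T>> fetch,
                                      int maxAttempts,
                                      long delayMillis) throws Exception {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            List<T> tasks = fetch.call();   // e.g. wraps POST /external-task/fetchAndLock
            if (!tasks.isEmpty()) {
                return tasks;
            }
            if (attempt < maxAttempts) {
                Thread.sleep(delayMillis);  // back off before asking again
            }
        }
        return List.of();                   // still nothing after all attempts
    }
}
```

Note also that, as far as I remember, the fetch-and-lock REST endpoint supports long polling via the asyncResponseTimeout request parameter, which would let the engine hold the request open until a matching external task becomes available instead of busy-retrying.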

But nevertheless your hint might be helpful. The plugin I mentioned is just a listener like the one you sketched in your reply, but our listener does not defer its notification to a TransactionState listener. I will try to implement that; I suppose the problem will then be solved.

Anyway thank you very much.