How to ensure parallel execution behind parallel gateway?

Hi,
we have a process where we do a lot of AMQP-based communication. We are using Camunda as the orchestration service for all our services.

This is an example of how we do this:

  • Send Message A means the service running the Camunda engine sends an AMQP message
  • The message is processed by a different service, which will eventually send Message B
  • The service running Camunda has an AMQP listener for Message B and tries to correlate the received message with the process.

Usually that works quite well. But sometimes the message is received before the process token has reached the “Receive Message Task”. This is really bothering us. How can we ensure that the process token is at the receive task when the message is received?

Some notes for our process:

  • All our tasks are flagged with “Asynchronous Before”
  • The usual processing time of the complete process is around 1-2 seconds

Any help on this issue is greatly appreciated!

cheers
Reinhard

Hi @Reinhard,

You could set the job priority of the Receive Task higher than the priority of the Send Task. This is possible because you have marked them as Async Before. Then you should also set the jobExecutorAcquireByPriority engine configuration flag to true. That way the Receive Task's job is executed first, so the message subscription exists before Message A is sent.
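A rough sketch of what that could look like in the BPMN XML, using Camunda's job priority extension attributes (the IDs, message name, delegate expression, and priority values here are placeholders, not from the original process):

```xml
<!-- Higher priority: this job should be acquired and executed first,
     so the message subscription exists before Message A goes out. -->
<bpmn:receiveTask id="receiveMessageB" name="Receive Message B"
    messageRef="MessageB"
    camunda:asyncBefore="true"
    camunda:jobPriority="100" />

<!-- Lower priority: the send job runs after the receive task's job. -->
<bpmn:sendTask id="sendMessageA" name="Send Message A"
    camunda:asyncBefore="true"
    camunda:jobPriority="10"
    camunda:delegateExpression="${sendMessageADelegate}" />
```

In addition, jobExecutorAcquireByPriority has to be enabled in the engine configuration (e.g. as a property in bpm-platform.xml or the Spring configuration), otherwise the job executor ignores the priorities when acquiring jobs.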

regards,
Dominik


I am curious why the send and receive actions are not modelled serially, for example using two intermediate message events?

Catching Message B with a non-interrupting message start event might also be a possible solution to this problem (untested).
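An untested sketch of that idea as an event subprocess with a non-interrupting message start event (all IDs and the message reference are placeholders): the subscription for a message start event of an event subprocess is opened as soon as its surrounding scope becomes active, which is what avoids the race.

```xml
<bpmn:subProcess id="catchMessageB" triggeredByEvent="true">
  <!-- Non-interrupting: the main flow keeps running while B is handled. -->
  <bpmn:startEvent id="messageBStart" isInterrupting="false">
    <bpmn:messageEventDefinition messageRef="MessageB" />
  </bpmn:startEvent>
  <!-- ... handle Message B here ... -->
</bpmn:subProcess>
```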

Hi @mEdling

The potential problem with modelling the send task followed by a receive task is that there is a small race condition: the response to the send task could arrive before the process engine has completed setup of the receive task (reaching the receive task causes a DB flush).

For an asynchronous pattern such as this, it would be a very unusual event, and if it did occur, a synchronous request/response service task may be a better option in the first place.

The original post tried to get around this problem by creating a parallel flow. In their case, they may have created yet another race condition: because they marked both the send task and the receive task as async before, the engine may not have created the receive task before the send task occurred. If only the send task were marked as async before, the parallel execution would have created the receive task before the send task, and the receive task is a natural async continuation point anyway…
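A minimal sketch of that fix for the two branches behind the parallel gateway (IDs, message name, and delegate expression are placeholders): the async-before flag stays only on the send task, so the receive task is entered, and its message subscription committed, in the same transaction as the gateway fork, before the send job can run.

```xml
<!-- Only the send task is an async continuation point. -->
<bpmn:sendTask id="sendMessageA" name="Send Message A"
    camunda:asyncBefore="true"
    camunda:delegateExpression="${sendMessageADelegate}" />

<!-- No asyncBefore here: the subscription is created during the fork. -->
<bpmn:receiveTask id="receiveMessageB" name="Receive Message B"
    messageRef="MessageB" />
```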

regards

Rob