External Task: change retries on lock timeout

Hi,

I was wondering how to decrease the retries if an external task was executed for too long and was taken over by another Camunda external task handler?

Example:

  • camunda 1 starts - fetch and lock available tasks every minute
  • camunda 2 starts - same as above
  • external task 1 is created and sent to topic external topic 1
  • camunda 1 fetches task 1 and locks it for 30 seconds
  • task 1 takes 45 seconds and the lock is exceeded
  • camunda 2 fetches and takes over task 1 (because the lock time was exceeded)

At this stage I would like to decrease the retries for the external task by one. It is possible to do this using complete/handleFailure, but how can camunda 2 know that task 1 was already executed and failed because the execution time was exceeded?
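For context, this is roughly the worker setup I mean, sketched with the Camunda Java external task client (the engine URL and topic name are just placeholders):

```java
import org.camunda.bpm.client.ExternalTaskClient;
import org.camunda.bpm.client.task.ExternalTask;

public class Worker1 {

  public static void main(String[] args) {
    // "camunda 1" - "camunda 2" runs the same code as a second process
    ExternalTaskClient client = ExternalTaskClient.create()
        .baseUrl("http://localhost:8080/engine-rest") // placeholder engine URL
        .build();

    client.subscribe("external-topic-1") // placeholder topic name
        .lockDuration(30_000) // lock for 30 seconds
        .handler((externalTask, externalTaskService) -> {
          // this sometimes takes ~45 seconds, so the lock expires and
          // the other worker fetches the same task again
          doWork(externalTask);
          externalTaskService.complete(externalTask);
        })
        .open();
  }

  private static void doWork(ExternalTask task) {
    // long-running business logic (placeholder)
  }
}
```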

As I understand it, there is no way to implement a system-wide external task execution listener which could intercept such an event?

Regards,
Adam.


Hi @abednarski79

This is a great question, thanks for asking.
One solution is to use the Extend Lock call from worker1. If it’s still running and approaching the end of the lock, it can extend it.
You could also decide to Unlock the task from worker1 if it’s taking too long. This would let worker2 pick it up without any fear of it being run at the same time by worker1.
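With the Java external task client that could look roughly like this - a sketch, not a recipe; the 20-second threshold and the chunked work loop are just stand-ins for your own logic:

```java
import org.camunda.bpm.client.ExternalTaskClient;
import org.camunda.bpm.client.task.ExternalTask;

public class LongRunningWorker {

  public static void main(String[] args) {
    ExternalTaskClient client = ExternalTaskClient.create()
        .baseUrl("http://localhost:8080/engine-rest") // placeholder
        .build();

    client.subscribe("external-topic-1") // placeholder
        .lockDuration(30_000)
        .handler((externalTask, externalTaskService) -> {
          long lockedAt = System.currentTimeMillis();

          for (int step = 0; step < 10; step++) {
            doChunkOfWork(externalTask, step); // placeholder for a slice of the real work

            if (System.currentTimeMillis() - lockedAt > 20_000) {
              // Option 1: still making progress, so buy another 30 seconds
              externalTaskService.extendLock(externalTask, 30_000);
              lockedAt = System.currentTimeMillis();

              // Option 2 (instead of extending): hand the task back so worker2
              // can pick it up without the risk of both running it at once
              // externalTaskService.unlock(externalTask);
              // return;
            }
          }

          externalTaskService.complete(externalTask);
        })
        .open();
  }

  private static void doChunkOfWork(ExternalTask task, int step) {
    // a slice of the long-running business logic (placeholder)
  }
}
```

Extending only works while worker1 still holds the lock, so it has to happen before the original lock duration runs out.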

Hi @Niall

Are there any drawbacks to decreasing the retries of a task (using the Set Retries REST API method) just before doing the task?

Like this: I use fetchAndLock => receive the task => decrease the task retries => then try to do it.
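Roughly like this, sketched with plain Java HTTP against the Set Retries endpoint (PUT /external-task/{id}/retries) - the engine URL and task id are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SetRetriesBeforeWork {

  // Called right after fetchAndLock returned the task, before starting the actual work
  static void setRetries(String engineRestUrl, String taskId, int retries) throws Exception {
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create(engineRestUrl + "/external-task/" + taskId + "/retries"))
        .header("Content-Type", "application/json")
        .method("PUT", HttpRequest.BodyPublishers.ofString("{\"retries\": " + retries + "}"))
        .build();

    HttpResponse<String> response =
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    // a 204 No Content response means the retries were updated
  }
}
```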

Hi @aaksarin,

No. You have to fill the retries in the failure response.

If you haven’t decreased the number, you will run into an infinite loop of retries.
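With the Java external task client the failure response could look like this (a sketch; the initial retry count of 3 and the 10-second retry timeout are just example values):

```java
import org.camunda.bpm.client.ExternalTaskClient;

public class FailingWorker {

  public static void main(String[] args) {
    ExternalTaskClient client = ExternalTaskClient.create()
        .baseUrl("http://localhost:8080/engine-rest") // placeholder
        .build();

    client.subscribe("external-topic-1") // placeholder
        .lockDuration(30_000)
        .handler((externalTask, externalTaskService) -> {
          try {
            // business logic goes here (placeholder)
            externalTaskService.complete(externalTask);
          } catch (Exception e) {
            // getRetries() is null before the first failure, so initialize it yourself
            Integer retries = externalTask.getRetries();
            int remaining = (retries == null) ? 3 : retries - 1;

            externalTaskService.handleFailure(
                externalTask,
                e.getMessage(), // error message
                e.toString(),   // error details
                remaining,      // decreased retries; 0 creates an incident
                10_000L);       // retry timeout in milliseconds
          }
        })
        .open();
  }
}
```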

Hope this helps, Ingo


Hi @Ingo_Richtsmeier
Thanks for the reply. But will the failure method work if the task lock duration for this worker has passed and the task has already been taken by another worker? Or will failure only work while this worker's lock on the task is still active?

I recently had a problem where workers took longer than the lock duration to do a task, and my workers kept executing that task until my other DB was full. Now I want to protect myself from this by setting the task retries just before the worker starts any task work.

Hi @aaksarin,

I assumed that you just asked about the internal retries variable in your worker code, living in memory.

There it makes no difference when you update it.

Once the response is sent to the process engine, the task is available for other workers.

For this problem, you can use Extend Lock on External Task (docs.camunda.org) to increase the lock time.
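The call itself is a plain POST to /external-task/{id}/extendLock - a sketch with the Java HTTP client; the workerId must be the worker that currently holds the lock, and 60000 is just an example duration:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ExtendLockCall {

  static void extendLock(String engineRestUrl, String taskId, String workerId) throws Exception {
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create(engineRestUrl + "/external-task/" + taskId + "/extendLock"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(
            "{\"workerId\": \"" + workerId + "\", \"newDuration\": 60000}"))
        .build();

    HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    // a 204 No Content response means the lock now runs for another 60 seconds from now
  }
}
```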

Hope this helps, Ingo