Fetch and Lock API - usage, usePriority and limits

Hello,

At the moment I am using the Camunda FetchAndLock API. I have a C# “worker” implementation for every unique topic name (external BPMN task), and each worker polls for “work” with a FetchAndLock request, with a timer between polls. So right now I have a 1:1 relation between my workers and the different external tasks in the different BPMN models I am using.

Side note - I am using the external tasks in a “generic” manner, so the same external task can be reused in different BPMN models with the same code-behind logic.

I am thinking about making some adjustments, maybe going as far as having only one such C# “worker” that polls work for all of the active external tasks across my BPMN models and active workflow instances with the FetchAndLock API.
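To illustrate, here is a minimal sketch of such a single worker, assuming the Camunda 7 REST endpoints POST external-task/fetchAndLock and POST external-task/{id}/complete; the topic names, worker id, base URL and polling interval are placeholders:

```csharp
// Minimal sketch: one worker polls several topics in a single FetchAndLock
// call and dispatches by topic name. Handler bodies are placeholders.
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class SingleWorker
{
    static readonly HttpClient Http =
        new HttpClient { BaseAddress = new Uri("http://localhost:8080/engine-rest/") };

    static async Task Main()
    {
        while (true)
        {
            var request = new
            {
                workerId = "worker-1",
                maxTasks = 10,
                usePriority = true,
                // one entry per topic this worker is responsible for
                topics = new[]
                {
                    new { topicName = "charge-card",  lockDuration = 60000 },
                    new { topicName = "send-invoice", lockDuration = 60000 }
                }
            };

            var response = await Http.PostAsync(
                "external-task/fetchAndLock",
                new StringContent(JsonSerializer.Serialize(request), Encoding.UTF8, "application/json"));
            response.EnsureSuccessStatusCode();

            using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
            foreach (var task in doc.RootElement.EnumerateArray())
            {
                var id    = task.GetProperty("id").GetString();
                var topic = task.GetProperty("topicName").GetString();

                // dispatch to the handler registered for this topic,
                // then report completion for the locked task
                Console.WriteLine($"Handling task {id} for topic {topic}");
                await Http.PostAsync(
                    $"external-task/{id}/complete",
                    new StringContent(JsonSerializer.Serialize(new { workerId = "worker-1" }),
                                      Encoding.UTF8, "application/json"));
            }

            await Task.Delay(TimeSpan.FromSeconds(5)); // timer between polls
        }
    }
}
```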

I wanted to ask a few questions about it:

  1. Is there a limit to how many different external tasks (topics) I can ask for in a single FetchAndLock request?

  2. I read that the usePriority flag in the FetchAndLock request can be set to decide how external tasks are fetched - by their priority or arbitrarily. Is there some sort of mechanism to prevent external task starvation?
    For example - let's say I launch 10,000 instances of the few BPMN workflow models I am working with.
    Can it happen that an active external task has to wait minutes or hours before it is picked up, because the FetchAndLock request picks tasks arbitrarily? Or even worse, if I keep launching more and more workflow instances, could that active external task be forgotten, “starved”, so that its workflow instance does not move forward for hours or even days?

  3. I was wondering what might be a better approach for using the FetchAndLock API?
    Is it having one C# worker per external task (1:1), one worker for all of the tasks (1:all), or maybe some sort of hybrid, for example separating the external tasks logically into groups and having one worker per group polling its active tasks?

Thank you in advance!

Regarding 1: I believe this filter translates into an IN clause in the SQL query, and e.g. Oracle can only handle up to 1000 values in one such clause.

Regarding 2: There is no such mechanism. We considered implementing something like that for job execution at one point, but I don’t believe there is an efficient way to do so when the queue is built on top of a relational database. So make sure you have enough processing resources :). It may be possible to build e.g. a priority elevation mechanism outside of the engine by using the ExternalTaskService#setPriority API.
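A rough sketch of what such an elevation mechanism could look like outside the engine, using the REST counterpart of that API (this assumes the GET external-task query with the notLocked filter and the PUT external-task/{id}/priority endpoint; the boost value and polling interval are arbitrary, and paging over large result sets is omitted):

```csharp
// Periodically raise the priority of external tasks that are still waiting,
// so that usePriority-based fetching eventually picks the older ones up.
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class PriorityElevator
{
    static readonly HttpClient Http =
        new HttpClient { BaseAddress = new Uri("http://localhost:8080/engine-rest/") };

    static async Task Main()
    {
        while (true)
        {
            // tasks that are currently waiting (not locked by any worker);
            // paging beyond the first 100 results is omitted in this sketch
            var response = await Http.GetAsync("external-task?notLocked=true&maxResults=100");
            response.EnsureSuccessStatusCode();

            using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
            foreach (var task in doc.RootElement.EnumerateArray())
            {
                var id = task.GetProperty("id").GetString();
                var priority = task.GetProperty("priority").GetInt64();

                // every cycle a task keeps waiting, it gains priority,
                // so long-waiting tasks eventually outrank fresh ones
                var body = JsonSerializer.Serialize(new { priority = priority + 5 });
                await Http.PutAsync(
                    $"external-task/{id}/priority",
                    new StringContent(body, Encoding.UTF8, "application/json"));
            }

            await Task.Delay(TimeSpan.FromMinutes(5));
        }
    }
}
```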

Regarding 3: I haven’t got experience with this, sorry. It probably depends on your use case and on which component (Camunda engine, task poller, task executor) limits the performance. Note that having more than one poller can also be useful in terms of failover.

Cheers,
Thorben
