Calling an external API in a Service Task and waiting for a callback in the next process

@Ingo_Richtsmeier, I have another question about external tasks.

I’ve switched to external tasks and use the task id as the correlation id (UUID4) for callbacks.
So I just call externalTaskService.complete() or externalTaskService.handleFailure() when I get a success or error callback.
I assume this approach should work even in the case of parallel multi-instance executions.
But I ran into a problem: my infrastructure requires a unique correlation id for each request.
So when a task fails and the engine retries it, I send the request to the external REST API with the same correlation id (the task id).
What could I use instead of the task id that would be unique even across task retries?
Or maybe there is a way to regenerate the id for a failed task?
Or I could fetch tasks one by one and generate a unique workerId (UUID4) for each task, and when a callback arrives, find the task by its workerId, but I’m not sure this is correct behavior.
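
To illustrate, the callback side of my current approach looks roughly like this (simplified; externalTaskService, workerId, errorMessage, taskRetries and retryTimeout are assumed to be in scope, and the correlation id is the external task id):

        // success callback: the correlation id is the external task id
        externalTaskService.complete(correlationId, workerId);

        // error callback: decrement retries and schedule a retry after retryTimeout ms
        externalTaskService.handleFailure(correlationId, workerId, errorMessage, taskRetries - 1, retryTimeout);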

Hi @thedenische,

what about appending a sequence number to the taskId like taskId1234#1?

You can easily strip the suffix before completing the task.
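
Something like this (just a sketch; the separator and attemptNo, a counter you keep per attempt, are up to you):

        // build a per-attempt correlation id, e.g. "taskId1234#1"
        String correlationId = task.getId() + "#" + attemptNo;

        // in the callback: strip the suffix to get the task id back
        String taskId = correlationId.substring(0, correlationId.indexOf('#'));
        externalTaskService.complete(taskId, workerId);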

Hope this helps, Ingo

Yes, that should help, but I have another restriction… the correlationId must be in UUID4 format.
Currently I generate a unique UUID4 workerId for each task and use it as the correlation id.
This also avoids the situation where I send a request and, after a callback timeout (when the lockDuration has expired), send the request one more time. If I then get a callback from the first request, I can simply ignore it, because the task will already be locked by another workerId.
The only thing I’m worried about with this approach is that I use a generated workerId, while in all the examples I have seen it is a constant value.
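
Concretely, it looks like this (a sketch; sendRequest stands in for the async call to the external system, and UUID is java.util.UUID):

        // a fresh workerId per fetch doubles as the correlation id
        String workerId = UUID.randomUUID().toString();

        externalTaskService.fetchAndLock(1, workerId).topic(topicId, lockDuration)
            .execute().forEach(task -> sendRequest(task, workerId));

        // in the callback: look up the task that is still locked by this workerId;
        // if it is null, the lock expired and the task was re-fetched under a new
        // workerId, so this is a stale callback and can be ignored
        ExternalTask task = externalTaskService.createExternalTaskQuery()
            .workerId(workerId).locked().singleResult();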

Hi @thedenische,

The workerId is useful to identify which worker has locked the task. If you run several workers for the same topic, you can easily identify which of them has died.

But the only thing the engine cares about regarding the workerId is that the same worker that locked the task also completes it; otherwise it responds with an error.
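
So a stale callback that tries to complete a task which was meanwhile re-locked under a new workerId is simply rejected, for example (a sketch; the generic ProcessEngineException is caught here since it is the common supertype of the engine's errors):

        try {
            externalTaskService.complete(taskId, staleWorkerId);
        } catch (ProcessEngineException e) {
            // the task is locked by a different workerId, so it is safe to ignore
        }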

Hope this helps, Ingo

@Ingo_Richtsmeier, thank you a lot for your help.

I’m sorry for the late questions, but I think they fit better in the same topic.
I have one more question about external tasks for asynchronous integrations and handling callback timeouts:

For executing tasks I use the following code:

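        // fetch at most one task for this worker and lock it for lockDuration milliseconds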
        externalTaskService.fetchAndLock(1, workerId).topic(topicId, lockDuration)
            .execute().forEach(task -> {
                // do async request to external system
            });

If the request fails or an error callback is received, I decrement the task retries:

        externalTaskService.handleFailure(taskId, workerId, errorMessage, taskRetries - 1, retryTimeout);

But I’m not sure how to handle the case when the callback wait times out (i.e. the lockDuration expires).
I tried to decrement the retries before fetching tasks:

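        // find tasks whose lock has already expired and which still have retries left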
        externalTaskService.createExternalTaskQuery()
            .lockExpirationBefore(new Date()).withRetriesLeft()
            .list().forEach(expiredTask -> {
                externalTaskService.setRetries(expiredTask.getId(), expiredTask.getRetries() - 1);
            });

        externalTaskService.fetchAndLock() ...

But failed tasks also satisfy this condition, so this code decrements their retries a second time (the first decrement happens in externalTaskService.handleFailure()).
Is there a way to separate failed tasks from expired ones?
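
One workaround I can think of is to remember on the worker side which tasks were already failed via handleFailure and skip them in the sweep (just a sketch, assuming a single worker instance with in-memory state; Set and ConcurrentHashMap come from java.util and java.util.concurrent):

        // task ids whose retries were already decremented via handleFailure
        Set<String> failedTaskIds = ConcurrentHashMap.newKeySet();

        // on an error callback, before calling handleFailure:
        failedTaskIds.add(taskId);

        // in the sweep: only decrement tasks that expired without a failure
        externalTaskService.createExternalTaskQuery()
            .lockExpirationBefore(new Date()).withRetriesLeft()
            .list().forEach(expiredTask -> {
                if (!failedTaskIds.remove(expiredTask.getId())) {
                    externalTaskService.setRetries(expiredTask.getId(), expiredTask.getRetries() - 1);
                }
            });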

Alternatively, it might be solved by decrementing the retries when locking the task (and not decrementing when a task fails):

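        // decrement the retries up front, at lock time, instead of on failure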
        externalTaskService.fetchAndLock(1, workerId).topic(topicId, lockDuration)
            .execute().forEach(task -> {
                externalTaskService.setRetries(task.getId(), task.getRetries() - 1);
                // do async request to external system
            });

But in this case, if there are no retries left, an incident will be created only after the retryTimeout and not immediately when the task fails.
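
Though perhaps I could still call handleFailure on the error callback, passing the already-decremented retries value through unchanged, since the engine creates an incident as soon as the retries reach 0. A sketch under that assumption, where currentRetries is the value read from the locked task:

        // currentRetries was already decremented at lock time; passing it through
        // unchanged lets the engine create an incident immediately once it is 0
        externalTaskService.handleFailure(taskId, workerId, errorMessage, currentRetries, retryTimeout);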