Scaling a Significant DMN Workload Through the REST API

Currently, I am experimenting with the external task pattern for the whole process, keeping the load on the Camunda engine minimal: everything, including tasks that are essentially "multi-instance" in nature, is performed by external task workers, and Camunda's role is almost purely orchestration. However, since there is a business need to make some rules/decisions parametric and visual, I keep DMN models for the decisions that need them. My approach is to deploy the DMN model to the engine and then evaluate it from the external task workers, passing in variables and reading back the result via the Evaluate Decision REST API. But because there is only one decision task in the BPMN model, and because in that task I evaluate millions of rows in a loop, this has become a bottleneck in the process.
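For reference, this is roughly what my per-row call looks like today, as a simplified sketch using a blocking `reqwest` client; the decision key `riskDecision` and the input variables `amount` and `category` are placeholders, not my real model:

```rust
// Cargo.toml (assumed): reqwest = { version = "0.11", features = ["blocking", "json"] }
//                       serde_json = "1"
use serde_json::{json, Value};

/// Evaluate a deployed DMN decision by key for a single row of inputs.
/// `base_url` is the engine REST root, e.g. "http://localhost:8080/engine-rest".
fn evaluate_decision(
    client: &reqwest::blocking::Client,
    base_url: &str,
    decision_key: &str,
    amount: f64,
    category: &str,
) -> Result<Value, reqwest::Error> {
    let url = format!("{base_url}/decision-definition/key/{decision_key}/evaluate");
    // Camunda expects typed variable values in the request body.
    let body = json!({
        "variables": {
            "amount":   { "value": amount,   "type": "Double" },
            "category": { "value": category, "type": "String" }
        }
    });
    // The response is a JSON array: one entry per matched rule,
    // each mapping output names to typed values.
    client.post(&url).json(&body).send()?.error_for_status()?.json()
}
```

Each row costs one full HTTP round trip plus one engine-side evaluation, which is why this loop dominates the runtime.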

What should be the next step for scalability? I can't increase the number of external task workers, because there is only one task (but within that task there are millions of rows to process). So, in that one task, I iterate through my array of input rows and call the Evaluate Decision API once per row. If I converted this worker to a concurrent, parallel design, would the Camunda engine be able to respond to each request properly? Ideally, I'd like to pull the DMN model into my external task code and apply the decision without a REST call per row, but I believe that's not an option in Rust, as there is no DMN engine implemented for it.
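For concreteness, the concurrent variant I have in mind would look something like this sketch using `tokio`, async `reqwest`, and `buffer_unordered` from the `futures` crate; the endpoint, the decision key `riskDecision`, the `amount` variable, and the in-flight limit of 32 are all assumptions for illustration:

```rust
// Cargo.toml (assumed): tokio = { version = "1", features = ["full"] }
//                       reqwest = { version = "0.11", features = ["json"] }
//                       futures = "0.3", serde_json = "1"
use futures::stream::{self, StreamExt};
use serde_json::{json, Value};

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let client = reqwest::Client::new();
    let url = "http://localhost:8080/engine-rest/decision-definition/key/riskDecision/evaluate";

    // Stand-in for the millions of input rows; each becomes one evaluation.
    let rows: Vec<f64> = (0..10_000).map(|i| i as f64).collect();

    // Evaluate rows concurrently, but cap in-flight requests so the
    // engine (and its database connection pool) is not overwhelmed.
    const MAX_IN_FLIGHT: usize = 32;

    let results: Vec<Value> = stream::iter(rows)
        .map(|amount| {
            let client = client.clone();
            async move {
                let body = json!({
                    "variables": { "amount": { "value": amount, "type": "Double" } }
                });
                client.post(url).json(&body).send().await?
                    .error_for_status()?
                    .json::<Value>().await
            }
        })
        .buffer_unordered(MAX_IN_FLIGHT)
        .filter_map(|r| async { r.ok() }) // drop failed evaluations for brevity
        .collect()
        .await;

    println!("evaluated {} rows", results.len());
    Ok(())
}
```

My question, then, is whether this kind of fan-out is safe, and what a reasonable in-flight limit would be, given that every evaluation still passes through the engine's REST layer and database connection pool.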