Sizing containers

Hi Team, we are planning a container-based deployment of the Camunda BPM platform (Enterprise edition) on OpenShift, and we need sizing information. For example, if it has to process 1K to 2K process instances daily with 400-500 concurrent users, what would be the best recommendation? Is there already a formula or guide we can use for this? Can you please guide us? Thank you!!

Hi @bkr

You’re going to hear “It depends” a lot - because some things, like the number of instances started per second, matter a lot, while other things, like the number of concurrent users, often don’t have much of an effect.
I suggest you start with the best practice guide on how to size your environment. If you have any further questions and you’re an EE customer, you should be able to organize some time with the consulting team to walk you through it step by step.

Thanks Niall for your swift response. We already looked into the article and found that our use case fits somewhere in the middle tier (between medium and high). How many container pods would we need based on our requirements? Also, is there a standard formula that fits all cases, rather than looking for consulting help? We would become an EE customer, but we are not one yet, and we want to understand the cost implications for operational management. How should we proceed? Would you be able to help us? Thanks.

There really is no formula that fits everything. There are too many variables to consider.
Some depend on the number of wait states in your process, or on how often users will be querying for tasks.

There are also lots of ways to scale your setup. One option is to increase the number of Camunda nodes, but you may also need to make sure your process engine settings make sense.
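To make the “process engine settings” part concrete: if you run Camunda 7 with the Spring Boot starter, the job executor thread pool is one of the first knobs to look at. The values below are purely illustrative, not recommendations - you would tune them against your own load tests.

```yaml
# application.yaml - illustrative starting point only; tune against real load tests
camunda:
  bpm:
    job-execution:
      core-pool-size: 5          # worker threads kept alive for async continuations and timers
      max-pool-size: 20          # upper bound on threads under load
      queue-capacity: 50         # jobs buffered before acquisition backs off
      max-jobs-per-acquisition: 5  # jobs fetched per acquisition cycle
```

Raising these lets one node process more jobs in parallel, but it also increases pressure on the database - which is why load testing the whole setup matters more than any single setting.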

The only way to know for sure is to test with the expected load and tweak the setup until it’s optimal.

Hi @bkr,

from my experience, it is more important to have a proper database that can handle the load than a large number of process engines.

Two pods with a common memory size, to enable failover, should be sufficient for most use cases.
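A two-pod setup like that could be sketched as a standard Kubernetes Deployment on OpenShift. Everything here - the image, labels, and resource figures - is a placeholder for illustration, not an official sizing recommendation:

```yaml
# Illustrative sketch only - image name, labels, and resource figures are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: camunda-bpm
spec:
  replicas: 2                  # two pods for failover
  selector:
    matchLabels:
      app: camunda-bpm
  template:
    metadata:
      labels:
        app: camunda-bpm
    spec:
      containers:
        - name: camunda
          image: camunda/camunda-bpm-platform:latest  # EE customers would use their licensed image
          resources:
            requests:
              memory: "2Gi"    # keep both pods at a common memory size
              cpu: "1"
            limits:
              memory: "2Gi"
              cpu: "2"
```

Since both engines share one database, the database sizing stays the critical factor regardless of how many pods you add.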

Hope this helps, Ingo

Thanks both for your responses.