Embedded or separate process engine?

Hi,

We need some guidance in deciding which architecture to go for.

Some deciding factors:

  • we are a Kotlin + Spring Boot shop
  • we would like to use the Java API as much as possible
  • we want to keep the workflow engine separate from our application

We tried running Camunda as a separate app and communicating via the REST API, but the REST API is not fun to use; in particular, we don't like the serialization and deserialization.

Looking at the JSON format that objects need to be in (Variables in the REST API | docs.camunda.org), it looks a bit non-standard, so do we have to write our own serialization and deserialization module? That's why it would have been nice if we could just use the Java Fluent API and have it all done in the background for us.
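For context, each variable in the Camunda REST API has to be wrapped in a value/type object rather than sent as a plain JSON field, which is why it feels like hand-rolling a serialization layer. A minimal sketch of that wrapping in plain Java (the variable names `orderId` and `amount` are made-up examples, not part of any Camunda API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: every process variable sent over REST must be wrapped as
// { "value": ..., "type": ... } before it goes into the request body.
public class RestVariablePayload {

    // Wrap a plain value in the map shape the Camunda REST API expects.
    public static Map<String, Object> wrap(Object value, String type) {
        Map<String, Object> wrapped = new LinkedHashMap<>();
        wrapped.put("value", value);
        wrapped.put("type", type);
        return wrapped;
    }

    public static void main(String[] args) {
        Map<String, Object> variables = new LinkedHashMap<>();
        variables.put("orderId", wrap("A-123", "String"));
        variables.put("amount", wrap(42, "Integer"));
        // A JSON mapper (e.g. Jackson) still has to serialize this nested
        // shape; there is no flat { "orderId": "A-123" } form.
        System.out.println(variables);
    }
}
```

With the Java Fluent API this wrapping is done for you by the `Variables` factory, which is exactly the convenience being asked about.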

Could someone shed some light on which direction to go and what the best approach is?

Sorry if it's too much of a noob question; we are just getting started with it. :slight_smile:

Thanks

Sounds like you'd be best set up to use Camunda as part of a Spring Boot application.
For details on how exactly to make that kind of choice, you can read this blog post:

You can also check out our best practice guide on the various lower-level choices, like how to implement your services and picking a database and application server:
https://camunda.com/best-practices/deciding-about-your-stack/

Thanks for the quick response. Yeah, I was coming from those links, and I'm still not confident about which way to go :slight_smile:

So if we go with the Spring Boot application, then the workflow engine needs to be part of the application and cannot be a separate process?

So the engine does indeed need to be part of the application, but you can consider it nothing more than adding a dependency to your Spring Boot project.
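For illustration, a minimal sketch of that dependency in a Gradle (Kotlin DSL) build file; the version is a placeholder, so check the Camunda docs for the one matching your Spring Boot version:

```kotlin
dependencies {
    // Embeds the process engine plus the Cockpit/Tasklist web apps.
    // Replace 7.x.y with the Camunda 7 version compatible with your stack.
    implementation("org.camunda.bpm.springboot:camunda-bpm-spring-boot-starter-webapp:7.x.y")
    // Optionally expose the REST API from the same node:
    implementation("org.camunda.bpm.springboot:camunda-bpm-spring-boot-starter-rest:7.x.y")
}
```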

The process models themselves don't actually need to be kept in the application server; they can still be deployed to it independently via the REST API, and they are subsequently stored in the database, not the application.
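As a sketch of that independent deployment, assuming the default REST root `/engine-rest` and a hypothetical `order.bpmn` file (host, deployment name, and file name are made-up examples):

```shell
# Deploy a BPMN model to a running engine without redeploying the application.
curl -X POST http://localhost:8080/engine-rest/deployment/create \
  -F "deployment-name=order-process" \
  -F "order.bpmn=@order.bpmn"
```

The deployed model then lives in the engine's database, independent of the application's own release cycle.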

Right. And if we go with the REST API, then are we stuck with our own serialization and deserialization? Or is there some Java API we could utilize?

Also, I am trying to understand whether there is any limitation if we go the REST API way; we really don't want to get stuck later on :frowning:

Also, are there any limitations to choosing the embedded engine approach (the Spring Boot way)?

There are pros and cons to both approaches.
It really depends on your use case. The only way to know for sure is to do some prototyping or a proof of concept.

Hi @Niall,
In the “embedded Camunda process engine in the existing Spring Boot application” approach: if we deploy 4 or 8 instances of this service, will all of those instances have their own process engine, and hence their own Cockpit and Tasklist?
In that case, is the REST API with external tasks against a standalone process engine the only option?

Yes

Not necessarily. The engine and its front ends are independent, so if you wanted to, you could create nodes that are intended only for external task workers to register and fetch work, while other nodes are dedicated to front-end users.
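A rough sketch of the worker side of such an external-task-only node, using Camunda's external task client for Java; the base URL and topic name are placeholders, and this is an assumption-laden sketch rather than a complete setup:

```java
import org.camunda.bpm.client.ExternalTaskClient;

// Worker node: no web front end, just long-polls the engine's REST API
// for work on a topic. "http://engine:8080/engine-rest" and "payment"
// are made-up placeholders.
public class PaymentWorker {
    public static void main(String[] args) {
        ExternalTaskClient client = ExternalTaskClient.create()
                .baseUrl("http://engine:8080/engine-rest")
                .asyncResponseTimeout(10_000) // long polling interval
                .build();

        client.subscribe("payment")
                .handler((task, service) -> {
                    // ... do the actual work here ...
                    service.complete(task);
                })
                .open();
    }
}
```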

@Niall thanks for the reply.
But let's say one node goes down; then the process instances running on that node will also be lost, right?
Can multiple process engines share the same database and the same front end? If yes, please let us know how to do that. In that case, can other nodes pick up the process instances and complete the tasks?
Since we perform immutable deployments, the next time we deploy to a new VM, the old data from the process engine will be gone as well.

There could be an alternative for you: Home
It is a community extension that implements the Camunda Java API via REST. So you are programming against the Java API, but calls are sent to a remote engine via REST. It is not feature complete yet, so you would have to verify that your use cases are supported.

Nope, no process data is stored on the node itself; it's stored in the database. You can have more than one node using the same database, so if one goes down, the others will still be able to do the work.

This isn't a problem and works out of the box: if you just start up two nodes pointing at the same database, there's no additional work needed.
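Concretely, “pointing at the same database” is just the datasource configuration. A sketch of an `application.yaml` that every node would share (the URL, database name, and credentials are placeholders):

```yaml
# Every engine node uses the same shared database, so process state
# survives the loss of any single node.
spring:
  datasource:
    url: jdbc:postgresql://shared-db:5432/camunda   # placeholder host/db
    username: camunda
    password: ${DB_PASSWORD}
```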

It depends on how you deploy things. If you're deploying a process model, it'll be stored in the database, so it doesn't matter if the node where the engine runs goes down, as long as the database persists the data.
