Providing a separate Camunda API artifact

Hi folks,

during the development of community extensions there are certain use cases where we want to depend on the Camunda Java API without including the entire engine. I would like to start a discussion about the possibility of providing a Camunda Engine API artifact. The same might also apply to providing a Camunda DMN Engine API artifact.

I had an interesting discussion about this with @falko.menge and @BerndRuecker and would like to get the discussion rolling.

I also prototyped a very simple solution as a PoC for this by just repackaging the existing artifact.

Here it is:




How is this different from using a Maven “provided” scope or a Gradle “compileOnly” scope on the engine dependency? (And why is it needed vs. using provided/compileOnly?)
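For reference, the two scopes mentioned here look like this in a build file (the coordinates are the usual camunda-engine ones; the version number is just an example):

```xml
<!-- Maven: available at compile time, not packaged, assumed present at runtime -->
<dependency>
  <groupId>org.camunda.bpm</groupId>
  <artifactId>camunda-engine</artifactId>
  <version>7.15.0</version>
  <scope>provided</scope>
</dependency>
```

```groovy
// Gradle: compile-time only, not on the runtime classpath at all
dependencies {
    compileOnly 'org.camunda.bpm:camunda-engine:7.15.0'
}
```

Both keep the engine out of the produced jar, but neither changes what the engine artifact itself contains.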

Hi Simon,

Falko has pointed me to this thread, so I’ll share my thoughts and I’m happy to discuss this. As a first step, could you please go into some detail on how this separation would help you practically? Once we share an understanding of the benefits, we could then discuss the potential impact on user setups and development practice (if any), so that we can make an informed decision for or against.

As a side note: A technically (and maybe in development practice) less intrusive solution could be to create an artifact within the camunda-engine module with a different classifier that only contains the non-impl classes.
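The classifier idea could be sketched with the Maven jar plugin, assuming the impl classes all live under the impl package as discussed below (the execution id and classifier name are made up for illustration):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <executions>
    <execution>
      <!-- builds camunda-engine-<version>-api.jar next to the main jar -->
      <id>api-jar</id>
      <phase>package</phase>
      <goals>
        <goal>jar</goal>
      </goals>
      <configuration>
        <classifier>api</classifier>
        <excludes>
          <exclude>org/camunda/bpm/engine/impl/**</exclude>
        </excludes>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Note that this only filters class files; it would not by itself verify that the remaining classes have no references into the impl package.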


Hi Thorben,

let me take a step back and explain my requirement a little. Every Java API has two sides: the implementers and the callers. In the case of the Camunda API, the default implementer is the engine and the default caller is a process application. Things change slightly when we speak about community extensions.

Let me give you some examples of implementers:

  • Camunda BPM Mockito extension aims to provide mocks for Camunda API. In doing so it serves as an alternative implementer.
  • Camunda Spring Boot REST extension aims to provide a client implementation of the API, which delegates all calls via REST to a remote engine - again, it is acting as an implementer.

Problem: in both cases you don’t need the entire Camunda engine on the classpath, only the API classes.

Being a caller is usually not a problem, since you need the real implementer (the engine) for things to work anyway. Again, a small example:

  • Camunda BPM Data extension aims to provide uniform access to variables, independent of the access scope. It provides factories to read/write variables from a variable map (part of the common-variables library), the runtime service, task service, case service, external locked task, and variable scope. The user of the extension benefits from the fact that most methods are overloaded, so you can pass any of the scopes mentioned above from which you want to access variables.

Problem: on the binary level, it is only possible to use the library if the classes representing the upper scope are on the classpath. A small but ugly problem: if you want to work on a VariableMap (valid in the context of the DMN engine without the process engine), you still need the process engine on the classpath…
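The overload pattern described above could be sketched like this (the class and method names are made up for illustration; in the real extension the sibling overloads take engine types such as RuntimeService or DelegateExecution, which is what drags the engine onto the classpath):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the overload pattern: one factory name,
// overloaded per "scope". If one overload's parameter type lives in
// another jar, that jar ends up required even by callers that only
// ever use the Map-based overload.
public class VariableFactorySketch {

    // Stand-in for a typed variable accessor over a plain variable map.
    public static String readName(Map<String, Object> variables) {
        return (String) variables.get("name");
    }

    // In the real extension there would be siblings like:
    //   static String readName(RuntimeService rs, String executionId) { ... }
    //   static String readName(DelegateExecution execution) { ... }
    // whose parameter types pull engine classes into the binary interface.

    public static void main(String[] args) {
        Map<String, Object> vars = new HashMap<>();
        vars.put("name", "kermit");
        System.out.println(readName(vars));
    }
}
```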

Generally speaking, it is easier to provide the API for the client than for the implementer.

My idea was to try to separate the required classes into a separate artifact (maybe as an additional artifact with a classifier, as you mention, or as a separate build). The first thing I did was to decide what should be part of it.

My first estimate is that it consists of:

  • Service API (mainly of org/camunda/bpm/engine/* classes and everything they import directly)
  • Delegation API (mainly org/camunda/bpm/engine/delegate/**/*.java)

The service API in particular imports classes from the first-level packages of org/camunda/bpm/engine/, but the main idea was to exclude the entire impl package from being packaged.

So I ended up with the following implementation idea for the moment: I take the engine JAR, unpack it applying a whitelist and a blacklist of classes, and try to compile the result. As a result, I could “draw a line” between API and implementation. There are a few, but pretty serious, violations that I could identify. Some of them are pretty easy to fix, some of them are not…

From the point of view of providing a separate API, a violation means that a class is directly referenced by parts of the API (meaning it is a direct import of an API class and will be required on the classpath by implementers) and is at the same time an implementation class (referencing another dozen implementation classes). As a first approximation (I believe we need something better here), I resolved this with the following approach: I excluded the class from the original and replaced it with my own empty implementation (throwing an exception on every call). I tried to draw the API line as close as possible to the Service and Delegation API, to make the module compile…

I could create a list of the issues identified so far and we can discuss them further. I think there is always a solution for any particular one.

What do you think?



Hi Stephen,

See my answer to Thorben, above…



Hi folks,

I pushed it a little further and committed a two-artifact version with the DMN and Camunda BPM engines as two separate API artifacts (with the DMN Engine API required by the Camunda BPM API).

I double-checked that every class I provided to cut the tie to the implementation contains at least a comment noting where it is referenced from.

Have a look at it on GitHub, please…



@zambrovski I am still unclear how a Maven scope of “provided” or a Gradle “compileOnly” on the camunda-engine dependency does not produce the same result.

Your example of the VariableMap: if you use the compileOnly/provided scope, then your jar will not contain the engine dependency, but you can still use the VariableMap APIs/interfaces/classes in your code.

Hi Simon,

Thanks for your elaborate explanation and also for preparing the github project, that helps. I struggle to follow this part of your post:

I understand that these use cases currently require the entire camunda-engine jar. I also understand that this jar with all the implementation classes has a larger size and more transitive dependencies than an API-only jar would have (e.g. as shown in the readme of your github repository). However, I cannot fully follow the additional issues you point out. Could you maybe make the example more concrete by pointing to the involved classes? If we have a dedicated api jar, is there an added benefit besides the size and number of transitive dependencies?

Sorry for insisting on this. I just want to make sure I really understand all the pain points before discussing solutions.


Hi Thorben,

let me put it like this: if I provide an alternative implementation of the API, why should I have the original implementation in place? It makes the library a little more complicated to use.

If I’m a client library and I want to use a shallow API of 10 classes, why do I need to put the entire implementation dependency on the classpath?

The main reason is probably to have a dedicated API and separate it from the implementation. If a developer gets only the API classes, it is clear what the expected way of programming is. Getting the API mixed up with the implementation leads to dirty hacks.

I believe that in most places the Camunda API has a strong separation between API and implementation, with maybe a few exceptions. I think it is very valuable to push the idea of API separation, to enable people to look at the API, understand it and maybe improve it. For example, there might be an immutable command-like API exposed to the user instead of multiple overloaded methods for the same API call. There are many code design and API design issues I want to think about without reasoning about the underlying layers of implementation.

For example, a follow-up question is whether the API includes only the Java service API and delegates, or whether something like the internal Command API with the command executor is also an API.

Regarding the DMN use case: there is an example of running the DMN engine without the Camunda BPM engine, and there are many tricks needed to make it work. Not a trivial issue, but it would be easier if we had the API/impl separation. And finally, the API is more stable than the implementation: you can change the implementation without changing the API, so why do these two need to be bound together?




the provided/compileOnly scopes help you during compilation but not at runtime. At runtime, if my library provides an alternative implementation of the runtime service, I still have to put camunda-engine on the classpath to get the RuntimeService interface.

Just have a look at the implementation of the REST client (camunda-rest-client-spring-boot/AbstractRuntimeServiceAdapter.kt at develop · camunda-community-hub/camunda-rest-client-spring-boot · GitHub). It implements the API, takes the parameters from the callers, adapts them to a REST call, executes it and converts the result back. There is no need to have the command executor, the RuntimeServiceImpl and everything else from camunda-engine on the classpath for this…

Look at the implementation: camunda-rest-client-spring-boot/RemoteExternalTaskService.kt at develop · camunda-community-hub/camunda-rest-client-spring-boot · GitHub.

Still, for now there is no way to separate these dependencies…
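The adapter pattern described above could be sketched like this (TaskApi and RemoteTaskAdapter are made-up names standing in for interfaces like ExternalTaskService and classes like RemoteExternalTaskService; the point is that the implementer only needs the API interface, never the engine's *Impl classes):

```java
// Hypothetical illustration of an alternative implementer of an API
// interface. The adapter translates API calls into something else
// (here a fake REST request string) without any engine internals.
public class RemoteAdapterSketch {

    // Stand-in for an engine API interface: the only compile- and
    // runtime dependency the adapter actually needs.
    interface TaskApi {
        String complete(String taskId);
    }

    // Stand-in for the REST-backed implementation.
    static class RemoteTaskAdapter implements TaskApi {
        @Override
        public String complete(String taskId) {
            // A real adapter would issue an HTTP call here.
            return "POST /task/" + taskId + "/complete";
        }
    }

    public static void main(String[] args) {
        TaskApi api = new RemoteTaskAdapter();
        System.out.println(api.complete("42"));
    }
}
```

With a separate API artifact, only the TaskApi side of this sketch would need to be on the adapter's classpath.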



@zambrovski is your use case about writing a runtime script that uses the API?

Hi Simon,

Thanks for the elaboration, I think I understand it better now. Besides the number of dependencies and the size of the artifact, there is no direct technical problem; it is more about ease of understanding, working with, and potentially evolving the API.

As a next step I will add my perspective (and the “product perspective”) on this idea. I don’t mean this to shut down the idea or discussion early, but I want to make transparent some constraints within which I would like to find a solution.

Current situation:

  • API for us is any class or interface in a package that doesn’t contain impl (as per Public API |
  • The purpose of the API is to provide users with guaranteed stability of these classes, in particular behavioral and binary backwards compatibility (as per Public API |
  • Re-implementing this API is currently not a design goal (and this is also why we are discussing it here :slight_smile: ). This is why the development team so far does not take particular care that this is possible.
  • For most users of the Java API, it is not important that re-implementing the API is possible (besides interfaces like JavaDelegate). This is of course a subjective opinion, but it is based on my experience in support, product management and community interactions.

Solution constraints:

  • Moving the API classes out of the camunda-engine module breaks strictly speaking our public API guarantees, as we say that we do not change these classes within the module (so moving them to a different module would be comparable to removing them from camunda-engine). This may not matter too much in practice, but let’s also keep in mind that this is the absolute central programming interface for our quite large userbase. I wouldn’t be surprised if there are applications that rely on the classes being in the camunda-engine jar and not in a dependency (e.g. if people create their own OSGi bundles). In addition, this could create migration effort that I would like to prevent (e.g. creating new modules in a Wildfly setup). In that sense, I would like a solution that keeps the classes in the camunda-engine jar (e.g. thinking about shading or by keeping them there in the first place and then creating a separate artifact with only the API classes)
  • Any changes should keep binary backwards compatibility (this may be important when we discuss some of the issues you spotted and documented in the github repo, such as changing abstract classes to interfaces, but I have to refresh my knowledge here)
  • Changes should not be to the disadvantage of users who do not benefit from them (i.e. the majority of people as established above). Example: In camunda-bpm-api/ at develop · holunda-io/camunda-bpm-api · GitHub you point out that some API interfaces reference this class in their Javadoc. If we consider this critical for the documentation and that people can understand the API, then I would value this higher than a strict separation of impl and non-impl classes.
  • If we make changes, the Camunda development team should not be affected too much in their daily practice. We are working with the API regularly and evolve it the most. I would like to avoid that we set ourselves somewhat artificial restrictions that limit us in what we can build and do.

I hope this is understandable. Please let me know if you disagree with points or if you have questions. Otherwise, I suggest we go into discussing solution ideas :slight_smile:


This is an interesting discussion! I didn’t read every single post (and have not yet had the need for a separate API module), but I have an idea.

If this is true

API for us is any class or interface in a package that doesn’t contain impl

then why not generate such an API module automatically by just filtering the main camunda-engine module? The new module would contain a subset of the classes from the main module. The two would not be disjoint sets.

As a side effect, this would check whether any API classes reference non-API classes. If the new module compiles, then everything is OK.

Such a module could be created 100% automatically and published whenever a new version of the main module is published.
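One way the filtering step could be sketched is with the Maven shade plugin, which supports include/exclude filters per artifact (the filter patterns here assume the impl/non-impl split described earlier in this thread):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <filters>
          <filter>
            <artifact>org.camunda.bpm:camunda-engine</artifact>
            <includes>
              <include>org/camunda/bpm/engine/**</include>
            </includes>
            <excludes>
              <exclude>org/camunda/bpm/engine/impl/**</exclude>
            </excludes>
          </filter>
        </filters>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Filtering alone only drops class files, though; the compile check mentioned above would need a second step that recompiles the remaining sources against the filtered classpath.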

Hi @fml2,

Yes, I agree that this is an attractive option. However, there are currently some practical issues with fully separating API from non-API classes, which Simon has documented at camunda-bpm-api/process-engine at develop · holunda-io/camunda-bpm-api · GitHub. If we can resolve those in a meaningful way (while taking into account the things I shared in my last post), I would be open to doing that.