DelegateExecution Injection

Is it possible to inject a DelegateExecution or TaskListener in an EJB?

@Inject
DelegateTask task;

@Inject
DelegateExecution execution;

This EJB would, of course, be called via an expression on a service task. I’m just wondering if it would be possible to inject them instead of having to pass them as arguments to every method inside the EJB…

If you’re using injection, then I recommend sticking to CDI and avoiding mixing in EJBs. The difference is subtle… but noticeable (meaning you’ll see errors) once you begin using advanced features such as CDI’s @ApplicationScoped vs EJB’s @Singleton (search for “CDI ApplicationScoped vs Singleton” discussions).
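To make that difference concrete - a sketch, assuming Java EE-style javax.* packages (the counter classes here are hypothetical): a CDI @ApplicationScoped bean gets no container-managed locking, while an EJB @Singleton serializes callers by default.

```java
import javax.ejb.Singleton;
import javax.enterprise.context.ApplicationScoped;

// CDI: one shared instance, but NO container-managed locking --
// concurrent access is your responsibility.
@ApplicationScoped
class CdiCounter {
    private int count;
    synchronized int next() { return ++count; } // you synchronize
}

// EJB: one shared instance WITH container-managed concurrency --
// by default every business method takes a WRITE lock.
@Singleton
class EjbCounter {
    private int count;
    int next() { return ++count; } // the container serializes callers
}
```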

Leaning towards CDI also means avoiding injection that requires EJB and then later mixing in its formal (i.e. container-managed bean) requirements. The difference is subtle - but it’s reasonably easy to search-and-replace the various annotations later to clean up the overlap.

Here’s how I do it:

I frequently pass in the “execution” parameter when calling on task delegates by using the expression value:

#{stpProcessDemo.camundaSaysHello(execution)}

And, within these delegates, I typically make Camunda’s RuntimeService available:

@Inject
private RuntimeService runtimeService;

Based on general requirements, most of the beans are “RequestScoped”.

Here’s a trivial example of setting a process-instance delegate:

public void camundaSaysHello(DelegateExecution execution) {
    LOGGER.info("*** camundaSaysHello: invoked");
    execution.setVariable("camundaGreeting", "Hello from Camunda");
}
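Putting those pieces together, the bean behind the #{stpProcessDemo…} expression looks roughly like this - a sketch only; the class name, logger, and javax.* packaging are my assumptions:

```java
import java.util.logging.Logger;
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.inject.Named;
import org.camunda.bpm.engine.RuntimeService;
import org.camunda.bpm.engine.delegate.DelegateExecution;

@Named("stpProcessDemo")   // resolvable from the BPMN expression
@RequestScoped
public class StpProcessDemo {

    private static final Logger LOGGER =
            Logger.getLogger(StpProcessDemo.class.getName());

    @Inject
    private RuntimeService runtimeService; // handy for correlation, signals, etc.

    public void camundaSaysHello(DelegateExecution execution) {
        LOGGER.info("*** camundaSaysHello: invoked");
        execution.setVariable("camundaGreeting", "Hello from Camunda");
    }
}
```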

I switch to @ApplicationScoped for singletons. But I usually reference these singletons from within the Java delegate as a property, which means you won’t see @ApplicationScoped declared at the top of my Java BPM delegates. I use singletons for things like centrally managed property values, timers, and shared outbound communication (i.e. REST) connections.

On occasions where Camunda runs as a batch-processing engine and performance is absolutely critical, you’ll begin factoring in @ApplicationScoped delegates. But typically BPM batch processing implies performance allowances - that overhead is necessary to build in all the BPM-engine goodness/features.

This requires some research - but, from memory, I think I fell back on using “static” - though being VERY MINDFUL of how statics behave under async continuations and in server-cluster configurations. I usually reserve “static” for utility-oriented requirements such as value conversion (etc.).
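For example, a stateless “value conversion” helper along these lines is safe to call from any delegate, async job, or cluster node precisely because it holds no mutable state (the class and method names here are hypothetical):

```java
// Stateless static helpers: safe under async continuations and in a
// cluster because there is no instance state to share or serialize.
public final class ValueConversions {

    private ValueConversions() { } // no instances

    /** Converts a money string such as "12.34" to whole cents. */
    public static long toCents(String amount) {
        return new java.math.BigDecimal(amount)
                .movePointRight(2)
                .longValueExact();
    }

    /** Null-safe trim used when normalizing process variables. */
    public static String normalize(String value) {
        return value == null ? "" : value.trim();
    }
}
```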

But, most of the time, and to avoid later headaches… I use the default “sessionScope” delegate CDI beans. And, if there’s a need to reference “applicationScope”, it’s done via a property within a sessionScope delegate.

@garysamuelson , Thanks for your input!

Injecting a RuntimeService (@Inject) is basically identical to:
ProcessEngine processEngine = ProcessEngines.getDefaultProcessEngine();
RuntimeService runtimeService = processEngine.getRuntimeService();

and that’s not really what my problem was.

As for EJB/CDI, I tend to use CDI for the most part; however, I need EJBs whenever I look up by JNDI, and whenever I look up a remote interface from a different WAR file - with everything being @Stateless (request-scoped).

I was debating injecting a DelegateExecution because certain methods might be used in both a Camunda context and a non-context scenario (CoreClasses, etc.), and I would have wanted that flexibility if possible, instead of having two interfaces for every method (a Camunda and a non-Camunda use).
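One way to keep that flexibility without injection is a thin overload: the Camunda-facing method unwraps the DelegateExecution and forwards to a plain method that non-Camunda callers use directly. A sketch with hypothetical names and variables:

```java
import org.camunda.bpm.engine.delegate.DelegateExecution;

public class GreetingService {

    // Camunda entry point: #{greetingService.greet(execution)}
    public void greet(DelegateExecution execution) {
        String result = greet((String) execution.getVariable("name"));
        execution.setVariable("greeting", result);
    }

    // Plain entry point for non-Camunda callers (EJB clients, tests, ...)
    public String greet(String name) {
        return "Hello, " + name;
    }
}
```

The same business logic then lives in one place, and only the Camunda overload knows anything about process variables.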

Anything else I should pay attention to?

That requires a little research…

Generally speaking though, I build for clustered-server capabilities and therefore depend on EAI event-integration patterns. I use message correlation to line up session dependencies. This way, I can hopefully leave the server to manage session/serialization, HA, and recovery requirements. I say this because I noted JNDI/EJB reference requirements.
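Message correlation with the RuntimeService looks roughly like this - the message name, business key, and variable are hypothetical; see Camunda’s MessageCorrelationBuilder API for the details:

```java
import org.camunda.bpm.engine.RuntimeService;

public class OrderEvents {

    private final RuntimeService runtimeService;

    public OrderEvents(RuntimeService runtimeService) {
        this.runtimeService = runtimeService;
    }

    // Correlates an inbound EAI event back to the waiting process instance.
    public void paymentReceived(String orderId) {
        runtimeService.createMessageCorrelation("PaymentReceivedMessage")
                .processInstanceBusinessKey(orderId)
                .setVariable("paymentConfirmed", true)
                .correlate();
    }
}
```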

Here’s how I visualize this metaphor…

A BPMN model is a type (generally). And, the BPMN type is instantiated in a sort-of lazy fashion - leaving the BPM-engine to do as needed while it’s calling on dependent delegates. Given BPM instances manage business-transactions (very long running), I avoid mixing business-oriented context requirements with the more formal XA dependencies.

Following this approach… I NEVER ask a delegate to behave in a process-session managed way while this delegate (process instance scoped) opens XA managed resources.

In BPM/Case, you’re essentially mixing both business and formal (XA) transaction requirements. If this isn’t done correctly, you’ll see lots of transaction timeouts and rollbacks in the logs. And a DB rollback has tremendous effects on BPM process-instance management: the server-managed transaction essentially wipes away your process execution history, causing all sorts of unwanted downstream effects.

Camunda provides excellent documentation on this topic. See BPMN transactions and material relating to the “job executor”. Also, there’s good material on Camunda’s async vs sync configurations.

To add a humorous note on this topic: avoid re-using Camunda’s process JDBC/JNDI connection-pool instances! The reason is that the application server may magically extend the Camunda-engine DB context into your application code. And when your application code hits an exception, the BPM engine then feels this pain with damaging rollbacks of BPM-system managed resources/context.