Use the same workflow for multiple business partners with overrides for process variables

I’m looking for advice on how to migrate our current batch jobs to Camunda. We are willing to write some code and change our overall process, but we would like to use Camunda in a way that is in line with how it is intended to be used.

We have a home-grown batch job system loosely based on eGate / JCAPS. We handle hundreds of batch jobs daily. My boss wants to move to BPMN to simplify communication with business users, and to a better-known system to simplify on-boarding employees and hiring consultants.

Camunda meets most of our needs, but we have a concern around configuring workflows. Right now, we define a workflow once and override various parameters to make it work for various business partners. For example, we have a business process that sends new enrollees in health care to the appropriate carrier. We get flat files with changes in enrollment, convert them to the appropriate external file, and then post the files to servers for the carriers. At each step, we have 5 to 10 parameters to set, and we then override a subset of them at the partner level.

Business Process: Update Health Enrollees

  1. Get Extract From HR
  • Kaiser {server: server1, extractDir: kaiserDir, username: kaiserUser, password: kaiserPass, …}
  • Blue Cross {server: server2, extractDir: bcDir, username: blueCrossUser, password: blueCrossPass, …}
  2. Call External Service to Transform Files
  • Kaiser {carrierCode: kaiserCarrierCode, encryptionKeyName: kaiserEncKeyName}
  • Blue Cross {carrierCode: bcCarrierCode, encryptionKeyName: bcEncKeyName}
  • …
  3. Send Extract To Carrier
  • Kaiser {server: kaiserServer, deliveryDir: kaiserDir, username: kaiserUser2, password: kaiserPass2, …}
  • Blue Cross {server: bcServer, deliveryDir: bcDir, username: bcUser2, password: bcPass2, …}
  • …

This is a lot like how a mainframe job gets defined: the workflows would be the JCL and the partners would be the PROCs.

We know we could create each of the workflows as a process and then call that process for each partner, but that would be unwieldy: for one job we would go from 3 workflows with 13 partners each to 3 “actual” workflows and 13 partner workflows. It would be very, very difficult to sell that solution to the operations team that administers the batch job services.

Alternatively, it looks like we could set up DMN tables for each stage where we need to override parameters and put all the partner-specific parameters into the DMN tables. This would allow us to update the parameters on a per-partner basis, but it would require a new deployment each time a business partner wants to change their encryption key or server address.
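To make that concrete, a delegate could evaluate the DMN table and promote the outputs to process variables. Here is a minimal sketch, assuming a deployed decision with the made-up key extractParams, a partner input, and output columns named after the parameters (none of these names come from our actual setup):

import org.camunda.bpm.dmn.engine.DmnDecisionTableResult;
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;
import org.camunda.bpm.engine.variable.Variables;

// Minimal sketch of the DMN option. The decision key "extractParams",
// the "partner" input, and the output column names are hypothetical.
public class DmnConfigDelegate implements JavaDelegate {

	@Override
	public void execute(DelegateExecution execution) throws Exception {
		DmnDecisionTableResult result = execution.getProcessEngineServices()
				.getDecisionService()
				.evaluateDecisionTableByKey("extractParams",
						Variables.putValue("partner",
								execution.getVariable("partner")));

		// Copy each output column (server, extractDir, ...) into
		// process instance variables.
		result.getSingleResult().getEntryMap().forEach(execution::setVariable);
	}
}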

Lastly, and perhaps best, we could put the partner-specific information in XML or JSON files and load it during the process using a script (something like this thread: Process variables used for Process configuration stored in JSON/YML?). This is good because our partner-specific configuration is stored outside of the model, but it feels a bit like a hack.
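To illustrate what such a file might look like (the layout, names, and values here are made up), it could hold defaults plus per-partner overrides, and a small script or delegate would merge the two and set the result as process variables:

{
  "defaults": {
    "server": "hr-extract.example.com",
    "retries": 3
  },
  "partners": {
    "kaiser": {
      "server": "server1",
      "extractDir": "kaiserDir",
      "username": "kaiserUser"
    },
    "blueCross": {
      "server": "server2",
      "extractDir": "bcDir",
      "username": "blueCrossUser"
    }
  }
}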

Is there a better way to do this? Should we just use the Camel integration and put together our own system for retrieving configuration? Am I missing any options?

I’d be really happy to hear from any Camunda developers if any of the above proposals won’t work. I’d also be happy to hear from the Camunda community if they have had any similar experiences.

Thank you for your time.
Tim

Are you writing with delegates or scripting?

I think a little of both, although I hadn’t thought through what each brings us before posting this.

Based on our existing processes, I think we would mostly write delegates. If I understand the documentation I just read, we should:

  1. Create a delegate to retrieve values for process variables when needed.
  2. Call the delegate.
  3. Use BPMN for the rest.

Am I grasping how this goes together? If so, this is good because it separates the config from the business logic.

Thank you…

@tim once the process instance starts, is there a need to allow the ops team to update the config files so that the changes take effect for the active process instances?

@StephenOTT there is no requirement to update config files after a process starts. However, I know the operations team wishes it could restart a half-finished process after fixing either the input files or variables like passwords.

So restarting a process is just a REST API call to move the token and clear or reset the variables; it’s more of a business process with steps to follow than a technical problem.
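For anyone following along, here is a minimal sketch of that restart using the engine’s Java API rather than REST, via Camunda’s process instance modification (the activity and variable names are placeholders):

import org.camunda.bpm.engine.RuntimeService;

// Minimal sketch of the "fix and restart" idea. Activity and variable
// names are placeholders, not from any real deployment.
public class ProcessRestarter {

	private final RuntimeService runtimeService;

	public ProcessRestarter(RuntimeService runtimeService) {
		this.runtimeService = runtimeService;
	}

	public void fixAndRestart(String processInstanceId, String activityId,
			String variableName, Object fixedValue) {
		// Correct the bad variable first (e.g. a changed password).
		runtimeService.setVariable(processInstanceId, variableName, fixedValue);

		// Cancel the stuck token and re-enter the failed activity.
		runtimeService.createProcessInstanceModification(processInstanceId)
				.cancelAllForActivity(activityId)
				.startBeforeActivity(activityId)
				.execute();
	}
}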

So based on your details, you could just access a config file somewhere on the network (over HTTP or on a volume), parse that file, and load the configs into process instance variables. You could do this as an execution listener on the start event of the business process.

So when the process starts, the very first thing it does is load the configs, and then it moves forward. You could do this with a small script or a Java delegate.
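A minimal sketch of that listener, assuming one JSON config file per partner on a mounted volume (the path, file layout, and partner variable are assumptions on my part):

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;

import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.ExecutionListener;

import com.fasterxml.jackson.databind.ObjectMapper;

// Minimal sketch: attach this as an execution listener on the start event.
// The config directory and the "partner" variable are hypothetical.
public class PartnerConfigListener implements ExecutionListener {

	private static final ObjectMapper MAPPER = new ObjectMapper();

	@Override
	public void notify(DelegateExecution execution) throws Exception {
		String partner = (String) execution.getVariable("partner");
		try (InputStream in = Files.newInputStream(
				Paths.get("/etc/batch-config", partner + ".json"))) {
			@SuppressWarnings("unchecked")
			Map<String, Object> config = MAPPER.readValue(in, Map.class);
			// Promote every config entry to a process instance variable.
			config.forEach(execution::setVariable);
		}
	}
}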

I’ve gotten a bit further with this. I have the Camel integration working to monitor FTP locations for new files. When a new file comes in, Camel grabs it, adds a custom header (file_partner) to identify the business partner, and starts a Camunda BPMN process.

In the BPMN process the first step is a JavaDelegate that uses the file_partner header and the process id to look up the process variable values; the workflow then continues on. It works like a charm. If anyone knows of issues I’ll run into, please let me know. For anyone trying to do the same thing, here is a simple example of setting the header in the Camel routes.

<camelContext id="camelContext" xmlns="http://camel.apache.org/schema/spring">
	<route id="startRouteLocalDir">
		<from uri="file:/temp/camunda/file_monitor" />
		<setHeader headerName="file_partner">
			<constant>file_partner_is_startRouteLocalDir</constant>
		</setHeader>
		<convertBodyTo charset="UTF-8" id="_convertBodyTo1" type="java.lang.String"/>
		<to uri="log:org.camunda.demo.camel?level=INFO&amp;showAll=true&amp;multiline=true" />
		<to uri="camunda-bpm://start?processDefinitionKey=bFtpWatcher&amp;activityId=waitForCamel&amp;copyHeaders=true" />
	</route>

	<route id="startRouteFtp">
		<from uri="ftp://user_here@host_here:21//projects/camunda/test_source?password=pwd_here&amp;autoCreate=true&amp;idempotent=true&amp;idempotentRepository=#fileIdempotentRepository&amp;initialDelay=10000&amp;delay=60000" />
		<setHeader headerName="file_partner">
			<constant>file_partner_is_startRouteFtp</constant>
		</setHeader>
		<convertBodyTo charset="UTF-8" id="_convertBodyTo2" type="java.lang.String"/>
		<to uri="log:org.camunda.demo.camel?level=INFO&amp;showAll=true&amp;multiline=true" />
		<to uri="camunda-bpm://start?processDefinitionKey=bFtpWatcher&amp;activityId=waitForCamel&amp;copyHeaders=true" />
	</route>
</camelContext>

Then in the Java code:

import org.camunda.bpm.engine.RepositoryService;
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SampleJavaDelegate implements JavaDelegate {

	private static final Logger logger =
			LoggerFactory.getLogger(SampleJavaDelegate.class);

	@Override
	public void execute(DelegateExecution execution) throws Exception {
		String taskId = execution.getCurrentActivityId();

		RepositoryService repositoryService =
				execution.getProcessEngineServices().getRepositoryService();
		String bpmnId = repositoryService.createProcessDefinitionQuery()
				.processDefinitionId(execution.getProcessDefinitionId())
				.singleResult().getKey();

		logger.info("In {} --> {} --> {}: {}",
				bpmnId,
				taskId,
				execution.getVariable("file_partner"),
				execution.getId());

		// Look up values from DB or disk here.
		// Add the values to the execution as process variables here.
	}
}

Thank you @StephenOTT for getting me started down this route. It looks like it is going to work really well!

Glad it is working out!