CustomBatchJobHandler with CustomIncidentHandler - How?

Hi

we have several custom batches (using the custom-batch extension) running.
Now we have a requirement that one of them should not raise an incident or mark the batch as failed (we run the job every x minutes, and the issue will resolve itself through repetition).

So my question is: how can I prevent jobs identified by a custom handler type from raising an incident or failing the job?

I am looking at a custom IncidentHandler, but it seems I have no option other than to extend the DefaultIncidentHandler and do some “if”-based filtering on the IncidentContext. However, I cannot see how to make this work only for jobs of my custom batch handler, since the IncidentContext does not seem to reveal that information.

Did someone try this before? Is there missing API? Is this not possible at all? Is there another way?

Thanks
Jan

Hi Jan,

I think you’re on the right track. You can get the job definition id from the IncidentContext. With the job definition id, you can query the job definition, which holds the job type. Can you distinguish the custom batch handlers via the job type?

Best regards,
Philipp

Hi Philipp

I have meanwhile implemented it the way you suggested, but without success: the batch is still marked as failed in Cockpit.
Are job handlers somehow wrapped again or treated differently when used in batches?

Jan

Hi Jan,

I’m not familiar with the custom-batch extension. With a plain Camunda batch, I can provide a custom incident handler to avoid the incident creation. For example (simplified):

import org.camunda.bpm.engine.ManagementService;
import org.camunda.bpm.engine.impl.context.Context;
import org.camunda.bpm.engine.impl.incident.DefaultIncidentHandler;
import org.camunda.bpm.engine.impl.incident.IncidentContext;
import org.camunda.bpm.engine.management.JobDefinition;
import org.camunda.bpm.engine.runtime.Incident;

// DefaultIncidentHandler already implements IncidentHandler,
// so extending it is sufficient.
public class MyIncidentHandler extends DefaultIncidentHandler {

  public MyIncidentHandler() {
    super(Incident.FAILED_JOB_HANDLER_TYPE);
  }

  @Override
  public void handleIncident(IncidentContext context, String message) {
    ManagementService managementService = Context.getProcessEngineConfiguration()
        .getProcessEngine()
        .getManagementService();

    JobDefinition jobDefinition = managementService.createJobDefinitionQuery()
        .jobDefinitionId(context.getJobDefinitionId())
        .singleResult();

    if ("instance-deletion".equals(jobDefinition.getJobType())) {
      // ignore failures of this job type
    } else {
      // create the incident as usual
      super.handleIncident(context, message);
    }
  }
}
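For a handler like this to take effect, it must also be registered with the engine configuration. A minimal sketch of one way to do that, assuming a process engine plugin setup (the plugin class name here is my own invention; custom incident handlers registered via setCustomIncidentHandlers replace the default handler for their incident type):

```java
import java.util.Collections;

import org.camunda.bpm.engine.impl.cfg.AbstractProcessEnginePlugin;
import org.camunda.bpm.engine.impl.cfg.ProcessEngineConfigurationImpl;

// Hypothetical plugin that registers MyIncidentHandler before engine init.
public class MyIncidentHandlerPlugin extends AbstractProcessEnginePlugin {

  @Override
  public void preInit(ProcessEngineConfigurationImpl configuration) {
    configuration.setCustomIncidentHandlers(
        Collections.singletonList(new MyIncidentHandler()));
  }
}
```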

Can you provide an example to reproduce the behavior?

Best regards,
Philipp

Hi Philipp,

here’s an example plus test: https://github.com/holunda-io/holunda-spike/tree/master/cf5285. I wrote some comments inside, so hopefully you can follow it; if not, just ask. It’s using Spring Boot and custom batch, but since those are only wrappers around the Camunda API, I doubt they are the cause of the behavior.
Thanks for your help

Jan

Hi Jan,

I looked into your example. The custom incident handler works fine and doesn’t create incidents for the given job type.

However, the batch job isn’t completed because the failed jobs of the batch still exist. One way to handle the problem is to implement a custom FailedJobCommandFactory. Instead of preventing the incident creation, you can just delete a failing job if the job definition’s type matches the given job type.
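To illustrate the FailedJobCommandFactory idea, here is a rough sketch under the assumptions of the earlier example (job type "instance-deletion"; the class name is my own, and this is not a drop-in implementation):

```java
import org.camunda.bpm.engine.impl.interceptor.Command;
import org.camunda.bpm.engine.impl.jobexecutor.DefaultFailedJobCommandFactory;
import org.camunda.bpm.engine.impl.persistence.entity.JobEntity;

// Sketch: delete failing jobs of a specific type instead of decrementing
// retries (which would eventually raise an incident and fail the batch).
public class IgnoringFailedJobCommandFactory extends DefaultFailedJobCommandFactory {

  @Override
  public Command<Object> getCommand(String jobId, Throwable exception) {
    return commandContext -> {
      JobEntity job = commandContext.getJobManager().findJobById(jobId);
      if (job != null && "instance-deletion".equals(job.getJobHandlerType())) {
        // Remove the failing job so the batch can complete.
        job.delete();
        return null;
      }
      // Fall back to the default behavior for all other jobs.
      return super.getCommand(jobId, exception).execute(commandContext);
    };
  }
}
```

The factory would then be wired in via ProcessEngineConfigurationImpl#setFailedJobCommandFactory before the engine is built.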

But I think the simplest solution is to avoid throwing an exception :wink:

Does this help you?

Best regards,
Philipp

Not throwing the exception is not possible: Spring Data and MyBatis do not play well together, which is why we have to use the “RequiresNewTransactionWrapper”. In the case of the job handler, however, it does not work; even when I use a try/catch, the transaction is rolled back and the job fails.

I managed to get this working with your FailedJobCommandFactory approach.