Optimize: importing variables


#1

We have our engine running with Spring Boot. Now we are adding Optimize. Using the demo engine, when I create a report I am able to group by variables; however, when I connect to our engine, I cannot see any variables (although I can create the report and see data). All the variables we’ve defined are either String or Integer, no complex structures.
I haven’t been able to figure out why this is happening. The main difference between the two engines is that the demo one is a traditional Tomcat webapp and our engine is a Spring Boot application with embedded Tomcat.

I expected to just see the variables but not sure if there is anything else I need to do.

Thanks


#2

Are you able to see the variables in Cockpit?


#3

Thanks @Yana for your answer.

Yes, I am able to see the variables in Cockpit.


#4

Hi @Dieppa1,

Thanks for reaching out to us!

Before I’m able to help you, can you please provide the following information:

  • Which Optimize version are you using?
  • Please attach your Optimize configuration (the environment-config.yaml file) where you configured the connection to the engine.
  • Which engine version are you using?

Best
Johannes


#5

Thanks JoHeinem,

  • Optimize version: 2.3.0
  • Engine version: 7.10.0-ee (using camunda-bpm-spring-boot-starter-webapp-ee:3.2.0 and camunda-bpm-spring-boot-starter-rest)
  • The environment-config.yaml has its extension changed to .txt so it can be uploaded. Just change the extension back to .yaml and most editors will render it nicely.

environment-config.txt (7.3 KB)


#6

Hi @Dieppa1,

Thanks for providing all the information!

Optimize version: 2.3.0

Just for your information: Optimize 2.4.0 is already out. It might make sense to switch to the new version, since it offers a lot of additional features. Read everything about it in the dedicated Camunda Optimize 2.4.0 blog post.

Engine version: 7.10.0-ee (Using camunda-bpm-spring-boot-starter-webapp-ee:3.2.0 and camunda-bpm-spring-boot-starter-rest)

The engine version should be fine. Can you validate that you’re reaching the version endpoint in the engine by executing a GET request against http://localhost:8080/rest/version?
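For reference, a minimal sketch of such a version check (the endpoint URL follows the discussion above; the sample response shape is an assumption based on what the engine's version endpoint typically returns):

```python
import json
from urllib import request

# Assumed endpoint for a Spring Boot engine, per the discussion above.
VERSION_ENDPOINT = "http://localhost:8080/rest/version"

def parse_version(body: str) -> str:
    """Extract the 'version' field from the engine's JSON response body."""
    return json.loads(body)["version"]

if __name__ == "__main__":
    # Requires the engine to be running locally on port 8080.
    with request.urlopen(VERSION_ENDPOINT) as resp:
        print("Engine version:", parse_version(resp.read().decode()))
```

If the request fails with a 404, the REST path prefix is likely wrong for your deployment.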

  • The environment-config.yaml has its extension changed to .txt so it can be uploaded. Just change the extension back to .yaml and most editors will render it nicely.

It seems to me that the endpoint to the engine is not correctly configured. Currently it is
rest: 'http://localhost:8080/engine-rest/'
but it should actually be
rest: 'http://localhost:8080/rest/'
since you’re using spring boot.
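As an illustration, the engine connection section of environment-config.yaml would then look something like the sketch below (key names follow a typical Optimize 2.x configuration; check your own file for the exact structure and authentication settings):

```yaml
# Sketch of the engine connection in environment-config.yaml
engines:
  'camunda-bpm':
    name: default
    # REST endpoint of the Spring Boot engine, as discussed above
    rest: 'http://localhost:8080/rest'
    authentication:
      enabled: false
```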

Does that help?

Best
Johannes


#7

Thanks @JoHeinem,
everything makes sense. The endpoint to the engine is OK; I had only changed it temporarily to connect to the demo engine provided by Camunda, and I am normally using the “/rest” endpoint you mention.

It seems everything is fine, so I will keep trying things.

Thanks


#8

Alright, cool! Let me know if you encounter any problems 🙂

Cheers
Johannes


#9

Hi @JoHeinem, I have found this error in the Optimize log when trying to create a new report. It appears just when opening the screen to create a new report, before actually starting to create it (choosing the process, etc.).

It may be related.

optimize.log (7.1 KB)


#10

Hi @Dieppa1,

The error message is actually nothing to worry about, though I know that it is confusing, which is why we fixed it in Optimize 2.4 (the respective ticket is OPT-1740). Hence, I would recommend switching directly to Optimize 2.4.

Does that help you?

Best
Johannes


#11

Hi @JoHeinem,

I have upgraded Optimize to 2.4.0 and I am getting this in the log. Do you think it could be related to my issue? Thanks

optimize.log (10.0 KB)


#12

Hi @Dieppa1!

The first error in the log just indicates that there were two concurrent write operations to Elasticsearch on the same process instance document; that is not an issue, as writes are retried on such conflicts.

The second error is related to the websocket that is used to push status updates to web clients and not to the import.

To take a closer look at what happens during the import of the dataset, you can increase the log level by adding:
<logger name="org.camunda.optimize.service.engine.importing" level="debug" />
to the ./environment/environment-logback.xml log configuration.
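For context, a minimal environment-logback.xml carrying that logger might look like the sketch below (the appender and pattern here are illustrative, not necessarily Optimize's shipped defaults):

```xml
<configuration>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- Debug-level logging for Optimize's engine import, as suggested above -->
  <logger name="org.camunda.optimize.service.engine.importing" level="debug" />

  <root level="info">
    <appender-ref ref="CONSOLE" />
  </root>
</configuration>
```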

This should give you some log entries like:

17:52:33.893 [Thread-12] DEBUG o.c.o.s.e.i.f.i.VariableUpdateInstanceFetcher - Fetched [52] running historic variable instances which started after set timestamp with page size [10000] within [124] ms
17:52:33.904 [Thread-12] INFO o.c.o.s.e.i.s.VariableUpdateInstanceImportService - Refuse to add variable [approverGroups] from variable import adapter plugin. Variable has no type or type is not supported.
17:52:33.904 [Thread-12] INFO o.c.o.s.e.i.s.VariableUpdateInstanceImportService - Refuse to add variable [invoiceDocument] from variable import adapter plugin. Variable has no type or type is not supported.

17:52:33.910 [ElasticsearchImportJobExecutor-pool-0] DEBUG o.c.o.s.e.w.v.VariableUpdateWriter - Writing [30] variables to elasticsearch

provided Optimize is able to query variables from the engine’s API.

Would it be possible for you to do an import from scratch, i.e. delete the Optimize indexes in Elasticsearch and restart Optimize?

Best
Sebastian


#13

Hello, I’m working with @Dieppa1 on this issue.

We’re using Optimize 2.4.0. The elasticsearch indexes have been deleted to start from scratch.

In the Optimize logs

14:28:12.897 [Thread-16] DEBUG o.c.o.s.e.i.f.i.VariableUpdateInstanceFetcher - Fetched [0] running historic variable instances for set start time within [6] ms
14:28:12.897 [Thread-16] DEBUG o.c.o.s.e.i.f.i.VariableUpdateInstanceFetcher - Fetching historic variable instances ...
14:28:12.904 [Thread-16] DEBUG o.c.o.s.e.i.f.i.VariableUpdateInstanceFetcher - Fetched [0] running historic variable instances which started after set timestamp with page size [10000] within [7] ms
14:28:12.904 [Thread-16] DEBUG o.c.o.s.e.i.i.h.i.VariableUpdateInstanceImportIndexHandler - Restarting import cycle for document id [variableUpdateImportIndex]
14:28:12.904 [Thread-16] DEBUG o.c.o.s.e.i.s.m.VariableUpdateEngineImportMediator - Was not able to produce a new job, sleeping for [30000] ms

14:28:12.904 [Thread-16] DEBUG o.c.o.s.e.i.f.i.CompletedUserTaskInstanceFetcher - Fetching completed user task instances ...
....
14:28:12.921 [Thread-16] DEBUG o.c.o.s.e.i.f.i.CompletedUserTaskInstanceFetcher - Fetched [1] completed user task instances for set end time within [17] ms

This seems to indicate that Optimize can connect to the engine and fetch task instances, but cannot fetch variables. Is there any way we could debug this further, i.e. any diagnostic tests we could try and/or any log messages we should look for?

Thanks


#14

Hi @eleco
before we go into debugging further - what history level do you have set in your engine?

Best
Felix


#15

@felix-mueller - I’m not sure what history level was set by default, but after changing the history level to HistoryLevel.FULL the variables are imported into Optimize. Thanks.

I suppose the downside of FULL history level vs AUDIT or NONE is that the number of events generated could negatively impact performance?


#16

Hi @eleco

I am happy to hear that changing the history level to full resolved your issue.
Optimize currently requires this history level, especially for variables.
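With the Spring Boot starter mentioned earlier in this thread, the history level can be set via application properties; a minimal sketch, assuming the camunda-bpm-spring-boot-starter property conventions:

```yaml
# application.yaml (Spring Boot) - set the engine's history level to full
camunda:
  bpm:
    history-level: full
```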

You are right that with history level full, more data is created in the history tables of the Camunda engine. In scenarios with a large number of process instances you will eventually notice a small performance impact, and the size of your history tables will also grow.
If you are worried about the size of the tables, you could consider configuring history cleanup in the engine.

Is it fine for you to run with history level full?

Best
Felix


#17

I think we’ll be good with history level = full; we don’t have that many instances running (yet). Cheers.