The basic timer functionality seems conceptually sound to me. I plan to use timers in a more advanced, "decoupled" model I'm putting together, wherein all processes are called via the REST API, so it doesn't matter where a process is located and we're not dependent upon a Call Activity to initiate a child process. The parent process starts the child asynchronously (I do hope my understanding of that is correct) and then sets a catch event to wait, for a pre-defined amount of time, for a message from the child indicating it has completed.
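To make the pattern concrete, here's a rough Python sketch of the two REST calls involved: the parent starting the child process, and the child correlating a completion message back to the parent's catch event. The endpoint paths match Camunda 7's REST API, but the process key, message name, and business key are made-up examples, and the helpers just build the requests rather than send them.

```python
import json

# Assumed local engine; adjust for your deployment.
CAMUNDA = "http://localhost:8080/engine-rest"

def start_request(process_key, business_key, variables):
    """Build the POST request the parent uses to start the child process."""
    return (
        f"{CAMUNDA}/process-definition/key/{process_key}/start",
        {"businessKey": business_key, "variables": variables},
    )

def completion_message(message_name, business_key, result_variables):
    """Build the POST /message body the child sends when it finishes;
    correlation by business key delivers it to the parent's catch event."""
    return (
        f"{CAMUNDA}/message",
        {
            "messageName": message_name,
            "businessKey": business_key,
            "processVariables": result_variables,
        },
    )

# Parent starts the child...
url, body = start_request(
    "childProcess", "order-4711",
    {"orderId": {"value": "4711", "type": "String"}})
print(url, json.dumps(body))

# ...and the child later reports back.
murl, mbody = completion_message(
    "ChildDone", "order-4711",
    {"result": {"value": "OK", "type": "String"}})
print(murl, json.dumps(mbody))
```

If the message never arrives, the timer on the parent's event-based gateway fires instead, which is where the basic timer functionality above comes in.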
This gets complicated when you want parallel execution of multiple child processes with different handlers on each. Moreover, this fully decoupled model requires rigorous attention to how you pass, store, and evaluate process variables, as they cannot be shared as would be the case with a Call Activity. (There's an opportunity for a new extension here: a Call Activity targeting a different Camunda instance that would function just like a "local" Call Activity.)
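Since variables can't be shared the way a Call Activity shares them, every variable has to be serialized explicitly into the REST payload using Camunda 7's typed-value JSON format. A small helper along these lines (the fallback to the "Json" type assumes Camunda Spin is available) keeps that mapping in one place:

```python
import json

def to_camunda_variables(plain):
    """Map plain Python values into Camunda 7's {"value": ..., "type": ...}
    variable format for REST payloads."""
    type_map = {bool: "Boolean", int: "Long", float: "Double", str: "String"}
    out = {}
    for name, value in plain.items():
        t = type_map.get(type(value))
        if t is None:
            # Complex values: pass as a JSON-typed variable (needs Spin).
            out[name] = {"value": json.dumps(value), "type": "Json"}
        else:
            out[name] = {"value": value, "type": t}
    return out

print(to_camunda_variables(
    {"orderId": "4711", "amount": 99.5, "approved": True}))
```

The same helper works in both directions of the handshake: the parent uses it when starting the child, and the child uses it for the `processVariables` it sends back with its completion message.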
As to the system resource issue, I guess it would depend upon the architecture and structure of your processes. The initial start (acceptance of the request by the REST API) seems very lightweight, and Camunda seems to have no problem accepting very high volumes of messages. The problem comes when the process starts executing. Then it depends upon a lot of factors, which we've discussed at length elsewhere.
The only way to truly load balance based upon current system resource usage would be to provide something like a closed feedback loop to the job executors that would tell them to stop picking up new jobs if they were heavily loaded or had reached a certain level of load. If the job executor had some sort of external "control" channel, then you could tell it to throttle back based upon any number of external resource constraints, including the database. For example, if you were choked on the disk I/O of your database, you could tell it to queue more work and execute fewer jobs.
Architectures like this can be complicated to both build and configure because it's hard to account for all the potential resources being used. With closed-loop feedback you can get things like hysteresis and potentially unpredictable performance. That said, you could in theory control the job executor(s) such that you maintained an acceptable level of process throughput that did not result in an avalanche of rollbacks.
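The feedback-loop idea above could be sketched roughly as follows. An external controller watches a load metric (say, database disk I/O wait, normalized to 0..1) and tells the acquisition loop how many jobs it may pick up; a high/low watermark pair forms a hysteresis band so the executor doesn't oscillate between full speed and a crawl on every small fluctuation. To be clear, all of these names and thresholds are invented for illustration; Camunda's job executor has no such external control channel today, which is exactly the gap being described.

```python
HIGH_WATERMARK = 0.85   # above this, throttle down
LOW_WATERMARK = 0.60    # below this, allowed to speed back up

class ThrottledAcquisition:
    """Toy model of a job acquisition loop with a closed-loop throttle."""

    def __init__(self, max_jobs=10):
        self.max_jobs = max_jobs
        self.throttled = False

    def jobs_to_acquire(self, load):
        """Return how many jobs to pick up given current load (0..1)."""
        if load >= HIGH_WATERMARK:
            self.throttled = True
        elif load <= LOW_WATERMARK:
            self.throttled = False
        # Between the watermarks, keep the previous state (hysteresis).
        return 1 if self.throttled else self.max_jobs

acq = ThrottledAcquisition()
print(acq.jobs_to_acquire(0.9))   # heavy load: throttle to a trickle -> 1
print(acq.jobs_to_acquire(0.7))   # inside the band: stays throttled -> 1
print(acq.jobs_to_acquire(0.5))   # light load: back to full speed -> 10
```

Even in this toy form you can see the tuning problem: the width of the band trades oscillation against responsiveness, and with several executors reacting to the same shared resource the combined behavior gets hard to predict, which is the unpredictability I mentioned.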