Some of the largest performance gains I saw on a recent project came from ‘breaking up the circles’ in TI processes, so I thought I’d do a quick write-up on it. Nothing new, just the good old ‘don’t read and write at the same time’ adage. We humans like everything circular (our lizard brain is wired to recognise other humans’ eyes and be happy about it), whereas TM1 really doesn’t like it at all: anything remotely circular (rule dependencies, TIs) causes cache invalidation and massive performance degradation.
The usual example is quite straightforward:
You have a complicated cube with a number of rules and largish feeders (or even feeder chains to other cubes), and you write an innocent-looking TI that updates this cube from itself or from anywhere else and uses CellIncrementN instead of CellPutN, and suddenly everything grinds to a halt. My usual rule of thumb is that I want to see around 30k cells processed per second, whereas such TIs are notoriously slower, processing 50-100s of cells per second.
There are a number of backend technical reasons: cache invalidation, the need to re-evaluate feeders and re-fire dependencies, and other chicanery, but the end result is that processes run slow as molasses.
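To make it concrete, here’s a minimal sketch of the kind of Data tab I mean (the ‘Sales’ cube and the variable names are hypothetical):

    # Data tab: every incoming record is incremented straight into a
    # cube that carries rules and feeders. Each write invalidates the
    # calculation cache and re-fires feeders, which is what drags
    # throughput down from ~30k cells/sec to hundreds.
    CellIncrementN(vValue, 'Sales', vYear, vMonth, vProduct, vRegion, 'Amount');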
My usual solutions are:
1. Replace the in-process CellIncrementN with AsciiOutput (i.e. write the increments to a flat file instead of the cube) and load this file (using CellIncrementN unless you can avoid it) in the epilog of the original process, as in the first sketch after this list. This works if you are not requerying the changed values after your updates in the original TI (and you should never do that), so at the very least you’d preserve the view cache in the original TI.
2. Get rid of CellIncrementN entirely by creating a dummy staging cube that is a copy of your target cube (all dimensions are the same) but has no rules. Your TIs will aggregate (CellIncrementN) data in this cube and then call a simple TI that copies data from this dummy cube to the ‘real’ one, but using CellPutN this time, as you know that there’s no aggregation left to be done; see the second sketch below.
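A minimal sketch of the first approach, with hypothetical cube, file and process names (‘Sales’, ‘sales_increments.csv’, ‘Load.Sales.Increments’):

    # Data tab of the original process: instead of incrementing the
    # rule-laden cube, write each record to a flat file. AsciiOutput
    # takes strings, so the value goes through NumberToString.
    AsciiOutput('sales_increments.csv',
        vYear, vMonth, vProduct, vRegion, NumberToString(vValue));

    # Epilog of the original process: call a separate loader TI that
    # reads the file back and does the CellIncrementN writes in one pass.
    ExecuteProcess('Load.Sales.Increments', 'pFile', 'sales_increments.csv');

This way all the reads happen before any write, so the view cache built during the main pass survives until the epilog.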
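And a sketch of the second approach, again with made-up names (‘Sales Staging’ being the rule-free copy of ‘Sales’):

    # Data tab of the main process: aggregate into the staging cube.
    # It has the same dimensions as 'Sales' but no rules or feeders,
    # so CellIncrementN here is cheap.
    CellIncrementN(vValue, 'Sales Staging', vYear, vMonth, vProduct, vRegion, 'Amount');

    # Data tab of the copy process, driven by a zero-suppressed view
    # on 'Sales Staging': the data is already aggregated, so a plain
    # CellPutN into the real cube is all that's needed.
    CellPutN(vValue, 'Sales', vYear, vMonth, vProduct, vRegion, 'Amount');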