We’ve started a regular newsletter here at PMsquare, so I was asked to write a ‘not-so-technical’ how-to article. You can judge the result for yourself; I’m reposting it below. Sign up for the newsletter, there’s lots of good stuff there.
TM1’s in-memory calculation engine is lightning fast, but you might be wondering whether your installation has been performing up to standard lately. After all, humans are extremely fast in theory (think Usain Bolt), but not every one of us could show that kind of performance in the backyard right now. Fortunately, you don’t have to spend months in the gym to improve TM1. Read on for 4 simple steps to better performance and happier users. And yes, you can read it on the treadmill.
I’m sorry to open with something this obvious, but if you’re currently on good old 9.5.2 or even 10.1.x, by far the biggest performance boost you can get is upgrading to 10.2. There are 2 main things that will improve performance drastically:
There are many more things that make the switch worthwhile:
Virtualized environments are the de facto standard these days, so it’s always worth checking that your TM1 server sits on a VM host with ample memory (no contention, no swapping) and the fastest CPUs (by per-core speed) available in your virtual server farm. Even in 10.2 a lot of things (all TurboIntegrator processes, for example) are still single-threaded, so per-core performance remains very important. Per-core speeds on modern Intel CPUs vary by roughly 40%, so it’s an important factor when selecting hardware for a TM1 server. Simply moving TM1 to a faster host within the same virtual farm could give you a 20-40% boost, well worth trying out.
And test turning off hyper-threading for the VM that hosts your TM1; this can give you a significant performance boost as well.
Relatively few people are aware of the nuts & bolts of the Stargate cache mechanism used in TM1. In a nutshell: when you query a view, TM1 tries to store the results in a cache so it doesn’t have to recalculate the rules again if somebody else asks for the same data. It’s actually far more complex and clever than that: TM1 tries to guess what else you might be interested in and pre-calculates that as well by expanding the view.
All this magic caching is controlled by 2 parameters:
* VMM - how big the cache can be for a cube
* VMT - how long a query can run before its results are selected for caching, in seconds
So if you increase VMM, you allow more results to be cached, and if you decrease VMT, even short-running queries will end up in the cache.
Try adjusting these parameters for the cubes that drive your slowest reports to see if you can improve things with caching.
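For example, here is a minimal TurboIntegrator sketch (Prolog tab) of how you could set both parameters for a cube by writing to the }CubeProperties control cube. The ‘Sales’ cube name and the values are purely illustrative; note that VMM is specified in KB, and changes to these properties are typically only picked up after a server restart:

```
# TurboIntegrator Prolog sketch - cube name and values are illustrative only.
# VMM: memory (in KB) reserved per cube for Stargate view caching.
CellPutS('200000', '}CubeProperties', 'Sales', 'VMM');
# VMT: queries running longer than this many seconds get their results cached.
CellPutS('2', '}CubeProperties', 'Sales', 'VMT');
```

You can of course also type the same values straight into the }CubeProperties cube in Architect.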
On a side note: any user input to a cube invalidates its caches, and preserving caches is usually one of the key reasons for separating input and reporting cubes.
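As a sketch of what that separation can look like in practice (all names here are made up for illustration): a scheduled TurboIntegrator process copies posted numbers from an input cube into a reporting cube, so user input never hits the cube your reports query and its Stargate caches stay warm:

```
# Data tab of a hypothetical TI process whose data source is a zero-suppressed
# view of the 'Sales Input' cube; vValue and the v<Dimension> variables are the
# data source variables. Cube, dimension and variable names are illustrative.
CellPutN(vValue, 'Sales Reporting', vVersion, vRegion, vProduct, vMonth, vMeasure);
```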
This advice isn’t as easy to follow as it sounds, since feeders and overfeeding are arguably the most complex and intricate part of a TM1 model. But there is at least one fairly simple and straightforward technique we commonly use to detect overfeeding:
And you can always apply the more reliable and accurate technique of building an OverFeeds cube, as described in this proven practice article, to fine-tune your rules.
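If you want a quick automated sanity check in the meantime, here is a sketch of one common heuristic (not necessarily the same technique as in the article): with Performance Monitor running, compare the fed-cell count to the populated-cell count in the }StatsByCube control cube. The ‘Sales’ cube, the 10x threshold and the exact }StatsByCube element names are assumptions, so verify them against your own server:

```
# TurboIntegrator Prolog sketch - needs Performance Monitor running so that
# }StatsByCube is populated; element names can vary between TM1 versions.
nFed = CellGetN('}StatsByCube', 'Sales', 'LATEST', 'Number of Fed Cells');
nPop = CellGetN('}StatsByCube', 'Sales', 'LATEST', 'Total Number of Populated Cells');

# A fed-cell count many times larger than the populated-cell count is a strong
# hint of overfeeding; the 10x factor is just an illustrative rule of thumb.
IF(nPop > 0 & nFed > 10 * nPop);
  ASCIIOutput('overfeed_check.csv', 'Sales', NumberToString(nFed), NumberToString(nPop));
ENDIF;
```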