Applied Dimensionality

Installing PA at scale

Posted on Jun 4, 2023

Thought I’d describe how we installed / upgraded Planning Analytics and Cognos Analytics on 100+ servers. TLDR: pretty much like in the picture above :)

We had a couple of iterations in this environment over the years, first for a 10.2 -> Planning Analytics upgrade and then for a migration from on-prem to cloud, so we know the approach works well. I won't be divulging any trade secrets; it's all bog-standard infrastructure as code with a slight PA twist. All this installation and configuration will be magicked away with version 12 of Planning Analytics, so this post will be a bit of nostalgia to come back to in a few years' time.

Why so many servers

Well, it's a large company (duh) with thousands of users and a great approach of separating large PA services into separate VMs for isolation of maintenance & monitoring. So you get about 10 servers for PA services, a few for TM1Web (with different instances of the TM1Web application for each large PA service, for the same reasons, plus some load balancing) and another 3-5 for Cognos Analytics (yay, load balancing here as well). All up, that's about 15-20 VMs that form an 'environment'. And there was a whole bunch of environments, so over a hundred servers to keep up with. Thankfully, all on Red Hat Enterprise Linux.

Overall approach

Nothing is done manually at this scale, because everything very quickly gets out of sync, so we set everything up 'as code' with automated deployments. This rollout was a big exercise in standardisation and cleanup, as the previous installs and patching were manual, so you'd have all sorts of variances.

We used 2 main components:

- Terraform to provision the underlying infrastructure (the VMs themselves)
- Puppet to install and configure everything on those VMs

Both tools are infrastructure as code, so we had Git repositories with a branch per environment and promoted changes from branch to branch, with Puppet / Terraform applying them automatically.

Choice of tools is debatable; I'd probably use something else instead of Puppet if I were to do it again, most likely Ansible as IBM does for Planning Analytics Cloud, or maybe SaltStack. No particular issues with Puppet, I'd just pick something more standard in Ansible or more exciting in Salt. All IaC tools have a templating capability (generate me an XML file based on this template, replacing these variables with whatever I pass through) that we used extensively. Terraform is the lingua franca in the cloud, so that would definitely stay in case you want to try a different cloud later on :)

It took us a bit before we fully split the configuration values from the installation / configuration code itself in the configuration management tool (this is done with Hiera in Puppet), as you'd be changing values a lot and installation code much less frequently. Configuration value files become 'environment'-specific, whereas code should be the same in each branch, so you'd be promoting it as part of the change process.
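
A minimal sketch of what that split looks like, with made-up class and key names: the per-environment values sit in Hiera YAML, and the Puppet class picks them up via automatic parameter lookup, so the code itself stays identical in every branch.

	# data/environments/prod.yaml -- values only, different per branch:
	#   profile::pa_server::pa_version: '2.0.9.12'
	#   profile::pa_server::tm1_services: ['finance', 'sales']

	# hypothetical class; parameters resolve automatically from the
	# Hiera keys profile::pa_server::<parameter name>
	class profile::pa_server (
	  String        $pa_version,
	  Array[String] $tm1_services,
	) {
	  notice("Installing PA ${pa_version} for services ${tm1_services.join(', ')}")
	}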

And a special shout-out to using a secrets manager for all passwords / sensitive information: our builds would authenticate to it, grab the required secret and apply it where needed, without humans ever seeing the sensitive information. And having access control on secrets meant that you couldn't leak the production password to a development environment.
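
I won't name our secrets manager here, but purely as an illustration, with HashiCorp Vault and the puppet/vault_lookup Forge module the pattern looks roughly like this: the lookup is Deferred, so the agent fetches the secret at apply time and it never lands in the catalog or in Git.

	# secret path, file layout and URL are made up for illustration
	$tm1_admin_password = Deferred('vault_lookup::lookup',
	  ['secret/prod/tm1/finance', 'https://vault.company.com:8200'])

	file { '/opt/tm1/finance/secret.properties':
	  ensure  => file,
	  mode    => '0600',
	  content => Deferred('inline_epp',
	    ['password=<%= $p %>', { 'p' => $tm1_admin_password }]),
	}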

In the end you're committing to the repo something along the lines of:

	{
		"servers": [
			{
				"serverFQDN": "tm1server1.company.com",
				"role": "planning analytics server",
				"tm1Services": ["finance", "sales"],
				"paVersion": "2.0.9.12"
			},
			{
				"serverFQDN": "tm1web.company.com",
				"role": "tm1web server",
				"tm1WebServices": ["eu", "au"],
				"tm1WebVersion": "2.0.6.71"
			}
		]
	}

and that's enough for the IaC tool to know what to install and how to configure things, so in 5-10 minutes you get a fully set up box or a whole environment.
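
As a rough sketch of how that data drives things (the profile classes are the hypothetical ones from above, and key names follow the JSON example): each node finds its own entry in the committed list and declares whatever its role calls for.

	$servers = lookup('servers')
	$matches = $servers.filter |$s| { $s['serverFQDN'] == $facts['networking']['fqdn'] }
	$me      = $matches[0]

	case $me['role'] {
	  'planning analytics server': {
	    class { 'profile::pa_server':
	      pa_version   => $me['paVersion'],
	      tm1_services => $me['tm1Services'],
	    }
	  }
	  'tm1web server': {
	    class { 'profile::tm1web':
	      tm1web_version  => $me['tm1WebVersion'],
	      tm1web_services => $me['tm1WebServices'],
	    }
	  }
	  default: { fail("Unknown role ${me['role']}") }
	}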

Installing PA / CA with an IaC tool

Most of the other components we needed had pre-built installation & configuration objects in IaC tools (Forge modules in Puppet parlance), but we had to do PA / CA from scratch.
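
For comparison, pulling in a prerequisite via an off-the-shelf module (puppetlabs/java here, purely as an illustration) is a one-liner:

	# an off-the-shelf Forge module: declare it and you're done
	class { 'java':
	  distribution => 'jre',
	}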

We ended up scripting a silent installation of Planning Analytics and Cognos Analytics (and had no X server on the VMs, to make sure we didn't have any other way), i.e. filling in an installer response file from a template and running the installer unattended against it.
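
A sketch of that step in Puppet; the unpack location, installer flags and response file format are illustrative rather than IBM's exact ones:

	# render the response file from a template, then run the installer
	# unattended; 'creates' keeps the exec idempotent across Puppet runs
	file { '/tmp/pa_install/response.properties':
	  ensure  => file,
	  content => epp('profile/pa_response.properties.epp',
	    { 'install_dir' => '/opt/ibm/tm1' }),
	}

	exec { 'install-planning-analytics':
	  command => '/tmp/pa_install/installer -silent -response /tmp/pa_install/response.properties',
	  creates => '/opt/ibm/tm1/bin64',
	  require => File['/tmp/pa_install/response.properties'],
	}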

Templating the configuration files gets a bit more fun with multiple servers and whatnot, but it's a matter of getting comfortable with the chosen template language and testing your installation a lot :)
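
For example, a cut-down EPP template for tm1s.cfg (say, templates/profile/tm1s.cfg.epp) could look like this; a real file carries many more parameters:

	<%- | String  $server_name,
	      Integer $port,
	      String  $data_dir
	| -%>
	[TM1S]
	ServerName=<%= $server_name %>
	PortNumber=<%= $port %>
	DataBaseDirectory=<%= $data_dir %>
	IntegratedSecurityMode=1

and you'd render one per TM1 service on the box:

	file { '/opt/tm1/finance/tm1s.cfg':
	  content => epp('profile/tm1s.cfg.epp', {
	    'server_name' => 'finance',
	    'port'        => 12345,
	    'data_dir'    => '/opt/tm1/finance/data',
	  }),
	}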

Very similar steps for TM1Web, but you'd be generating tm1web_config.xml and pmpsvc_config.xml (if you use TM1 Applications) as well as cogstartup.xml.
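
Same epp() trick there; a fragment with just the TM1Web side sketched (the file path and service name are illustrative):

	file { '/opt/ibm/tm1web/webapps/tm1web/WEB-INF/configuration/tm1web_config.xml':
	  content => epp('profile/tm1web_config.xml.epp',
	    { 'admin_host' => 'tm1server1.company.com' }),
	  notify  => Service['tm1web'],   # assumes a tm1web service resource elsewhere
	}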

Was setting all this worth the effort?

I wouldn't go this route with a small number of servers (as in any typical environment with <20 servers total), but at this scale the initial investment in automation paid off many times over and allowed us to focus on the really tricky parts of both projects:

And having the fully automated configuration removed a lot of the areas of investigation when troubleshooting issues, as we were confident that each environment or service was set up in exactly the same way. And when in doubt, you could just blow it all away and reinstall in a matter of minutes :)
