IBM Business And Solution Connect Events in May

Screen Shot 2014-04-26 at 7.58.22 pm
There’ll be quite a few IBM Business / Solution Connect events across Australia this May:
See dates, agenda and register.

I’ll be at the ones in Sydney & Brisbane. Maybe even a bit on stage, if I can’t convince BegaCheese to do all the talking -)

Come to the PM2 booth and say hi (there will be nice people there as well, not just me). Beer afterwards is also an option.

Did I mention it’s free? Cya there.

TM1 Puzzles and Diversions. Consolidated Feeders

I’m running our inter-practice (we’ve got Singapore and Sydney offices) TM1 sessions. Currently playing with the “quick / easy / conceptual puzzles” approach. Beats my lecturing so far -)
I’ll publish some old ones in this blog and post the answers in the comments or by email. If you can answer question #2 for this one correctly and explain it, beer/wine is on me; I failed it.

The TM1 model is set up as in the screenshot below.
Question 1: What will be displayed in the Target, D, Jan cell?
Question 2: Same question, feeder changed
Question 3: Same question, feeder changed

Cognos TM1 Application Server JMX Monitoring

As some of you have already noticed, the TM1 application server (the Java engine that drives your Contributor Applications, Operations Console and, from 10.2, TM1Web) is quite a sensitive beast. And where there’s sensitivity, there’s a need for careful monitoring. In this post I’ll show how to attach JConsole to the TM1 Application Server for monitoring.

The TM1 apps server is (by default) the good old Tomcat in the background, so you can use JMX to monitor it and gather quite detailed stats. There are multiple different JMX monitors (see Nagios, MoSKito or JavaMelody, for example), but for simplicity’s sake I’ll just show the standard JConsole that is packaged with any Java Development Kit.

It looks like this: a freshly started TM1 app server with just a few web sheets open (you can see the spikes in memory usage). I can’t show you the production one, but trust me, it’s more impressive )
Screen Shot 2014-03-31 at 4.18.45 pm

By default, TM1 application server JMX requires an SSL connection, and SSL can be quite cumbersome (you need to pick up and register the Applix certificate as in this instruction). An easier workaround is to disable SSL, so that you can use any JMX monitor without messing with certificate files.

To do that, you need to:

  • open the service_pmpsvc.bat file in the cognos_install/bin64 directory (create a copy beforehand)
  • find the JMX-related line and change it from true to false
    Screen Shot 2014-04-01 at 4.03.16 pm
    You can change any Tomcat setting in this file (memory, perm size, etc.)
  • open a command line and run service_pmpsvc.bat uninstall: this will remove your TM1 application server service
  • run service_pmpsvc.bat install
  • run service_pmpsvc.bat start
  • run JConsole, point it to tm1_application_server:7999 and voila

    Enjoy your new server with an open JMX port: you can connect any JMX monitoring tool to it without any issues.
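    For reference, the single flag flip boils down to the standard JVM JMX system properties that the service script passes to Tomcat. A sketch of what the options and steps look like (the exact variable and option names in your service_pmpsvc.bat vary by version, so treat this as illustrative):

```bat
REM JMX options handed to the Tomcat JVM inside service_pmpsvc.bat (illustrative)
REM The "true to false" change from the screenshot is the ssl property:
REM   -Dcom.sun.management.jmxremote.port=7999
REM   -Dcom.sun.management.jmxremote.ssl=false

REM Re-register the service so the new options take effect, then attach JConsole:
service_pmpsvc.bat uninstall
service_pmpsvc.bat install
service_pmpsvc.bat start
jconsole tm1_application_server:7999
```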

    What to look for in monitoring? If you see really heavy spikes when certain web sheets are opened, it might be worth redesigning them (less conditional formatting and other Excel magic). See the performance section of this note; anything not IIS-related still applies.

    You can even couple it with some proper stress testing using JMeter as per this Best Practice article.

TM1 Applications and cross dimensional security, or Access tables with NoData in TM1

    I’ve got questions about cross dimensional security in TM1 Applications (aka Contributor) quite a few times, so it’s obviously time to write something up and put it out here for linking in future ) Cross dimensional security is quite a mouthful, so for us ex-Enterprise Planning people: this is a post on how to do something like Access Tables in TM1. Don’t get overexcited, though; there’s no “good enough” solution that I’m aware of, more like a bunch of workarounds. I’ll list them here with the pros and cons of each and typical usage scenarios. I’ll pay a bit more attention to one of them (dynamic MDX subsets with an auxiliary flag cube), because it’s not very intuitive (for non-hardcore TM1 developers) and is actually quite useful.

    Problem overview

    Let’s set up the scene. I’ll use the SData server that comes as a sample with your TM1 installation.

    This model is about a company selling cars worldwide. I’ve mocked up a very simple TM1 contributor application to let them collect their sales budgets by countries:

    Screen Shot 2013-12-01 at 4.09.14 pm

    The workflow tree to the left lists countries rolling up to regions. Opening a node for a specific country (Denmark, in this case) allows us to input sales for a particular Model (L Series 2WD) and get our gross margin easily. Country managers submit their budgets, region managers review them, and so on.

    All good so far, but what if we don’t sell all models in all countries? Only “L Series” in Denmark, while we’re at it.

    Pretty typical scenario, but quite a pain to implement in TM1. And a big shock to all EP consultants so used to access tables (just say Denmark is eligible for this set of Models, Norway for that one, and so on). And a puzzle for TM1 old-timers who prefer Excel-built user forms with TM1Web to Contributor: they just don’t see the problem at all )

    A bit of back-end discussion

    Hopefully it’s just a matter of time before this functionality gets implemented in TM1 Contributor, but it actually requires quite a bit of additional engine work.
    Current TM1 security works on a per-user basis, so you know what a person can see in the cube. Unfortunately, that’s not enough: to fully implement access tables we need a “who can see what where” component, essentially adding the approval hierarchy dimension to the whole security process. So while it’s definitely doable, it might take a while for the smart folks in the labs to carve it out. I’d guess that it’ll require another set of control cubes (very similar to the one I’ll describe later) and an additional view configuration each time an approval hierarchy node is selected.
    On the other hand, if they just add dynamic subset refresh to title subsets, that’ll solve everything straight away!

    So back to the issue, what options do we have now? Again, if you know a better solution — please don’t hide it )


    TM1 dimension element security

    So we say that Lucas from Denmark has access only to L Series in the model dimension. And it’ll work perfectly: he won’t see any other models when he logs in.
    Up until it turns out that he’s also responsible for Norway while Norway’s country manager is on leave. Norway sells only S Series, but poor Lucas will see both model sets when he logs into either country, because he now has access to those elements in the dimension. And he can even add data to the “wrong” model, making the whole thing quite shaky.

    Pro’s: works perfectly
    Con’s: only if you have a rigid 1 person – 1 approval node relationship, and I’ve yet to see this in real life.

    Ok, you’d think, we’ll just configure an additional step of

    Cell security

    Adding these country-model relationships as input restriction rules. Only L Series in Denmark, S Series in Norway, easy as.
    The problem is that Lucas will still see both sets of models in either country when he logs in; he just won’t be able to input data in the wrong country-model combination. Which is an improvement, but he’ll still complain about models from the other country cluttering his input form.
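    For the record, a sketch of what such a restriction can look like as a rule in the cube’s cell security control cube (the cube, dimension and element names here are illustrative, not taken from the SData sample):

```
# }CellSecurity_SalesCube rule (illustrative names): write access only on
# the allowed country-model combinations, read-only everywhere else
[] = S: IF( ( !region @= 'Denmark' & SCAN( 'L Series', !model ) > 0 )
          % ( !region @= 'Norway'  & SCAN( 'S Series', !model ) > 0 ),
          'WRITE', 'READ' );
```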

    Pro’s: your data is correct
    Con’s: user interface is far from ideal

    So you’d think ok, now we just need to hide those unneeded models by introducing


    Zero suppression

    Zero-suppression can be used to hide all unneeded combinations. Two caveats:
    1) you’d need a nonzero element on the needed intersections, so you add a “flag” element to a row or column and then apply rules to set it on the needed intersections. Looks a tad ugly, but not a major issue
    2) zero suppression works only on rows or columns, so you need to drag your model dimension to rows and fix the view. This is a major limitation.
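    A sketch of caveat 1’s flag rule, assuming a ‘Flag’ element in the measure dimension and the kind of region-by-model flag cube described later in this post (all names are illustrative):

```
# Sales cube rule (illustrative names): the Flag measure mirrors the flag cube,
# so zero suppression keeps only the flagged model rows visible
['Flag'] = N: DB( 'SalesCubeFlag', !region, !model );

# Note: every potential input cell also has to feed its Flag cell;
# that blanket feeding is exactly the overfeeding problem listed in the cons
```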


    Pro’s:
  • will work, and works for higher levels of the approval hierarchy (flags are summed up), so the Europe manager will see both the Norway and Denmark car sets
  • you can use rules to set flags, so they can be quite flexible

    Con’s: see above and
  • performance can seriously degrade, because you’d need to populate and feed all “potential” input scenarios, causing overfeeding (there are no real cells, only flags). You’ll essentially turn off the whole sparse processing algorithm, which can be quite a show stopper

    If putting the secured dimension on rows / columns isn’t a problem, flags are actually a comfortable solution. We can avoid the ugliness of needing a “flag” element with the MDX subsets method described below, so if the rows / columns requirement is not critical, I’d consider that method instead. We’re introducing another kind of ugliness there, so it’s not clear cut.

    Let’s explore other approaches to the problem before we jump to MDX.

    Conditional Pick-lists

    A very powerful feature to limit one dimension based on another is using an IF condition in a picklist rule to pick a corresponding subset. So if your data is already in some sort of flat / line-item input, conditional picklists are an ideal solution. You’d most likely need another cube to transform this picklist dimension into a real one for analysis, but you’d need that anyway when you go the line-item way. It’s quite a common scenario with employee / contract management applications, but for the Sales application we’re discussing here it’d be quite painful: imagine selecting a “model” for each of the input rows.
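    A sketch of such a picklist rule, assuming pre-built per-country subsets in the model dimension (the control cube and subset names here are illustrative):

```
# }PickList_SalesCube rule (illustrative): each country gets its own model subset
[] = S: IF( !region @= 'Denmark',
            'subset:model:Denmark Models',
            IF( !region @= 'Norway',
                'subset:model:Norway Models',
                'subset:model:All Models' ) );
```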


    Pro’s:
  • works as described, with various subsets pregenerated in TIs

    Con’s:
  • requires an additional cube for analysis
  • not suitable for all types of applications
    Worksheets

    I’ll just briefly touch on this: since you can add worksheets into TM1 applications, you can add an Active Form that is fully generic based on your requirements. I’ve never done this for quite a lot of reasons (maintenance, performance and development, just to name a few), but it’s worth listing here.
    Pro’s: fully flexible

    Con’s:
  • different development approach (MDX in Perspectives)
  • can be hard to maintain
  • another interface can be hard to explain to users
    MDX subsets

    Dynamic subsets are a quite useful concept: they are MDX expressions returning a set of dimension elements. Each time a user requests a view with a dynamic subset, its expression is evaluated and the results are returned for display. Other currently selected dimension elements (the so-called context) can be passed into the dynamic expression, making it quite flexible. The biggest downside is that this subset needs to be in columns or rows to be re-evaluated when the context changes.

    If that’s not a problem, then we can build quite a flexible solution. Key idea is to use an additional cube to hold our flags (show or not) and query it to define what should be shown.

    Let’s look at it for the car sales example we’re discussing:
    1) Let’s create a “flag” cube with 2 dimensions: the Approval Hierarchy (region) and the one we want to filter (model)
    Screen Shot 2013-12-01 at 9.45.10 pm
    1s in the intersections mean “show” this model for the country. Note how the higher levels of the approval hierarchy (Scandinavia, Europe) get their values automagically by consolidation.
    2) Let’s create a new subset in the model dimension. We’ll use the following MDX expression.
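    Reconstructed from the walkthrough that follows (using the flag cube and dimension names of this example), the expression looks something like:

```
{ FILTER(
    { TM1FILTERBYLEVEL( { TM1SUBSETALL( [model] ) }, 0 ) },
    [SalesCubeFlag].( [region].CurrentMember ) >= 1
) }
```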


    Walking through it from inside out:

  • TM1SUBSETALL to select all models
  • TM1FilterByLevel to select only leaf-level elements
  • [SalesCubeFlag].([region].CurrentMember) to select the [SalesCubeFlag] value for the currently selected region in the context
  • return only the models having a value greater than or equal to 1

    Screen Shot 2013-12-01 at 10.28.15 pm

    Let’s assign it to the view and see how it works in the application.
    I’ve added models to the rows of the cube view:
    Screen Shot 2013-12-01 at 11.03.05 pm
    Let’s log on as Denmark:
    Screen Shot 2013-12-01 at 10.58.29 pm
    As Norway:
    Screen Shot 2013-12-01 at 10.59.27 pm

    As Europe:

    Screen Shot 2013-12-01 at 11.00.01 pm

    So you can see this working for various detail and total nodes.


    Pro’s:
  • You can use rules or TIs to populate the flag cube according to your requirements
  • You can have multiple dimensions and dynamically filter them depending on the context
  • You can add a measure dimension to store multiple different flag logics for different applications

    Con’s:

  • Double-check that this works in your TM1 version. I had issues in CX 10.1 and no issues in enterprise TM1 10.1, so it might be quite version / fix pack sensitive.
  • MDX subsets used to lock down the whole server on refresh and were avoided like the plague (you just ran a TI to convert the subset to static), but they shouldn’t do so anymore after 10.1. I’d guess that performance would be at least as good as flags + zero suppression; worth testing if this might be an issue for you.
  • Be careful with hierarchy levels: unlike flags, this approach will display all children (even the ones you want to avoid showing), which is why I’m selecting the lowest level of the hierarchy.

    I’ll leave you with a link to the excellent MDX Primer, describing many more “fun” things you can invent in an MDX-based world. Though I’ll say that while I’m quite a proponent of their extensive use, moderation is the key ) I wrote a small MDX query wrapper for TM1 a couple of years ago to play with TM1 MDX syntax.
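    The TI trick mentioned above (snapshotting a dynamic subset into a static one) is only a couple of lines; a sketch with illustrative subset names:

```
# TI snippet (illustrative names): rebuild a static copy of the dynamic subset
IF( SubsetExists( 'model', 'Visible Models Static' ) = 1 );
  SubsetDestroy( 'model', 'Visible Models Static' );
ENDIF;
SubsetCreateByMDX( 'Visible Models Static', '<your MDX expression here>' );
```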


    I usually pick conditional picklists, flags or MDX based on the type of application I’m working with. Would love to hear better options.

    Some interesting links

    Access Tables in TM1:
    MDX with attributes

    And every post is way more interesting with a cat in it, don’t you think?

    Cognos and OpenStreetMap

    I got a chance to play with maps in Cognos reports for a recent PoC and learned quite a lot in the process. Most notably, I integrated OpenStreetMap into a Cognos report, so I’ll demonstrate how to do it. It looks like this:
    Screen Shot 2013-10-19 at 11.18.16 PM

    Cognos and maps overview

    So you’re up to doing some maps in Cognos reports? I was always convinced (and mostly still am) that maps are rarely a good graphic medium apart from the “wow” effect, but it’s quite magical to look at the same data on a map; it gets even to me -)

    And my general feeling is that if location data is really important to your organisation, you would definitely have a serious GIS system in-house and wouldn’t be looking for “just a map report” in Cognos -).

    I’ve met some strikingly well-used location analytics; for example, one of my retail customers had a dataset with car purchase history and used owners’ addresses and car prices to colour suburbs when selecting retail outlet positions. That was a very impressive feat. You should contact the guys from ESRI if you want to hear a proper location analytics pitch. I still think it’s mostly just a quick way to score some points with users, but then again, if it’s a quick and easy way, why avoid it?

    There are quite a few options you can go with if you want to embed maps in your reports:

    • Serious solutions
      • Built-in MapInfo maps
      • ESRI
    • Javascript maps in reports, most notably:
      • Google Maps
      • Bing Maps
      • OpenStreetMap (OSM)

    And since it takes time for these things to sink in: “Google Maps and Bing Maps are not free to use”. OSM maps are ;)

    Serious solutions

    Let’s start with the heavy-weights. Go this way only if you really have an itching mapping need (if so, you probably have one of them in-house already) and a bit of a budget: both come with a price tag. And that’s the reason I don’t have any experience with them apart from demos -)


    Built-in MapInfo maps

    Cognos has had a built-in mapping capability based on MapInfo maps ever since Visualizer days. It ships with a few default maps (world map, US states) and you can use MapInfo Professional to create gst files for additional maps, or you can buy them from MapInfo directly.


    ESRI

    ESRI has a full-blown, over-the-top Cognos integration (you get ESRI maps in the Report Studio Toolbox and in most other studios) with no code-writing required. A good client and easy to use once you’ve set it up; if you plan to do a lot with location-based reports, ask for a demo and a quote.
    There’s heaps more on their site.

    Javascript maps

    A more affordable (to a certain degree) solution is to use one of the web mapping service providers and integrate Cognos report data with a map on the fly using Javascript.
    There are heaps of examples for both Google Maps and Bing (see links below), but I must repeat that both of them are not free to use. You can check the legal clauses yourself (Bing and Google Maps), but the only way to qualify for free usage is to have free and public access under a specified quota of map calls. You need to contact Google or Microsoft for correct pricing, but judging by internet rumours it starts at around $10k per year.

    Google Maps

    Google maps have an interesting story. They were free for about 6 years and became the de-facto standard for maps in web applications, but then Google started charging developers from January 2012, causing massive upheaval, even after the price was significantly cut down (an 80% drop is a sure sign of an original error in my book). This led to the rise of OpenStreetMap; massive projects like Foursquare made the switch to OSM instead of Google.

    While Google maps were free, a lot of Google Maps and Cognos examples got published, here are some of them:
    With coordinates set in latitude and longitude:
    With client-side GeoCoding:
    With polygons
    With embedded report inside by using CMS:


    OpenStreetMap is a crowd-sourced map of the whole world, meaning that it’s updated by users all the time (I just saw a bypass in a remote town show up on OSM before Google Maps was updated). It’s free to use and licensed so that it will stay free to use in the future as well. There’s nothing fully free in this world, so there are some caveats I’ll highlight along the way, but, in general, you can use OSM without paying a dime, and you can even host your own map server inside your network (making it available in case your users don’t have internet access, you want to limit web traffic, or you’re security paranoid).

    Sadly, there are no Cognos + OSM recipes out there yet, so I’ll try to correct this.

    Before we proceed, we’ll need a bit of common map-related lingo.

    Geocoding

    In the world of maps you operate with the geographic coordinates (latitude and longitude) of a point, but in your data it’s usually a street address or a postcode. Converting addresses to geographic coordinates is called geocoding. It’s absolutely vital to geocode all locations you want to show before you do any report development. You can do online geocoding, but it’ll be terribly ineffective (geocoding will be called by every client running the report), slow (imagine the time it takes to convert your 1000 stores to points) and may not work at all (most online geocoders limit the load; the Google geocoder won’t allow more than ~10 requests per second, so geocoding 1000 points on the fly would be just impossible).

    It’s best to do your geocoding during ETL and store the results in the dimension so you can easily use them in whatever map reports you require.
    And another licensing warning: you can legally display Google geocoder results only on Google maps, so it’s best to use Nominatim (unsurprisingly, based on OSM data) if you want to use the coordinates anywhere else later.


    Tiles

    When you see a map in a webpage, it’s actually composed of multiple small square images called tiles. Map servers know which set of tiles to show the user based on the level of detail and map position. Here’s a fantastic explanation from the Bing team.
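    The tile lookup itself is simple arithmetic; here’s a sketch of the standard OSM “slippy map” tile formula in Javascript (the same maths every tile URL template relies on):

```javascript
// Convert a latitude/longitude to OSM tile coordinates at a given zoom level,
// using the standard Web Mercator "slippy map" formula
function latLngToTile(lat, lng, zoom) {
    var n = Math.pow(2, zoom);                  // number of tiles per axis at this zoom
    var latRad = (lat * Math.PI) / 180;
    var x = Math.floor(((lng + 180) / 360) * n);
    var y = Math.floor(
        ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
    );
    return { x: x, y: y, z: zoom };
}

// At zoom 0 the whole world is the single tile {x: 0, y: 0, z: 0}
```

    Plug x, y and z into a tile URL template like http://…/{z}/{x}/{y}.png and you get the exact image the map shows for that spot.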


    You can imagine the amount of traffic that tile serving requires, so there are only a few “free” tile servers (most notably MapQuest, which I’m using in this example). You can also register for free at CloudMade and try out their tiles; some people like them better.

    If you have really high usage (or are absolutely paranoid), you can build your own tile server from OSM absolutely for free. If you do this, an outside connection for maps will no longer be required; your maps will render from this server. Then you set up weekly updates to get map changes and you’re absolutely set to go.

    You can even make your own tiles using the free TileMill and draw your own region / country borders or anything you like. Game developers use this to draw a “zombie”-haunted city, overlaying it with real addresses and streets.
    And while we’re at it, GADM looks like a fantastic collection of official region definitions; it has definitely the most detailed (60 Mb of shape files) “official” map of Vietnam that I could find for free.


    Markers

    Markers are the points that you put on the map, and it’s quite easy to use non-standard icons, making the map a lot more appealing. You can conditionally change marker icons / colours.
    Here’s a good collection of free markers.

    OSM example

    And, finally, the main dish.

    I’ll base this example on the “sales and marketing” cube from the standard Cognos samples, and what we’ll do will resemble the Ironside recipe, but without geocoding: we’ll use predefined coordinates.

    We’ll build a list of countries and draw a marker for each country’s capital. I’m using Leaflet.js for mapping, a really nice mapping library. OpenLayers would be another option, but Leaflet looked easier )

    How to do the report (or just grab the report definition xml).

    • start with a 2-column table
    • create a list and drag the Retailer country level to it
    • define longitude and latitude with the following expressions
      Calculated longitude of country capitals
      case (caption([Retailer country]))
      when ('Australia') then (133)
      when ('Austria') then (13.3333)
      when ('Belgium') then (4)
      when ('Brazil') then (-55)
      when ('Canada') then (-95)
      when ('China') then (105)
      when ('Denmark') then (10)
      when ('Finland') then (26)
      when ('France') then (2)
      when ('Germany') then (9)
      when ('Italy') then (12.8333)
      when ('Japan') then (138)
      when ('Mexico') then (-102)
      when ('Netherlands') then (5.75)
      when ('Singapore') then (103.8)
      when ('Spain') then (-4)
      when ('Sweden') then (15)
      when ('Switzerland') then (8)
      when ('United Kingdom') then (-2)
      when ('United States') then (-97)
      Calculated latitude of country capitals
      case (caption([Retailer country]))
      when ('Australia') then (-27)
      when ('Austria') then (47.3333)
      when ('Belgium') then (50.8333)
      when ('Brazil') then (-10)
      when ('Canada') then (60)
      when ('China') then (35)
      when ('Denmark') then (56)
      when ('Finland') then (64)
      when ('France') then (46)
      when ('Germany') then (51)
      when ('Italy') then (42.8333)
      when ('Japan') then (36)
      when ('Mexico') then (23)
      when ('Netherlands') then (52.5)
      when ('Singapore') then (1.3667)
      when ('Spain') then (40)
      when ('Sweden') then (62)
      when ('Switzerland') then (47)
      when ('United Kingdom') then (54)
      when ('United States') then (38)
    • drag an html item into the list with the following html code. This html will register your countries’ capitals on the map (by calling the function) and will centre the map on the country you select when clicking “Show on Map”
      '<a href="#" onClick="displayInfoLatLng( ''' + number2string ([q_RegionData].[Latitude]) + ''', ''' + number2string ([q_RegionData].[Longitude]) + ''', ''' + [q_RegionData].[Retailer country] +''')"> Show on Map</a>
      <script> displayLocationLatLng('''  + number2string ([q_RegionData].[Latitude]) + ''', ''' + number2string ([q_RegionData].[Longitude]) + ''', ''' + [q_RegionData].[Retailer country] + ''' ); </script>'
    • Drag an html item to the right column with the following code. This is your map, and you can play with the tile servers too (change the lines in the middle):
      <link rel="stylesheet" href="" />
      <!--[if lte IE 8]>
          <link rel="stylesheet" href="" />
      <![endif]-->
      <script src=""></script>
      <div id="map" style="height: 400px; width: 550px;"></div>
      <script>
      // Centre the map somewhere, Australia is a good starting point ;-)
      var map ='map').setView([-24.766785, 134.824219], 2);

      // Choose a map provider below
      // CLOUDMADE (register to get your own tile URL; defined but not added here)
      var cloudmade = L.tileLayer('http://{s}{z}/{x}/{y}.png', {
          attribution: 'Map data &copy; <a href="">OpenStreetMap</a> contributors, <a href="">CC-BY-SA</a>, Imagery © <a href="">CloudMade</a>',
          maxZoom: 18
      });

      // MAPQUEST: try replacing map with osm or sat in the url below, it'll change the tiles
      var mapquestUrl = 'http://{s}{z}/{x}/{y}.png',
          subDomains = ['otile1', 'otile2', 'otile3', 'otile4'],
          mapquestAttrib = 'Data, imagery and map information provided by <a href="" target="_blank">MapQuest</a>, <a href="" target="_blank">OpenStreetMap</a> and contributors.';
      var mapquest = new L.TileLayer(mapquestUrl, {maxZoom: 18, attribution: mapquestAttrib, subdomains: subDomains});
      map.addLayer(mapquest);

      // Called from the "Show on Map" links: drops the marker and opens its popup
      function displayInfoLatLng(lat, lng, countryName) {
          displayMapLatLng(lat, lng, countryName, 1);

      // Called once per list row while the page loads: just drops the marker
      function displayLocationLatLng(lat, lng, countryName) {
          displayMapLatLng(lat, lng, countryName, 0);

      function displayMapLatLng(lat, lng, countryName, displayInfo) {
          var latlng = new L.LatLng(lat, lng);
          var contentString = '<div id="content">' +
              '<b>' + countryName + '</b><br>' +
              // Link to report with parameter
              // + '<a href="http://cognosserver:80/ibmcognos/cgi-bin/cognosisapi.dll?b_action=cognosViewer&ui.action=run&ui.object=%2fcontent%2fpackage%5b%40name%3d%27sales_and_marketing%27%5d%2freport%5b%40name%3d%27drillDown_Report%27%5d&' + countryName + '" target="_blank">Show report</a>' +
              '</div>';
          var marker = new L.Marker(latlng).addTo(map);
          marker.bindPopup(contentString);
          if (displayInfo == 1) {
              map.setView(latlng, 4);
              marker.openPopup();

    If everything is correct, you’ll see a result like this:
    simple OSM in Cognos

    Here’s the report specification (10.2.1, but you can change the version in the first line of the XML, e.g. from 11.0 to 10.0, to import it into earlier versions).

    Take a look at the Leaflet examples to see what else is easily achievable with it. I enabled the mobile device location finder in the PoC with 1 line of code, but truth be told, that impressed only myself -)

    Anything doable with Google maps can be converted to OSM, so if you’re interested to see how one of the existing recipes translates to OSM, write me a comment and I’ll write about it. Potentially:

    • Conditional markers
    • Opening a report from a marker click in a new page (that’s already in the code above)
    • Opening a report from a marker click in an iframe
    • Drawing regions

    Overall, that was an interesting dive into a new area!

    Cognos BI 10.2.1 3 charting engines: The Good, The Bad and The Ugly

    Feature updates went largely unnoticed with the BI 10.2.1 FP1 release. It’s highly unusual to have major feature updates in fix packs; they’re usually held back for releases. But in this case we’ve got the Rapid Analytical Visualisation Engine (RAVE) available in all reports, not only Active Reports (as per core 10.2.1). And that’s big news worth spreading!
    So now you have 3 different charting engines in 10.2.1 BI:

    • the “old” / legacy Cognos 8 charting
    • the ”new” Cognos 10 charts
    • RAVE visualisations

    I couldn’t resist and dropped them all together in one report:
    Screen Shot 2013-09-29 at 3.17.51 PM
    It’s now a good time to say that the film title reference in the post name is completely accidental; as a non-native speaker I just can’t walk away from obvious word play.
    But jokes aside, which one do you like best? I intentionally used default settings in all charts (didn’t have much choice with RAVE, though).

    • In my opinion, the Cognos 8 chart wins by all rules: the palette is easily distinguishable and the chart looks clean and simple.
    • The palette in Cognos 10 charts is really misleading; the blue bars on the left seem darker than on the right because they “blend” in with the orange. It looks like there are more than 2 colours in the bars. I understand the idea of trying to make the palette softer, but it looks bad anyway. And the box borders don’t help either.
    • RAVE: please just shoot me, it’s a joke )

    It’s all customisable, but the defaults got way worse along the way, imho.

    More about RAVE

    RAVE is the graphical engine that powers ManyEyes, an online visualisation service launched around 2008. Back then, visualisation + web 2.0 collaboration sounded like the future, so there were a few projects aimed at providing online collaborative data exploration tools (anyone remember Swivel?). As it turns out, most people just wanted a place to quickly build basic visualisations to embed in their posts / news and didn’t care so much about others (surprise), which led to the eventual demise of most such projects.

    Here’s a wonderful post about ManyEyes IBM story, read it, it’s really worth it. I especially like this quote:

    Looking back, the hardest and most frustrating part of this development exercise turned out to convince IBM legal to approve a public (!) website where people could upload any data they want (!) for anybody else to see (!).

    ManyEyes eventually got absorbed into IBM’s Business Analytics division and shelved somewhere, up until competition from “visualisation” vendors such as Tableau and QlikView forced IBM to find some suitable answers. We got Cognos Insight’s viz capabilities at first, and then somebody realised that they already had a viable viz engine just around the corner, so the rendering engine from ManyEyes got dusted off and repacked into the Cognos stack. And that’s wonderful, because it’s easily extendable and customisable, so creating new visualisation types should be heaps easier than adding them to the “core” charting engine. And it’s obviously scalable ;)
    Right now, though, RAVE customisations are severely limited: you can’t change a chart’s palette, label colour, anything; even your initial chart type options are quite limited, and there are no combination charts / dual axes. Check the list of available visualisations at AnalyticsZone.

    All changes / additional viz requirements have to go through Business Analytics support, but there’ll eventually be some tooling to enable RAVE customisations. I would also speculate that there’s a huge number of visualisation types that are not generally available for Cognos yet and will surely come out eventually.


    Update: The Visualisation customisation tool is now available at

    Another remark on default chart settings and visualisation

    If you missed it: there’s a new edition of Stephen Few’s Information Dashboard Design book. A must-have, and the first edition is by far the most worn-out Few book in my collection, almost at a point where I’d be embarrassed to ask him to sign it, so I just had to buy a new one ) Not that we’ll meet any time soon ( Anyway, a fantastic read so far!

    Stephen keeps repeating that BI vendors should step up and start embedding best practices in their products: not just make proper visualisations theoretically possible, but actively enforce them.

    As a Cognos BI example, let’s look at a couple of charts.
    If you like the right one better (it’s “scientifically” better by data/junk ratio), there’s a setting to turn off those black box borders in bar charts in Report Studio, right here.
    Screen Shot 2013-09-19 at 5.49.10 PM
    But by default it’s on, and just tell me, how many times did you switch it off?

    Cognos TM1 10.2 released, Cognos BI 10.2.1 got FP1

    A short product update notice:

    Cognos TM1 10.2 is out

    TM1Tutorials were the first out with a list of new features, congrats!

    A whole bag of candy, notably:

    • Performance enhancements: given that TM1 in the cloud becomes real with this release, there must’ve been another major update to the locking model. I doubt you can keep server-wide locking operations and still call TM1 cloud-ready, so the changes must’ve been massive. New locks mean new contention bugs, yahoo!
    • TM1Web totally rewritten in Java: finally you don’t need Windows at all and can run every component on AIX/Linux. Go blue! Did I mention bugs already?
    • Contributor iPad app: for whatever reason this was the main thing I’d heard rumours about for at least a year
    • Data flow diagramming in Performance Modeler: looks like I can finally put tm1mn to rest. And Performance Modeler is becoming the main modelling tool; I’d guess we’ll all be reminiscing about Architect over beers in just a few years )


    Cognos BI 10.2.1 FP1 is out

    Just randomly found it on Fix Central. Am I the only one who wants the ability to subscribe to Fix Central updates instead of searching every now and then?

    The fix list is IBM’s usual “Page unavailable” right now, so I’m not really sure what’s changed. I’m sure we’ll know soon.

    I’m having issues with 10.1.1 DQM on TM1 reports after the 10.2.1 upgrade; hope this FP will alleviate them )

    Update: FP1 didn’t help, but now I at least have an idea of what’s happening with my reports:

    It’s a new 10.2.1 BI feature called LOLAP (I’m not joking, it’s really called Local OLAP, honestly), and I’ve already seen underfed cells causing report issues where there were none previously. And that’s not all of it: MDX parsing changed as well. Am I happy? Oh, I’m absolutely blown away: a new parser, DQM reports not working after the upgrade, data missing where it was before, it’s just like Christmas came early!

    Learning@Coursera: Introduction To Data Science

    Got my first Coursera certificate a while ago, hooray! I got it by mistake (more on that later), but it doesn’t matter, I won’t tell anyone, whoops )

    Introduction to Data Science was taught by Bill Howe from the University of Washington and lasted 8 weeks. As usual, I was very active for the first 5 weeks and dropped the ball later, when it came to the peer-assessed exercises. But it looks like they had some issues with the grading algorithm, so it decided that 100% over 5 weeks was enough for a certificate, which is fine by me.
    The course itself was quite interesting and I really appreciate the breadth of the coverage they tried to provide.
    The course outline can be roughly described as follows:
    * show a plethora of data sources that surround us now. This is where we did some basic sentiment analysis on live Twitter stream in Python
    * demonstrate techniques to capture and digest that data. This is where relational algebra was introduced and there were quite a few SQL exercises (matrix manipulation in SQL, some Celko for beginners stuff)
    * Map Reduce concepts with some exercises in Python again
    * the SQL / NoSQL debate and the key-value systems landscape, with some relational approaches re-used in exercises with HBase
    * Stats and Machine Learning with a Kaggle competition as an exercise
    * Visualisation with exercises in Tableau
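    The sentiment exercise, for instance, boiled down to something like this (a minimal sketch, not the actual course code; the word list and its scores are made up):

```python
# Score each tweet by summing per-word scores from an AFINN-style
# sentiment word list (real AFINN files are much bigger).
afinn = {"good": 3, "happy": 3, "bad": -3, "sad": -2, "awful": -3}

def tweet_sentiment(text):
    """Sum the sentiment scores of the known words in a tweet."""
    return sum(afinn.get(word.strip(".,!?"), 0) for word in text.lower().split())

tweets = ["Good morning, happy Monday!", "awful weather, so sad"]
print([tweet_sentiment(t) for t in tweets])  # → [6, -5]
```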
    There were a couple of optional assignments if the main track wasn’t enough:
    * a massive AWS exercise, with a 0.5 petabyte graph analysis on 20 servers. I surely did that one, it was yum. My inner geek was very happy to run 20-server jobs while teaching some Cognos workshops.
    * a real-world data science project. The idea was that organisations would describe their problems and students would undertake a 4-5 week project to help them. I looked over the forum a few times, but saw no interesting problems and a definite over-supply of eager students, so I skipped it.

    Overall, as you can see, it’s a pretty packed course. It also has a lot of prerequisites: I don’t think I would have gotten much out of the NoSQL part if I hadn’t been reading about the area for the last 5-6 years, and the same goes for MapReduce, relational algebra/SQL and ML. There was only a week for each of the topics, so it was more like “here are the main concepts distilled, off you go to the exercises”. Quite good for me, but I can understand the many complaints in the forums: you definitely had no chance to learn any of this from scratch.
    Since none of the topics were new to me, I adopted a quite funny approach to the course: I did the exercises first and listened to the lectures after (finished the last one yesterday ))) That actually saved me from the massive frustration everybody had with the automated Python submissions grader. For the Twitter assignments, you had to submit Python code that was run against a new set of tweets, and your script’s results (number of sad tweets, tweets from Ohio, etc) were compared to the correct ones. As it turns out, nobody expected there would be so many students (around 40k), so this auto-checker became the bottleneck and people waited hours and hours for their results. Everybody was obviously upset, and it was especially funny in a course about massively parallel systems and scalability )

    I wouldn’t recommend this course to anybody new to any of its areas (Python, SQL, MapReduce, data visualisation), but as a well-structured recap it was really helpful and interesting.

    Kudos to prof. Bill and his TAs for pulling this through.

    Cognos BI reporting on TM1

    We’re about 70% done with the project that has taken 110% of my time lately, and it’s all about using Cognos BI on top of TM1 (with a bit of DWH in Cognos Data Manager, but that’ll be a separate post). A good time to write down some notes on how to develop efficient reports with TM1 as a data source.

    There’s already quite a lot of material around, so I’ll add as many links as possible. Unfortunately, dimensional reporting isn’t covered that well in either the official documentation or the trainings, so I’ll try to fill some of the gaps.

    Reporting on top of TM1 is dimensional reporting, so the main approaches are the same for any OLAP data source (MDX has become something of a standard nowadays).

    It’s always quite hard to switch from relational to dimensional reporting, but once you do, relational (SQL) reports will look quite cumbersome compared to dimensional ones.

    Developing dimensional reports usually requires a few common approaches:

    MUN construction

    Member Unique Name (MUN) is the identifier Cognos uses to address any dimension element. It always contains a reference to the cube you’re using, the dimension and, most likely, the hierarchy and the element’s parent. Why do you need to know this?

    Because using Cognos macro functions (the ## notation), you can “construct” these MUNs at runtime. By far the most common example is the #prompt()# function, which returns the value of a given prompt control. Using it, you can create the MUN of the selected dimension element (selected Month, Product, Department, etc) directly, without any filtering in the query. This is the very essence of the dimensional approach: you explicitly select the slice of the cube you need instead of searching / filtering for it.
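    As a sketch (the cube and dimension names are made up, and the exact MUN shape depends on your model), a data item built from a month prompt could look like:

```
// hypothetical TM1 cube [Sales] with a [Months] dimension;
// pMonth is a prompt returning a plain element name such as 'Jan'
#'[Sales].[Months].[Months].[' + prompt('pMonth', 'token') + ']'#
```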

    MUN construction is thoroughly covered in the following posts: Paul wrote heaps on the topic, both on his blog and on cognoise, read it up )

    Note the substitute macro function: it allows you to avoid if-then-else statements in MUN construction.
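    For example, when a prompt returns a full MUN, substitute can patch part of it instead of branching (everything here is hypothetical, adjust to your own member names):

```
// rewrite the hierarchy part of the prompted MUN to point at the
// matching member in a 'Months YTD' hierarchy instead
#substitute('].[Months].[', '].[Months YTD].[', prompt('pMonth', 'memberuniquename'))#
```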

    Slicers instead of filters

    The same idea: “select a portion of the cube” instead of searching / filtering for it. You should always use slicers instead of detail / summary filters.

    Filtering should only be used in the form of the filter function, which selects the members of a set that satisfy a condition you define. Very useful for zero suppression or any other flag-based custom show/hide scenario. TM1 rules are pretty cool in this respect: I define quite tricky flags in the cube rules themselves and then just use filter(members, flag = 1).
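    A sketch of that pattern (cube, dimension and flag names are made up):

```
// keep only the products whose ShowFlag measure the cube rules set to 1
filter([Sales].[Products].[Products].members,
       [Sales].[Measures].[ShowFlag] = 1)
```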

    Paul wrote about detail filters vs slicers here and here.

    No string manipulations, only report expressions

    Very true for TM1 (especially due to this APAR): you can rarely do any string manipulation over OLAP data sources. But most of the time a report expression is quite enough to do the trick (cut a few characters out of a member caption, rename something).
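    For instance, a layout report expression on a crosstab label could trim the caption like this (the data item name and the caption format are assumptions):

```
// drop a 4-character technical prefix from the member caption
substring([Query1].[Month Caption], 5)
```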

    See Paul’s post.

    Use dimensional functions

    Children, members, parallelPeriod, periodsToDate and the rest are some of the best things ever and can make your life very easy; make sure you’ve at least glanced through the list.

    TM1 adds a quite painful twist: all TM1 dimension hierarchies are treated as unbalanced / parent-child by default, so no level-based function (periodsToDate, parallelPeriod, cousin, etc) will work unless you set up the }HierarchyProperties control cube to “level” the dimension. That’s doable for time dimensions, but unlikely to apply anywhere else: most TM1 dimensions are unbalanced.
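    Once a time dimension is levelled, a level-based call could look like this (names are hypothetical):

```
// same month of the previous year, assuming the [Time] hierarchy
// has a [Year] level defined via }HierarchyProperties
parallelPeriod([Sales].[Time].[Time].[Year], 1, currentMember([Sales].[Time]))
```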

    On the other hand, TM1 is very good at calculations, so you’ll most likely have all the YTD, MTD and other time-based calculations defined in the cube itself (if not, try adding them there: it’s easier and will run faster), so you just need to do your MUN construction properly to reference ‘July YTD’ when the user picks July as the month.
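    So the MUN construction from above just grows a suffix (again hypothetical names, assuming the cube holds elements like ‘Jul YTD’):

```
// point at the YTD element matching the selected month
#'[Sales].[Months].[Months].[' + prompt('pMonth', 'token') + ' YTD]'#
```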

    Use “advanced” dimensional functions / approaches

    I’ll  try to highlight some techniques I’m using to illustrate dimensional approach to report design.

    Say I have a requirement for a report-calculated measure that should be one formula (A) for actual months and something else (B) for forecast ones.

    If A and B are calculated data items, I use intersect with the current month to make sure I get proper results for each month, so the formula becomes something like:

    item(intersect(currentMember([Cube].[Months]), [actualMonths]), 0)

    currentMember returns the current month in the formula context;

    intersecting it with the [actualMonths] set returns the current month if it’s an actual one and nothing if not, zeroing out the results for forecast months;

    the item function converts a single-element set (the current month, for actual months) into a member that we can use in a tuple expression.

    If you add the B formula for forecast months as

    item(intersect(currentMember([Cube].[Months]), [forecastMonths]), 0)

    you can just sum them up and get the desired result.
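    Put together (a sketch; [Formula A], [Formula B] and the two month sets are hypothetical data items), the combined measure becomes:

```
// A is nulled out for forecast months and B for actual ones,
// so the sum yields A for actuals and B for forecasts
tuple([Formula A], item(intersect(currentMember([Cube].[Months]), [actualMonths]), 0))
+ tuple([Formula B], item(intersect(currentMember([Cube].[Months]), [forecastMonths]), 0))
```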


    This approach sometimes doesn’t work with Dynamic Query Mode over TM1 (see below for more DQM caveats), so another, even “funnier” approach is to construct a measure dimension member that always equals 1, intersect it with the required months set (so that it returns 1 for actual months and nothing for forecast ones) and multiply the required formula by this “flag” member.

    The formula looks like this:

    member(1, 'a', 'a', [Cube].[Cube Measure])

    The member function in this formula creates a new measure called ‘a’ whose value is the constant 1.
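    My reading of how it wires together (very much a sketch, all names hypothetical):

```
// 'a' is a constant 1 everywhere; pairing it with the actual-months
// intersection turns it into a 1-or-nothing flag for the multiplication
[Formula A]
* tuple(member(1, 'a', 'a', [Cube].[Cube Measure]),
        item(intersect(currentMember([Cube].[Months]), [actualMonths]), 0))
```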

    Some TM1-specific notes:

    A very good general do’s and don’ts list:

    Just a remark: you can combine 2 cubes in a list, but not in a crosstab or chart

    You can do a bit of TM1 attribute manipulation: display attributes, filter on them. But you can’t use an element’s attribute to get to another member (at least I couldn’t), and you can’t do any string manipulation with attributes.

    DQM vs CQM

    Another major question is whether to use new and fancy Dynamic Query Mode (DQM) or good old proven Compatible Query Mode (CQM).

    Pros of using DQM:

    • performance. Complex reports are about 20-30% faster. Small ones (<5s) perform roughly the same with or without DQM
    • caching. CQM doesn’t cache any results, whereas you can notice a bit of caching happening with DQM (second user opening the same report will get slightly faster results with DQM)
    • bright new future. DQM is the way to go and should get only better, so it’s more reliable to develop in DQM to avoid converting reports in future

    At some point in the project I converted about 85% of reports to DQM, leaving only the ones with the most complex formulas (that were fast enough anyway) on CQM.

    Cons are more numerous and really painful, mostly due to a way-way-way stricter parser (as of 10.1.1 FP1):

    • nested tuple expressions are impossible in DQM. CQM allowed you to tuple your tuples even more ), making expressions heaps neater and more readable (you could start with a base tuple formula and add layers of tuples on it to get proper filtering). DQM doesn’t allow that, so I had to unwrap a lot of complex formulas and “flatten” them into single monstrous expressions to achieve the same results as in CQM
    • if-then-else formulas returning MUNs can’t be used in tuple statements. A lot more tricky MUN construction is required to cater for this (the substitute macro function saves the day)
    • nested dimensional functions return unpredictable results. I had an expression like children(firstChild(member)) in a CQM report, and after converting to DQM it didn’t give any error, it just returned some completely messed-up items. Again, MUN construction lets you avoid nesting functions: I constructed that “firstChild” element by hand
    • general error handling. If something doesn’t work in DQM, the errors won’t tell you what (or you won’t even get an error, as above), which is really maddening.
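    To illustrate the first point, a layered tuple that CQM happily accepted has to be flattened for DQM (member names are just for illustration):

```
// CQM: nesting tuples was fine
tuple(tuple([Sales].[Measures].[Amount], [Sales].[Version].[Version].[Actual]),
      [Sales].[Months].[Months].[Jan])

// DQM: the same expression flattened into a single tuple
tuple([Sales].[Measures].[Amount], [Sales].[Version].[Version].[Actual],
      [Sales].[Months].[Months].[Jan])
```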


    Oook, that was a long post, but it will do as a starting point. I might return and clarify things as this project moves along.