
IBM Blueview

Cognos Analytics and all things IBM


What are Cognos Data Modules?

April 8, 2020 by Ryan Dolley

Cognos Data Modules are a web-based data acquisition, blending and modeling feature available in Cognos Analytics. They first hit the scene as part of Cognos 11 and are meant to supplement and eventually replace Framework Manager for both self-service and IT data modeling needs. I’ll pause for a second to let you long-time Cognoids hyperventilate a little… is everyone back? Good. Through this and subsequent posts I’ll try to dispel misconceptions about this awesome feature of Cognos while making you comfortable and – dare I say – excited to use them.

Data modules – the wave of the future

Data Module Features

Imagine a data modeling solution that has the following features:

  • Easy to install and manage
  • Join dozens or hundreds of tables across multiple databases
  • Execute cross-grain fact queries
  • Build simple or complex calculations and filters
  • Build alias, view, union and join virtual tables
  • Secure tables by groups, roles and data elements
  • Create OLAP-like dimensional hierarchies
  • Enterprise governance, auditability and security

‘Okay easy, I’m imagining Framework Manager,’ you’re thinking right now. Yes! But add in:

  • Natural-language and AI powered auto-modeling
  • Automatic join detection
  • Easy integration of Excel data
  • Automatic extraction of year, month, day from date data types
  • Automatic creation of relative time filters (YTD, MTD, etc.) and measures (YTD Actuals, MTD Actuals, etc.)
  • In-memory materialized views (data sets)
  • In-memory query cache
  • Direct access to members for relational sources!

‘Well that’s not Framework Manager… it must be Tableau, right!?’ No, in fact Tableau doesn’t offer even half of these capabilities. This is what every Cognos Analytics customer gets out-of-the-box in data modules today, with more features being added all the time.
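To ground the relative-time bullets above, here is a minimal hand-rolled sketch of what date splitting and a YTD measure involve. This is illustrative Python only, not Cognos internals – the point of data modules is that they generate the equivalent filters and measures for you automatically:

```python
# Conceptual sketch (not Cognos internals): "date splitting" and a
# YTD relative-time measure, done by hand for comparison.
from datetime import date

sales = [
    {"order_date": date(2020, 1, 15), "actuals": 100.0},
    {"order_date": date(2020, 3, 2),  "actuals": 250.0},
    {"order_date": date(2019, 3, 5),  "actuals": 80.0},
]

# "Date splitting": derive year / month / day from the date data type
for row in sales:
    d = row["order_date"]
    row["year"], row["month"], row["day"] = d.year, d.month, d.day

# A YTD filter relative to an "as of" date, plus the matching measure
as_of = date(2020, 3, 31)
year_start = date(as_of.year, 1, 1)
ytd_actuals = sum(r["actuals"] for r in sales
                  if year_start <= r["order_date"] <= as_of)
```

Writing one of these by hand is easy; writing dozens (MTD, QTD, prior-year equivalents, and the matching measures) is the tedium that data modules automate away.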

Who are Data Modules for?

Many of my longtime customers have the misconception that data modules are for ‘end users’ only and that real data modeling can only be accomplished in Framework Manager. Conversely my new customers have built entire BI practices while having no idea what Framework Manager is. Clearly something is out of sync here, so let me make it very clear: Who are data modules for? If you’re reading this, the answer is you.

The Business User

The line between ‘end users’ and the BI team has gotten fuzzy in the last few years as increasingly complex models are built by people outside the IT department. Data modules are ideal for someone who wants to quickly and easily combine enterprise data with departmental data or Excel spreadsheets and cannot wait for IT to build an FM package or SSAS cube. The interface is clean and easy to use, and the ease of creating custom groups and building relative time calcs makes data modules an ideal place to combine data – even easier than Excel in many cases. As an added bonus, it’s very simple for the IT team to take a ‘self-service’ data module and incorporate it into enterprise reporting without significant development work.

The Cognos Pro

Many Cognos pros kicked the tires in 2016 and could only see the yawning chasm of functionality that separated data modules from Framework Manager – myself included. For years I encouraged my clients to consider them for niche applications but to rely on FM for anything important or difficult. No longer! As of the 11.1 release, data modules have reached feature parity with Framework Manager in almost all respects and have even surpassed FM in important modeling automation tasks like relative time. It is no longer the obvious choice to default to Framework Manager for new Cognos development.

Data Modules vs Framework Manager

Given the enhancements to data modules, which should you choose? As of the 11.1 release my recommendation is to do all new development in data modules for the following reasons:

  • Significantly easier and faster to create
  • Great features like relative time, date column splitting, grouping
  • Target of all future development
  • Unlock modern BI workflow

These points are explored in detail here – for now I’ll leave you with a final thought. My new clients use the same ol’ Cognos to deliver with the speed and scale you’d expect from Tableau or Power BI – my friend Vijay can tell you all about it. The key differentiator between them and legacy Cognos installations with orders of magnitude more resources is the embrace of data modules and the iterative, build-it-in-prod approach to BI delivery that data modules enable.


  • Cognos Union Queries in Reports
  • Cognos Relative Dates in 11.2
  • The 2021 Gartner BI Magic Quadrant is Broken for Cognos Analytics
  • Data Modeling for Success: BACon 2020
  • Cognos Analytics 11.1.6 What’s New

What Are Cognos Data Sets?

April 7, 2020 by Ryan Dolley

I’ve explored Data Modules in depth on this blog over the last year with the hope of showing you how awesome data modeling in Cognos Analytics can be if you really embrace it. There is, however, an additional piece of the Cognos data puzzle that you need to understand to unlock the full potential of the platform – the Data Set. So let’s answer the question – just what are Cognos Data Sets?


This video introduction to Data Sets covers everything you need to know!

The IBM Blueview Data Set Series

What are Cognos Data Sets?
When to use Cognos Data Sets


What is a Data Set in Cognos?

Data Sets offer an in-memory data processing option for Cognos Analytics

Simply put, a Data Set is a data source type in Cognos Analytics that contains data extracted from one or more sources and stored within the Cognos system itself as an Apache Parquet file. The Parquet file is then loaded into application server memory at run time on an as-needed basis. This (usually) greatly enhances interactive performance for end users while reducing load on source databases. When combined with Data Modules, Data Sets offer incredible out-of-the-box capabilities like automatic relative time, easy data prep and custom table creation.
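The core idea is cache-the-extract: query the source once, persist the result, and serve every later read from the copy. The sketch below illustrates only that idea – Cognos actually stores the extract as a Parquet file, while a plain Python list stands in for it here so the example stays dependency-free:

```python
# Illustrative sketch only: a Data Set behaves like a cached extract.
# The source is hit once on refresh; end-user reads never touch it.
import sqlite3

source = sqlite3.connect(":memory:")          # stand-in for a real database
source.execute("CREATE TABLE sales (region TEXT, amount REAL)")
source.executemany("INSERT INTO sales VALUES (?, ?)",
                   [("East", 100.0), ("West", 250.0)])

source_queries = 0

def refresh_data_set():
    """Hit the source database once and snapshot the result."""
    global source_queries
    source_queries += 1
    return source.execute("SELECT region, amount FROM sales").fetchall()

data_set = refresh_data_set()                 # the scheduled "load"

# End-user reads are served from the in-memory extract, not the source
total = sum(amount for _, amount in data_set)
by_region = {region: amount for region, amount in data_set}
```

However many users query the result, the source system was touched exactly once – that is the load-reduction benefit described above.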

Data Sets are also extremely easy to build from your existing Framework Manager or Transformer packages making them an excellent option for getting the most out of your legacy Cognos 10 models. In fact this is probably the #1 use case for the Data Set technology and is the absolute fastest way to modernize your environment and turn Cognos into a rapid-fire data prep and visualization machine.

I’m going to write a full blog post about the exact situations that suggest a Data Set solution, but in short you should consider using Data Sets whenever:

  • Excellent interactive performance is a critical part of your deliverable
  • You wish to limit extremely costly SQL queries by re-using results
  • You must join multiple data sources together or accomplish other ETL tasks within Cognos rather than source systems
  • Existing Framework Manager or Transformer models are too complex or too slow for self-service
  • Someone tells you Cognos is slow but Tableau or Power BI are fast (those tools use Data Set-like technologies to enhance interactive performance)
  • You just want to do something really cool

Which Features Can Use a Data Set?

There is one small limitation to Data Sets – while they function as a data source for most Cognos Analytics features, they cannot be used directly to author reports. The solution is simple – wrap them in a Data Module and import the Data Module into Report Authoring. You should be doing this anyway for all Data Sets as it provides maximum deployment flexibility and ease of upkeep. I will cover best practice topics like this in a future article.

How to Build a Data Set

The ‘Create data set’ capability is hidden among model options

Building a Data Set is simple, especially if you have existing Framework Manager or Transformer models available in Cognos. In fact, Data Sets can only be built on top of existing models or Data Modules – not directly on data servers. IBM has helpfully hidden the ‘Create data set’ capability in the ‘more’ menu of model objects in the environment, so it’s surprisingly easy to miss.

Cognos Data Set Creation

Creating a Data Set is a straightforward process, especially for experienced Cognoids. The UI is actually a re-skinned version of Report Authoring and many of your favorite tricks will work here. Building a Data Set is as simple as dragging columns into the list object, saving and loading data. Of course there are additional options you can take advantage of.

The Cognos Analytics Data Set creation screen shares many features with the Report Authoring interface
  1. Source View: Browse the tables and fields in your data source exactly as you would in Report Authoring
  2. Data List: The data table shows a live view of the Data Set as you build it. It queries new data as you make changes
  3. On Demand Toolbar: The on demand toolbar appears when you click on a column, giving you the ability to filter and sort.
    1. Filtering: Filters help you focus the data in your Data Set to just what you need. Fewer rows = better performance.
    2. Sorting: Sorting by the columns most used in report or dashboard filters (for example, time data) can greatly improve performance
  4. Query Item Definition: The query item appears when you double click a column header. You have access to query item functionality from Report Authoring, which means you can really accomplish a lot from this popup.
  5. Preview: Unchecking the preview button switches the data table into preview mode, which turns off the automatic data query that otherwise runs as you make adjustments to your Data Set.
  6. Summarize and Row Suppression: The summarize function rolls your data up to the highest level of granularity, for example rolling daily data up to the month. Row suppression was a mystery to me until Jason Tavoularis at IBM kindly explained it – row suppression in Data Sets only applies to dimensional data sources and does the same thing as using row suppression in Report Authoring.
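For the ‘summarize’ option in item 6, a tiny illustrative sketch (my own, not Cognos code): daily rows roll up to the monthly grain, and the resulting extract carries far fewer rows, which is exactly why summarized Data Sets perform better.

```python
# Hand-rolled version of the 'summarize' idea: aggregate daily rows to
# the grain of the columns you kept (here, year and month).
from collections import defaultdict
from datetime import date

daily = [
    (date(2020, 4, 1), 10.0),
    (date(2020, 4, 2), 15.0),
    (date(2020, 5, 1), 20.0),
]

monthly = defaultdict(float)
for day, amount in daily:
    monthly[(day.year, day.month)] += amount   # 3 daily rows -> 2 monthly rows
```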

Once you’ve imported your desired data, set your filters, sorts and summaries and maybe added a few calculations for good measure it’s time to save, load and deploy your Data Set.

Saving and Loading a Data Set

Data Sets must be saved and loaded to be available

When you save a Data Set you will see the option to ‘Save and load data.’ This will allow you to select a directory in Cognos to house the Data Set object. It will also issue one or more queries to retrieve data and populate a Parquet file. This file is stored in Cognos and loaded into memory upon request when users access the Data Set. Check out the ‘Flint’ section of this in-depth article to understand what happens under the hood during Data Set creation and query.

Scheduling and Managing Data Sets

Data Sets only contain data from their last load; it is good practice to get in the habit of scheduling and monitoring Data Sets to ensure they contain relevant data and continue to perform well.

Data Set Scheduling Options

Data Sets have the same scheduling options as reports

The easiest way to schedule Data Sets is via the ‘schedules’ tab in Data Set properties. Data Sets and Reports share all the same scheduling options, including the ‘by trigger’ option. Scheduling via a trigger makes it easy to ensure Data Sets only load after your ETLs complete. This works great for simple or one-off scheduling tasks.

For more complex schedules, Data Sets are available in the Job feature. Again, they function as if they were reports as far as building Jobs is concerned.

Data Set Management

Manage Data Sets using their advanced properties

The Data Set properties screen contains the info you need to effectively maintain fresh and performant data for your end users. At the top of the window you can see the last load date of the Data Set, while expanding the ‘Advanced’ section exposes the following:

  • Size: The compressed size of the parquet file on disk
  • Number of rows: The number of rows in your data set. Keep this under ~8 million for best performance
  • Number of columns: The number of columns in your data set. No hard limit here, just don’t include columns you don’t need
  • Time to refresh: The time it takes for the Data Set to load
  • Refreshed by: The name of the person who last refreshed the data set

I will write a longer post about Data Set tuning and troubleshooting. For now it’s key to keep in mind the row and column suggestions above. And while ‘Time to refresh’ is important, this represents the time it takes to load data and has no impact on the performance end users will experience. The beauty of Cognos Data Sets is that by front-loading the processing, you can create a complex result set that takes hours to load but offers sub-second response time to end users.

A Real World Example of Data Sets in Action

I have used Data Sets in many successful client engagements to greatly improve performance, simplify presentation or accomplish ETL tasks in an afternoon that their DW team had put off for years. Here is a simple example for you.

The Problem: Metrics, metrics everywhere!

This customer came to us with a very, very common problem. The sales support team had identified a need for some new advanced metrics and built out a prototype dashboard. However, the underlying data was divided between two Microsoft SSAS cubes and a handful of tables in the EDW. The data warehouse team had estimated many months to create the necessary tables and cubes.

The Solution: Cognos Analytics Data Sets

The customer brought in PMsquare on a 40 hour contract to make this happen. If your initial reaction to that contract length is skepticism I don’t blame you. In Cognos 10 this would have been impossible. However thanks to Data Sets I was able to do the following:

  • Extract the needed data from each SSAS cube and the EDW into a Data Set. There were 3 Data Sets total, one from each data source.
  • Join the Data Sets together into a Data Module and add in all the Data Module goodies like relative time
  • Create a new, final polished Data Set from that Data Module to simplify presentation and improve performance
  • Build out the customer’s dashboard

The customer was extremely satisfied with the end result, which looked something like this:

A cavalcade of awesomeness awaits you with Data Sets

Cognos Analytics Data Sets in Summary

As you can see, I really was able to accomplish months of work in a single week using Data Sets. Obviously this technology cannot replace all ETL tasks; however, Cognos Analytics is now an option for low to medium complexity transformations. And you now have a slam-dunk option for rapidly simplifying presentation or improving performance versus even the simplest database view.

Be sure to check back next Tuesday, 4/14/2020 for part two of this series: When To Use A Data Set!



The Gartner Magic Quadrant is Worthless: Cognos Edition

February 19, 2020 by Ryan Dolley

Be sure to check out an updated version of this blog post for the 2021 Magic Quadrant here!

Another February brings another edition of everyone’s favorite yearly head scratcher – the Gartner Magic Quadrant for Business Intelligence! I have expressed some strong opinions on the worth of the Magic Quadrant as a tool for decision makers in the past and this year will be no different. As always I strongly urge you to read the report rather than just rely on the picture as (once again) vendor positioning on the scatter plot feels extremely disconnected from the analysis contained below it. So let’s take a look at the 2020 Magic Quadrant as it relates to Cognos Analytics.

IBM’s position and Gartner’s written analysis are out of sync

A big change to the Magic Quadrant this year is the return of enterprise reporting as a key differentiator for what Gartner is now calling ‘ABI platforms.’ The second differentiator is ‘augmented analytics’ – integrated ML and AI-assisted data prep and insight generation. Gartner is now calling visualization capabilities a commodity. What’s old is new again.

The return of enterprise reporting

This should be great news for Cognos Analytics! Cognos is a recognized leader in enterprise reporting. In fact, Cognos’ reliance on enterprise reporting was the raison d’être for knocking it out of the leaders quadrant to begin with.

It’s extremely curious, then, that IBM’s positioning on this quadrant was not more markedly improved. It’s even more curious that Gartner writes of enterprise reporting, ‘At present, these needs are commonly met by older BI products from vendors like…IBM (Cognos, pre-version 11)’. It’s almost as if Gartner is unaware that Cognos 11 meets the same enterprise reporting needs as previous versions. At the very least they seem unwilling to give IBM credit for it on the chart. The write-up tells a different story.

Augmented analytics gain steam

The second differentiator on the Magic Quadrant is also good news for Cognos. The platform’s augmented analytics capabilities have seen tremendous investment in the 11.1 release stream and are legitimately ahead of most vendors I have hands-on experience with (Power BI, Tableau, Domo and Incorta being the primary ones). Observe:

  • Automated ML driven forecasts
  • Chatbot for NLQ and visualization creation
  • An entire AI driven augmented analytics interface
  • AI driven data prep
  • Integrated Jupyter notebooks that write to and read from Cognos data

That’s a lot. If you want a comprehensive set of powerful, modern augmented analytics capabilities Cognos is a great choice.

The fact is that Cognos’ strengths line up perfectly with Gartner’s 2020 market differentiators, while its only ‘weakness’ – self-service visualization – is now considered a commodity. Again, why is Cognos so poorly represented in the MQ image, and does the actual analysis tell a different story?

What does Gartner say about Cognos Analytics?

This write-up is a lot rosier for IBM than the dismal MQ image suggests. I’ve summarized Gartner’s written analysis of IBM for you below:

Strengths

  • Cognos is one of the few offerings that offers all critical capabilities and differentiators in a single platform
  • IBM’s roadmap includes AI driven data prep, social media analytics and a long term goal of unifying self-service, enterprise reporting and planning (think Planning Analytics) in a single platform
  • Cognos can be deployed on-prem or in any cloud, unlike many other vendors

These significant strengths seem totally disconnected from where they have IBM placed on the quadrant. If enterprise reporting and augmented analytics are key differentiators between ABI platforms and Cognos is one of the only offerings that has it in a unified platform, how are they not better represented on the completeness of vision axis? Baffling!

Cautions

  • It is not often the sole enterprise standard
  • We think it costs more than other vendors
  • People don’t call us as much as they used to about Cognos

That last point is the key to unlocking the reality of how the Gartner Magic Quadrant for business intelligence really works. Let’s see why.

Gartner is a self-driven feedback loop

A huge component of ranking on the Gartner Magic Quadrant for Business Intelligence is straight up how often prospective customers call them about various tools. They don’t call asking about Cognos very much, ergo Cognos has a poor ranking. Don’t believe me? Look at my analysis of their MQ for planning platforms to see how survey scores seemed to have no impact on their ranking of Oracle as the market leader – Oracle’s survey scores were horrible!

Ask yourself, would you call Gartner to discuss a BI tool they rank in the bottom third of vendors? You wouldn’t. You call Gartner to talk about Microsoft, Tableau, Qlik and (bafflingly) ThoughtSpot. Otherwise you call someone else. As long as this remains a major criterion for ranking, Gartner will remain a market-distorting self-feedback mechanism.

By this same logic, Cognos is the world’s #1 business intelligence tool in the Ryan Dolley Magic Quadrant as it represents 90% of my calls!

Why the 2020 Magic Quadrant should make you feel good about IBM Cognos Analytics

The BI market is shifting once again. Visualization is a commodity, enterprise reporting is king and augmented analytics is on the rise. As I’ve outlined above, IBM Cognos Analytics’ feature set is extremely well positioned to thrive in the landscape Gartner describes, whether or not they recognize it. There simply is no platform that offers the total package of mode 1, mode 2 and augmented analytics like Cognos.

Want to further the conversation? Connect with me on LinkedIn and check out PMsquare’s website for help getting the most out of Cognos.



The 3 Cognos Query Modes

January 29, 2020 by Ryan Dolley

This is actually a sequel to one of the very first blog posts I ever wrote for Blueview. While a lot has changed since 2015, understanding the difference between Compatible Query Mode and Dynamic Query Mode is still crucial. With the addition of data sets running on the compute service (aka ‘Flint’) things are even more complicated. My goal for this article is to run back the clock, peel back the onion and give you a historical, technical and practical understanding of the 3 Cognos query modes. Buckle up folks, this is going to get wild.

There are many query engines hidden beneath the hood of your 2020 model Cognos

Compatible Query Mode

Compatible Query Mode is the query mode introduced in ReportNet (AKA, my junior year of college…). It is a 32-bit C++ query engine that runs on the Cognos application server as part of the BIBusTKServerMain process. CQM was the default query mode for new models created in Framework Manager up to Cognos 10.2.x, after which Dynamic Query Mode became the default. The majority of FM models I encounter were built in CQM and thus the majority of queries processed by Cognos are CQM. It remains a workhorse.

CQM resides within the Report Service

It is, however, an aging workhorse. Query speed is hampered by the limitations of 32-bit processes, particularly as it relates to RAM utilization. CQM does have a query cache but it runs on a per session, per user basis and in my experience causes more problems than it’s worth. Furthermore, Cognos 11 features either don’t work with CQM (data modules) or must simulate DQM when using CQM-based models (dashboards). This almost always works but of course fails whenever you need it most…

CQM works just fine and moving to DQM is not urgent; however, I strongly advise you to do all new Framework Manager modeling in DQM (or even better, build data modules) and start seriously considering what a migration might look like.

Dynamic Query Mode and the Query Service

Dynamic Query Mode is the query mode introduced in Cognos 10.1. It is a 64-bit Java query engine that runs as one or many java.exe processes on the Cognos application server and is managed by the query service. The terms ‘DQM’, ‘query service’ and ‘XQE’ all essentially refer to this Java process. All native Cognos Analytics features utilize DQM only – CQM queries execute in simulated DQM as mentioned above. You can see the criteria necessary for this to work here. DQM is both very powerful and very controversial among long-time Cognoids. Let’s take a look at why.

DQM features dramatically improved query performance

What’s great about DQM?

DQM has a ton going for it. As a 64-bit process it can handle vastly greater amounts of data before dumping to disk. If configured and modeled properly, it features a shared in-memory data and member cache that dramatically improves interactive query performance for all users on the Cognos platform. It even filters cached query results by applying your security rules at run time.
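A toy sketch of that shared-cache idea follows. This is my illustration of the concept, not IBM’s implementation: one cached result set serves every user, and each user’s row-level security is applied at read time – in contrast to CQM’s per-session, per-user cache.

```python
# Conceptual sketch of a shared query cache with run-time security
# filtering (the DQM idea), not actual Cognos internals.
query_cache = {}

def run_query(sql, fetch):
    if sql not in query_cache:          # first user pays the query cost...
        query_cache[sql] = fetch(sql)
    return query_cache[sql]             # ...everyone else reads memory

def rows_for_user(sql, fetch, allowed_regions):
    rows = run_query(sql, fetch)
    # Security rules applied when reading the shared cache, per user
    return [r for r in rows if r["region"] in allowed_regions]

fetch_calls = []
def fake_fetch(sql):                    # stand-in for a database round trip
    fetch_calls.append(sql)
    return [{"region": "East", "amount": 100.0},
            {"region": "West", "amount": 250.0}]

east = rows_for_user("SELECT ...", fake_fetch, {"East"})
west = rows_for_user("SELECT ...", fake_fetch, {"West"})
```

Two users with different security see different rows, yet the database was queried only once – that sharing is what a per-user cache like CQM’s can never give you.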

DQM is tuned via Cognos administration and by a large number of governors in Framework manager to optimize join execution, aggregation and sorting. It handles extremely large data volumes, especially when combined with the basically defunct Dynamic Cubes feature. It even combines cached results with live SQL executed against a database on the fly. On its own. Like you don’t have to tell it to do that, it just does. Magic!

What’s not great about DQM?

Unfortunately, despite the list of excellent attributes above, DQM has some problems. It is very complex to understand, manage and tune, and it requires DMR models to fully utilize all the caching features – consider that the DQM Redbook produced by IBM is 106 pages. A standalone tool called Query Analyzer exists solely to help you understand what the heck DQM is even doing as it plans and executes queries.

Migrating from CQM to DQM is often a complex project to evaluate and execute. I once gave a customer an LOE (level of effort) estimate of 8–32 weeks to complete a migration project. I have seen migrations take almost a year. I’ve seen things you people wouldn’t believe…

The purpose of this blog is not to push professional services but this is one instance where I think you really should contact PMsquare for help. But let’s say you have a ton of CQM models and don’t have the time to migrate them all. Is there a shortcut to high performance on large(ish) data volumes? Why yes, yes there is.

Data Sets and the Compute Service (aka ‘Flint’)

Data sets are an in-memory data processing option first introduced in Cognos 11.0 and greatly enhanced in 11.1. Cognos 11.1 data sets run on the compute service, aka ‘Flint’. The compute service is a 64-bit Spark SQL process that is created and managed by the same query service that manages DQM, so it’s not really an independent Cognos query mode. I will write a more in-depth article about data sets and Flint in the future, but let’s take a super quick look at how they work before we get into why they are amazing.

The compute service is a modern in-memory compute engine

How do data sets and the compute service work?

Data sets are not live connections to the underlying data like CQM or DQM – rather, they are a data extraction that is stored in a parquet file and loaded into the Cognos application server memory when needed for query processing. It works like this:

  • An end user creates a data set from an existing package, cube or data module OR uploads an excel file (the process is the same!)
  • Cognos fetches the necessary data and loads it into an Apache parquet file
  • The parquet file persists in the content store and is available to all application servers
  • When the query service on an application server requires a data set for query processing, it first checks to see if it has a local and up-to-date copy of the parquet file
  • If not, it fetches one
  • In either case, the parquet file is then loaded into the memory of the application server
  • Data is processed by the compute service using Spark SQL and results are returned to the query service
  • The query service receives results from the compute service and may perform additional processing if necessary
  • The results are then passed to the report service or batch report service for presentation
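The caching steps in the middle of that flow can be sketched as follows. Names and structures are illustrative only, not Cognos internals: each application server keeps a local copy of the parquet file, refreshes it from the content store only when stale, and then serves queries from memory.

```python
# Illustrative sketch of the per-server parquet caching logic described
# above. content_store and local_copies are hypothetical stand-ins.
content_store = {"sales.parquet": {"version": 3, "rows": [("East", 100.0)]}}
local_copies = {}   # this application server's cache of parquet files

def load_data_set(name):
    latest = content_store[name]["version"]
    cached = local_copies.get(name)
    if cached is None or cached["version"] < latest:    # missing or stale?
        local_copies[name] = dict(content_store[name])  # fetch from store
    return local_copies[name]["rows"]                   # now in memory

rows = load_data_set("sales.parquet")
rows_again = load_data_set("sales.parquet")   # second call: no re-fetch
```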

What makes data sets great?

They’re easy to build, easy to join and manipulate in data modules, easy to schedule and the performance is great. Once loaded into memory a data set is shared between users on the same application server. I have done multiple projects where I accomplish weeks or even months of ETL by getting fancy with data sets and data modules. No wonder they are my favorite of the Cognos query modes.

What’s even better is how data sets provide a radically shorter path to high performance, DQM and Spark based queries for your existing CQM models without having to commit to a full conversion. You simply use a CQM FM package as the basis for a data set, then utilize that data set as a source in a data module. Once complete, you’ve unlocked the full set of incredible data module and dashboard capabilities like forecasting without having to do an 8 to 32 week project.

Which Cognos Query Mode is right for me?

Okay that was a ton of data, some of it pretty technical. Which of the Cognos query modes should you choose and how do you learn more?

TLDR

  • Immediately cease all development of new Framework Manager models using CQM
  • Consider migrating existing CQM Framework Manager models to DQM models or to data modules (PMsquare can help with this)
  • Data sets are your ‘get out of CQM free’ card; they vastly improve the performance of most CQM queries and simplify presentation for end users

References

  • Dynamic Query Mode Redbook
  • Cognos DQM vs CQM explainer
  • Queries on uploaded files and data sets
  • Configuring the Compute service


What being an IBM Champion means to me

January 16, 2020 by Ryan Dolley

I received some great news on Tuesday from IBM – I have been selected as an IBM Champion again in 2020! And not only that, but my colleagues Paul Mendelson, Sonya Fournier and Mike DeGeus were selected as well. This brings our total IBM Champion count at PMsquare to 4! I’m extremely proud of our excellent team. We hire only the absolute best Cognos resources and IBM recognizes it. This post isn’t just for back slapping though – let’s discuss what being an IBM Champion means to me.

What is an IBM Champion?

The IBM Champion Logo

Directly from the IBM website:

IBM Champions demonstrate both expertise in and extraordinary support and advocacy for IBM technology, communities, and solutions.

The IBM Champion program recognizes these innovative thought leaders in the technical community and rewards these contributions by amplifying their voice and increasing their sphere of influence. IBM Champions are enthusiasts and advocates: IT professionals, business leaders, developers, executives, educators, and influencers who support and mentor others to help them get the most out of IBM software, solutions, and services.

The program runs on a yearly cycle and members must re-apply each year.

What does it mean to me?

Being an IBM Champion means more than just a pat on the back and some SWAG. First and foremost, it is a recognition of the work I do to feed the Cognos community with this blog, by moderating the Cognos subreddit and by contributing to the IBM Analytics community site. Building our community is incredibly important to me and I am so very grateful for the feedback I get from you and from IBM. It keeps me going.

The program is also about being part of a huge community of IBM advocates. The Champions Slack is extremely active, extremely helpful and features the top experts on all IBM technologies in the world. The camaraderie in the program is quite amazing and I’ve made friends from all over the world. Meeting fellow champs is the most rewarding part of the program.

Finally, it’s about promoting your professional passion and working with IBM to improve their offerings. Champions receive next level access to the product teams for the respective technologies. If you’ve ever wondered how I know so much about the direction of the product this is a big part of it.

And SWAG, don’t forget the SWAG…


Relative Time in Framework Manager

December 31, 2019 by Ryan Dolley 6 Comments

Kamil asks an excellent question about relative time in Framework Manager in response to my Framework Manager vs Data Modules article:

Great article. I have one question, is it possible to use relative dates with packages from framework manager?

– Kamil

Like all questions in Cognos, the short answer is ‘no’ and the long answer is ‘yes!’ Let’s take a quick dive into relative time in Framework Manager.

Relative time in FM? No!

The enrich package screen of Cognos
The Enrich Package interface

There is no ‘easy button’ for using the new relative time functionality with Framework Manager, unfortunately. I was briefly hopeful that this is possible using the enrich package functionality but it’s not there.

Enrich package is an important piece of the Cognos pie, so it’s worth talking about briefly. Enriching a package provides the context that allows Dashboards, Explore and the AI Assistant to do their thing. Most notably, enriching a package will allow Cognos to properly display time and geographic data types and will collect the sample data that allows the AI Assistant and Explore to function properly. Enriching a package taxes your system, so consider restricting it to specific query subjects or running it off-hours.

If easy relative time does come to FM, this is where I’d expect it to go. It’s worth reiterating that FM itself will receive no changes going forward, so it’s time to start making the move to data modules. It’s easier than you think!

Relative time in FM? Yes!

Here’s where things get a little trickier and using relative time with your FM model becomes possible. To make this work we’re going to need to use Data Modules (https://ibmblueview.com/what-are-cognos-data-modules/), custom tables and the lookup reference feature.

Step 1: Add a package source to a data module

Adding a package as a source to data modules
A package has been added as a data source to this data module
  1. Click the ‘new’ icon and select data module
  2. Navigate to your package in the folder structure, click on it and click ‘OK’
  3. The data module screen will open with the package visible in the data tray

Step 2: Create a custom time table

Building a custom table in Cognos
Building a custom time table from a package
  1. Click the ‘Custom tables’ tab and click ‘Create custom table’
  2. Click ‘select tables’ and click the package source. All the tables in the package are displayed on the left.
  3. Click ‘create a view of tables’ and click ‘Next’
  4. Don’t forget to give your custom table a new name!
  5. Click ‘invert’ then select only the table with which you want to use relative time
  6. Click finish. Your custom time table will appear in the data tray

Step 3: Add relative time functionality

Adding relative time to the custom table in Cognos
Relative time can be added to your custom time table
  1. Click the ‘Add sources and tables’ button and select ‘Add more sources’
  2. Navigate to the ‘Calendars’ folder in the samples and select the ‘Fiscal calendar’ data module
  3. Click ‘OK’. The FiscalCalendar table will appear in the data tray, hidden by default
  4. Expand your custom time table, click the date you wish to use for relative time and click the ‘properties’ button in the upper right. The properties window will open.
  5. In the properties window, select ‘FiscalCalendar’ in the ‘Lookup reference’ drop down menu.
  6. You now have relative time functionality in your data module!
  7. Rinse and repeat for any additional time or measure fields that require this functionality
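Once the lookup reference is in place, Cognos can generate relative-time filters and measures (YTD, MTD, and so on) against your dates automatically. Conceptually, a ‘YTD Actuals’ measure just sums the fact rows from January 1 of the reference date’s year through the reference date. A minimal Python sketch of that logic (the data and function names are invented for illustration; this is not Cognos code):

```python
from datetime import date

# Hypothetical fact rows: (order_date, actuals) pairs
facts = [
    (date(2019, 1, 15), 100.0),
    (date(2019, 6, 30), 250.0),
    (date(2019, 12, 31), 75.0),
    (date(2018, 11, 2), 500.0),
]

def ytd_actuals(rows, as_of):
    """Sum the measure from Jan 1 of as_of's year through as_of, inclusive."""
    start = date(as_of.year, 1, 1)
    return sum(amount for d, amount in rows if start <= d <= as_of)

print(ytd_actuals(facts, date(2019, 12, 31)))  # 425.0
```

The FiscalCalendar lookup is what tells Cognos which year, quarter and period boundaries to use when it builds these filters for you.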

Step 4: Join the custom time table to the package

Joining a custom table to a framework manager package
The custom time table can now be joined to the package
  1. Click your custom time table and choose ‘New… Relationship’ in the pop-up menu
  2. Select the appropriate table to relate the custom time table to the rest of the package. Oftentimes this is a fact table.
  3. Select the appropriate field(s), cardinality and join type for the join.
  4. Click ‘OK’
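The relationship created in Step 4 works like any lookup join: each fact row’s date key matches one row in the time table, which carries the fiscal attributes along. A rough Python sketch of that join (table and column names are made up for illustration; Cognos generates the real join from the relationship you defined):

```python
# Calendar lookup keyed by date, like the custom time table
calendar = {
    "2019-06-30": {"fiscal_year": "FY2019", "fiscal_quarter": "Q2"},
    "2019-12-31": {"fiscal_year": "FY2019", "fiscal_quarter": "Q4"},
}

# Fact rows from the package, e.g. a sales fact table
sales = [
    {"order_date": "2019-06-30", "revenue": 250.0},
    {"order_date": "2019-12-31", "revenue": 75.0},
]

# n..1 join: every fact row picks up its calendar attributes
joined = [{**row, **calendar[row["order_date"]]} for row in sales]
print(joined[0]["fiscal_quarter"])  # Q2
```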

There you have it! Relative time in Framework Manager (sort of)

At this point you can save and use your data module, which is made of your pre-existing package plus one or more custom tables. This doesn’t solve the obvious problem that your existing content references the package and not the new data module, but new content can be built off this module. The module will even inherit changes made to the package. And there you have it – relative time in Framework Manager… sort of!


