This is actually a sequel to one of the very first blog posts I ever wrote for blueview. While a lot has changed since 2015, understanding the difference between Compatible Query Mode and Dynamic Query Mode is still crucial, and with the addition of data sets running on the compute service (aka ‘Flint’), things are even more complicated. My goal for this article is to turn back the clock, peel back the onion, and give you a historical, technical, and practical understanding of the three Cognos query modes. Buckle up folks, this is going to get wild.

Compatible Query Mode
Compatible Query Mode is the query mode introduced in ReportNet (AKA, my junior year of college…). It is a 32-bit C++ query engine that runs on the Cognos application server as part of the BIBusTKServerMain process. CQM was the default query mode for new models created in Framework Manager up to Cognos 10.2.x, after which Dynamic Query Mode became the default. The majority of FM models I encounter were built in CQM, and thus the majority of queries Cognos processes are CQM queries. It remains a workhorse.

It is, however, an aging workhorse. Query speed is hampered by the limitations of 32-bit processes, particularly RAM utilization: a 32-bit process can address at most 4 GB of memory, and usually less in practice. CQM does have a query cache, but it runs on a per-session, per-user basis and in my experience causes more problems than it’s worth. Furthermore, Cognos 11 features either don’t work with CQM at all (data modules) or must simulate DQM when running against CQM-based models (dashboards). The simulation almost always works but of course fails whenever you need it most…
CQM works just fine and moving to DQM is not urgent; however, I strongly advise you to do all new Framework Manager modeling in DQM (or even better, build data modules) and start seriously considering what a migration might look like.
Dynamic Query Mode and the Query Service
Dynamic Query Mode is the query mode introduced in Cognos 10.1. It is a 64-bit Java query engine that runs as one or more java.exe processes on the Cognos application server and is managed by the query service. The terms ‘DQM’, ‘query service’, and ‘XQE’ all essentially refer to this Java process. All native Cognos Analytics features use DQM only – CQM queries execute in simulated DQM as mentioned above (you can see the criteria necessary for this to work here). DQM is both very powerful and very controversial among long-time Cognoids. Let’s take a look at why.

What’s great about DQM?
DQM has a ton going for it. As a 64-bit process it can handle vastly greater amounts of data before dumping to disk. If configured and modeled properly, it features a shared in-memory data and member cache that dramatically improves interactive query performance for all users on the Cognos platform. It even filters cached query results by applying your security rules at run time – see the sketch below for the general idea.
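To make that shared, security-aware cache concrete, here is a toy Python sketch of the idea. To be clear, this is not how DQM is actually implemented – the cache structure, the security rules, and every name here are hypothetical – it just illustrates how a single cached result set can serve many users with row-level security applied at read time.

```python
# Toy illustration only -- not DQM's actual implementation.
# One shared cache entry per query; per-user security filters are
# applied at read time, so all users benefit from one cached result.

cache = {}  # query text -> full (unfiltered) result rows

# Hypothetical row-level security rules: user -> predicate over a row
security_rules = {
    "alice": lambda row: row["region"] == "EMEA",  # alice sees EMEA only
    "bob": lambda row: True,                       # bob sees everything
}

def run_query(user, query, fetch_from_db):
    # Populate the shared cache once, on the first execution of the query
    if query not in cache:
        cache[query] = fetch_from_db(query)
    # Filter the shared cached result for this user at run time
    rule = security_rules.get(user, lambda row: False)  # default: deny
    return [row for row in cache[query] if rule(row)]

# Usage: the second call is served from the cache, filtered per user
rows = [{"region": "EMEA", "sales": 100}, {"region": "APAC", "sales": 200}]
print(run_query("alice", "q1", lambda q: rows))  # EMEA row only
print(run_query("bob", "q1", lambda q: rows))    # both rows, no DB hit
```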
DQM is tuned via Cognos Administration and by a large number of governors in Framework Manager to optimize join execution, aggregation, and sorting. It handles extremely large data volumes, especially when combined with the basically defunct Dynamic Cubes feature. It even combines cached results with live SQL executed against a database on the fly. On its own. Like you don’t have to tell it to do that, it just does. Magic!
What’s not great about DQM?
Unfortunately, given the list of excellent attributes above, DQM has some problems. It is very complex to understand, manage, and tune, and it requires DMR models to fully utilize all the caching features – consider that the DQM Redbook produced by IBM is 106 pages. A standalone tool, Dynamic Query Analyzer, exists solely to help you understand what the heck DQM is even doing as it plans and executes queries.
Migrating from CQM to DQM is often a complex project to evaluate and execute. I once provided a customer an LOE (level of effort) estimate of 8–32 weeks to complete a migration project. I have seen migrations take almost a year. I’ve seen things you people wouldn’t believe…
The purpose of this blog is not to push professional services, but this is one instance where I think you really should contact PMsquare for help. But let’s say you have a ton of CQM models and don’t have the time to migrate them all. Is there a shortcut to high performance on large(ish) data volumes? Why yes, yes there is.
Data Sets and the Compute Service (aka ‘Flint’)
Data sets are an in-memory data capability first introduced in Cognos 11.0 and greatly enhanced in 11.1. Cognos 11.1 data sets run on the compute service, aka ‘Flint’. The compute service is a 64-bit Spark SQL process that is created and managed by the same query service that manages DQM, so it’s not really an independent Cognos query mode. I will write a more in-depth article about data sets and Flint in the future, but let’s take a super quick look at how they work before we get into why they are amazing.

How do data sets and the compute service work?
Data sets are not live connections to the underlying data like CQM or DQM – rather, they are a data extract stored in a Parquet file and loaded into the Cognos application server memory when needed for query processing. It works like this (a quick Spark SQL sketch of the same pattern follows the list):
- An end user creates a data set from an existing package, cube, or data module OR uploads an Excel file (the process is the same!)
- Cognos fetches the necessary data and loads it into an Apache Parquet file
- The Parquet file persists in the content store and is available to all application servers
- When the query service on an application server requires a data set for query processing, it first checks whether it has a local, up-to-date copy of the Parquet file
- If not, it fetches one
- In either case, the Parquet file is then loaded into the memory of the application server
- Data is processed by the compute service using Spark SQL and results are returned to the query service
- The query service receives results from the compute service and may perform additional processing if necessary
- The results are then passed to the report service or batch report service for presentation
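To make the middle steps concrete, here is a minimal PySpark sketch of the same pattern: load a Parquet extract into memory and answer SQL over it. This is not the compute service’s internal code – the file path, table name, and columns are all hypothetical – but it is the same Spark SQL mechanics at work.

```python
# Minimal sketch of the pattern the compute service uses, assuming a
# hypothetical Parquet extract at /tmp/sales_dataset.parquet with
# columns: region, revenue. Illustrative only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dataset-sketch").getOrCreate()

# Load the Parquet extract and keep it in memory across queries,
# analogous to the query service loading a local copy of a data set
df = spark.read.parquet("/tmp/sales_dataset.parquet")
df.cache()
df.createOrReplaceTempView("sales_dataset")

# Answer an aggregate query from the in-memory extract -- no trip
# to the original source database
result = spark.sql(
    "SELECT region, SUM(revenue) AS total_revenue "
    "FROM sales_dataset GROUP BY region"
)
result.show()
```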
What makes data sets great?
They’re easy to build, easy to join and manipulate in data modules, easy to schedule, and the performance is great. Once loaded into memory, a data set is shared between users on the same application server. I have done multiple projects where I replaced weeks or even months of ETL work by getting fancy with data sets and data modules. No wonder they are my favorite of the Cognos query modes.
What’s even better is how data sets provide a radically shorter path to high-performance, DQM- and Spark-based queries for your existing CQM models without having to commit to a full conversion. You simply use a CQM FM package as the basis for a data set, then utilize that data set as a source in a data module. Once complete, you’ve unlocked the full set of incredible data module and dashboard capabilities, like forecasting, without having to do an 8-to-32-week project.
Which Cognos Query Mode is right for me?
Okay, that was a ton of information, some of it pretty technical. Which of the Cognos query modes should you choose, and how do you learn more?
TLDR
- Immediately cease all development of new Framework Manager models using CQM
- Consider migrating existing CQM Framework Manager models to DQM models or to data modules (PMsquare can help with this)
- Data sets are your ‘get out of CQM free’ card; they vastly improve the performance of most CQM queries and simplify presentation for end users
References
- Dynamic Query Mode Redbook
- Cognos DQM vs CQM explainer
- Queries on uploaded files and data sets
- Configuring the Compute service
Thank you Ryan for a nice explanation. You really fancy data modules. 🙂
Can I ask for your opinion?
Let’s assume there is a new BI project. The reporting database is refreshed daily with snapshots of the operational source database, plus some service timestamp/status fields (minimal ETL effort, just enough to keep the history).
So the reporting source is basically OLTP data on Oracle Exadata.
The transaction tables will potentially have billions of rows.
Some reporting requirements will be operational, but there is also a need for dashboards with drill-up/drill-down functionality.
What would be your suggestion for the Cognos models? Build an FM model (both relational and DMR layers) or use data modules?
Thanks!
I would use data modules in this situation. You haven’t listed a requirement that can’t be met in data modules, and at this point I do all new development in data modules unless there’s a high-priority requirement that can’t be met there.
I like data sets. However, until data-level security is available for loaded data sets, they cannot be used in corporate reporting.
Use the Cognos object/folder security to restrict access to the data set to the appropriate community.
Hello, may I draw on your rich experience? We’re in the process of moving to DQM. We are not changing our FM models; we only publish selected packages in DQM. We need to keep the packages for Transformer in CQM (both the CQM and DQM packages use a common FM model). I made an attempt: I published several DQM packages with different governor settings. On top of each package I built a report with the same specification (and no prompt). I ran each report under different users, and also all reports under one user (in quick succession; we have the cache set to the default of 5 minutes). Then I examined the queries Cognos sent to the database. However, two users never shared a common cache, even when they had the same user classes. The cache was used only when one user ran 2 reports, and even then not always. The least restrictive settings I used:
– Cache is sensitive to connection command blocks: NO
– Cache is sensitive to DB info: NONE
– Cache is sensitive to model security governors: NONE
plus, of course, the Allow usage of local cache governor set to YES and Use local cache in the report set to YES.
Can you guess why the cache is not shared between different users? Thank you very much.
What you’re seeing is expected behavior for relational FM packages running in DQM. Almost all of the DQM coolness – member cache, etc. – only works if you have a DMR. Unfortunately. Otherwise the DQM cache is not shared; it’s per user session. You can learn more from the DQM cookbook and Redbook. If you’re on an 11.1 release, data sets have something much closer to the functionality you’re looking for in terms of shared in-memory processing. Of course, they require batch loading rather than on-the-fly queries to the database.
So basically, switching to DQM will enhance query performance within a user session but not across user sessions for relational packages, while it will enhance both for DMR packages.
Theoretically.
Hi Ryan,
I see on the IBM page (https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.0.0/com.ibm.swg.ba.cognos.ug_fm.doc/c_dqm_dyn_query.html) the statement: “For relational data sources, the dynamic query mode offers: JDBC connectivity, 64-bit connectivity, and in-memory caching.”
So the in-memory caching is only within user session for relational data?
Or has anything changed with 11.1 R7?
Thank you.
I don’t believe anything has changed, so it’s still only user-session caching for relational data as far as I know.
Are there plans for data modules to support row-level security, similar to the data security in FM models?
Data modules do support row-level security today: you assign Groups or Roles to different values in the data and it will filter at runtime. Check out this article: https://www.ibm.com/support/knowledgecenter/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_mdlg.doc/t_ca_mdlg_secure_data.html