MSSQL cache hit ratio greater than 100%


When running the query `SELECT average(buffer.cacheHitRatio), latest(entityKey), latest(entityName) FROM MssqlInstanceSample FACET entityId`, the “buffer.cacheHitRatio” metric has a value much greater than 100% for some DBs (for example: 322266.35714285716).

It seems like this is a known issue for MSSQL:

Is there a plan to fix the way that buffer.cacheHitRatio is calculated?


Hi @efrat.hazani,

The MSSQL integration doesn’t calculate the buffer.cacheHitRatio, it returns a single value from the MSSQL instance as seen in this query. This would be a bug upstream of the integration, unless you have another method in mind for how that metric could be captured?


According to this, to get the actual ratio we should divide “Buffer cache hit ratio” by “Buffer cache hit ratio base”.
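The division described above can be sketched directly against SQL Server's performance-counter DMV. This is an illustrative T-SQL query, not the integration's actual code; the counter and object names are the standard Buffer Manager ones, and `NULLIF` guards against a zero base value:

```sql
-- Sketch: compute the real buffer cache hit ratio (as a percentage) by
-- dividing the raw counter by its matching "base" counter, as SQL Server's
-- ratio-type perfmon counters require.
SELECT
    CAST(hit.cntr_value AS FLOAT) / NULLIF(base.cntr_value, 0) * 100.0
        AS buffer_cache_hit_ratio_pct
FROM sys.dm_os_performance_counters AS hit
JOIN sys.dm_os_performance_counters AS base
    ON hit.object_name = base.object_name
WHERE RTRIM(hit.counter_name)  = 'Buffer cache hit ratio'
  AND RTRIM(base.counter_name) = 'Buffer cache hit ratio base'
  AND hit.object_name LIKE '%Buffer Manager%';
```

Reading only `hit.cntr_value` on its own, as the integration appears to do for `buffer.cacheHitRatio`, yields the huge raw numbers seen above rather than a percentage.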

In the SQL query that you shared (and please correct me if I am wrong), you are using only the “Buffer cache hit ratio” value for the “buffer.cacheHitRatio” metric, but you do use the correct calculation for the “system.bufferPoolHit” metric.

So it seems to me that the actual ratio is available at “system.bufferPoolHit”, am I correct?
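If that is the case, a workaround would be to rewrite the original query against the pre-divided metric. This is a sketch in NRQL (not T-SQL), mirroring the query at the top of the thread:

```sql
-- NRQL sketch: query the already-divided ratio metric instead of the
-- raw 'Buffer cache hit ratio' counter value
SELECT average(system.bufferPoolHit), latest(entityKey), latest(entityName)
FROM MssqlInstanceSample FACET entityId
```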

Thank you,


Hi @efrat.hazani,

I’m double-checking with our product engineers to get their perspective. I’ll write back once I’ve learned more.

Hi @efrat.hazani ,

I heard back from our product engineers and they confirmed that you are correct: system.bufferPoolHit is the actual ratio. We have registered this as a defect with the integration, and it has been entered into our Product Development team's backlog. Once the work has been done, the release notes for the MSSQL integration will indicate when this fix has been made available.


Thank you @sellefson

Hello. I am also seeing system.bufferPoolHit values over 100% in the MssqlInstanceSample table. The nri-mssql/ has not been updated yet. Should I keep waiting for this fix?