
Help Calculating CPU Hours for Kubernetes Containers


We're new to NRQL. Our team needs to produce reports that show the total CPU hours used per container across namespaces in our K8s clusters. The report time scale defaults to the current month.

Our initial attempt at calculating this:

SELECT average(cpuUsedCores) FROM K8sContainerSample WHERE clusterName = 'clustername' AND namespace = 'namespace' FACET containerName SINCE 1 MONTH AGO TIMESERIES

However, this produces inaccurate results. For example, if a container was active for only 1 hour and consumed 0.5 CPU cores, the average comes back as 0.5 cores spread over the whole day.

Then we attempted to get more fine-grained data back, like so:

SELECT average(cpuUsedCores) AS 'Average CPU Cores Used' FROM K8sContainerSample WHERE clusterName = 'clustername' AND namespace = 'namespace' FACET containerName SINCE '2020-01-01 12:00:00' UNTIL '2020-01-15 11:59:59' TIMESERIES 1 hour

This does bring back the per-hour averages, but since we are limited to 366 buckets per query we would have to make two calls to get data for the whole month.

We realize that this approach also requires us to parse the results, tally up the core usage, and multiply by the number of hours in the report period.
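For reference, the aggregation described above can be sketched in Python. This assumes each TIMESERIES 1 hour query has already been reduced to a list of hourly buckets, each holding the average cores a container used during that hour (the data shape here is illustrative, not the exact New Relic API payload):

```python
def cpu_hours(hourly_avg_cores):
    """Average cores over a 1-hour bucket x 1 hour = CPU-hours for that bucket;
    summing the buckets gives total CPU-hours for the report window."""
    return sum(avg * 1.0 for avg in hourly_avg_cores)

# Two TIMESERIES 1 hour queries (366-bucket limit) merged for one container.
# Values are illustrative.
first_half = [0.5, 0.5, 0.0]
second_half = [0.25, 0.25]
total = cpu_hours(first_half + second_half)
print(total)  # 1.5 CPU-hours
```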

Is there a better way to handle this, ideally in a single query?


Hi Justin, We’re not able to assist with building NRQL queries but fellow community members like @stefan_garnham may be able to help you.


Changing the query to use sum(cpuUsedCores) will give the total number of cores used over the period.


Thank you @stefan_garnham for providing an answer! Always great to have fellow explorers like you share their knowledge :slight_smile:


Appreciate the feedback - the issue with using SUM is that it overcounts: it tallies every CPU usage sample the agent sends. To get an accurate result, we ended up making several TIMESERIES queries and aggregating the results in our application.
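To illustrate the overcount: sum(cpuUsedCores) adds every sample the agent reports, so the result scales with the sampling rate rather than with elapsed time. A minimal sketch, where the 15-second sample interval is an assumption you'd need to verify for your own agent configuration:

```python
SAMPLE_INTERVAL_SECONDS = 15  # assumed agent reporting interval; verify for your cluster

def raw_sum(samples):
    # What SUM effectively does: add every reported sample.
    # A container at a steady 0.5 cores for one hour yields
    # 240 samples x 0.5 = 120, not 0.5 CPU-hours.
    return sum(samples)

def cpu_hours_from_samples(samples):
    # Weight each sample by the fraction of an hour it covers.
    return sum(s * SAMPLE_INTERVAL_SECONDS / 3600 for s in samples)

samples = [0.5] * 240  # one hour of steady 0.5-core usage at 15 s intervals
print(raw_sum(samples))               # 120.0
print(cpu_hours_from_samples(samples))  # 0.5
```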


Thanks for confirming what worked for you :smiley: