We are using NewRelic Agents on all of our AWS EC2 instances.
I am doing some performance analysis looking for underutilized instances that can be right-sized.
To get the data I am using the Insights module with the following query:
SELECT max(cpuPercent) FROM SystemSample WHERE ec2Tag_Name LIKE 'MMWV_PROD_MMGSSA_prod_AutoScalingGroup' SINCE 30 days ago
This returns a value of 2.68. The problem is that if I do a time series chart of just the last day using this query:
SELECT average(cpuPercent) FROM SystemSample WHERE ec2Tag_Name LIKE 'MMWV_PROD_MMGSSA_prod_AutoScalingGroup' FACET hostname TIMESERIES SINCE 1 day ago
I can clearly see that my max CPU is much higher. If I take the first query (the max function) and make it span 1 day instead of 30 days, it returns a value of 17.77.
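For reference, that 1-day version is just the first query with the SINCE window changed:
SELECT max(cpuPercent) FROM SystemSample WHERE ec2Tag_Name LIKE 'MMWV_PROD_MMGSSA_prod_AutoScalingGroup' SINCE 1 day ago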
Now naive me is questioning why the max over the last day is 17.77 but the max over the last 30 days is 2.68.
I was under the impression that the max function would examine every minute of every hour between now and 30 days ago and return a single value representing the highest CPU utilization. Instead, it appears to be aggregating the data and giving me some sort of average.
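To test that theory, I was thinking of breaking the 30-day window into daily buckets and looking at the max per day (a rough sketch using the same WHERE clause as above; I have not confirmed this is the right way to check):
SELECT max(cpuPercent) FROM SystemSample WHERE ec2Tag_Name LIKE 'MMWV_PROD_MMGSSA_prod_AutoScalingGroup' SINCE 30 days ago TIMESERIES 1 day
If the daily maxima come back well above 2.68, that would suggest the single 30-day max is being computed from rolled-up data rather than from the raw samples.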
Does anyone have a clue what is happening?