
CPU on server doesn't seem to match data


When I go to the bottom of the overview page for my app server, it says my server is using 63% CPU, but when I go to that server it says 7% CPU, with 20% RAM and a load average of 1.7.

I have 16 processors and 31 GB of RAM.

Any ideas?


For web applications, multiple instances of a service running on more than one server, or in a multi-core server environment, can produce CPU percentages that seem very high, in some cases well above 100%, because they represent the total amount of CPU time used across all CPUs. This approach ensures that adding CPUs to your setup doesn't cause the reported CPU usage for your processes to suddenly drop. It reflects the fact that adding more instances does not make your code more efficient.

Our server monitor, on the other hand, reports a normalized metric based on the percentage of time your CPU is busy. Adding cores to that server increases the available CPU capacity, but the total is still graphed from 0-100%.

This is also described in the documentation:


I think this difference has to do with how we calculate CPU usage when dealing with multiple cores/instances. In the context of the application monitor, we don't scale for core count; we just calculate usage in terms of application instances, adding up their usage over a given time.

For applications:

CPU usage = (instance + instance + instance + instance) / time

This example shows why we make this distinction:

Example: If you upgrade from a dual processor to a quad processor under the same server architecture, you should see roughly the same CPU numbers for the same loads and applications. If New Relic normalized the calculation, the upgrade would appear to produce an abrupt decrease in your CPU usage, even if the number of cycles you are using would be the same. Adding more instances does not make your code more efficient.

As you can see, this can produce a larger figure than you might expect. The server monitor expresses the data differently:

For the server monitor:

CPU usage = (CPU + CPU + CPU + CPU) / (time * number of CPUs)

As we can see, this percentage changes with the number of cores, which would explain why your server shows the lower figure.
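To make the two formulas concrete, here is a small sketch with hypothetical numbers (the instance busy times, sampling window, and core count are made up for illustration; they are not actual New Relic internals):

```python
# Hypothetical: CPU-seconds consumed by each of four app instances over a
# 10-second sampling window, on a 16-core server.
instance_busy_seconds = [2.1, 1.6, 1.4, 1.2]
window_seconds = 10
num_cpus = 16

# Application monitor view: sum instance usage, divide by wall-clock time
# only -- this can exceed 100% on multi-core hosts.
app_cpu_pct = sum(instance_busy_seconds) / window_seconds * 100

# Server monitor view: same busy time, but normalized by core count,
# so the result always stays within 0-100%.
server_cpu_pct = sum(instance_busy_seconds) / (window_seconds * num_cpus) * 100

print(f"application CPU: {app_cpu_pct:.1f}%")   # 63.0%
print(f"server CPU:      {server_cpu_pct:.1f}%") # 3.9%
```

The same busy time yields very different percentages purely because of the denominator, which is the effect described above.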


Any chance of getting a normalized load average as default graph?

For example, we could implement and maintain fewer alerts if we were working off of normalized load. Instead, we have to have a monitor/alert for each class of server (by number of vCPUs). A load of 12 on a 16-vCPU server is OK, but not so great on a server with 4 vCPUs.
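The single-rule idea could be sketched like this (the function name and the 0.9 threshold are hypothetical, just to show how one normalized rule replaces per-class rules):

```python
def load_alert(load_avg: float, num_vcpus: int, threshold: float = 0.9) -> bool:
    """Return True when normalized load (load average per vCPU) exceeds
    the threshold -- one rule regardless of server size."""
    return (load_avg / num_vcpus) > threshold

# The same load of 12 is fine on 16 vCPUs but critical on 4 vCPUs:
print(load_alert(12, 16))  # False (12/16 = 0.75)
print(load_alert(12, 4))   # True  (12/4  = 3.0)
```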


Hey @ives.stoddard - Since @bcollins' answer is from 2014, it must relate to the legacy server monitor, which is now deprecated, and not the Infrastructure agent, which is current. Is that what you are using?

Can you clarify whether you are using the Infrastructure agent? If so, it may make sense to split this into a new topic.