Relic Solution: PHP Agent CPU Overhead Tips

While we don’t expect the PHP Agent to cause any serious performance impact on your system, there are situations where we’ve seen higher-than-normal overhead. Here are some things to check and try in order to reduce that overhead.

What clock sources are available, and what is currently being used? You can run the following commands to find out:

What clock sources are available on the system?
cat /sys/devices/system/clocksource/clocksource0/available_clocksource

What clock source is currently in use on the system?
cat /sys/devices/system/clocksource/clocksource0/current_clocksource

What kind of timestamp counter does the CPU support?
cat /proc/cpuinfo | grep tsc

Is vDSO enabled? (If vDSO is serving gettimeofday(), no matching syscalls will appear, so empty output here is a good sign.)
strace php -m 2>&1 | grep gettimeofday
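To pull those checks together, here is a small sketch of a script that flags clock sources likely to force gettimeofday() into a real syscall. The sysfs path is the standard location on x86 Linux; treating tsc and kvm-clock as the vDSO-friendly sources is an assumption based on common bare-metal and virtualized setups:

```shell
#!/bin/sh
# Report the clock source in use and flag ones that typically bypass vDSO.
cs_file=/sys/devices/system/clocksource/clocksource0/current_clocksource
if [ -r "$cs_file" ]; then
  current=$(cat "$cs_file")
  echo "Current clock source: $current"
  case "$current" in
    tsc|kvm-clock)
      echo "Likely served via vDSO (fast userspace time lookups)" ;;
    *)
      echo "May force gettimeofday() into a real syscall (e.g. xen, acpi_pm)" ;;
  esac
else
  echo "Clock source sysfs entry not readable on this system"
fi
```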

Why are we checking this?

The PHP Agent relies heavily on gettimeofday() to get the start and stop times of your PHP functions. If your system’s default clock source doesn’t support vDSO, each of those calls falls back to a full syscall, which can play a role in higher overhead. We recommend using a clock source that supports vDSO, such as tsc.
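One rough way to gauge the cost of time lookups under your current clock source is to time a burst of microtime() calls from the CLI. This is a sketch, assuming the php binary is on your PATH; the count of one million calls is arbitrary:

```shell
# Time 1,000,000 microtime(true) calls; each one hits gettimeofday().
# A vDSO-backed clock source typically completes this quickly; a
# syscall-bound source can be noticeably slower.
php_bin=$(command -v php || true)
if [ -n "$php_bin" ]; then
  time "$php_bin" -r 'for ($i = 0; $i < 1000000; $i++) { microtime(true); }'
else
  echo "php CLI not found; run this on the affected host"
fi
```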

To change your clock source to tsc, run the following command as root:

echo tsc > /sys/devices/system/clocksource/clocksource0/current_clocksource
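Note that writing to sysfs this way does not survive a reboot. One common way to make it persistent (an assumption about your setup; this applies to GRUB 2 systems) is to pass clocksource=tsc on the kernel command line:

```
# /etc/default/grub  (append to any existing options)
GRUB_CMDLINE_LINUX="clocksource=tsc"
```

After editing, regenerate the GRUB config (update-grub on Debian-style systems, or grub2-mkconfig -o /boot/grub2/grub.cfg on RHEL-style systems) and reboot.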

If you’re unable to change your clock source, the next step is to disable some of the agent’s most CPU-intensive settings. To do so, edit the following lines in your newrelic.ini:

newrelic.transaction_tracer.enabled = false
newrelic.transaction_tracer.detail = 0

Please note that you’ll need to restart your PHP dispatcher for those changes to take effect.
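After restarting, you can spot-check that the new values are live. This is a sketch; it assumes the PHP CLI loads the same newrelic.ini as your dispatcher, which isn’t always true under php-fpm, so a phpinfo() page is the more reliable check there:

```shell
# Show the transaction tracer settings as the PHP CLI sees them.
php_bin=$(command -v php || true)
if [ -n "$php_bin" ]; then
  "$php_bin" -i | grep -E 'newrelic\.transaction_tracer\.(enabled|detail)' \
    || echo "newrelic settings not found; agent may not be loaded for the CLI"
else
  echo "php CLI not found on this host"
fi
```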

If these options don’t have a positive impact on your application’s performance, the next step is to dig deeper into your application to see whether something else may be causing the high overhead. Let us know, and the Support team at New Relic can help.

I faced the CPU issue on an EC2 instance and tried the optimizations above without success. It turns out that restarting the instance solves the problem, and to confirm this I reproduced the same behavior on other instances.

Glad to hear that rebooting has resolved this for you! Thanks for sharing.