Feature Idea: Add config "" to allow non-localhost connections



@Linds Any updates on this?

I’m using the following workaround in my Docker entrypoint, but it adds a ~6-second delay to startup. While this isn’t an issue for php-fpm apps, it is for our cronjobs.

[ -n "${PHP_NEWRELIC_LICENSE}" ] && php -r '
    $count = 0;
    // Poll until the agent can reach the daemon, for at most 10 seconds.
    while (!newrelic_set_appname(ini_get("newrelic.appname")) && $count < 10) {
        $count++;
        echo "Waiting for NewRelic agent to be responsive. ($count)" . PHP_EOL;
        sleep(1);
    }'

Is there a better way to integrate New Relic, PHP and Docker with each other, e.g. running the APM daemon in its own container and sharing the socket across containers via a volume?

UPDATE: There is an issue with cronjobs. The Docker container shuts down too quickly, so the newrelic-daemon is unable to send the data back. Is there a different way to set this up?
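Until there is a built-in flush, one mitigation is to delay the container's exit long enough for the embedded daemon to report. This is only a sketch under assumptions: the function name, the NR_FLUSH_GRACE variable, and the 10-second default are invented for illustration, not a New Relic mechanism.

```shell
# Hypothetical wrapper for cron entrypoints: run the job, then keep the
# container alive briefly so the embedded newrelic-daemon can flush.
run_with_nr_grace() {
  "$@"
  status=$?
  # Grace period before exit; tune against your daemon's report cadence.
  sleep "${NR_FLUSH_GRACE:-10}"
  return "$status"
}

# usage in the entrypoint:
# run_with_nr_grace php /app/cron-task.php
```

The wrapper preserves the job's exit code, so failed cronjobs still surface as failures to the scheduler.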



PHP agent tries to connect before daemon is ready
PHP agent not recording data within some of our containers

Any update on this? The solution to kill the daemon doesn’t work reliably.

The Go agent has a flush feature. Can we get this for PHP too?


@enricostahn A flush feature for the daemon definitely doesn’t exist at the moment, so it may be a good idea going forward. @Linds, can we turn Enrico’s post into a feature request for this?

One thing I’m thinking here is that you’ve stumbled onto perhaps the better solution. Running a Docker container with just the newrelic-daemon in it removes the race condition. It also pushes your containers back towards the Docker best practice of one process per container.

When a PHP command-line script spins up, if the container binds to the same endpoint (a TCP port or a shared volume), then the daemon should already be alive and ready to accept the call.


@acuffe Thanks for your response! I can trigger a feature request via our partner if that helps to move things along.

I’ve been trying a couple of things with flushing and it’s not working reliably. We also tried sharing the socket via volume, which was unsuccessful.

The thing I’m currently trying is separating the newrelic-daemon from the container (as you suggest). The main issue is that the newrelic-daemon only communicates on localhost, the same as the New Relic PHP module. My workaround is to use socat in both the newrelic-daemon container and the php-fpm container.
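For reference, a sketch of that socat bridge. The service name and port 31339 are illustrative; /tmp/.newrelic.sock is the agent's usual default socket path, but verify against your own configuration, and set newrelic.daemon.dont_launch = 3 so the agent doesn't spawn a local daemon of its own.

```shell
# In the newrelic-daemon container: expose the daemon's Unix socket
# to other containers over TCP.
socat TCP-LISTEN:31339,fork,reuseaddr UNIX-CONNECT:/tmp/.newrelic.sock &

# In the php-fpm / cron container: make the remote daemon look local
# again by recreating the Unix socket the agent expects.
socat UNIX-LISTEN:/tmp/.newrelic.sock,fork \
      TCP:newrelic-php-daemon.utils.svc.cluster.local:31339 &
```

These would typically run from each container's entrypoint before the main process starts.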

My feature request would be to allow setting a host (similar to newrelic.daemon.port).
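For later readers: more recent PHP agent releases reportedly added a newrelic.daemon.address setting that accepts a host:port pair for exactly this use case; verify this against your agent version and the daemon's flags before relying on it. The values below are illustrative only:

```ini
; Illustrative values; requires an agent version whose
; newrelic.daemon.address accepts host:port.
newrelic.daemon.address = "newrelic-daemon.example.internal:31339"
; Prevent the agent from spawning its own local daemon:
newrelic.daemon.dont_launch = 3
```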


@acuffe I second this request. While it is possible to work around the localhost-only limitation, some of the options (like --net=host) are not suitable for production.

I did want to add one more request: I noticed that at one time there was a limit of 250 apps per daemon, and I wonder whether this limit has been, or could be, increased.


Just for everyone’s benefit. The issue was raised with NewRelic, and their internal ID for it is PHP-I-63.

We came up with a workaround through socat. This works quite well in our Kubernetes cluster so far and we will monitor the situation.

The solution looks as follows:

Our docker base containers with the workaround can be found here:


Are there any plans to make the daemon host configurable? Having it hard coded to localhost is creating some issues for me at the moment.


I second the initial feature idea, and I do not see this as solved.
I’d compare a possible solution to the setup Blackfire provides:

there is a container with the agent (the daemon, in New Relic terms), and the client sends traces to that agent.
Maybe a central daemon, compatible with all APM agents, would be easier to maintain.
Our background:
We run Kubernetes CronJobs, and no trace from the cronjobs ever reaches New Relic APM, because the containers are terminated directly after the jobs finish.


Thanks @enricostahn for this - it is super interesting for me.

Are you running the NR daemon and the PHP container in the same pod? I have been playing with running the NR daemon in a Kubernetes DaemonSet and running only the NR agent in the PHP container, which communicates with the NR daemon via the NR sock file.

I thought I would have to use a Kubernetes DaemonSet to reduce the cost of NR (so there is only one NR daemon per Kubernetes node). If you are running an NR daemon per PHP container, does that impact the cost of your NR service?



Hi @dom2,

No, we’re not running NR daemon and PHP in the same pod. I see how the picture may suggest this.

We’re running the NR Daemon as a separate deployment with a service. So we’re basically treating the NR Daemon as a microservice, if you will. All pods communicate with the NR Daemon service endpoint (e.g. newrelic-php-daemon.utils.svc.cluster.local).
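A rough sketch of what that Service might look like. The selector, labels, and port are assumptions; only the name and namespace follow from the endpoint above.

```yaml
# Illustrative only: in-cluster endpoint
# newrelic-php-daemon.utils.svc.cluster.local
apiVersion: v1
kind: Service
metadata:
  name: newrelic-php-daemon
  namespace: utils
spec:
  selector:
    app: newrelic-php-daemon
  ports:
    - port: 31339        # example TCP port bridged (e.g. via socat) to the daemon
      targetPort: 31339
```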

You can see the configuration on GitHub.

This has been running successfully for 6 months now.

We’re using this configuration only for CronJobs. Regular PHP-FPM uses the built-in NR Daemon.


Hi @enricostahn - ahh I see.

Perhaps I should ask NR support about the cost impacts when we are running the built-in NR daemon in a large number of containers.

Great solution for the crons! - thanks again for sharing.


@enricostahn Thanks again. We have now redone our containers to use the standard install of the NR daemon, and things are working well.

NR support mentioned that we wouldn’t be charged per container but by host, which was my hope.


Hey @enricostahn, thanks for the great input. I will definitely try your socat solution. Is there any specific reason why you don’t use it with php-fpm?
I would like to use one solution for both cronjobs and php-fpm pods, which makes a bit more sense looking at it from a maintenance standpoint (this way all NR setup would be the same, agent running in the container and the daemon as a separate service/daemonset).

Thanks already for a starting point on this.
I think @acuffe might have referred to this exact post when I explained our issue to him, glad I found it.


@yorick Hey! :wave: long time no chat :slight_smile:

Indeed, this is what I was referring to; the socat option would work if the need is to have everything inside a container.

If the daemon were running on the host machine, I believe that in theory no socat would be needed, as the container's traffic can be mapped out to the host via the port the daemon is configured to listen on.


The container would then route local communication on port 12345 out to port 12346 on the host machine. The PHP agent configuration says, "communicate via port 12345"; the traffic exits the container via 12346 and reaches the daemon, which is configured to listen on that TCP port.

The above is just spitballing an additional method of achieving the goal. Both methods are subject to the upper limit of 250 unique app names that @dsix highlighted. However, if it’s all the same app name, then these are just instances of PHP connecting to our daemon and reporting additional data. So unless your dockerised system spins up containers for an app set with 250 pieces on the same machine, or you run a mini hosting company where each customer app is powered by its own container and reports its data under a unique app name, you will likely never hit that limit.


Hi @yorick,

We’re living in Kubernetes world, so concepts I mention may be different in yours.

It’s probably a good idea to have a single NewRelic Daemon “Service” (Traffic -> Service -> Pods -> Containers). The two reasons I didn’t use socat for everything are:

  1. PHP-FPM with the built-in daemon already works fine, and I didn’t have enough time to properly load test the socat solution.
  2. I was expecting NewRelic to have resolved this feature request by now.

It has been working fine for “CronJobs”. It would be necessary to test whether the forking of socat connections hits any limits.



We also live in a Kubernetes world, so it’s not that far from ours :slight_smile:
Anyhow, thanks for the insights. I might take this further and start using it for all the agents.
Definitely starting with a load test, or at least finding where the limits are.

If NewRelic pulled through with a nice HOSTNAME option to set on the agent, all this complexity would go away. @acuffe :wink: any news on the issue?


I’ve stumbled on this thread looking for a solution for short-lived containers, and this seems like it could work very well.

While investigating, it also became clear that the same problem can occur during an update, as we can’t guarantee that NR has sent its data before the container is shut down.

We’re currently using Docker Swarm so it will be interesting to see how this solution works for us as we should be able to drop socat.

I’ll let you know what we find…


Hi @markh1,

We realised the same thing today in Kubernetes world. We see a drop in throughput on NewRelic with each rolling deploy, but it doesn’t correlate with the traffic data from Prometheus. So while Prometheus shows no change in traffic, the NewRelic agent must be dropping the traffic/data somewhere (well, we know where :slight_smile: ).

My ideal solution would be a NewRelic “agent”/“server” running as a Kubernetes service (not sure how that translates into Swarm), with all our containers sending data to this service. The service could then persist data and send it to the NewRelic collector at an appropriate time.



Hi all,

New Relic support staff led me to this thread and it’s been very helpful. Thanks for posting updates and sharing info!

We run some PHP applications, mostly WordPress, in a Google Kubernetes Engine (GKE) cluster and were looking to add the New Relic PHP Agent to instrument those apps. The features requested here for the PHP Agent haven’t been implemented yet, unfortunately, so I followed @enricostahn’s post and used socat as a workaround.

I ended up turning my proof of concept into a walkthrough shared on GitHub:

Others may find it useful.

I’ve deployed the PHP Agent to a cluster with production workloads using the same scheme in the repo above, and it’s working well so far.



@mlauer Nice write-up. Are you using the socat solution with production traffic or just cronjobs?