APM metrics not linked in Infrastructure tab


I have New Relic deployed into a GKE k8s cluster, set up by following the instructions here.

I see Infrastructure metrics show up fine for all nodes in the cluster, and APM metrics for a number of services that are deployed into the namespace labeled to enable metadata injection.

However, there are a number of pods for which I do not see a link to the APM metrics in the Infrastructure tab. The services for which the links are missing do show up in the APM tab.

The “Health map” view also shows that some pods are enabled (green boxes in “enabled hosts”), but others have the message “Install the Infrastructure agent to see hosts here”. For some of these, I see metrics, along with the message:


Looking inside the pods that are missing in the Infrastructure tab, I see that the metadata injector is functioning correctly, and I see the environment variables after running env in the container.

Thoughts on how I might link up the pods from the Infrastructure tab back to the APM tab?


Hello Nick,

Thank you for reaching out.

To better assist you with this issue, could you please provide permalinks/URLs to the issues you are seeing? Don’t worry, only New Relic employees and users with the proper access will have visibility into any permalink you include. Screenshots of what you are expecting and where you are expecting pods to show up would also be helpful.

If you don’t feel comfortable sharing such information on a public forum, we can definitely pull your post into a ticket.


I’m having almost exactly the same issue. Was there ever a resolution?

In my case, the APM data is visible in the APM tab, and the Kubernetes host and pod/deployment/node metrics are available within the Infrastructure tab, but there seems to be no correlation between the two. I’m not seeing APM charts or links in the Kubernetes Cluster Explorer as expected (and shown here). Nor am I seeing Kubernetes metadata in transaction attributes in APM, even though the metadata injection is deployed in the cluster. I’ve confirmed that the containers have the expected NEW_RELIC_METADATA_KUBERNETES_* environment variables.


@nick.travers and @mdavis21 - ultimately, what has been known to fix this problem the vast majority of the time is setting identical hostnames in both the APM and Infrastructure config files for each affected host. This is discussed in the following document:

Here’s a table to help make it totally clear what that document is describing:

| | Infra config (`newrelic-infra.yml`) | APM config (varies) |
|---|---|---|
| Hostname setting | `display_name` | Change display name of hosts |

So the exact same hostname would be placed in each config file’s hostname setting. The setting is usually something along the lines of `display_name`, but it varies by agent language. This allows the agents to form a handshake and creates the linkage you are looking for between the APM and Infrastructure UIs.
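As a concrete sketch of what that document describes (the hostname value and the Java agent layout here are illustrative assumptions, not taken from this thread; other agent languages use a different key):

```yaml
# newrelic-infra.yml (Infrastructure agent) -- hypothetical hostname
display_name: web-host-01.example.com

# newrelic.yml (Java APM agent) -- the matching setting; the value must be
# identical to the Infrastructure agent's display_name for the handshake
common:
  process_host:
    display_name: web-host-01.example.com
```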


This seems like it applies to classic use cases of the APM/Infrastructure agents where the user is managing the config for each host. With a Kubernetes implementation where the Infrastructure agent is deployed as a DaemonSet (my case) and the APM agent is running alongside each containerized application process, I can’t really hard-set a hostname (nor would I want to). In this type of k8s implementation, the APM agent and Infrastructure agent should be picking up the hostname automagically. Kubernetes changes the classic host-application relationship we’re familiar with from VMs. The host could be any worker node in the cluster, and could change at any time, so we can’t hard-set the hostname in the APM agent config.

That said, I think you are on to something as to what the cause is, because the Kubernetes Cluster Explorer (which gets its data from the Infrastructure agent) lists the Kubernetes worker node as the host for each pod, but the APM agent lists the deployment name as the host.

Is there some variable that needs to be set in `display_name` in the APM agent config for agents running on Kubernetes, in order to get it to report the worker node’s hostname rather than the deployment name?
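One pattern that can work for this (a sketch, not confirmed in this thread: it assumes the Java agent, which can take the display name from the `NEW_RELIC_PROCESS_HOST_DISPLAY_NAME` environment variable, and uses the Kubernetes Downward API to expose the scheduled node’s name) would be:

```yaml
# Pod spec fragment: feed the worker node's name to the APM agent via the
# Downward API, so the display name tracks whichever node the pod lands on.
containers:
  - name: appx
    image: example/appx:latest   # hypothetical image
    env:
      - name: NEW_RELIC_PROCESS_HOST_DISPLAY_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName
```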


Hi @mdavis21.

To get some background on how the metadata injection works: essentially, all that’s happening is that new pods are created with environment variables that the supported APM agents know to look for and decorate their metrics with, so that the collected data can be linked together in the New Relic backend. The primary way this might fail is if those variables aren’t being added to the environments of newly launched containers, meaning there are no common attributes to link the data when it arrives.

The node name is derived from the kubelet running on the worker; you should be able to determine what is being detected for it by running through the validation steps and launching a busybox pod on the worker running your application:

```
kubectl create -f
kubectl exec busybox0 -- env | grep NEW_RELIC_METADATA_KUBERNETES
```
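For reference, the output of that grep should contain entries of roughly this shape (the variable names are the ones discussed in this thread; the values below are made-up illustrations, not from a real cluster):

```shell
# Simulated `env` output from a container with the metadata injected;
# all values here are fabricated examples.
env_sample='PATH=/usr/bin
NEW_RELIC_METADATA_KUBERNETES_CLUSTER_NAME=my-cluster
NEW_RELIC_METADATA_KUBERNETES_NODE_NAME=gke-worker-1
NEW_RELIC_METADATA_KUBERNETES_POD_NAME=busybox0'

# The same filter the validation step uses:
echo "$env_sample" | grep NEW_RELIC_METADATA_KUBERNETES
```

If the `NODE_NAME` entry is present and correct, the injection side is working and the linkage problem lies elsewhere.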


Hi @sellefson,

Thanks for the reply. I was aware of how that metadata is injected and did validate that new pods that are started on the cluster are getting the metadata injected properly. It is working as expected and the NEW_RELIC_METADATA_KUBERNETES_NODE_NAME is being populated with the correct worker node hostname.

As I stated before, though, the APM agent lists the host as the actual Kubernetes deployment name. So if my app is named AppX, the APM agent reports appx_deployment_<some_uid> as the host the app is running on. I don’t know if this is intended behavior or not, or if it’s even relevant to the discussion.

Is there something else I should check to figure out what’s going on?


Hi @mdavis21,

I don’t believe that’s intended behavior. Can you tell me which APM agent, and which version, the container showing the deployment name rather than the worker node is running? Is this true for all of your APM-instrumented apps, or are you seeing it on some subset?

Lastly, could you provide me a link to where you’re seeing this? I’d like to review this with our product team.


Hey @sellefson - since @mdavis21 is leaving our company, I will be taking this up from him.

This is a link to an application subaccount, as Michael described above, where we have APM running. In this environment we are using newrelic-agent-5.8.0.jar.

This is a link to our k8s subaccount where we are collecting Infrastructure data.

For the metadata injection we are using k8s-metadata-injection 1.1.1.

For the k8s deployment we are using infrastructure-k8s 1.10.1.

For the certificate manager we are using k8s-webhook-cert-manager 1.1.1.

From this link

As he noted, we do not see the injected environment variables on the pods in the Kubernetes Infrastructure console screen. We are also not seeing the application within a specific pod’s details, as noted.

Thank you

Matt Kirkevold


Hello @mkirkevold,

Just noticed here that your Java APM application is on a different account than your K8s Cluster Explorer. These all need to be on the same account for the link-up to occur.

To note on the metadata injection: the metadata won’t apply to any pods that existed before the injection was set up, but it will apply to any pods created after. Do you get the metadata when you do the busybox test?


Hi @peraut - I have restarted my application APM agents, pointing them to the same account as the k8s cluster (links below). I do have the metadata injected, which has always worked. What I still don’t see is the injected variables from the pods in the Kubernetes Infrastructure console screen, or the application within a specific pod’s details.

APM link:
Infrastructure Link: (with the filter {"namespace":["wdls"],"deployment":[],"node":[]})


@peraut @sellefson I’ve been looking into this, and what’s really interesting is that the metadata is being injected into the APM transactions (query: `'Transaction Name', clusterName, nodeName, podName, containerName where appName = 'WDLS-DEV-K8S'`).