
APM metrics not linked in Infrastructure tab


#1

I have New Relic deployed into a GKE k8s cluster, set up by following the instructions here.

I see Infrastructure metrics show up fine for all nodes in the cluster, and APM metrics for a number of services that are deployed into the namespace labeled to enable metadata injection.
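For reference, the namespace was labeled along these lines (the namespace name here is illustrative):

kubectl label namespace my-apps newrelic-metadata-injection=enabled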

However, there are a number of pods for which I do not see a link to the APM metrics in the Infrastructure tab. The services for which the links are missing do show up in the APM tab.

The “Health map” view also shows that some pods are enabled (green boxes in “enabled hosts”), but others have the message “Install the Infrastructure agent to see hosts here”. For some of these, I see metrics, along with the message:

[screenshot of the message]

Looking inside the pods that are missing in the Infrastructure tab, I see that the metadata injector is functioning correctly, and I see the environment variables after running env in the container.
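The check was roughly the following (pod name is illustrative):

kubectl exec my-service-pod-abc123 -- env | grep NEW_RELIC_METADATA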

Thoughts on how I might link up the pods from the Infrastructure tab back to the APM tab?


#2

Hello Nick,

Thank you for reaching out.

To better assist you with this issue, could you please provide permalinks/URLs to the issues you are seeing? Don’t worry: only New Relic employees and users with the proper access will be able to view any permalinks you include. Screenshots of where you are expecting pods to show up would also be helpful.

If you don’t feel comfortable sharing such information on a public forum, we can definitely pull your post into a ticket.


#3

I’m having almost exactly the same issue. Was there ever a resolution?

In my case, the APM data for the app is visible in the APM tab, and the Kubernetes host and pod/deployment/node metrics are available within the Infrastructure tab, but there seems to be no correlation between the two. I’m not seeing APM charts or links in the Kubernetes Cluster Explorer as expected (and as shown here: https://blog.newrelic.com/engineering/monitoring-application-performance-in-kubernetes/). Nor am I seeing Kubernetes metadata in transaction attributes in APM, even though the metadata injection is deployed in the cluster. I’ve confirmed that the containers have the expected NEW_RELIC_METADATA_KUBERNETES_* environment variables.


#4

@nick.travers and @mdavis21 - ultimately, what has been known to fix this problem the vast majority of the time is setting identical hostnames in both the APM and Infrastructure config files for each affected host. This is discussed in the following document:

Here’s a table to help make it totally clear what that document is describing:

Infra config (newrelic-infra.yml) — hostname setting: display_name: myhost1.example.com
APM config (varies by agent) — hostname setting: see “Change display name of hosts”

So the exact same hostname would be placed in each config file’s hostname setting. This is usually something along the lines of display_name, but it varies by agent language. This allows the agents to perform a handshake and creates the linkage you are looking for between the APM and Infrastructure UIs.
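As a sketch, the two settings would line up like this (the hostname is illustrative, and the APM side here uses the Java agent’s newrelic.yml; other agent languages have an equivalent setting):

# newrelic-infra.yml (Infrastructure agent)
display_name: myhost1.example.com

# newrelic.yml (Java APM agent)
process_host:
  display_name: myhost1.example.com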


#5

This seems like it applies to classic use cases of the APM/Infrastructure agents, where the user is managing the config for each host. With a Kubernetes implementation where the Infrastructure agent is deployed as a DaemonSet (my case) and the APM agent is running alongside each containerized application process, I can’t really hard-set a hostname (nor would I want to). In this type of k8s implementation, the APM agent and Infrastructure agent should be picking up the hostname automagically. Kubernetes changes the classic host-application relationship we’re familiar with from VMs: the host could be any worker node in the cluster, and could change at any time, so we can’t hard-set a hostname in the APM agent config.

That said, I think you are on to something as to the cause, because the Kubernetes Cluster Explorer (which gets its data from the Infrastructure agent) shows the Kubernetes worker node as the host for each pod, but the APM agent lists the deployment name as the host.

Is there some variable that can be set in “display_name” in the APM agent config for agents running on Kubernetes, so that it reports the worker node’s hostname rather than the deployment name?
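For example, I’m wondering whether something like the Kubernetes downward API could be used to pass the worker node’s hostname into each pod (this is purely my speculation; the environment variable name below is the Java agent’s, and other agent languages differ):

# snippet from the application's Deployment spec
env:
  - name: NEW_RELIC_PROCESS_HOST_DISPLAY_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName  # resolves to the worker node's name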


#6

Hi @mdavis21.

To get some background on how the metadata injection works: essentially, all that’s happening is that new pods are created with environment variables that the supported APM agents know to look for and decorate their metrics with, so that the collected data can be linked together in the New Relic backend. The primary way this might fail is if those variables aren’t being added to the environments of newly launched containers, meaning there are no common attributes to link the data after it arrives.

The node name is derived from the kubelet running on the worker; you should be able to determine what is being detected for it by running through the validation steps and launching a busybox pod on the worker running your application:

kubectl create -f https://git.io/vPieo
kubectl exec busybox0 -- env | grep NEW_RELIC_METADATA_KUBERNETES
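If injection is working, the output should include variables along these lines (the values here are illustrative):

NEW_RELIC_METADATA_KUBERNETES_CLUSTER_NAME=my-cluster
NEW_RELIC_METADATA_KUBERNETES_NODE_NAME=gke-worker-node-1
NEW_RELIC_METADATA_KUBERNETES_NAMESPACE_NAME=default
NEW_RELIC_METADATA_KUBERNETES_POD_NAME=busybox0
NEW_RELIC_METADATA_KUBERNETES_CONTAINER_NAME=busybox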

#7

Hi @sellefson,

Thanks for the reply. I was aware of how that metadata is injected and did validate that new pods that are started on the cluster are getting the metadata injected properly. It is working as expected and the NEW_RELIC_METADATA_KUBERNETES_NODE_NAME is being populated with the correct worker node hostname.

As I stated before, though, the APM agent lists the host as the actual Kubernetes deployment name. So if my app is named AppX, the APM agent reports appx_deployment_<some_uid> as the host the app is running on. I don’t know if this is intended behavior or not, or if it’s even relevant to the discussion.

Is there something else I should check to figure out what’s going on?