I have my application deployed on Kubernetes on, say, 10 nodes, but I want to use only 5 of them (i.e. install the New Relic agent on only 5 nodes and send data from those), how can I achieve this?
Instead of running it as a DaemonSet you could run a startup script that installs the agent on the host OS with similar rights, and this script is only executed on the hosts that you want to monitor.
An alternative option is to create two node groups and restrict the DaemonSet so that it is installed on one node group only.
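A sketch of that alternative, assuming a hypothetical node label `newrelic=enabled` that you apply to the node group you want monitored (the label name and image tag are illustrative, not from this thread):

```yaml
# First label the nodes you want monitored, e.g.:
#   kubectl label nodes <node-name> newrelic=enabled
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: newrelic-infra
spec:
  selector:
    matchLabels:
      name: newrelic-infra
  template:
    metadata:
      labels:
        name: newrelic-infra
    spec:
      nodeSelector:
        newrelic: enabled   # DaemonSet pods land only on labeled nodes
      containers:
      - name: newrelic-infra
        image: newrelic/infrastructure:latest
```

Nodes without the label simply never receive an agent Pod; autoscaled replacement nodes would need the label applied (e.g. via your node group's launch template) to be picked up.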
Hi @frankdornberger, thanks a lot.
No, we are not currently running it as a DaemonSet; it's a Ruby on Rails application and we are using the newrelic_rpm gem in our application.
But since we have autoscaling enabled, nodes keep going up and down, so I wanted to know if there is a way to restrict which nodes are used. I also tried to understand New Relic's networking but could not track the network calls from the Kubernetes pods and nodes. Could you please explain that as well?
Could you share an example for the alternative option that you shared?
If I understand you correctly, you’d like your app replicas to land only on a few nodes instead of all of them. Is that correct? This is possible but can be dangerous… Imagine all of the nodes you have designated for your app are suddenly terminated by autoscaling; what happens to your app then? With frequent scaling events, this is a very likely scenario within a rather short period of time.
What you can do is the following based on this documentation https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ and this blog post https://kubernetes.io/blog/2017/03/advanced-scheduling-in-kubernetes/
You can write an (optional) scheduling preference: as long as it can be honored, a Pod will be scheduled on a node that matches your labels, but if the cluster is unable to satisfy the preference, it will schedule your app on any other available node. If you have a hard requirement instead, e.g. your EC2 instance must have a specific EBS volume attached, then you may want to express it as a hard requirement. My example notes the hard-requirement variant in a comment.
Here’s how you could write such a Deployment file:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  replicas: 3
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        nodeAffinity:
          # For a hard requirement, use
          # requiredDuringSchedulingIgnoredDuringExecution
          # (with nodeSelectorTerms) instead of the preference below.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: schedule
                operator: In
                values:
                - my-app
      containers:
      - name: my-app
        image: my-app:1.2.3-alpha
```
What’s the reason your app should be scheduled on a few nodes only instead of all of them?
Yes, that’s my use-case. Just to elaborate a bit more on it: we are following a microservice architecture, with a few applications running on dedicated nodes.
Even with autoscaling, not all of the nodes will go down, because we are using on-demand instances.
The requirement is that one application currently runs on 9 dedicated nodes (selected via a node selector, say ‘abc-node-group’), and out of these 9 nodes I want to run the New Relic agent on only, say, 5. Is there a configuration I can set in new-relic-config.yaml and in the K8s deployment.yaml?
I think the approach you are suggesting is to have 2 node selectors in the deployment YAML, but I'm not sure how to set this in the new-relic config?
@disha.singhal As far as I understood, you’re using the APM product to monitor your app, not the Infrastructure agent, to monitor your cluster. Is that assumption correct?
If so, I’d assume these nodes that you want to monitor (and the running apps) somehow have a different behaviour than the rest of your apps but they share the same codebase. Is that still correct?
If so, are some of your apps a sort of background worker where you run rake tasks, while the rest handle the app’s actual work? In that case you could write two different Deployments in k8s: one for your background workers and another one for your main workloads. The Deployment you want to monitor grabs the NR license key from the k8s Secrets, and the other doesn’t. The agent would still be deployed in both apps, but in one it would not be able to report. That means you’ll have to remove the license key from your Dockerfile, should you have it there at present.
Such a config can be achieved by adding the NR license key as an environment variable:
```yaml
spec:
  containers:
  - name: my-app
    image: my-app:1.2.3-alpha
    env:
    - name: NEW_RELIC_LICENSE_KEY
      valueFrom:
        secretKeyRef:
          name: my-app-secrets
          key: new-relic-license-key
```
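The Secret referenced above could be created like this (the secret name, key name, and placeholder value are illustrative):

```shell
# Create the Secret holding the license key (value is a placeholder)
kubectl create secret generic my-app-secrets \
  --from-literal=new-relic-license-key=YOUR_LICENSE_KEY

# Verify the key was stored (Secret data is base64-encoded)
kubectl get secret my-app-secrets \
  -o jsonpath='{.data.new-relic-license-key}' | base64 --decode
```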
In case I missed your use-case, can you elaborate a bit more on why you want to monitor a few nodes only (what’s special about them?) and why you don’t want the other nodes to be monitored at all?
As far as I understood, you’re using the APM product to monitor your app, not the Infrastructure agent, to monitor your cluster. Is that assumption correct? - yes, this is correct.
In case I missed your use-case, can you elaborate a bit more on why you want to monitor a few nodes only (what’s special about them?) and why you don’t want the other nodes to be monitored at all? – To give you more clarity on this: it’s a business constraint. Let’s say my application (one single web application) currently requires 10 nodes because of its large resource usage and number of replicas, but I have a license for only 5 hosts; hence I want to run my New Relic agent on only 5 nodes out of 10.
Hope this helps?
@frankdornberger , any update on this?
I tried this approach, but the problem with using the Downward API is that I am not getting the host name, just the DNS entry.
Is there any approach that you can think of?
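For reference, the Downward API can expose the name of the node a Pod was scheduled onto (rather than the Pod’s own DNS identity) via `spec.nodeName`; a minimal sketch of the container `env` section:

```yaml
# Inject the name of the node this Pod is running on
env:
- name: NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
```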
Hey @disha.singhal Please excuse my late reply. I haven’t noticed that you responded once more.
In a Kubernetes environment, the host from an APM perspective is not the actual machine on which your Pod is running, but the Pod itself. So if you have a license for 5 hosts, it doesn’t matter on which physical machines your Pods run; what counts is the number of Pods. If you have doubts about that, please reach out to your account manager, since my cloud-based knowledge may not be in line with on-premise solutions, if that’s your case. But if licensing of hosts is your concern, I think you should be good, as long as you don’t let your app scale to more replicas than you own licenses for.
Hi @rdouglas, yes it helped, but there is still a blocker.
@frankdornberger thanks a lot. I just need one more bit of help: is there an option to selectively initialize the New Relic agent?
Say I have a Ruby application and I want to initialize the New Relic agent based on some condition check. But since the config is a YAML file, I can’t add conditions there, so is there any place/middleware that I can use for this use-case?
@disha.singhal Glad you’re one step further. Can you elaborate on what the problem to solve is? If you already consider the YAML to be the goal, you’re excluding all other options right away. Without further context it will be hard for any other community member to chime in and give guidance.
@frankdornberger, so basically I want to selectively initialize my New Relic gem.
I am not considering YAML as my goal, but from what I know, for a Ruby application New Relic actually looks for a newrelic.yml file to work… Correct me if I am wrong?
@disha.singhal Can you just have two builds, one including NR, the other without?
@frankdornberger Can you give me an example for the same, considering my use-case (my application is deployed on kubernetes and I don’t want to create a separate deployment)?
Is there no option to tweak this setting, i.e. to initialize the New Relic agent based on some env variable?
@disha.singhal Not sure if you can achieve all of that easily in one go, and have it pretty as well. A Deployment in Kubernetes is, in essence, a collection of identical application replicas; I can’t think of a way to have them spin up differently. You can create two Deployments, though, varying by whatever difference you need (e.g. a worker deployment and a cronjob deployment), but then you could also go ahead and use the CronJob kind instead of a regular Pod.
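That said, combining the two-Deployments idea with an environment-variable toggle could avoid separate builds: the same image runs in both Deployments, and only one sets the flag. A minimal sketch, assuming the gem is loaded with `gem "newrelic_rpm", require: false` in the Gemfile so it does not auto-start, and using a `NEW_RELIC_AGENT_ENABLED` variable (check the Ruby agent docs for the exact setting supported by your agent version); the initializer path is hypothetical:

```ruby
# Hypothetical initializer, e.g. config/initializers/new_relic.rb.
# One Deployment sets NEW_RELIC_AGENT_ENABLED=true; the other omits it.

def new_relic_enabled?(env = ENV)
  env["NEW_RELIC_AGENT_ENABLED"] == "true"
end

if new_relic_enabled?
  require "newrelic_rpm"        # agent still reads config/newrelic.yml
  NewRelic::Agent.manual_start  # start reporting explicitly
end
```

In the Deployment manifest this is just one extra `env` entry, so both variants share the same Dockerfile and codebase.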
To be honest, what you seem to be trying to achieve violates the core principles of the technology in use, and unless I got that wrong, I strongly advise against it. I still haven’t figured out why you want to configure your (one?) app in two different variants. If the problem is licensing, you’d better talk to your management to get that sorted out. If the two variants are supposed to do something different, simply treat them as two apps.
Sorry that I couldn’t get you any further so far. Maybe I still haven’t figured out what you are actually trying to achieve.
You’re most welcome, @disha.singhal If I can be of further help, please let me know. Else you may want to mark that topic as resolved by marking the most helpful answer as the “solution” that helped you most.