I just installed New Relic on one of our test Kubernetes clusters and it’s awesome as always (I’ve used APM before). But when I checked the usage stats yesterday, I noticed we had blown through our quota and then some in just one day (565GB, in fact). According to the usage dashboard, most of it (70%) was metrics.
What would be a “normal” volume of data ingested from a Kubernetes cluster? I guess we have quite a few pods (330), but 140GB per day seems excessive? We are running on AKS on Azure.
Is there any way for me to diagnose this further?
Each of the K8s(Volume|Container|Pod|Replica|Deployment|Endpoint)Sample event types comes in at about 50k entries over a 30-minute window, so they seem to be the biggest contributors at least.
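For reference, this is roughly how I counted those samples: a quick Python sketch against the NerdGraph NRQL endpoint. The API key and account ID are placeholders, and the exact event-type names (e.g. K8sReplicasetSample) are my best guess from the data explorer, so adjust as needed:

```python
import json
import requests

# NerdGraph GraphQL endpoint; the key and account ID below are placeholders.
NERDGRAPH_URL = "https://api.newrelic.com/graphql"
API_KEY = "NRAK-XXXXXXXXXXXXXXXX"
ACCOUNT_ID = 1234567

# Count events per K8s sample type over a 30-minute window and
# estimate how many GB of ingest each type accounts for.
NRQL = (
    "SELECT count(*), bytecountestimate() / 1000000000 AS 'GB estimate' "
    "FROM K8sVolumeSample, K8sContainerSample, K8sPodSample, "
    "K8sReplicasetSample, K8sDeploymentSample, K8sEndpointSample "
    "FACET eventType() SINCE 30 minutes ago"
)

# Embed the NRQL as a quoted GraphQL string via json.dumps.
graphql = """
{
  actor {
    account(id: %d) {
      nrql(query: %s) {
        results
      }
    }
  }
}
""" % (ACCOUNT_ID, json.dumps(NRQL))

resp = requests.post(
    NERDGRAPH_URL,
    headers={"API-Key": API_KEY},
    json={"query": graphql},
)
resp.raise_for_status()
for row in resp.json()["data"]["actor"]["account"]["nrql"]["results"]:
    print(row)
```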
Is there a sensible default way to limit metrics for a Kubernetes cluster? Does it make sense to only accept metrics from the “kube-system” namespace, as in the sketch below? Would that still give the basic functionality that the Kubernetes dashboard and navigator provide?
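If namespace filtering is the right direction, I was considering a NRQL drop rule created through NerdGraph, something like the sketch below. I haven’t run it; the `nrqlDropRulesCreate` mutation is what the docs point at, and `namespaceName` is my assumption for the attribute on these sample types, so please correct me if that’s wrong:

```python
import json
import requests

NERDGRAPH_URL = "https://api.newrelic.com/graphql"
API_KEY = "NRAK-XXXXXXXXXXXXXXXX"  # placeholder user key
ACCOUNT_ID = 1234567               # placeholder

# Drop container samples from every namespace except kube-system.
# 'namespaceName' is my guess at the attribute name; verify it in the
# data explorer first, since dropped data cannot be recovered.
DROP_NRQL = (
    "SELECT * FROM K8sContainerSample "
    "WHERE namespaceName != 'kube-system'"
)

mutation = """
mutation($accountId: Int!, $nrql: String!) {
  nrqlDropRulesCreate(accountId: $accountId, rules: [{
    action: DROP_DATA
    nrql: $nrql
    description: "Keep only kube-system container samples"
  }]) {
    successes { id }
    failures { error { reason description } }
  }
}
"""

resp = requests.post(
    NERDGRAPH_URL,
    headers={"API-Key": API_KEY},
    json={"query": mutation,
          "variables": {"accountId": ACCOUNT_ID, "nrql": DROP_NRQL}},
)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))
```

I assume I’d need a similar rule for each of the other K8s*Sample types if this is the way to go.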
Regards,
Anders