Hi, we recently noticed a spike in data ingest in New Relic starting in mid-March, and we want to drill down into which services are responsible for this spike. Is there any way to calculate the data ingested per service? I was not able to find any documentation for this in the NRQL docs. It would greatly help us track down and optimise the faulty services that lead to high data ingestion.
I hope you are well, thanks for reaching out!
I see this is your first post here in the community, congrats!
Great question, it can be very useful to understand your data ingest. We have a great doc, Query and alert on usage data | New Relic Documentation, which can help guide you through better understanding and measuring your data ingest. I would advise paying special attention to the section "Data ingest usage queries".
I hope this was helpful for you, please feel free to reach out and let us know! If this did prove helpful, you can also mark the Solution option at the bottom of this post, as it serves as an indicator of success for other community members.
Should you have any questions, please do reach out!
Thanks for the reply, hope you are doing great.
I checked the document you shared and went through the queries listed under the "Data ingest usage queries" section.
The queries in that section seem to return data ingested broken down by metric type and data platform, but what I really wanted was to drill down into the data ingested per service.
For example: if we have integrated New Relic into 5 of our microservices, I want to know the contribution of each microservice to the total data ingested, so that I can drill down at the microservice level, figure out what led to the spike in data ingested by that service, and fix it.
Hey @engla, there are a couple of things you can do.
Firstly, check out the Data Management Hub, where you can get a breakdown per service.
Alternatively, you can run some NRQL queries.
First, understand which data type is contributing most from your APM apps:
SELECT sum(GigabytesIngested) as 'GB Ingested from APM' FROM NrConsumption WHERE usageMetric in ('MetricsBytes','ApmEventsBytes','TracingBytes') FACET usageMetric SINCE this month
Then use something like this for metrics:
SELECT bytecountestimate()/10e8 as 'APM Metrics GB Estimate' FROM Metric WHERE agent.type = 'apm' SINCE this month FACET appName LIMIT MAX
Or this for APM events:
SELECT bytecountestimate()/10e8 as 'APM GB Estimate' FROM Transaction, TransactionError SINCE this month FACET appName LIMIT MAX
Or this for traces:
SELECT bytecountestimate()/10e8 as 'Tracing GB Estimate' FROM Span, ErrorTrace, SqlTrace SINCE 8 days ago FACET appName
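If you want a single per-service view rather than separate queries, you could also combine the event and trace data types into one query (this is just a sketch built from the queries above, not an official recipe, so adjust the event types to whatever your account actually ingests):

SELECT bytecountestimate()/10e8 as 'APM + Tracing GB Estimate' FROM Transaction, TransactionError, Span, ErrorTrace, SqlTrace SINCE this month FACET appName LIMIT MAX

Note that bytecountestimate() is an estimate of the raw bytes, and dividing by 10e8 (1e9) converts it to decimal gigabytes, so the numbers will be close to, but not exactly, what NrConsumption reports for billing.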
Thanks for your support, we were able to figure out the reason for the increase in data usage. It turns out the last release of the New Relic agent enabled Distributed Tracing by default, as well as increasing the distributed tracing span reservoir size (Reference). This has added over 3.5 TB to our monthly data usage.
Humble feedback: we use the NR agent in tens of services, and it is hard to keep track of every release New Relic makes, so you should probably let users decide which features they want to use and to what extent.
Could you help me raise a support ticket with the accounts team regarding the additional 3.5 TB of data usage?
Thank you for the response here, it's greatly appreciated.
I have created a case with the accounts team as requested. Note that they will reach out via email with their findings and updates.
Should you have additional questions please do reach out.