[Python] Sampling transaction ingestion

I’m looking for some very clear instruction on how to sample transaction ingestion in APM (using the ini file in a Python project). The docs used to be great and clear, and they have become anything but, with self-referential loops of example-less content.

Hello @stormstrike. Welcome to the Explorers Hub.

While I am not a support engineer and unable to provide you with a clear example on this, I am looping in a support engineer to assist. I would also like to leave you with some documentation surrounding sampling: https://docs.newrelic.com/docs/data-apis/understand-data/event-data/new-relic-event-limits-sampling/

Hi @stormstrike

If you would like to control the sampling of transaction data using the newrelic.ini file, I believe you’ll want to use event_harvest_config.harvest_limits.analytic_event_data. Analytics event data refers to Transaction event data. For more information, please see Event Harvest Configuration.
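For example, a minimal sketch of the relevant part of newrelic.ini might look like this (the limit of 1000 is illustrative only; I believe the Python agent accepts the dotted setting name directly in the [newrelic] section, but please verify against the agent configuration docs):

```ini
[newrelic]
license_key = YOUR_LICENSE_KEY
app_name = My Python App

# Cap the number of Transaction events sent per harvest cycle.
# The agent samples down to this limit; lower values mean less event ingest.
event_harvest_config.harvest_limits.analytic_event_data = 1000
```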

I understand that the configuration setting analytic_event_data might not be clear and have brought this up with our teams.

Hope this helps!


@ntierney - thank you for the response. I’ve tried that setting, both in the ini file and via the equivalent environment variable, to no avail. Perhaps I am not describing what I am trying to lower correctly:


Metric data is what’s currently and consistently pushing us over our quota. Any help sampling or limiting that would be appreciated.

Hi, @stormstrike: I am confused (by New Relic, not you :slight_smile:): I tried to use these queries to figure out what is generating so much data. This query shows that you are sending about 11 GB of metric data per day:
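(The original query wasn’t captured in the screenshot; a query along these lines estimates total daily metric ingest, using NRQL’s bytecountestimate function:)

```sql
SELECT bytecountestimate() / 1000000000 AS 'GB' FROM Metric SINCE 1 day ago
```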

But when I break them down by metric name, they don’t total close to 11 GB:
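(Again, a sketch of the shape of that breakdown query, since the original isn’t shown:)

```sql
SELECT bytecountestimate() / 1000000000 AS 'GB'
FROM Metric FACET metricName
SINCE 1 day ago LIMIT 20
```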

Also, I have no idea what newrelic.internal.usage is. Sorry that is not much help; I wanted to provide some additional detail for Support.


Aha, found it!

Now, how can you reduce it? I don’t know.


@philweber you are awesome for digging in - I appreciate it. Indeed how to reduce it is now the question hah!

This query will show which parts of your applications are generating the most timeslice data:

SELECT count(newrelic.timeslice.value) AS 'Timeslices' 
FROM Metric WHERE appName LIKE '%' 
  AND newrelic.timeslice.value IS NOT NULL 
FACET metricTimesliceName 
SINCE 1 day ago 

Any update on my earlier question about lowering metric ingestion? This is an issue that is likely to push us to Datadog, as they have dead-simple sampling controls.

Curious if you know this - can I just disable metric ingestion all together?

No, the whole point of APM agents is to gather metrics. If you want to turn off metrics, you may simply uninstall the agent.


I see. Thank you. I feel like I’m at a complete loss on how to lower my metric ingestion: I’m unable to control what our bill ends up at, and we’re basically being forced off New Relic. Shame.

Hey @stormstrike - thanks so much for working with us on this one. While you can’t turn off ALL metrics, you can drop unwanted data. I wanted to make sure you knew this was possible, and share the documentation that might help you with that. Please let me know if that does not help.
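For anyone following along: drop rules are created through New Relic’s NerdGraph API. A sketch of the mutation might look like this (the account ID and the NRQL filter are placeholders; double-check the current schema in the NerdGraph docs, and note that drop rules may not apply to every data type, such as APM timeslice metrics):

```graphql
mutation {
  nrqlDropRulesCreate(
    accountId: 1234567  # placeholder account ID
    rules: [
      {
        action: DROP_DATA
        # Placeholder filter: matching data is dropped before it is stored.
        nrql: "SELECT * FROM Metric WHERE metricName = 'some.noisy.metric'"
        description: "Drop a high-volume metric to reduce ingest"
      }
    ]
  ) {
    successes { id }
    failures { error { reason description } }
  }
}
```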

Thanks for your response. It feels like Metric normalization rules (https://docs.newrelic.com/docs/new-relic-solutions/new-relic-one/ui-data/metric-normalization-rules/) might be what I want, but I’m still unclear if this will lower my ingestion and associated costs. I’ve set up a few rules for high bandwidth routes and will check back in a couple of hours to see if ingestion is lower.

Hey @stormstrike, I hope you are well.

I just wanted to reach out to see if you had any luck lowering your ingest. Please let us know if we can help with anything.