Alert Policies not working

Hiya, I can’t seem to get these alerts to actually trigger within the policy. The condition, policy, and channel all seem to be set up correctly. This is a secondary policy to our primary policy.



Minimum total latency (aggregation window x offset) must be 3 minutes. Yours is currently only at 2.

Hi Chris!

Thanks for writing in to New Relic!

Thanks as well for providing those links. According to our AWS Lambda monitoring integration documentation, the aggregation window and evaluation offset might be too low. Depending on your configuration, the default New Relic polling interval for Lambda is 5 minutes and the default data interval from Amazon is 1 minute. I believe your 30-second aggregation window might be throwing off the alert.

I would try increasing those values so that your total supported latency (aggregation window × evaluation offset) covers that time span, or adjusting your polling frequency.
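For what it’s worth, the arithmetic here can be sketched like this. Only the 30-second window comes from the thread; the offset of 4 windows is an assumption chosen to illustrate a 2-minute total, matching the error message.

```python
# Sketch of the alert-latency arithmetic: total supported latency is the
# aggregation window multiplied by the evaluation offset. The offset of 4
# windows below is an illustrative assumption, not taken from the account.

def total_supported_latency(aggregation_window_s: float, evaluation_offset: int) -> float:
    """Total data latency the condition can tolerate, in seconds."""
    return aggregation_window_s * evaluation_offset

# A 30-second window with an offset of 4 windows tolerates only 2 minutes:
current = total_supported_latency(30, 4)
print(current / 60)  # → 2.0 (minutes)

# Widening the window to 60 s (keeping offset 3) reaches the 3-minute minimum:
fixed = total_supported_latency(60, 3)
print(fixed / 60)  # → 3.0 (minutes)
```

Either raising the window or the offset works; the product is what the evaluation system cares about.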

I hope this helps!

Thanks for the reply @dlesniak! I’ve been trying out different settings on one of the conditions, but still no dice:

The Lambda function in question, “HelloWorld”, invokes itself every 5 minutes; we’ve set it to fail for at least 25% of its runs, but we haven’t yet been able to get New Relic to create alerts. We want an alert to open upon failure. Is there a better way to go about this?

Currently the evaluation offset is 1 min.

Looking at when the data arrives on the backend, it is coming in around 400-414 seconds after the current window, which amounts to about 6.9 minutes of latency.

If you change the evaluation offset to 8 minutes (the 6.9 minutes of latency plus the current 1-minute offset, rounded up), you should see incidents open as expected.
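A quick sketch of where the 8-minute figure comes from. The 1-minute aggregation window is an assumption, inferred from the evaluation offset being stated in minutes.

```python
import math

# Derive the recommended evaluation offset from the observed arrival delay.
AGGREGATION_WINDOW_S = 60      # 1-minute windows (assumption)
CURRENT_OFFSET_WINDOWS = 1     # "Currently the evaluation offset is 1 min."
observed_delay_s = 414         # worst-case arrival delay seen on the backend

# 414 s is ~6.9 windows of delay; add the current offset and round up.
delay_in_windows = observed_delay_s / AGGREGATION_WINDOW_S
recommended_offset = math.ceil(delay_in_windows + CURRENT_OFFSET_WINDOWS)
print(recommended_offset)  # → 8
```

Rounding up matters: an offset of 7 would still leave the evaluation window slightly ahead of the slowest-arriving data.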

New Relic is a consumer of Cloud Integrations data types – that is, we reach out via API and receive data back from a third party provider. These data types are thus prone to unavoidable data latency, since we have no control over how quickly we receive the data to ingest it. This article goes into some detail about this “data latency” and how it can affect NRQL alert conditions, especially when they query Cloud Integrations data types.

We usually recommend setting the evaluation offset to 15 minutes in these cases (as mentioned in the article I linked above and in this documentation), but I don’t believe you need it to be that high, since your data is fairly consistent in its latency.

Making this change will not result in a retroactive violation being opened. However, allowing for the data latency that is inherent with cloud data will let the alert evaluation system see all the incoming data and properly open violations moving forward.

I hope this helps.

