Synthetics Monitor Check Failure no longer triggering Email and Slack channel sends

### Alerts Question Template

  • Please describe what you are seeing:

We have an extremely simple setup - one Alert policy with two channels, email and Slack.
We have several Synthetics monitors that simply ping certain endpoints.
These have worked as expected in the past and we have not made any changes to our New Relic setup. However, two of our monitors recorded “Check Failures” and opened incidents today, and we received no emails or Slack messages.

  • How does this differ from what you were expecting to see?
    We expected email and Slack notifications for our Synthetics monitor check failures.

  • If you aren’t seeing an expected alert or expected data, please provide a link to the incident or violation (policy, condition, data app, etc.)

https://alerts.newrelic.com/accounts/864945/incidents/79242226/violations?id=495578073

Helpful Resources:

  • Relic Solution: The key to consistent alert notifications
  • Troubleshooting downtime document

Hey @lukeburden! Looks like a couple of things are combining here to cause this.

First, the Incident Preference setting on the policy for the incident you linked is By Policy, meaning that New Relic will create a new incident for the first violation in that policy, but if any other violations occur while the first incident is still unresolved, New Relic won’t send further notifications.

Second, there hasn’t been a point where this incident had an opportunity to close. The culprit is a Synthetics monitor violation that didn’t close until 10:40 AM.

By the time that monitor went back to normal at 10:40 AM, another violation had opened at 10:01 AM, and that one is still open.

So from New Relic Alerts’ perspective, this is all part of the same incident: because of the overlap, there hasn’t been a single moment since the incident opened this morning when the policy had no violation in progress.
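To make that roll-up behavior concrete, here’s a rough sketch of how overlapping violations collapse into one incident under By Policy. This is a simplification for illustration only, not New Relic’s actual implementation, and the 9:30 AM open time for the first violation is just an assumed placeholder:

```python
from datetime import datetime

# Hypothetical "By Policy" roll-up: each violation is (open_time, close_time),
# with None meaning the violation is still open. Timestamps are illustrative.
violations = [
    (datetime(2024, 1, 1, 9, 30), datetime(2024, 1, 1, 10, 40)),  # first monitor failure (assumed open time)
    (datetime(2024, 1, 1, 10, 1), None),                          # second failure, still open
]

# Turn the violations into a time-ordered stream of open (+1) / close (-1) events.
events = []
for opened, closed in violations:
    events.append((opened, +1))
    if closed is not None:
        events.append((closed, -1))
events.sort()

notifications = 0
open_violations = 0
incident_open = False

for when, delta in events:
    open_violations += delta
    if delta == +1 and not incident_open:
        # Only the violation that opens the incident triggers a notification.
        incident_open = True
        notifications += 1
    if open_violations == 0:
        # The incident can only close once no violations remain open.
        incident_open = False

print(notifications)  # -> 1: the 10:01 AM violation never produces a second email
```

The second violation never triggers a notification because it opens while the incident is still unresolved, and the incident itself can’t close while any violation remains open.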

Getting more notifications in these kinds of situations is an easy fix, though! Now we loop back to the Incident Preference setting: if we use By Condition instead of By Policy, New Relic will open an incident and send a notification for each individual condition inside that policy. If Condition 1 violates and then Condition 2 violates, you’ll get an email for both. If, during that time, a different entity monitored by Condition 1 also violates (think one condition monitoring multiple APM apps), you would not get a third notification; that extra granularity is what By Condition and Entity adds.
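If you’d rather flip that setting through the API instead of the UI, something along these lines should work against the legacy REST API v2, assuming that API is enabled for your account and your key has permission to modify alert policies. The key, policy ID, and policy name below are placeholders; in the API, By Policy maps to PER_POLICY, By Condition to PER_CONDITION, and By Condition and Entity to PER_CONDITION_AND_TARGET:

```python
import requests

# Placeholders; substitute your own values.
API_KEY = "YOUR_NEW_RELIC_API_KEY"   # key allowed to modify alert policies
POLICY_ID = 123456                   # the policy ID from the policy's URL
POLICY_NAME = "Synthetics uptime"    # the policy's existing name (the update call expects it)

# Switch the policy's incident preference to "By Condition" (PER_CONDITION).
resp = requests.put(
    f"https://api.newrelic.com/v2/alerts_policies/{POLICY_ID}.json",
    headers={"X-Api-Key": API_KEY, "Content-Type": "application/json"},
    json={"policy": {"name": POLICY_NAME, "incident_preference": "PER_CONDITION"}},
)
resp.raise_for_status()
print(resp.json()["policy"]["incident_preference"])  # expected to echo back "PER_CONDITION"
```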

I recommend taking a look through the Incident Preference guide linked here to read about the options available and the best practices for choosing the one that fits your situation. Let us know if you have any questions!


That’s a helpful breakdown, thanks @sschneider. I’ll take a look at the incident preferences and probably switch to By Condition, which matches the mental model I had built up when setting all this up.


Actually, By Condition and Entity matches what I had expected. Will see how that goes - thanks again.


Happy to help @lukeburden! That setting can get a bit noisy and could lead to alert fatigue for your team, so I really recommend reading our How To Avoid Alert Fatigue blog post and the Effective Alerting In Practice guide when you have time, to make sure everything is configured in a way that works best for you and your team.

You’re right - “By Condition and Entity” was too noisy. “By Condition” seems to suit our needs, though.