Alerts have not triggered since a certain point

Hello.

We recently noticed that we did not receive any Slack notifications when the critical threshold for the instance count of our production AWS ECS cluster was violated.

So we investigated the incident dashboard and found that none of the alerts in our alert policy have been triggered since 2020/11/26.

The APM and Infrastructure (AWS integration) dashboards appear to show the correct metrics, so we do not think this is an agent or integration issue.

Is there any workaround?

Thank you.

@yasuno Sorry you have been waiting awhile for a response from our community. I’m going to bring this back to the attention of our support team. Thanks for your patience!

Neal Mc

Hi there @yasuno -

The first thing I would do is to make sure your incident preferences are set correctly. This is the most common reason people do not receive alert notifications.

If that does not seem to be the issue, then can you please provide a link to the policy for which you are not receiving notifications? That way I can take a closer look. You’ll find an icon to provide a link in the upper right-hand corner of your screen.
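In the meantime, one way to tell a notification problem apart from a condition-evaluation problem is to list the violations recorded on your account over the gap. Here is a minimal sketch using the REST API v2 violations endpoint; the API key and date range are placeholders you would fill in yourself:

```python
import requests

API_KEY = "YOUR_REST_API_KEY"  # placeholder: a New Relic REST API key

# List violations recorded between two dates (REST API v2).
resp = requests.get(
    "https://api.newrelic.com/v2/alerts_violations.json",
    headers={"X-Api-Key": API_KEY},
    params={
        "start_date": "2020-11-26T00:00:00+09:00",  # placeholder window
        "end_date": "2020-12-30T00:00:00+09:00",
        "only_open": "false",
    },
)
resp.raise_for_status()

# Print one line per violation; fall back gracefully if a field is absent.
for v in resp.json().get("violations", []):
    print(v.get("opened_at"), v.get("policy_name"), v.get("condition_name"), v.get("label"))
```

If violations show up there but no Slack message arrived, the notification channel is the place to look; if nothing shows up at all, the condition never opened a violation in the first place.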

@hross
Thank you for the reply.
We think our incident preferences are set properly, and the alert condition definitely occurred.

Here is our alert policy.
We set an alert on a specific ECS service. Its condition is a task count above 3 for at least 6 minutes.
https://one.nr/0JBQrqbZEjZ

And these are the cluster’s metrics. They show that the task count is 6 for 3 hours (18:00-21:00 JST) every day, but no alerts have been triggered during this period.
https://chart-embed.service.newrelic.com/herald/8847e1f9-12e6-4535-a351-2697a5d63879?height=400px&timepicker=true
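For reference, this is roughly how we double-checked the underlying task-count data with a NRQL query through NerdGraph. It is only a sketch: the account ID and API key are placeholders, and the event/attribute names (ComputeSample, provider.runningCount, provider = 'EcsService') are assumptions that should be verified in the data explorer for the AWS integration.

```python
import json
import requests

API_KEY = "YOUR_USER_API_KEY"   # placeholder: a New Relic user API key
ACCOUNT_ID = 1234567            # placeholder: your account ID

# Assumed event type and attribute names for the AWS ECS integration data;
# check the data explorer and adjust the query to match your account.
nrql = (
    "SELECT max(`provider.runningCount`) FROM ComputeSample "
    "WHERE provider = 'EcsService' "
    "SINCE 1 day ago TIMESERIES 6 minutes"
)

graphql = """
query ($accountId: Int!, $nrql: Nrql!) {
  actor {
    account(id: $accountId) {
      nrql(query: $nrql) {
        results
      }
    }
  }
}
"""

resp = requests.post(
    "https://api.newrelic.com/graphql",
    headers={"API-Key": API_KEY, "Content-Type": "application/json"},
    json={"query": graphql, "variables": {"accountId": ACCOUNT_ID, "nrql": nrql}},
)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))
```

The 6-minute buckets stay above 3 for the whole 18:00-21:00 JST window, so the data itself supports a violation.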

Hi @yasuno - I took a look at the Infrastructure condition you referenced and it does look like it should have opened an alert violation. I went ahead and opened a support ticket for this so we can have Engineering investigate. Someone will be in touch with you soon.

Happy New Year!