Why do the Total/Read/Write Utilization % values change depending on the time period being viewed?

I’m trying to monitor storage to determine whether we are IOPS constrained (or approaching it) and need to look at ways to improve storage performance.

When looking at infrastructure > hosts > storage, there are total/read/write utilization percentages that look like they should help with this. However, the values change as the timespan being viewed changes, so I’m not sure the chart is showing what I think it is. The magnitude seems meaningless if the value is 80% at a point in time in a 24-hour view but 50% at the same point in time in a 7-day view. Am I misunderstanding this? Is there a way to find what I’m looking for, or do I need a completely different solution?

Hi, @robert.carrington: It sounds like you are experiencing metric rollup. When you view a longer time range, the raw one-minute samples are averaged into coarser intervals, so short peaks get smoothed out and the same point in time can show a lower value.

The only workaround I can suggest is to look at data in 6-hour windows (or less) for the past 30 days, while the one-minute data is still available.
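To picture what the rollup does to the numbers, here is a rough Python sketch. The bucket widths (5 minutes for a 24-hour view, 60 minutes for a 7-day view) are assumptions chosen purely for illustration, not the product's actual rollup intervals:

```python
import random

random.seed(0)

# Simulate 24 hours of 1-minute utilization samples: a ~40% baseline
# with a 15-minute burst up to ~90%.
samples = [random.uniform(30, 50) for _ in range(24 * 60)]
for minute in range(600, 615):
    samples[minute] = random.uniform(85, 95)

def rollup(data, bucket_minutes):
    """Average the raw samples into buckets of the given width in minutes."""
    return [
        sum(data[i:i + bucket_minutes]) / len(data[i:i + bucket_minutes])
        for i in range(0, len(data), bucket_minutes)
    ]

# A 24-hour view might plot ~5-minute buckets and a 7-day view ~1-hour
# buckets (assumed widths, for illustration only).
print("peak at 1-minute resolution:", round(max(samples), 1))
print("peak at 5-minute rollup:    ", round(max(rollup(samples, 5)), 1))
print("peak at 60-minute rollup:   ", round(max(rollup(samples, 60)), 1))
```

The 1-minute data shows the burst near 95%, the 5-minute rollup still shows roughly 90%, but the 60-minute rollup averages the burst together with 45 quiet minutes and reports only about 50% — which matches the 80%-vs-50% discrepancy you described.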


Thanks, I guess that is it.

Can I also get some clarification on the meaning of utilization %? Does anything below 100% mean the storage is not maxed out and there is still capacity for more I/O, or does it mean something else?

@robert.carrington That’s basically correct. The exact technical definitions for the utilization percent attributes can be found here:
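For reference, a common definition of disk utilization (the one used by tools like iostat's %util) is the fraction of wall-clock time the device spent busy servicing I/O. Whether that exactly matches the product's attribute is covered by the docs linked above; the sketch below just shows the general calculation, assuming a Linux host and a device named sda:

```python
import time

def read_io_time_ms(device="sda"):
    """Return total milliseconds the device has spent doing I/O
    (column 13 of /proc/diskstats)."""
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return int(fields[12])
    raise ValueError(f"device {device!r} not found")

interval = 10  # seconds to sample over
busy_before = read_io_time_ms()
time.sleep(interval)
busy_after = read_io_time_ms()

# Fraction of the interval the device was busy servicing at least one request.
utilization_pct = (busy_after - busy_before) / (interval * 1000) * 100
print(f"utilization over the last {interval}s: {utilization_pct:.1f}%")
```

Under that definition, a value below 100% means the device had some idle time during the interval, i.e. it was not busy the entire time.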