Disk I/O and virtfs on CentOS 7 and WHM

Please paste the permalink to the page in question below:

https://rpm.newrelic.com/accounts/650096/servers/19393297?tw[end]=1461019347&tw[start]=1460932947

Then go to the “Disks” section… I can’t load it to get the permalink…

Please share your agent and other relevant versions below:

CentOS Linux release 7.2.1511 (Core)
Linux 3.10.0-327.4.5.el7.x86_64 x86_64
New Relic agent 2.3.0.129
WHM 11.54

Please share your question/describe your issue below. Include any screenshots that may help us understand your question:

On one of my servers (out of 4), when I go to the Disks section under Server Monitor I get a very long list of partitions, one per virtfs… WHM creates a virtfs for each virtual domain I install on these servers, so not only does it take forever for the Disks section to load - when it loads at all - but it also always reports very low I/O usage, which seems too good to be true… Is there a way for me to configure which partitions are taken into account for this analysis?
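For reference, all of the extra entries sit under cPanel's virtfs tree, so something along these lines (just a sketch - /home/virtfs is cPanel's default location for the jails) shows what Server Monitor is picking up:

grep virtfs /proc/mounts | awk '{print $2}' | head

Every account adds several of these mounts, which is why the list grows so quickly.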

Normal server:

Server with problems:

@agudiel wow, I have to say that’s the biggest disk page I’ve ever seen in New Relic.

I wasn’t aware New Relic Server Monitor would even detect VirtFS.

I don’t have a WHM install to hand to take a look myself, but my understanding is that VirtFS is essentially a jailed SSH shell for your customers, so it obviously creates mounts. New Relic LSM should only detect mounts of the following types:

btrfs, ext3/ext4, gfs, hpfs, jfs, ocfs, psfs, reiserfs, vzfs, xfs, zfs, cvfs, msdos, minix, vxfs, vfat.
Additionally, the network filesystems we support include: nfs2, nfs3.
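If you want to double-check what type those VirtFS mounts are actually reporting on your box, something like this (just a sketch, nothing New Relic specific) would show it:

grep virtfs /proc/mounts | awk '{print $3}' | sort | uniq -c

If they come back as xfs or ext4, that would explain why LSM treats each one as a disk.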

Can you share what the VirtFS file systems look like in /etc/mtab or /etc/fstab? I’m intrigued. If you don’t want to share this publicly, just tell me and I can bring this into a private support ticket.


Hi @acuffe.

Sorry I took so long but I was on a business trip. I believe you are right about the jailed SSH, even though I have disabled SSH for my customers as I manage everything.

I checked both files.

This is fstab:

# /etc/fstab
# Created by anaconda on Tue Feb 9 22:08:39 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/VolGroup-lv_root / xfs defaults,uquota 0 0
UUID=eeda6b6f-2675-49a8-b0bd-fb7d2c10f279 /boot xfs defaults 0 0
/dev/mapper/VolGroup-lv_swap swap swap defaults 0 0
/usr/tmpDSK /tmp ext3 defaults,noauto 0 0


mtab is quite large… I’ll send that in private.
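For what it’s worth, a quick count of the virtfs entries (illustrative command only) gives an idea of the scale:

grep -c virtfs /etc/mtab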

I can’t find an option to send you a private message… How do I do that?

Hey Agudiel

I’ll create a private support ticket for you now. Watch your inbox and we’ll have a chat about it, see if we can find out what’s happening, and then I can report back here with our findings.

Hi @acuffe,

Thank you for your help.

When I click on the link you sent me I get a “Ticket not found” page…

That’s odd. You should be able to access the ticket via support.newrelic.com once you’re logged into your New Relic account, or you can reply via email - that will get to me too.

As always when we move to a private ticket, we like to return with the answer we reached after investigating the issue with the original poster.

In this scenario it looks to be a combination of CentOS 7 + cPanel/WHM, which creates VirtFS SSH jails that show up as XFS mounts. This led our Linux Server Monitor to, correctly, detect each one as a disk, because, well, WHM mounted it as a disk.

This resulted in hundreds of disks and an unusable page for this particular customer. Both WHM/cPanel and New Relic Linux Server Monitor are actually working as expected in this scenario.
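For anyone who hits the same thing, the VirtFS entries in /etc/mtab look roughly like this (illustrative lines only, with a made-up account name):

/dev/mapper/VolGroup-lv_root /home/virtfs/exampleuser/usr xfs rw 0 0
/dev/mapper/VolGroup-lv_root /home/virtfs/exampleuser/var xfs rw 0 0

Because the filesystem type column says xfs, which is on the supported list above, the agent picks up each jail mount as a separate disk.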

As such we have filed a feature request with our product managers suggesting a method to ignore disks, which in this scenario would help control the volume of disks showing on the disk page.

This only affects customers using WHM + CentOS 7 with hundreds of accounts on SSH jailed shells, which results in hundreds of disks and leads to this issue. If you have this issue, contact New Relic support and we will investigate a potential solution for you.

Hi, luckily I found this topic. I just installed New Relic on CentOS 7 with WHM and had exactly the same problem as above: many virtfs disks and a page that takes ages to load. Could you please share any solution with me? I’m in the process of filling the server with accounts from another server (which was CentOS 6.5) and I’m just puzzled by this new finding. Thanks in advance.

Hi @davide_giangiordano -

I think @acuffe summed it up in the post above - this is a feature request. Although we did file the feature request, it has not been implemented. Additionally, as the Servers product will be reaching end of life, we are not currently doing active development of new features.

Sorry I can’t give you better news, but I hope this at least clears things up.


What a pity, New Relic Servers has been a big help.

Thanks anyway.