SSD HW Raid 10 or ZFS Raid 10 SSD
Posted by SAHostKing, 03-13-2016, 03:51 PM
Hi Guys
We currently have a few VPS servers on Proxmox with HW RAID 10 enterprise SATA disks (6 of them).
We want to move them to the same server specs but use SSDs instead.
Do you think ZFS is the better option with SSDs for cPanel VPS servers hosting around 500 accounts each?
Or should we stick to good ol' HW RAID cards?
Also, should we stick with what we have been using for years, RAID 10?
Just want some advice and ideas.
Posted by CloudToko, 03-13-2016, 04:21 PM
Our experience with ZFS is with OpenStack, so I can't speak to how you would implement it with cPanel.
One pool for everything, or multiple pools? Either way, you still need to back up your pools.
I would stick with RAID 10 + cPanel offsite/remote backups.
Posted by SAHostKing, 03-19-2016, 05:39 AM
I see a lot of people on the Proxmox forums recommending ZFS.
Currently we use HW RAID 10 with BBU write-back cache, 64 GB ECC RAM, and an Intel Xeon E5-2620 2.4 GHz CPU with 12 cores.
6 x 2 TB Western Digital enterprise SATA disks in RAID 10 sit on the HW RAID controller.
Now we have 3 cPanel KVM VMs on here, each set to 8 vCPUs and 12 GB RAM, but we're noticing some disk iowait spikes from time to time. We host around 500 accounts on each, with 800 or so sites per server, and it generally runs fine.
But we're trying to improve server response times, as at times it gets a bit slow and UptimeRobot and Nagios show some slowness occurring.
When we check, iowait is around 40 or so on average, which is not good as it is usually under 1. When I look deeper, it happens when customers run backups via cPanel, or restore or unzip large files. When iowait stays like that for a while, the load climbs from its usual 0.8-2.5 up to 12 or just under 20.
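To be clear about what I mean by iowait being around 40: that's the %wa figure, i.e. the share of CPU time spent waiting on disk. A rough sketch of how that number comes out of /proc/stat (standard Linux field layout; the 10-second interval is just an example I picked):

```python
# Sketch: compute %iowait over an interval from /proc/stat,
# the same counter that top/iostat report as %wa.
import time

def read_cpu_times():
    # First line of /proc/stat: "cpu  user nice system idle iowait irq softirq steal ..."
    with open("/proc/stat") as f:
        return [int(x) for x in f.readline().split()[1:]]

def iowait_percent(interval=10):
    start = read_cpu_times()
    time.sleep(interval)
    end = read_cpu_times()
    deltas = [e - s for e, s in zip(end, start)]
    total = sum(deltas)
    return 100.0 * deltas[4] / total if total else 0.0  # index 4 = iowait

if __name__ == "__main__":
    print("iowait over 10s: %.1f%%" % iowait_percent(10))
```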
Will SSDs in ZFS RAID 10 help with this? We are seriously considering moving over to it, as we have many servers to play with, but we don't want to waste time migrating servers across if it won't help that much.
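From what I understand, ZFS doesn't have a "RAID 10" mode as such; the equivalent is a pool striped across mirror vdevs. A quick sketch of the layout we'd be going for (the device names are hypothetical placeholders, and it only prints the zpool command rather than running anything):

```python
# Sketch: build the layout for a ZFS "RAID 10" equivalent - a pool striped
# across mirror vdevs. Device names below are hypothetical placeholders.
disks = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf", "/dev/sdg"]

cmd = ["zpool", "create", "tank"]
for i in range(0, len(disks), 2):
    cmd += ["mirror", disks[i], disks[i + 1]]  # each pair becomes one mirror vdev

# Data is then striped across the three mirrors, like RAID 10 across 6 disks.
print(" ".join(cmd))
```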
Just want to get everyone's opinion on this and hear what they do.
Also, I notice a lot of hosts are moving to SSD; is it mainly because of this kind of occasional lag when hosting many accounts and sites?
Posted by bhsp, 03-21-2016, 04:39 PM
Why don't you look into distributed storage solutions? Your service will be faster and more reliable.
Posted by media-hosts_com, 03-21-2016, 05:38 PM
If you want to stay with your current architecture and not go with distributed/HA-style solutions, you might as well use the HW RAID cards you already have. No sense ditching the investment; HW RAID does have advantages (simplicity, easier rebuilding, physical drive identification, etc.).
If you're running Proxmox and have the ability to use a distributed storage platform like Ceph or GlusterFS, you get better availability and the storage pool grows as you need it. Then you don't need the HW RAID cards and can save that investment for when you scale out your infrastructure.
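As a rough idea of the kind of check you end up running once a pool like that is live, here is a small sketch that polls cluster health and capacity (it assumes the ceph CLI and a client keyring are present on the node; nothing in it is Proxmox-specific):

```python
# Sketch: a cron-style health/capacity check for a Ceph cluster backing storage.
# Assumes the "ceph" CLI and a client keyring are installed on this node.
import subprocess

def ceph(*args):
    return subprocess.check_output(("ceph",) + args, text=True).strip()

print(ceph("health"))       # e.g. HEALTH_OK / HEALTH_WARN ...
print(ceph("df"))           # overall and per-pool capacity as the cluster grows
print(ceph("osd", "tree"))  # which OSDs and hosts the data is spread across
```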