I see no problem putting all the machines into a single cluster. By the way, what are you using for shared storage?
As mentioned, if a drive dies, you just need to take it out, replace it with a new one, and start the RAID rebuild. The vendor should have a guide on this with detailed steps.
Well, on Windows, for bit rot prevention there is ReFS, but the problem with it is that it can go RAW for no reason. Happened to me several times. As to RSTe (Intel vROC): poor performance, and not reliable either. Plus, I'm not sure how migration would go if you want to transfer to another system.
I think that should be possible, but I would prefer the second option mentioned: the Hyper-V role with a NAS OS VM and the drives passed through to it. Then assemble the RAID inside the VM.
That’s a very decent setup. What are you running on it?
Hmm, I guess the biggest IOPS and latency hit will come from the storage protocol. I mean, with 10GbE and iSCSI or NFS, you might not feel the benefits of NVMe, especially in terms of latency. And as far as I know, there is no NVMe-oF support yet.
Depends on the amount of data you are writing and the DWPD of the SSD. Also, take parity into account if you're doing RAID: https://support.liveoptics.com/hc/en-us/articles/360000498588-Average-Daily-Writes
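As a rough sanity check, the arithmetic is: required DWPD = (host writes per day × write amplification from parity) / drive capacity. A quick sketch; the capacity, workload, and amplification numbers below are made-up placeholders, not anything from this thread:

```shell
# Rough SSD endurance check (all numbers are hypothetical placeholders).
CAPACITY_TB=1.92        # drive capacity in TB
DAILY_WRITES_TB=0.5     # data the host writes per day, in TB
WRITE_AMP=2             # assumed amplification from RAID parity/overhead

# Effective DWPD the workload demands; compare against the vendor rating.
awk -v cap="$CAPACITY_TB" -v dw="$DAILY_WRITES_TB" -v wa="$WRITE_AMP" \
    'BEGIN { printf "Required DWPD: %.2f\n", (dw * wa) / cap }'
# Prints: Required DWPD: 0.52
```

If the result stays well under the drive's rated DWPD over the warranty period, you're fine.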
Very nice and clean setup. Looks great!
Well, R720 is quite old. I would look into R730/R630 options. Or ideally, use some hardware that you already have. An old laptop with Proxmox might very well be a start.
Looks like a really cool setup! Nicely done.
At that capacity, SSDs won't be significantly more expensive than HDDs. I would get an external SSD plus an HDD for backups. For backups, any drive should be fine. You could also make another backup to Backblaze B2 with rclone.
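For the B2 part, a minimal rclone sketch; the remote name `b2`, bucket `mybackups`, and local path are placeholders for whatever you set up:

```shell
# One-time: create a Backblaze B2 remote interactively (pick the "b2" backend).
rclone config

# Recurring backup: mirror the local folder to the bucket.
# Run with --dry-run first to preview changes, then for real.
rclone sync /mnt/data b2:mybackups/data --dry-run
rclone sync /mnt/data b2:mybackups/data --progress
```

Note that `rclone sync` deletes files on the destination that no longer exist locally; use `rclone copy` instead if you want purely additive backups.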
I would look into something in the HP G9 or Dell R730/R630 range. These will be more power-efficient, and you should find something within that price.
That actually looks cool. Are you looking to write an article or blog post and publish the results?
As mentioned, you could get a free license. Or, if you like ESXi, VMUG advantage: https://www.vmug.com/membership/vmug-advantage-membership/
That’s a nice and clean setup. Well done!
Option 1: VMware vSAN (plus a witness on some other machine): https://core.vmware.com/resource/vsan-2-node-cluster-guide or StarWind vSAN (which has a free option, if I'm not mistaken): https://www.starwindsoftware.com/vsan with just the two R630s, decommissioning the R620. Lower power consumption, and you get proper HA.
Option 2: Use the R620 as a TrueNAS system providing storage over NFS or iSCSI to the ESXi cluster. The R620, however, becomes a single point of failure and consumes more power.
Cool rack! The setup looks very neat and clean. Definitely a big step forward. Nicely done!
What’s gonna be the workload? I mean, there is caching of course, but you could put the most performance-demanding VMs on the NVMe drives in a ZFS mirror, some slower VMs on the 2x4TB drives in a mirror, and the rest (file/media server) on the 8x8TB drives in RAIDZ2.
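That three-tier layout could be sketched with zpool commands like the ones below; the pool and device names are placeholders (in practice, use stable `/dev/disk/by-id/` paths rather than `/dev/sdX`):

```shell
# Fast pool: mirrored NVMe for the performance-demanding VMs.
zpool create fast mirror /dev/nvme0n1 /dev/nvme1n1

# Mid pool: the 2x4TB drives mirrored for the slower VMs.
zpool create vms mirror /dev/sda /dev/sdb

# Bulk pool: 8x8TB in RAIDZ2 for file/media storage
# (double parity, so it survives any two drive failures).
zpool create tank raidz2 /dev/sdc /dev/sdd /dev/sde /dev/sdf \
                         /dev/sdg /dev/sdh /dev/sdi /dev/sdj
```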
Sorry if I missed it, but what’s gonna be the OS? I mean, on Linux, you can just use Linux software RAID (mdadm), which is old but gold and will have better performance than ZFS. Otherwise, there are tri-mode NVMe/SAS/SATA controllers.
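If you go the mdadm route, creating an array is a one-liner; the RAID level, device names, and filesystem below are just example choices for your setup:

```shell
# Create a RAID5 array from three disks (adjust level/devices to taste).
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Watch the initial sync progress.
cat /proc/mdstat

# Then put a filesystem on it and persist the array config across reboots.
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```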
That’s still a good usage of the hardware. Main thing is that it does the job.