Need help optimising a server at work
I've been lurking on OCN for years. Someone suggested I post here for help. I generally don't use forums but I may stick around!
Ok, so here's the situation and a bit of history:
I'm a PC nerd through and through. At work we were moving premises to a larger unit and needed some new server equipment for the office. I convinced my boss to give me a £4,000 budget for a new server to host some of our internal services and, eventually (once we got a fibre line installed), some externally facing ones. The fibre finally arrived just recently, and after nearly 12 months of use there are some issues with the server setup.
- 2 x Intel Xeon E5-2620 v4 (Broadwell-EP)
- 2 x 32GB Crucial ECC 2133MHz RAM
- 1 x Gigabyte MD60-SC0 Motherboard
- 5 x 4TB Western Digital Enterprise HDDs (RAID 6 - 12TB in total)
- 1 x Intel 750 480GB NVMe PCI-E SSD
- 1 x LSI MegaRAID 9271-8i
- 1 x EVGA PSU (Gold, 10 year warranty)
- 2 x Noctua NH-U9DX i4 coolers
- 1 x Generic crappy server case
- Main OS is Ubuntu 16.04 Server. This hosts all the VMs
- 2 x (VM) Ubuntu 16.04 Server configured for Bind9 DNS (one is master, one is slave)
- (VM) Ubuntu 16.04 Server configured for OpenVPN
- (VM) OpenMediaVault on Debian Jessie (NAS)
- (VM) Ubuntu 16.04 Server hosting Jira and Confluence web services
- (VM) Windows 10
- (VM) Windows 8.1
- (VM) Windows 7
Ok, so first off I think new hardware is totally out of the picture. My boss already thinks this is massively over-specced for our needs, but in my defence I didn't think it would take 12 months for the lawyers to sort out the fibre optic installation.
There are a few issues:
- It gets too hot. The server case is some generic crap with 5 HDD bays, and we were (and still are) very space constrained. The LSI card runs hot enough to burn skin on contact if you leave a finger on it. The CPUs are nice and chilly under the Noctua coolers, which are doing a great job, but the HDDs and RAID controller have been running too hot for a while.
- Everything is hosted on one main Ubuntu Server installation. If our IT guy messes something up, and boy does he do that often, the whole thing could go down and it'll be up to me to fix it.
- All the VMs store their virtual disks on the Intel 750. Yep. I know. If that drive fails, every VM is dead. There's no redundancy, and it terrifies me. I wanted to use the SSD as cache for the RAID but, through my own stupidity, didn't realise the LSI controller can only use SSDs as cache if they're plugged into it, and the Intel 750 is a PCIe NVMe card :/
- Network transfers are slow. Even with just one person on the whole network (and we have SFP fibre in some parts), speeds sit around 80MB/s rather than the roughly 130MB/s the links should manage. Now compound that with 30 users and potentially 6-7 transferring simultaneously.
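On the heat point, it might be worth putting numbers on it before buying a new case. A rough sketch using smartmontools to read each drive's own temperature sensor (this assumes `smartmontools` is installed and the five WD drives appear as /dev/sda through /dev/sde — adjust for your actual layout):

```shell
#!/bin/sh
# Hypothetical sketch: print the current SMART temperature of each HDD.
# Assumes smartmontools is installed (apt install smartmontools) and the
# drives are visible as /dev/sda../dev/sde - placeholder device names.
for dev in /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    # Field 10 of the Temperature_Celsius attribute line is the current value
    temp=$(smartctl -A "$dev" 2>/dev/null | awk '/Temperature_Celsius/ {print $10}')
    echo "$dev: ${temp:-n/a} C"
done
```

One caveat: drives sitting behind a MegaRAID controller often aren't visible as plain /dev/sdX and need smartctl's `-d megaraid,N` device type to address each physical disk individually.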
I have been reading about ZFS. My thinking was to use ZFS to create a single pool on top of the RAID volume (not using ZFS's own RAID - the controller would still handle that), then add the Intel SSD to the pool as a cache device.
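For what it's worth, that plan would look roughly like the sketch below. Device names are placeholders, and note the SSD would act as L2ARC, which is a read cache only — it won't buffer writes:

```shell
# Hypothetical sketch - device names are placeholders.
# /dev/sdb     = the 12TB logical volume exposed by the LSI 9271-8i
# /dev/nvme0n1 = the Intel 750

zpool create tank /dev/sdb          # single-vdev pool on the RAID 6 volume
zpool add tank cache /dev/nvme0n1   # add the NVMe SSD as L2ARC (read cache)
zfs create tank/vms                 # dataset to hold the VM disk images
zpool status tank                   # confirm the layout
```

Two things worth knowing: losing an L2ARC device doesn't endanger the pool's data, which at least removes the "SSD dies, all VMs dead" failure mode; but ZFS generally prefers raw disks, and layering it on a single opaque hardware RAID LUN means it loses its self-healing ability.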
I'd then use ESXi to manage the whole system and create the VMs. I looked into this last year but needed time to learn it, which I didn't have, so I went with the solution we have now. If I remember right, it can run from a USB stick. I would then create ALL the VMs I need on the ZFS pool, which would give me caching plus redundancy in case a drive fails.
I'm going to work some magic and ask my boss for another £200 or so for a new case. This time I'll get a proper PC-level case, as they're actually cheaper than server cases, have better cooling and many more features.
What do I need from OCN?
To be gentle. I know I've made some poor choices in this build, but I'm here now looking for the right way to do things. I dived into this thinking I know PC hardware, so it should be easy. It most definitely isn't.
What would you do?
Are my ideas good? Crap?
Did I do anything right?