Opened 15 months ago
Last modified 4 weeks ago
#414 assigned enhancement
Get a NAS in the rack
| Reported by: | | Owned by: | |
|---|---|---|---|
| Priority: | minor | Milestone: | |
| Component: | Hardware | Keywords: | Infra |
| Cc: | | | |
Description
Look through current hardware or acquire hardware to put together a NAS.
Change History (7)
comment:1 by , 15 months ago
comment:2 by , 15 months ago
| Type: | task → enhancement |
|---|---|
comment:3 by , 15 months ago
| Keywords: | Infra added |
|---|---|
comment:4 by , 7 months ago
Per discussion in #Retro
I've got a barebones 4U Supermicro(?) server chassis that can take standard ATX/E-ATX motherboards and fit 24 3.5" drives.
Looking at ServerPartDeals at the time of writing, we could purchase 24x 14TB drives for $3600.
Combined with the existing "spare parts" in the server room (Xeon v4-era motherboard/CPU/RAM, 40Gb QSFP NICs), this could make for quite a cost-effective NAS solution.
With 24x 14TB drives in a RAID-Z3 (triple-parity) pool we would have roughly 235TB(!) of usable capacity for a fairly reasonable cost.
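Back-of-envelope check on that figure (my assumption is that 235TB already bakes in the usual ~80% pool fill guideline on top of the triple parity; ZFS metadata and padding would shave off a bit more):

```python
# Rough RAID-Z3 capacity estimate for 24x 14TB drives. Assumes a single
# 24-wide raidz3 vdev and the ~80% recommended fill level; actual usable
# space also loses a little to metadata, padding, and slop space.
drives = 24
parity = 3
drive_tb = 14  # vendor TB (10^12 bytes)

raw_tb = drives * drive_tb                      # 336 TB raw
after_parity_tb = (drives - parity) * drive_tb  # 294 TB after triple parity
usable_tb = after_parity_tb * 0.8               # ~235 TB at the 80% fill guideline

print(f"raw {raw_tb} TB, after parity {after_parity_tb} TB, usable ~{usable_tb:.0f} TB")
```

That also works out to roughly $15 per usable TB at the $3600 drive price.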
This machine would also likely want to be situated in the upcoming "new server room" rather than the current server room, for security reasons.
comment:5 by , 7 months ago
I have the remains of an X99 system with an i7-5820K and some amount of RAM. This would be a good base for a simple NAS, even if it is energy hungry.
comment:6 by , 5 weeks ago
I'd like to revive this. What kind of power budget should we allocate to the NAS? Disks generally use 8/16 W idle/loaded, so maybe double that for total system power? It's relatively easy these days to just put bigger disks in the server to expand storage (though not always cost-efficient), so I think setting a power budget and going from there is the way to go.
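Rough math on the disk side of that budget (assuming all 24 bays populated and the 8/16 W figures above; the 2x fudge factor stands in for motherboard, CPU, RAM, NICs, and fans):

```python
# Back-of-envelope NAS power estimate. The 2x multiplier is a rough
# allowance for motherboard, CPU, RAM, NICs, and fans on top of the disks.
disks = 24
idle_w, load_w = 8, 16

disk_idle_w = disks * idle_w    # 192 W, all disks spun up but idle
disk_load_w = disks * load_w    # 384 W under heavy I/O
total_idle_w = disk_idle_w * 2  # ~384 W whole-system idle
total_load_w = disk_load_w * 2  # ~768 W whole-system load

print(f"disks {disk_idle_w}-{disk_load_w} W, system ~{total_idle_w}-{total_load_w} W")
```

So something like a 400-800 W budget, i.e. roughly 3-6.5 A at 120 V.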
comment:7 by , 4 weeks ago
We have two roughly 30 amp 120 V circuits going into the server room: one outlet on the roof and one on the wall. Per https://homeassistant.devhack.net/energy it looks like we're pulling 10 amps on one and 2.5 amps on the other.
(Ideally this would all be documented somewhere, but alas. I'm also looking at https://wiki.devhack.net/Gelb_Building#Electricity)
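For context, here's the headroom on those two circuits if we derate the breakers to 80% for continuous load (my numbers, based only on the Home Assistant readings above):

```python
# Headroom on the two 30 A / 120 V server-room circuits, derated to 80%
# for continuous load, minus the draw currently reported by Home Assistant.
breaker_a = 30
derate = 0.8
volts = 120
measured_a = [10, 2.5]  # readings from the energy dashboard

for i, draw in enumerate(measured_a, start=1):
    headroom_a = breaker_a * derate - draw
    print(f"circuit {i}: {headroom_a:.1f} A (~{headroom_a * volts:.0f} W) headroom")
```

Either way, a NAS in the few-hundred-watt range fits comfortably on the less loaded circuit.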
Of note: we currently do have a Ceph cluster distributed among three of our servers: https://wiki.devhack.net/Proxmox#Storage
There's no direct documentation about conventions surrounding it, how to expand it, or what it should be used for. There is a little bit of documentation about how to use it, specifically with regard to mounting/using disks within Proxmox VMs. Ceph can also expose S3 endpoints and such.
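For anyone curious about the S3 side, here's a minimal sketch of talking to a Ceph RADOS Gateway with boto3. We don't actually have an RGW endpoint documented anywhere yet, so the endpoint URL, credentials, and bucket name below are purely placeholders:

```python
# Minimal example of using a Ceph RADOS Gateway's S3-compatible API via boto3.
# The endpoint URL, credentials, and bucket name are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.example.devhack.net",  # placeholder RGW endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="scratch")
s3.upload_file("backup.tar.gz", "scratch", "backup.tar.gz")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```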
Maybe what's actually needed is a decent interface and decent documentation? If you don't want to use Ceph, adding disks to the servers and having a VM get direct passthrough to the hardware is probably your best bet.
I'm going to start a little bit of documentation at https://wiki.devhack.net/Ceph
On Thursday, October 17th, during QOHN I investigated the possibility of using the 1U Hyve server and the 2U storage server that are sitting unused in the server room.
The 1U Hyve server throws POST errors that point to a BIOS issue. We need to investigate the possibility of flashing a new BIOS ROM onto the recovery EEPROM.
The 2U storage server currently seems like the easiest solution to implement if we need to rush a build. It needs HBAs and drives at a minimum.
My only concern is that this server is much too powerful to use just as a NAS; it is a dual-CPU C216-chipset server with 128GB of RAM, which gives it many more uses than just hosting an array or JBOD.