Ceph
de-facto disk clustering/redundancy software we're using
deployed from within proxmox: https://smallpox.int.devhack.net:8006/#v1:0:18:4:::::::2
for instructions on use within proxmox / creating VMs, see: proxmox#storage
Ceph is set up to use all of the 1TB SSDs in the [proxmox] hosts (except one on each host because ellie is a fool and a jester) and right now has 2 pools available for your consumption.
can also be accessed via the S3 API: s3
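A minimal sketch of talking to that S3-compatible endpoint (Ceph's RADOS Gateway) with the standard aws CLI. The endpoint URL, keys, and bucket name below are placeholders, not the real ones; get the actual values from the s3 page:

```
# placeholder credentials and endpoint -- get the real ones from the s3 page
export AWS_ACCESS_KEY_ID="<access-key>"
export AWS_SECRET_ACCESS_KEY="<secret-key>"

# list buckets on the cluster's S3 endpoint
aws --endpoint-url https://s3.int.devhack.net s3 ls

# upload / download an object
aws --endpoint-url https://s3.int.devhack.net s3 cp backup.tar.gz s3://some-bucket/backup.tar.gz
aws --endpoint-url https://s3.int.devhack.net s3 cp s3://some-bucket/backup.tar.gz ./backup.tar.gz
```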
ceph has other mechanisms of access, but I don't really understand them and haven't figured them out yet.
https://docs.ceph.com/en/octopus/man/8/mount.ceph/
rbd vs cephfs vs rados/s3-like thing
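Rough practical difference: RBD hands one client a block device, CephFS gives many clients a shared POSIX filesystem, and RADOS / the S3 gateway give you object storage. A hedged sketch of each — pool names, image names, monitor addresses, and keyring paths below are made up:

```
# RBD: create an image in a pool, map it as a local block device, put a filesystem on it
rbd create somepool/some-image --size 10G
rbd map somepool/some-image          # shows up as e.g. /dev/rbd0
mkfs.ext4 /dev/rbd0 && mount /dev/rbd0 /mnt/rbd

# CephFS: mount the shared filesystem over the network (see the mount.ceph man page above)
mount -t ceph 10.0.0.1:6789,10.0.0.2:6789:/ /mnt/cephfs \
  -o name=someuser,secretfile=/etc/ceph/someuser.secret

# raw RADOS: store/fetch objects directly in a pool (the S3 gateway sits on top of this)
rados -p somepool put my-object ./some-file
rados -p somepool get my-object ./some-file-copy
```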
core-infra k8s cluster, anarchy k8s, and maybe even the members cluster all support ceph-backed PersistentVolumes?
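Unverified, but if those clusters do expose a Ceph-backed StorageClass, claiming storage is just a normal PVC. The StorageClass name `ceph-rbd` below is a guess; check what actually exists first:

```
# see which StorageClasses the cluster actually offers -- "ceph-rbd" below is a guess
kubectl get storageclass

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 5Gi
EOF

# the claim binds once the Ceph CSI provisioner has created backing storage for it
kubectl get pvc example-data
```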
https://docs.ceph.com/en/reef/architecture/
conventions for use
what kind of data can be stored on it? can personal member data be stored on it? should personal member data be stored on it, or is it only for data meant for commons use? what are the reliability expectations of the cluster? what are the redundancy/backup guarantees?
SSDs vs HDDs?
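None of these questions are settled in writing yet, but the current behaviour can at least be read straight off the cluster from any node running Ceph (these are stock Ceph commands, nothing site-specific):

```
# overall cluster usage plus per-pool usage
ceph df

# per-pool replication: "size" = number of copies kept, "min_size" = copies needed to stay writable
ceph osd pool ls detail

# device classes (ssd/hdd) and how full each OSD is
ceph osd crush class ls
ceph osd df tree
```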
adding new disks
what are the conventions for adding new disks? I'm pretty sure ceph doesn't like small disks being added to it. should only new/reliable disks be added, or does anything go?
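No convention written down yet. One thing that is true regardless: CRUSH weights each OSD by its capacity by default, so a small disk simply gets proportionally less data; very uneven sizes mostly make balancing awkward rather than breaking anything. Since this cluster is managed through proxmox, the mechanical part of adding a disk looks roughly like this (the device path is an example; run on the host that physically has the disk):

```
# wipe the disk if it has an old partition table / filesystem (DESTROYS its contents)
ceph-volume lvm zap /dev/sdX --destroy

# create the OSD through proxmox's ceph tooling (the proxmox GUI has an equivalent under Ceph -> OSD)
pveceph osd create /dev/sdX

# watch it join the CRUSH map and the cluster rebalance onto it
ceph osd df tree
ceph -s
```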