I diagrammed out my home lab/home server setup, mostly to keep a complete overview of how everything connects. I didn't want to get bogged down in aesthetics around colour scheme or layout – as you can no doubt tell. After a while, diagramming it started to feel like a meme where I was trying to convey some crazy conspiracy theory on a wall of pinned paperwork and connecting threads. I think I am done documenting everything. But now I am wondering how obsessive I should be about detailing every little thing, like VLANs and IP assignments. I don't really care if it looks like a dog's dinner, I really just care about "okay, where does this wire go?" Is that the right approach?
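To be concrete about the level of detail I mean, something like the little sketch below is the whole ambition: a flat list of patches you can query. (Device and port names here are made up for illustration, not my actual gear.)

```python
# Minimal "where does this wire go?" record. Names are placeholders.
PATCHES = {
    ("udm-pro-se", "port 3"): ("nas", "lan 1"),
    ("udm-pro-se", "port 4"): ("proxmox-host", "eno1"),
    ("poe-switch", "port 7"): ("office-ap", "uplink"),
}

def where_does_it_go(device: str, port: str) -> str:
    """Answer the only question I actually care about."""
    other_end = PATCHES.get((device, port))
    if other_end is None:
        return f"{device}/{port} is unpatched"
    return f"{device}/{port} -> {other_end[0]}/{other_end[1]}"

print(where_does_it_go("udm-pro-se", "port 3"))  # udm-pro-se/port 3 -> nas/lan 1
```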
That’s mostly semantics, for me at least.
I have only one NAS, and one Proxmox host that is up 24/7, so they are in production.
I regularly tinker with those two as well, it’s all part of my lab.
This is how it works for me. I am using the homelab to learn new things. Part of that learning process is getting things into production and maintaining them. Because managing a production environment is one of the things I want to learn.
A homelab is whatever you use to tinker and try things out. A homeserver is whatever you use for stable workloads.
Both can coexist at the same time.
Next level is a home datacenter, and that's where you have a 24U rack or something that shouldn't fit in an apartment. You have a home datacenter!
I'm really curious about your setup, it looks well thought out. If you can, please post more about the hardware, software, and network config.
Ummmm my 375 TB array and 256 GB of GPU is a home lab thank you very much. I’ve only got 18U of 24 filled!
Side note: how should we brag about gpu power? What is the proper metric/terminology?
I think kWh is the correct unit :)
Those are rookie numbers, I’m measuring GPUs by the amount of nuclear reactors required to power my setup.
So far it's at 12 and I've made Jensen's Christmas card list.
The growth has been purely organic. I cannot say any of it is really planned ahead of time. I use 16U vertical rails for each rack, and then build a cabinet around them that works for the space it is in, e.g. 32U in the cat bathroom rack, which is 16U side-by-side with another 16U. The arcade cabinet rack is 16U technically, but I only have 6U of rails in there, as the other space is pull out drawers to make it easier to work on the workstations without having to deal with cabling issues. 16U at the RV.
For permanent infra, I tend to buy new, because I want that extended warranty and am not interested in buying somebody else's problem. For projects, it is a mix of eBay finds and road-side or e-waste center salvage. I don't watch TV, but I probably own more 55" 4K TVs than any one person I know, because I salvage them (people in big cities throw out all sorts of stuff with minor electrical faults) and then turn them into personal projects, e.g. a touchscreen cat toy, a waterfall ring toss game in the door of an art gallery, a virtual window.
Some days it feels like everything is held together with string and chewing gum.
I was wondering about the sheer number of monitors in your diagram… that helps explain it. Tip of the hat to you and your setup!
It stops being a homelab when the focus goes from labbing to production, when it becomes a homeprod environment instead.
My take too.
A lab is a testing space, a playground, something that can be brought up and down and broken and fixed at will. It will be destroyed and rebuilt frequently.
As soon as it stops being possible to do that without someone (even if just yourself) getting annoyed that a service or functionality isn’t working, then you’ve graduated to homeproduction/homeserver/homedatacentre (depending on its size!).
It’s not truly prod unless you’re messing with it, though.
90% of the posts here are homeprod.
Nothing more permanent than a temporary fix as well?
Homelab and homeservices should be two different things, separated as much as possible, though they can share some pieces, like a network. Not sure why you need to trace what connects where by hand; use the UniFi network diagram and some IPAM solution to track VLANs and IP addresses.
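If a full IPAM product feels like overkill, even a small script keeps the bookkeeping honest. Rough sketch below; the VLAN IDs, subnets, and hostnames are invented examples, not a recommendation.

```python
# Rough DIY VLAN/IP tracking using only the standard library.
# VLAN IDs, subnets, and hostnames below are invented examples.
from ipaddress import ip_address, ip_network

VLANS = {
    10: {"name": "homeprod", "subnet": ip_network("10.0.10.0/24")},
    20: {"name": "homelab",  "subnet": ip_network("10.0.20.0/24")},
    30: {"name": "iot",      "subnet": ip_network("10.0.30.0/24")},
}

ASSIGNED = {
    "10.0.10.5": "nas",
    "10.0.10.6": "proxmox",
    "10.0.20.50": "test-vm",
}

def vlan_for(ip: str):
    """Return (vlan_id, name) for an address, or None if no subnet matches."""
    addr = ip_address(ip)
    for vid, info in VLANS.items():
        if addr in info["subnet"]:
            return vid, info["name"]
    return None

def next_free(vlan_id: int) -> str:
    """First host address in the VLAN's subnet that isn't assigned yet."""
    for host in VLANS[vlan_id]["subnet"].hosts():
        if str(host) not in ASSIGNED:
            return str(host)
    raise RuntimeError("subnet is full")

print(vlan_for("10.0.10.6"))  # (10, 'homeprod')
print(next_free(20))          # 10.0.20.1
```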
A bit off topic but I want to see some IRL pictures of all that stuff lol
Came here to comment this. OP can we get a house tour please?
When you have weekly change control meetings
Are you running 9" displays in place of physical photos in frames? Curious how this is set up. Is there a write-up somewhere?
edit: same for “the wall” with the 6x 55" screens.
If you can turn it off and still do things, it’s a homelab. If you run services on it that are vital to your home, then it’s a home server.
Stops being a homelab when you have SLAs.
You, diagram? I just keep throwing crap into the mix and trying to remember which VLAN and IP scheme it's supposed to use and which device has access. Order is for work, Chaos is for personal enjoyment.
The meaning of “homelab” has changed over the years. Originally it was literally just having the hardware you’d find in the lab at home. e.g. you were taking classes for a CCNA and instead of going to the school’s lab for hands-on with the hardware you’d just replicate the setup at ‘home’. Nothing in the setup would be relied on beyond the specific thing you’re testing in the moment. If you’re going to stick to the original intent of the name, anything beyond “lab” use wouldn’t be “homelab”.
Now it skews more to meaning anything you’re using to learn the technology even if you’re using it as the equivalent of production and rely on it being up as a part of your daily life.
Holy shit… that's quite a "home" enterprise.
I don’t see a single other person mentioning it, so I’ll just say it: 52TB of flash storage alone is enough to make me jealous. 52TB of flash storage in an RV is just a few more layers on top.
Sad that the picture-wall project repository isn't open on GitHub - I was hoping to see it in action. Seems very neat.
When I started doing informal change control reviews with family and scheduling disruptive work outside of peak windows to avoid “user impact” - also having critical hot spares available, haha.
Questions: Are you in deep shit if the cat bathroom rack goes down?
Jukebox project sounds cool. Any extra info on that?
What’s the deal with the Oh-Fuck server in the arcade cabinet?
Also, 84 cores in the arcade cabinet? Just… damn.
I am working on the build log for the jukebox project. It'll be on GitHub eventually.
I have a $700 Tyan motherboard in my workstation. When I was moving the motherboard from one case to another, I scraped the underside of the motherboard against the metal case and broke off a number of small SMT caps and resistors. In the middle of the pandemic. In the middle of a project. So I had to jump on Amazon and have a new motherboard shipped to me next day whilst I RMA’d the damaged one. What do you say when you break your workstation motherboard in a moment of casual clumsiness? “Well… fuck!”
The jukebox is a "retro" jukebox. Wood grain, lots of tactile buttons. Two 14" 4160x1100 pixel touch screens with a vacuum fluorescent display graphic effect that shows the tracks. Click an arcade button, play that track. So it looks like those old-style jukebox devices you'd find in a diner. There are two 1920x1080 flexible touchscreens (though I have them encased so they are just permanently curved touchscreen displays) that let you navigate the full library, show album artwork, search box, etc. It's all driven by a single Raspberry Pi with a 4TB USB SSD for storage, and everything syncs to the music directory on my Synology NAS.
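The control logic is nothing exotic; the shape of it is roughly the sketch below. GPIO pins, file paths, and the mpg123 player are placeholders here, the real details will be in the build log.

```python
# Shape of the button-to-track wiring only; pins, paths, and player choice are
# placeholders, not the actual build.
import subprocess
from signal import pause
from gpiozero import Button   # ships with Raspberry Pi OS

MUSIC_DIR = "/mnt/usb-ssd/music"            # the 4TB USB SSD, synced from the NAS
TRACKS = {                                  # one arcade button (GPIO pin) per track
    17: f"{MUSIC_DIR}/a-side/track01.mp3",
    27: f"{MUSIC_DIR}/a-side/track02.mp3",
    22: f"{MUSIC_DIR}/b-side/track01.mp3",
}

player = None

def play(path):
    """Stop whatever is playing and start the selected track."""
    global player
    if player is not None and player.poll() is None:
        player.terminate()
    player = subprocess.Popen(["mpg123", "-q", path])

buttons = []
for pin, track in TRACKS.items():
    button = Button(pin, bounce_time=0.05)          # debounce the arcade button
    button.when_pressed = lambda t=track: play(t)   # capture this pin's track
    buttons.append(button)                          # keep references alive

pause()                                             # wait for presses forever
```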
Am I in deep shit? Only if I don't clean the litter box in the "Cat Bathroom." So the only thing that can really go wrong is the power going out. Everything else is sort of redundant, and you can route around it by moving a few cables. I guess the UDM Pro SE taking a shit would cause me some issues. Or the cable modem. Everything else, though used daily and to its fullest extent, simply means those services, e.g. music server, become temporarily unavailable. No real disasters in over 20 years. The backup Synology NAS is effectively a failover for essential services, e.g. AdGuard, but even if both Synology devices are down, there's backup DNS resolving on the UDM and also with Quad9.
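Conceptually the DNS side is just "try the resolvers in order until one answers." A sketch of that chain, purely illustrative: the resolver addresses are placeholders, and it uses dnspython rather than anything actually running on the UDM.

```python
# Illustrative sketch of the DNS fallback chain; resolver IPs are placeholders.
# Requires dnspython (pip install dnspython).
import dns.resolver

RESOLVERS = [
    ("primary AdGuard (Synology)", "192.168.1.10"),
    ("backup AdGuard (Synology)",  "192.168.1.11"),
    ("UDM Pro SE",                 "192.168.1.1"),
    ("Quad9",                      "9.9.9.9"),
]

def first_working(name="example.com"):
    """Return the first resolver in the chain that still answers an A query."""
    for label, server in RESOLVERS:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        resolver.lifetime = 2.0        # don't hang on a dead box
        try:
            resolver.resolve(name, "A")
            return label, server
        except Exception:
            continue                   # dead or unreachable, try the next one
    return None

print(first_working())
```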
Is it 84 cores? 4 in the NUC, 28x2 in Storm, 28 in oh-fuck. Never really thought of it that way. I'd like some 8490H Xeons but I cannot justify it right now.