Getting Started with Proxmox
Hello everyone,
I finally managed to get my hands on a Beelink EQ 14 to upgrade from the RPi running DietPi that I have been using for many years to host my services.
I have always been interested in using Proxmox and today is the day. Only problem is I am not sure where to start.
For example, do you guys spin up a VM for every service you intend to run? Do you set it up as ext4, btrfs, or zfs?
Do you attach an external HDD/SSD to expand your storage (beyond the two M.2 slots in the Beelink, in this example)?
I only started reading up on Proxmox today, so I am by no means knowledgeable on the topic.
I hope to hear how you guys set up yours and how you use it to host all your services (Nextcloud, Vaultwarden, cgit, Pi-hole, Unbound, etc…) and your "Dos and Don'ts".
Thank you 😊
anamethatisnt
in reply to modeh
pve.proxmox.com/wiki/ZFS_on_Li…
Learning the redundancy and backup systems before having too many services active allows you to screw up and redo.
ZFS on Linux - Proxmox VE
pve.proxmox.com

SidewaysHighways
in reply to anamethatisnt

anamethatisnt
in reply to SidewaysHighways

modeh
in reply to anamethatisnt

anamethatisnt
in reply to modeh
If you've only got 2x M.2 slots then I would probably prioritize disk space over RAID1 and make sure you have a backup up and running. There are M.2-to-SATA adapters, but your Beelink doesn't have a suitable PSU for that.
Zwuzelmaus
in reply to modeh
You have that new machine to play with. So do it.
Install it and play around. If you do nothing that should "last forever" in these first days, you can tear it down and do it again in different ways.
I have recently played around in the same way with the Proxmox unattended install feature, and it was a lot of fun. One text file and a bootable image on a stick.
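For anyone curious about that feature: the automated installer reads a small TOML "answer file" from the stick (or over HTTP). A minimal sketch, assuming a recent PVE release; the exact key names should be checked against the Automated Installation wiki page, and the hostname, password, and disk names here are placeholders:

[global]
keyboard = "en-us"
country = "us"
fqdn = "pve.home.lan"
mailto = "root@home.lan"
timezone = "UTC"
root_password = "change-me"

[network]
source = "from-dhcp"

[disk-setup]
filesystem = "zfs"
zfs.raid = "raid1"
disk_list = ["nvme0n1", "nvme1n1"]

The answer file then gets baked into the installer image with the proxmox-auto-install-assistant tool (roughly proxmox-auto-install-assistant prepare-iso proxmox-ve.iso --fetch-from iso --answer-file answer.toml; double-check the flags against the wiki) and the result is written to the USB stick.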
modeh
in reply to Zwuzelmaus

Nis
in reply to modeh • • •I've been doing it for a couple of years. I don't think I'll ever be done screwing around with it.
Embrace the flux :)
catrass
in reply to modeh
As with most things homelab related, there is no real "right" or "wrong" way, because it's about learning and playing around with cool new stuff! If you want to learn about different file systems, architectures, and software, do some reading, spin up a test VM (or LXC, my preference), and go nuts!
That being said, my architecture is built up of general purpose LXCs (one for my Arr stack, one for my game servers, one for my web stuff, etc). Each LXC runs the related services in docker, which all connect to a central Portainer instance for management.
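For context, the per-host glue in a setup like that is usually just Portainer's agent container running on each Docker host, which the central Portainer instance then connects to. Roughly like this (image tag and ports should be checked against Portainer's current docs):

# run the Portainer agent on each Docker host so the central instance can manage it
docker run -d --name portainer_agent --restart=always \
  -p 9001:9001 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  portainer/agent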
Some things are exceptions though, such as Open Media Vault and HomeAssistant, which seem to work better as standalone VMs.
The services I run are usually things that are useful for me and that I want to keep off public clouds. Vaultwarden for passwords and passkeys, DoneTick for my todo list, etc. If I have a gap in my digital toolkit, I always look for something that I can host myself to fill that gap. But there's also a lot of stuff I want to learn about, such as the Grafana stack for observability at the moment.
modeh
in reply to catrass
Thank you.
I guess I have more reading to do on Portainer and LXC. Using an RPi with DietPi, I didn't have the need to learn any of this. Now is as good a time as ever.
But generally speaking, how is a Linux container different from (or worse than) a VM?
anamethatisnt
in reply to modeh
If you are starved for hardware resources then running LXCs instead of VMs could give you more bang for the buck.
Lyra_Lycan
in reply to modeh
An LXC is isolated, system-wise, by default (unprivileged) and has very low resource requirements.
- Storage also expands when needed, i.e. you can say it can have 40GB but it'll only use as much as needed, and nothing bad will happen if your allocated storage is higher than your actual storage, until total usage approaches 100%. So there's some flexibility (there's a rough example of this after the list). With a VM, the allocated storage is fixed.
- Usually a Debian 12 container image takes up ~1.5GB.
- LXCs are perfectly good for most use cases. VMs, for me, only come in when necessary, when the desired program has more needs like root privileges, in which case a VM is much safer than giving an LXC access to the Proxmox system. Or when the program is a full OS, in the case of Home Assistant.
Separating each service ensures that if something breaks, there are zero collateral casualties.
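A rough example of the storage flexibility mentioned above, run on the Proxmox host (container ID 101 and ZFS-backed storage are just assumptions):

# grow an LXC's root disk later if the original quota turns out too small
pct resize 101 rootfs +20G
# on ZFS-backed storage, zfs list shows how much space the container actually consumes,
# which is usually far less than the quota you allocated
zfs list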
BOFH666
in reply to modeh

modeh
in reply to BOFH666
The only reason I am thinking cgit is that I want a simple interface to show repos and commit history; I'm not interested in doing pull requests, opening issues, etc…
I feel Forgejo would be “killing an ant with a sledgehammer” kinda situation for my needs.
Nonetheless, thank you for your suggestion.
abeorch
in reply to modeh
modeh
in reply to abeorch

incentive
in reply to modeh

jubilationtcornpone
in reply to modeh
I use one VM per service. WAN-facing services, of which I only have a couple, are on a separate DMZ subnet and are firewalled off from the LAN.
It's probably a little overkill for a self-hosted setup, but I have enough server resources, experience, and paranoia to support it.
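For a rough idea of what the Proxmox side of that can look like: a second Linux bridge with no physical port gives you an isolated segment to attach WAN-facing guests to, with the actual filtering done on the router or the PVE firewall. The names and addresses below are made up:

# /etc/network/interfaces (excerpt) -- vmbr1 acts as an internal DMZ bridge
auto vmbr1
iface vmbr1 inet static
    address 10.10.20.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0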
anamethatisnt
in reply to jubilationtcornpone
Playing with LXCs and Docker could allow one to run more services on a little Beelink.
jubilationtcornpone
in reply to anamethatisnt • • •Yeah, with something that size you're pretty much limited to containers.
Edit: Which is totally fine, OP. Self hosting is an opportunity to learn and your setup can be easily changed as your needs change over time.
lucas
in reply to jubilationtcornpone

jubilationtcornpone
in reply to lucas
In this situation it's not necessarily that it's the "right" or "wrong" device. The better question is, "does it meet your needs?" There are pros and cons to running each service in its own VM. One of the cons is the overhead consumed by the VM OS.
Sometimes that's a necessary sacrifice.
Some of the advantages of running a system like Proxmox are that it's easily scalable and you're not locked into specific hardware. If your current Beelink doesn't prove to be enough, you can just add another one to the cluster or add a different host and Proxmox doesn't care what it is.
TLDR: it's adequate until it's not. When it's not, it's an easy fix.
lucas
in reply to jubilationtcornpone
The fact that it's an easy fix to move a VM/LXC to a new host is absolutely it, though.
modeh
in reply to jubilationtcornpone
I have a couple of publicly accessible services (Vaultwarden, git, and SearXNG). Do you place them on a separate subnet via Proxmox or through the router?
My understanding of networking is just enough to properly set up OpenWrt with inbound and outbound VPN tunnels along with policy-based routing, and that's where my networking knowledge ends.
abeorch
in reply to modeh
anamethatisnt
in reply to modeh

modeh
in reply to anamethatisnt
I travel internationally, and some of the countries I've been to have been blocking my WireGuard tunnel back home, preventing me from accessing my vault. I tried setting it up with Shadowsocks and broke my entire setup, so I ended up resetting it.
Any suggestions that are not Tailscale?
anamethatisnt
in reply to modeh

abeorch
in reply to modeh
MangoPenguin
in reply to modeh
I have a single container for Docker that runs 95% of my services, and a few other containers and VMs for things that aren't Docker, or are Windows/OSX.
ext4 is the simple, easy option; I tend to pick it on systems with lower amounts of RAM since ZFS does need some RAM for itself.
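If RAM is the worry, ZFS's ARC can also be capped on a Proxmox host; the usual approach is a modprobe option (the 4 GiB value below is just an example):

# /etc/modprobe.d/zfs.conf -- limit the ARC to 4 GiB (value is in bytes)
options zfs zfs_arc_max=4294967296
# then rebuild the initramfs and reboot for it to take effect
update-initramfs -u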
I do have an external USB HDD for backups to be stored on.
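For reference, once the USB drive is mounted somewhere persistent it can be registered as a directory storage and used as a backup target; the mount point, storage name, and guest ID below are placeholders:

# register the mounted drive as a backup-capable directory storage
pvesm add dir usb-backup --path /mnt/usb-backup --content backup
# one-off backup of guest 101 to it; scheduled jobs live under Datacenter -> Backup in the GUI
vzdump 101 --storage usb-backup --mode snapshot --compress zstd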
Lyra_Lycan
in reply to modeh
For inspiration, here's my list of services:
Here is the overhead for everything. CPU is an i3 6100 and RAM is 2133MHz:
Quick note about my setup: some things threw a permissions hissy fit when in separate containers, so Media actually has Emby, Sonarr, Radarr, Prowlarr, and two instances of qBittorrent. A few of my containers do have supplementary programs.
modeh
in reply to Lyra_Lycan
Thank you, that's actually quite informative. Gives me a good idea of what could go where in terms of my setup.
So far I recreated my RPi DietPi setup in a VM, but for some reason the Pi-hole + Unbound combo is now fucking with my internet connectivity.
It is so weird: I assigned it a static lease for the old RPi IP address in OpenWrt and left all the rules in there intact, so you would think it would be a "drop-in replacement", but it isn't. Not sure if Proxmox has some weird firewall situation going on. Definitely need to fuck around more with it to better understand it.
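For what it's worth, the PVE firewall is disabled at the datacenter level on a fresh install, so it usually isn't the culprit, but it's quick to rule out from the host shell:

# shows whether the Proxmox firewall service is active on this node
pve-firewall status
# per-guest rules, if you've created any, live in /etc/pve/firewall/<vmid>.fw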
lemming741
in reply to modeh
To piggyback on the permissions hissy fit:
My Arr stack, OpenMediaVault, and Transmission stack have different usernames mapped to the same UID and it is a pain in the ass. I "fixed it" by making a NAS group that catches them all, but by "fixed it" I really mean "got it working".
So be aware of what UID will own a file, and maybe change it to a UID in the 1100+ range to make NFS easier in the future.
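In practice that can be as simple as creating a group and user with a fixed numeric ID on every box that touches the share and chowning the data to it; the 1100 IDs and the path below are only an example:

# same numeric IDs everywhere so NFS sees consistent ownership
groupadd -g 1100 nas
useradd -u 1100 -g 1100 -M -s /usr/sbin/nologin nas
chown -R 1100:1100 /srv/media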
hobbsc
in reply to modeh
i have very few services and tend to lean into virtual machines instead of containers out of habit. i have proxmox running on an old mini-pc that needs to be replaced at some point. 16GB of RAM in it, 4 cores on the CPU (it's an i3 at 2ghz), and a 100GB SSD.
VMs and services are as follows:
home assistant backs itself up to my craptastic nas and the rest of the stuff doesn't really have any backups. i wouldn't be upset if they died, except for my kanboard instance. i can rebuild that from scratch if needed.
i'll be investing in a new mini-pc and some more disks soon, though.
Possibly linux
in reply to modeh
Install Proxmox with ZFS.
Next, configure the no-subscription repo or buy a subscription.
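On PVE 8 (Debian 12 "bookworm") that boils down to disabling the enterprise list and adding the no-subscription one; adjust the codename if you are on a different release:

# disable the enterprise repo (it needs a subscription key)
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
# add the no-subscription repo and update
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-no-subscription.list
apt update && apt full-upgrade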
interdimensionalmeme
in reply to Possibly linux
Is there any way to remove ZFS and Ceph? They cause errors and taint the kernel.
itsfoss.com/linus-torvalds-zfs…
Don't Use ZFS on Linux: Linus Torvalds
Abhishek Prakash (It's FOSS)

3dcadmin
in reply to interdimensionalmeme

Possibly linux
in reply to interdimensionalmeme

interdimensionalmeme
in reply to Possibly linux

Possibly linux
in reply to interdimensionalmeme
LVM is not even close.
ZFS is way more fault tolerant and scalable due to the underlying design. It continually does data integrity checks and will catch bit flips.
ZFS also has ARC, which allows your RAM to act as a full-on cache, which improves performance.
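If you want to see what the ARC is actually doing on a host, the kernel exposes the counters (arc_summary ships with the ZFS tools on Proxmox):

# current ARC size and its configured maximum, in bytes
awk '/^size|^c_max/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats
# or the friendlier report
arc_summary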
interdimensionalmeme
in reply to Possibly linux
You can do data scrubbing with PAR2, or at the filesystem level with btrfs on top of LVM (or even in a traditional partition).
I think you can RAM-cache with bcachefs or a ramdisk, and unless you're in a VM, wouldn't your filesystem driver already do file caching in RAM?
tvcvt
in reply to interdimensionalmeme

interdimensionalmeme
in reply to tvcvt

Ron
in reply to modeh
My Proxmox setup is multiple nodes (computers) with local storage (2 drives with ZFS mirroring); they all use a TrueNAS server as an NFS host for data storage.
For some things I use containers (LXC), but for other things I use VMs.
sj_zero
in reply to modeh
I moved to Proxmox a while back and it was a big upgrade for my setup.
I do not use VMs for most of my services. Instead, I run LXC containers. They are lighter and perfect for individual services. To set one up, you need to download a template for an operating system. You can do this right from the Proxmox web interface. Go to the storage that supports LXC templates and click the Download Templates button in the top right corner. Pick something like Debian or Ubuntu. Once the template is downloaded, you can create a new container using it.
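The same steps can be done from the host shell if you prefer; the template filename changes over time, so list what's available first, and the container ID, storage names, and sizes below are just examples:

# refresh the template index, see what's on offer, fetch one to the 'local' storage
pveam update
pveam available --section system
pveam download local debian-12-standard_12.7-1_amd64.tar.zst
# create an unprivileged container from it
pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname testbox --unprivileged 1 --cores 2 --memory 1024 \
  --rootfs local-zfs:8 --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --password 'change-me'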
The difference between VMs and LXC containers is important. A VM emulates an entire computer, including its own virtual hardware and kernel. This gives you full isolation and lets you run completely different operating systems such as Windows or BSD, but it comes with a heavier resource load. An LXC container just isolates a Linux environment while running on the host system’s kernel. This makes containers much faster and more efficient, but they can only run Linux. Each container can also have its own IP address and act like a separate machine on your network.
I tend to keep all my services in LXC containers, and I run one VM which I use as a jump box I can hop into if need be. It's a pain getting X11 working in a container, so the VM makes more sense.
Before you start creating containers, you will probably need to create a storage pool. I named mine AIDS because I am an edgelord, but you can use a sensible name like pool0 or data.
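If you do it from the command line rather than the GUI, it's two steps: make the pool, then tell Proxmox about it. The device names and pool name below are placeholders, and on a two-slot machine the installer may already have claimed the disks:

# create a mirrored ZFS pool on two spare disks
zpool create -o ashift=12 pool0 mirror /dev/nvme0n1 /dev/nvme1n1
# register it with Proxmox so container and VM disks can live on it
pvesm add zfspool pool0 --pool pool0 --content rootdir,images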
Make sure you check the Start at boot option for any container or VM you want to come online automatically after a reboot or power outage. If you forget this step, your services will stay offline until you manually start them.
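The CLI equivalent, in case you script your setup (the IDs are examples):

pct set 101 --onboot 1   # container
qm set 200 --onboot 1    # VM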
Expanding your storage with an external SSD works well for smaller setups. Longer term, you may want to use a NAS with fast network access. That lets you store your drive images centrally and, if you ever run multiple Proxmox servers, configure hot standby so one server can take over if another fails.
I do not use hot standby myself. My approach is to keep files stored locally, then back them up to my NAS. The NAS in turn performs routine backups to an external drive. This gives me three copies of all my important files, which is a solid backup strategy.