

Getting Started with Proxmox


Hello everyone,

I finally managed to get my hands on a Beelink EQ 14 to upgrade from the RPi running DietPi that I have been using for many years to host my services.

I have always been interested in using Proxmox, and today is the day. The only problem is I am not sure where to start.
For example, do you guys spin up a VM for every service you intend to run? Do you set it up as ext4, btrfs, or ZFS?
Do you attach external HDDs/SSDs to expand your storage (beyond the two M.2 slots in the Beelink in this example)?

I only started reading up on Proxmox today, so I am by no means knowledgeable on the topic.

I hope to hear how you guys set up yours and how you use it in terms of hosting all your services (Nextcloud, Vaultwarden, cgit, Pi-hole, Unbound, etc…) and your “Dos and Don'ts”.

Thank you 😊


in reply to modeh

I would start with one VM running Portainer, and once that is up and running I would recommend learning how to back up and restore the VM. If you have enough disks, I would look into ZFS RAID 1 for redundancy.
pve.proxmox.com/wiki/ZFS_on_Li…
Learning the redundancy and backup systems before having too many services active allows you to screw up and redo.
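
For instance, backing up and restoring from the host shell looks roughly like this (a sketch; VMIDs, the storage name, and the dump filename are placeholders):

```bash
# Back up VM 100 as a compressed snapshot to the 'local' storage:
vzdump 100 --storage local --mode snapshot --compress zstd

# Restore it later to a new VMID (101) from the dump file:
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma.zst 101
```
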
in reply to SidewaysHighways

I remember trying both back when my server was new, but something was missing in Dockge; I can't remember what right now.
in reply to anamethatisnt

The Beelink comes with two M.2 slots, so I have two internal drives for now. Is it acceptable to attach external HDDs and set them up in a RAID configuration with the internal ones? I do plan on the Beelink being a NAS too (limited budget; I can't afford a separate dedicated NAS at the moment).
in reply to modeh

I wouldn't use RAID on USB.
If you've only got two M.2 slots, then I would probably prioritize disk space over RAID1 and make sure you've got a backup up and running. There are M.2-to-SATA adapters, but your Beelink doesn't have a suitable PSU for that.
in reply to modeh

You have that new machine to play with. So do it.

Install it and play around. If you do nothing that should "last forever" in these first days, you can tear it down and do it again in different ways.

I recently played the same way with the Proxmox unattended install feature, and it was a lot of fun. One text file and a bootable image on a stick.
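
Roughly, assuming Proxmox VE 8.2+ and an answer.toml written per the Proxmox wiki (the exact output ISO name may differ):

```bash
# Bake the answer file into the installer ISO:
proxmox-auto-install-assistant prepare-iso proxmox-ve_8.2-1.iso \
    --fetch-from iso \
    --answer-file answer.toml

# Write the resulting ISO to a USB stick (double-check /dev/sdX!):
dd if=proxmox-ve_8.2-1-auto-from-iso.iso of=/dev/sdX bs=4M status=progress
```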

in reply to Zwuzelmaus

Oh yeah, absolutely will do. I was simply hoping to get an idea of how self-hosters who've been using it for a while set theirs up, to get a rough picture of where I want to be once I'm done screwing around with it.
in reply to modeh

I've been doing it for a couple of years. I don't think I'll ever be done screwing around with it.

Embrace the flux :)

in reply to modeh

As with most things homelab related, there is no real "right" or "wrong" way, because it's about learning and playing around with cool new stuff! If you want to learn about different file systems, architectures, and software, do some reading, spin up a test VM (or LXC, my preference), and go nuts!

That being said, my architecture is built up of general purpose LXCs (one for my Arr stack, one for my game servers, one for my web stuff, etc). Each LXC runs the related services in docker, which all connect to a central Portainer instance for management.

Some things are exceptions though, such as OpenMediaVault and Home Assistant, which seem to work better as standalone VMs.

The services I run are usually things that are useful for me and that I want to keep off public clouds. Vaultwarden for passwords and passkeys, DoneTick for my todo list, etc. If I have a gap in my digital toolkit, I always look for something I can host myself to fill that gap. But there's also a lot of stuff I want to learn about, such as the Grafana stack for observability at the moment.
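
For the central-Portainer pattern, each Docker LXC runs the Portainer Agent; this is the standard command from Portainer's docs (port and container name can be adjusted):

```bash
docker run -d \
  -p 9001:9001 \
  --name portainer_agent \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  portainer/agent
```

Then register it in the main Portainer UI under Environments -> Add environment -> Agent, pointing at <lxc-ip>:9001.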

in reply to catrass

Thank you.

I guess I have more reading to do on Portainer and LXC. Using an RPi with DietPi, I didn't have the need to learn any of this. Now is as good a time as ever.

But generally speaking, how is a Linux container different from (or worse than) a VM?

in reply to modeh

A VM is properly isolated and has its own OS and kernel. This improves security at the cost of overhead.
If you are starved for hardware resources, then running LXCs instead of VMs could give you more bang for the buck.
in reply to modeh

An LXC is isolated, system-wise, by default (unprivileged) and has very low resource requirements.
- Storage also expands only as needed, i.e. you can allocate 40GB but it will only use as much as it actually needs, and nothing bad happens if your allocated storage exceeds your actual storage, until total usage approaches 100%. So there's some flexibility. With a VM, the storage allocation is fixed.
- A Debian 12 container image usually takes up ~1.5GB.
- LXCs are perfectly good for most use cases. VMs, for me, only come in when necessary: when the desired program has greater needs, like root privileges, in which case a VM is much safer than giving an LXC access to the Proxmox system, or when the program is a full OS, as in the case of Home Assistant.

Separating each service ensures that if something breaks, there are zero collateral casualties.
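
Creating such an unprivileged container from the host shell looks roughly like this (a sketch; the VMID, hostname, template filename, and storage names are placeholders):

```bash
# Unprivileged Debian 12 container with a 40GB thin-provisioned rootfs:
pct create 201 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname myservice \
  --unprivileged 1 \
  --rootfs local-lvm:40 \
  --memory 1024 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 201
```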

in reply to modeh

Replace cgit with Forgejo. I really like Jason's software, but Forgejo makes a huge difference.
in reply to BOFH666

The only reason I am considering cgit is that I want a simple interface to show repos and commit history; I'm not interested in doing pull requests, opening issues, etc…

I feel Forgejo would be “killing an ant with a sledgehammer” kinda situation for my needs.

Nonetheless, thank you for your suggestion.

in reply to modeh

Certainly no expert, but would starting with setting up some cloud-init image templates be somewhere in there?


in reply to abeorch

Not even sure what that is, so most likely a no for me.
in reply to modeh

Templates for setting up your new VMs: after setting up your first template, it's a few clicks to deploy new VMs.
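
A rough sketch of building one such template from a Debian cloud image (VMID 9000, the image filename, and the storage name are placeholders):

```bash
# Create an empty VM and attach the downloaded cloud image as its disk:
qm create 9000 --name debian12-cloud --memory 2048 --net0 virtio,bridge=vmbr0
qm importdisk 9000 debian-12-genericcloud-amd64.qcow2 local-lvm
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
qm set 9000 --ide2 local-lvm:cloudinit        # cloud-init config drive
qm set 9000 --boot order=scsi0 --serial0 socket --vga serial0
qm template 9000

# New VMs are then one command (or a few clicks) away:
qm clone 9000 120 --name my-new-vm --full
```
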
in reply to modeh

I use one VM per service. WAN facing services, of which I only have a couple, are on a separate DMZ subnet and are firewalled off from the LAN.

It's probably a little overkill for a self-hosted setup, but I have enough server resources, experience, and paranoia to support it.

in reply to jubilationtcornpone

I prefer running true VMs too, but it is resource intensive.
Playing with LXCs and Docker could allow one to run more services on a little Beelink.
in reply to anamethatisnt

Yeah, with something that size you're pretty much limited to containers.

Edit: Which is totally fine, OP. Self hosting is an opportunity to learn and your setup can be easily changed as your needs change over time.

in reply to jubilationtcornpone

Am I looking at the wrong device? The Beelink EQ15 looks like it has an N150 and 16GB of RAM? That's plenty for quite a few VMs. I run an N100 mini PC with only 8GB of RAM and about half a dozen VMs and a similar number of LXC containers. As long as you're careful to provision only what each VM actually needs, it can be plenty.
in reply to lucas

In this situation it's not necessarily that it's the "right" or "wrong" device. The better question is, "does it meet your needs?" There are pros and cons to running each service in its own VM. One of the cons is the overhead consumed by the VM OS.
Sometimes that's a necessary sacrifice.

Some of the advantages of running a system like Proxmox are that it's easily scalable and you're not locked into specific hardware. If your current Beelink doesn't prove to be enough, you can just add another one to the cluster or add a different host and Proxmox doesn't care what it is.

TLDR: it's adequate until it's not. When it's not, it's an easy fix.

in reply to jubilationtcornpone

Absolutely. I actually have an upgrade already planned, but it's not because I can't run VMs; it's more that I want to run hungrier services than will fit in those resources, whatever virtualisation layers are used.
The fact that it's an easy fix to move a VM/LXC to a new host is absolutely it, though.
in reply to jubilationtcornpone

I have a couple of publicly accessible services (vaultwarden, git, and searxng). Do you place them on a separate subnet via proxmox or through the router?

My networking understanding is solid enough to properly set up OpenWrt with inbound and outbound VPN tunnels along with policy-based routing, and that's where my networking knowledge ends.

in reply to modeh

We should talk - I am using Proxmox and #openwrt. I am setting up a DMZ for public services with external ports exposed (but failing).


in reply to modeh

Unless you wanna expose services to others, my recommendation is always to hide your services behind a VPN connection.
in reply to anamethatisnt

I travel internationally, and some of the countries I've been to have been blocking my WireGuard tunnel back home, preventing me from accessing my vault. I tried setting it up with Shadowsocks and broke my entire setup, so I ended up resetting it.

Any suggestions that are not Tailscale?

in reply to modeh

I find setting up an OpenVPN server with self-signed certificates plus username and password login works well. You can even run it on tcp/443 instead of tcp/1194 if you want to make it less likely to be blocked.
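
A minimal sketch of that server config, assuming certs already generated and the PAM auth plugin installed (the plugin path varies by distro, and the subnet is a placeholder):

```bash
# Write a minimal OpenVPN server config listening on tcp/443:
cat > /etc/openvpn/server/server.conf <<'EOF'
port 443
proto tcp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh.pem
topology subnet
server 10.8.0.0 255.255.255.0
# username/password check against the system's PAM 'login' service:
plugin /usr/lib/openvpn/openvpn-plugin-auth-pam.so login
EOF
systemctl enable --now openvpn-server@server
```
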
in reply to modeh

I'd love to meet others who are just starting out with Proxmox and do some casual video calls/chats in European timezones to learn together and try stuff out.


in reply to modeh

I have a single container for Docker that runs 95% of my services, and a few other containers and VMs for things that aren't Docker, or are Windows/macOS.

ext4 is the simple, easy option; I tend to pick it on systems with lower amounts of RAM, since ZFS does need some RAM for itself.

I do have an external USB HDD for backups to be stored on.
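
If you do go ZFS on a low-RAM box, you can cap the ARC instead; a sketch, assuming a ~2GiB cap is what you want:

```bash
# Limit the ZFS ARC to 2GiB so more RAM stays free for guests:
echo "options zfs zfs_arc_max=2147483648" > /etc/modprobe.d/zfs.conf
update-initramfs -u   # then reboot for the cap to take effect
```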

in reply to modeh

For inspiration, here's my list of services:

Name            ID No.     Primary Use
heart           (Node)     ProxMox
guard           (CT) 202   AdGuard Home
management      (CT) 203   NginX Proxy Manager
smarthome       (VM) 804   Home Assistant
HEIMDALLR       (CT) 205   Samba/Nextcloud
authentication  (VM) 806   BitWarden
mail            (VM) 807   Mailcow
notes           (CT) 208   CouchDB
messaging       (CT) 209   Prosody
media           (CT) 211   Emby
music           (CT) 212   Navidrome
books           (CT) 213   AudioBookShelf
security        (CT) 214   AgentDVR
realms          (CT) 216   Minecraft Server
blog            (CT) 217   Ghost
ourtube         (CT) 218   ytdl-sub YouTube Archive
cloud           (CT) 219   NextCloud
remote          (CT) 221   Rustdesk Server

Here is the overhead for everything. CPU is an i3 6100 and RAM is 2133MHz:

[Screenshot: resource usage overview for the node and guests]

Quick note about my setup: some things threw a permissions hissy fit when in separate containers, so Media actually has Emby, Sonarr, Radarr, Prowlarr, and two instances of qBittorrent. A few of my containers do have supplementary programs.

in reply to Lyra_Lycan

Thank you, that’s actually quite informative. Gives me a good idea of what could go where in terms of my setup.

So far I recreated my RPi DietPi setup in a VM, but for some reason the Pi-hole + Unbound combo is now fucking with my internet connectivity.
It is so weird. I assigned it a static lease for the old RPi's IP address in OpenWrt and left all the rules intact; you would think it would be a drop-in replacement, but it isn't. Not sure if Proxmox has some weird firewall situation going on. Definitely need to fuck around with it more to better understand it.

in reply to modeh

To piggyback on the permissions hissy fit:

My Arr stack, OpenMediaVault, and Transmission stack have different usernames mapped to the same uid, and it is a pain in the ass. I "fixed it" by making a NAS group that catches them all, but by "fixed it" I really mean "got it working".

So be aware of which uid will own a file, and maybe change it to a uid in the 1100+ range to make NFS easier in the future.
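
A sketch of that catch-all group approach; the gid, usernames, and path here are just examples:

```bash
# Shared group with a fixed gid, applied to the common data directory:
groupadd -g 1100 nas
usermod -aG nas sonarr
usermod -aG nas transmission
chgrp -R nas /srv/media
chmod -R g+rwX /srv/media   # group read/write, execute on directories
```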

in reply to modeh

i have very few services and tend to lean into virtual machines instead of containers out of habit. i have proxmox running on an old mini-pc that needs to be replaced at some point. 16GB of RAM in it, 4 cores on the CPU (it's an i3 at 2ghz), and a 100GB SSD.

VMs and services are as follows:

  • ubuntu vm
    • runs my omada controller in docker
    • used to run all of my containers in docker but i migrated them to podman


  • fedora vm
    • runs several containers via podman
    • alexandrite, where i'm composing this now!
    • uptime kuma
    • redlib for browsing reddit
    • kanboard for organizing my contracting work


  • dietpi in a vm to run pi-hole (migrated here when my pi zero-w cooked itself)
    • this also handles internal dns for each server so i don't have to type out IP addresses


  • home assistant HAOS vm

home assistant backs itself up to my craptastic nas and the rest of the stuff doesn't really have any backups. i wouldn't be upset if they died, except for my kanboard instance. i can rebuild that from scratch if needed.

i'll be investing in a new mini-pc and some more disks soon, though.

in reply to modeh

Install Proxmox with ZFS.

Next, configure the no-subscription repo or buy a subscription.
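
Roughly like this for Proxmox VE 8 on Debian Bookworm (adjust the suite name for your version; the ceph.list line only applies if that file is present):

```bash
# Disable the enterprise repos and add pve-no-subscription:
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/ceph.list
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-no-subscription.list
apt update && apt full-upgrade
```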

in reply to Possibly linux

Is there any way to remove ZFS and Ceph? They cause errors and taint the kernel.

[Screenshot: kernel taint messages]

itsfoss.com/linus-torvalds-zfs…

in reply to interdimensionalmeme

Whilst I respect Linus on many things, he is always opinionated to the max. I seem to remember him doing the same over hardware RAID, and then software RAID, over the years. So, Linus, what would you like us to use for some data security, eh? I should also point out that this article is from 2023, a long time ago, and a whole host of the issues he cites are simply not true any longer.
in reply to interdimensionalmeme

There isn't anything better than ZFS at the moment. Having a tainted kernel doesn't really mean much.
in reply to Possibly linux

Except for slightly better deduplication, I don't see what justifies the extra complexity and living under the bad aura of Oracle. LVM does almost everything ZFS does; it's just less abstracted, which I actually like, because I want to know which hard drive my stuff is on, not some mushy file cloud that either all works or is all gone.
in reply to interdimensionalmeme

LVM is not even close.

ZFS is way more fault tolerant and scalable due to its underlying design. It continually does data integrity checks and will catch bit flips.

ZFS also has ARC, which lets your RAM act as a full-on cache, improving performance.
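
Those integrity checks can also be run on demand; for example ('rpool' is the default pool name on a ZFS install):

```bash
zpool scrub rpool
zpool status rpool   # scrub progress, plus any repaired or corrupted blocks
```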

in reply to Possibly linux

You can do data scrubbing with PAR2, or at the filesystem level with btrfs on top of LVM (or even in a traditional partition).

I think you can RAM-cache with bcachefs or a ramdisk, and unless you're in a VM, wouldn't your filesystem driver already do file caching in RAM?
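
For example, the btrfs equivalent of an on-demand scrub would be something like this (assuming a btrfs filesystem mounted at /mnt/data):

```bash
btrfs scrub start /mnt/data
btrfs scrub status /mnt/data   # checksum errors found and corrected
```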

in reply to interdimensionalmeme

Ceph isn't installed by default (at least it hasn't been any time I've set up PVE) and there's no need to use ZFS if you don't want to. It's available, but you can go right ahead and install the system on LVM instead.
in reply to tvcvt

Well, it taints the kernel and probably runs some background processes; it should be removable, no?
in reply to modeh

It depends a bit on your needs.
My Proxmox setup is multiple nodes (computers) with local storage (two drives in a ZFS mirror); they all use a TrueNAS server as an NFS host for data storage.
For some things I use containers (LXC), but for other things I use VMs.
in reply to modeh

I moved to Proxmox a while back and it was a big upgrade for my setup.

I do not use VMs for most of my services. Instead, I run LXC containers. They are lighter and perfect for individual services. To set one up, you need to download a template for an operating system. You can do this right from the Proxmox web interface. Go to the storage that supports LXC templates and click the Download Templates button in the top right corner. Pick something like Debian or Ubuntu. Once the template is downloaded, you can create a new container using it.
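
The CLI equivalent, if you prefer a shell on the host (the exact template filename changes with releases):

```bash
pveam update                    # refresh the template index
pveam available | grep debian   # list downloadable templates
pveam download local debian-12-standard_12.7-1_amd64.tar.zst
```
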

The difference between VMs and LXC containers is important. A VM emulates an entire computer, including its own virtual hardware and kernel. This gives you full isolation and lets you run completely different operating systems such as Windows or BSD, but it comes with a heavier resource load. An LXC container just isolates a Linux environment while running on the host system’s kernel. This makes containers much faster and more efficient, but they can only run Linux. Each container can also have its own IP address and act like a separate machine on your network.

I tend to keep all my services in LXC containers, and I run one VM which I use as a jump box I can hop into if need be. It's a pain getting X11 working in a container, so the VM makes more sense.

Before you start creating containers, you will probably need to create a storage pool. I named mine AIDS because I am an edgelord, but you can use a sensible name like pool0 or data.

Make sure you check the Start at boot option for any container or VM you want to come online automatically after a reboot or power outage. If you forget this step, your services will stay offline until you manually start them.
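
The same flag can be set from the CLI (IDs here are placeholders):

```bash
pct set 201 --onboot 1   # container
qm set 100 --onboot 1    # VM
```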

Expanding your storage with an external SSD works well for smaller setups. Longer term, you may want to use a NAS with fast network access. That lets you store your drive images centrally and, if you ever run multiple Proxmox servers, configure hot standby so one server can take over if another fails.
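
Attaching a NAS export as shared storage is one command; a sketch with a placeholder server address, export path, and storage name:

```bash
pvesm add nfs nas-data \
  --server 192.168.1.50 \
  --export /mnt/tank/proxmox \
  --content images,rootdir,backup
pvesm status   # verify it mounted
```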

I do not use hot standby myself. My approach is to keep files stored locally, then back them up to my NAS. The NAS in turn performs routine backups to an external drive. This gives me three copies of all my important files, which is a solid backup strategy.