Upgrading to Proxmox 9 sounds exciting on paper — new features, better performance, cleaner UI. But for a lot of users, that excitement quickly fades the moment their VMs start hoarding RAM like it's the apocalypse. Memory usage shoots through the roof, the Out-of-Memory (OOM) killer kicks in, and boom — your VMs go down hard.
And nope, you're not alone. Reports from the community are rolling in, and they all sound eerily similar: Proxmox 9 is handling memory in a whole new (and not-so-great) way compared to the more stable 8.4 release. Let's dig into what's really going on and how you can get your system under control again.
What's Happening with RAM on Proxmox 9?
Let's say you've got a server packing 196GB of RAM and 64GB of swap. You're running just a couple of VMs — an Ubuntu box needing 3GB and a Windows 11 VM with a generous 40GB allocation. Everything should be smooth sailing, right?
Not quite.
- Launch one VM and memory usage jumps to 120GB.
- Add another, and it spikes to 180GB.
- Try one more, and you hit the wall — OOM killer starts taking out your VMs.
It's a wild situation. The problem? Proxmox 9 is aggressively locking the full memory allocation for each VM, even if the VM itself is barely using it. This behavior is especially jarring for folks coming from 8.4, where memory was dynamically handled and ballooning actually worked.
One user even described assigning 100GB to a VM with ballooning enabled (min: 64GB), only to see the full 100GB locked out immediately — even though the OS only needed maybe 8GB. Compared to 8.4's "actual usage" model, this feels like a big regression.
Another user with 96GB RAM and 62 VMs (24 running) saw usage climb to 71.3GB, even though the workloads were tiny. The conclusion? Proxmox 9 is doing something different — and it's not good news if you like predictable resource usage.
What's Really Causing the RAM Drain?
After poking through the weeds (and a lot of forum posts), here's what seems to be behind the madness:
1. Ballooning Is Busted
Ballooning, in theory, lets a VM release unused memory back to the host. That way, RAM isn't locked up unless it's truly needed. In Proxmox 9, though, ballooning seems more like a placebo. Memory gets locked the moment a VM boots, regardless of whether the guest OS is using it. It's like ballooning is completely ignored.
In 8.4, ballooning worked pretty well. RSS (Resident Set Size) would match the VM's actual memory use. Now? RSS just reflects the max RAM assigned to the VM, even if it's mostly idle.
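If you want to see this for yourself, here's a quick way to compare what a VM has been assigned against what its KVM process is actually holding on the host. The VMID is a placeholder; the pidfile path is where Proxmox keeps them by default:

```bash
#!/bin/bash
# Compare a VM's configured memory to the host-side RSS of its KVM process.
# VMID 100 is a placeholder; substitute your own.
VMID=100
PID=$(cat /var/run/qemu-server/${VMID}.pid)

echo "Configured memory/balloon settings:"
qm config "$VMID" | grep -E '^(memory|balloon):'

echo "Host-side RSS of the KVM process:"
ps -o rss= -p "$PID" | awk '{printf "%.1f GiB\n", $1 / 1024 / 1024}'
```

On 8.4, the RSS figure tracked what the guest was actually using; on 9, reports suggest it sits near the full allocation from the moment the VM boots.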
2. ZFS ARC Cache Might Be a Side Actor
ZFS is great, but its ARC (Adaptive Replacement Cache) can eat up to half of your RAM by default. While this isn't the main culprit here, it can definitely add pressure when RAM starts running thin. If you're already fighting KVM for memory, unchecked ZFS ARC just piles on.
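To see how much the ARC is holding right now, you can read it straight out of the kernel stats (this path is standard for ZFS on Linux):

```bash
# Current ARC size in GiB, read from the ZFS kernel module's stats
awk '/^size / {printf "ARC size: %.1f GiB\n", $3 / 1024 / 1024 / 1024}' \
    /proc/spl/kstat/zfs/arcstats
```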
3. PCI Passthrough = Ballooning Blackout
If you're doing GPU passthrough (like passing through a 4070 to a Windows VM), ballooning's out. PCI passthrough disables it entirely, and that VM's RAM is 100% locked in. On Proxmox 9, this gets worse, since there's no dynamic relief when memory gets tight.
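A quick way to spot which of your VMs fall into this bucket is to grep the config directory for hostpci entries (the standard location on Proxmox):

```bash
# List VM configs that use PCI passthrough; ballooning is disabled for these
grep -l '^hostpci' /etc/pve/qemu-server/*.conf
```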
4. KVM Overhead + Fragmentation
Each VM doesn't just use its assigned RAM — KVM adds some overhead (roughly 5–10%). In 9, it looks like the hypervisor is holding onto that memory longer than it should, even when guests are done with it. Also, memory fragmentation means the system sometimes can't grab the big chunks of RAM it needs, leading to sudden OOM crashes.
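You can get a rough read on fragmentation from /proc/buddyinfo: each column counts free blocks of a given order, so lots of small-order blocks and zeros at the high orders means the kernel can't find big contiguous chunks:

```bash
# Free memory blocks per order (order 0 = 4 KiB pages, order 10 = 4 MiB blocks).
# Zeros in the rightmost columns indicate fragmented memory.
cat /proc/buddyinfo
```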
5. OOM Killer Doesn't Wait Around
When memory runs low, Linux's OOM killer steps in. Problem is, it often picks off the biggest RAM consumers — which are your VMs — even if they're not doing anything crazy. In 9, ballooning doesn't kick in to prevent this, so the VMs get smacked first.
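To confirm the OOM killer is what's taking your VMs down (rather than a crash inside the guest), check the kernel log:

```bash
# Look for OOM events and which processes got killed
journalctl -k | grep -iE 'out of memory|oom-kill|killed process'
```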
What You Can Actually Do to Fix It
Good news: you're not completely helpless here. With some config tweaks and good monitoring, you can work around these quirks. Here's what's been working for folks in the field:
1. Revisit Ballooning Strategy
If ballooning is acting up, double-check your setup:
- Ensure QEMU guest agent + VirtIO balloon drivers are running in the guest.
- Use `info balloon` in the VM's monitor tab to verify it's doing anything.
- Try narrowing the min/max range (like 8GB min, 12GB max); see the sketch after this list.
- For VMs with passthrough, just turn ballooning off and give them only what they need.
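As a concrete example of those last two bullets, both settings can be applied from the CLI with `qm` (the VMIDs here are hypothetical):

```bash
# Narrow the balloon range on VM 100: 12 GiB max, 8 GiB floor
qm set 100 --memory 12288 --balloon 8192

# VM 101 uses PCI passthrough, so disable ballooning outright
qm set 101 --balloon 0
```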
2. Cap That ZFS ARC
If ZFS is involved, don't let it run wild. Limit ARC with:
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u
rebootThat example sets ARC to 8GB. After reboot, check it with arc_summary or arcstat. You want ZFS to play nice with your VMs.
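If you'd rather not wait for the reboot, the same limit can usually be applied live through the module parameter (assuming the zfs module is loaded; note that on some versions the ARC shrinks to the new cap lazily rather than instantly):

```bash
# Apply the 8 GiB ARC cap immediately; the modprobe.d entry makes it stick
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
```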
3. Don't Overcommit Like It's 2023
Proxmox 9 doesn't like RAM overcommitment. Keep at least 10–20% of your RAM for the host + system tasks. For a 196GB system, don't allocate more than ~176GB total across all VMs. And don't assume ballooning will save you — it probably won't.
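A quick sanity check before adding another VM is to tally what's already allocated. Here's a rough sketch using `qm`; it only counts each VM's memory: line (running or not), so balloon floors and KVM overhead aren't included:

```bash
# Sum the configured memory across all VMs defined on this node (qm reports MiB)
for id in $(qm list | awk 'NR > 1 {print $1}'); do
  qm config "$id" | awk '/^memory:/ {print $2}'
done | awk '{sum += $1} END {printf "Total allocated: %.1f GiB\n", sum / 1024}'
```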
4. Optimize VM Settings
Couple of VM tweaks can help:
- Set disk cache mode to `none` or `writethrough` to avoid extra host RAM usage (see the sketch after this list).
- Don't assign ridiculous amounts of RAM unless needed. Leave a small buffer for overhead.
- Monitor with `htop` or `top` to catch VMs with runaway memory usage.
- Restart VMs if their RSS climbs way beyond guest usage.
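For the cache-mode bullet, the setting lives on each disk entry. A hypothetical example, assuming VM 100 has its disk on local-lvm (check `qm config` first so you reuse the exact volume spec):

```bash
# Inspect the current disk line first
qm config 100 | grep scsi0
# Then re-set it with cache=none appended (volume name here is an example)
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=none
```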
5. Tame the OOM Killer
You can make VMs less likely to be killed off:
- Adjust `oom_score_adj` to prioritize what stays alive.
- One method is using a systemd service to lower the score for KVM processes (e.g., set it to `-500`); a minimal sketch follows below.
- There are solid community gists showing how to wire this up.
Just be careful — protecting VMs too aggressively might cause the host to kill something more essential.
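Here's a minimal sketch of the score-lowering idea, assuming Proxmox's standard pidfile location; a systemd timer or hookscript could run it whenever VMs start:

```bash
#!/bin/bash
# Lower oom_score_adj for every running KVM process so the OOM killer
# prefers other targets. -500 is an example value, not a recommendation.
for pidfile in /var/run/qemu-server/*.pid; do
  pid=$(cat "$pidfile")
  [ -d "/proc/$pid" ] && echo -500 > "/proc/$pid/oom_score_adj"
done
```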
6. Test Before You Launch a Fleet
Don't spin up all your VMs at once. Try them one by one and monitor:
- `free -h`
- `htop`
- `arcstat` if you're using ZFS
See where the RAM starts vanishing. If you're convinced it's a bug, check the Proxmox forums. Rolling back to 8.4 is also an option — several users did just that and saw a huge improvement.
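One way to make that methodical is a small loop that starts VMs one at a time and snapshots memory after each (the VMIDs and settle time here are placeholders):

```bash
# Start VMs one by one and record free memory after each boot settles
for id in 100 101 102; do
  qm start "$id"
  sleep 60   # give the guest time to boot and the balloon driver to settle
  echo "--- after starting VM $id ---"
  free -h
done
```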
7. It Could Be Your Hardware or Kernel Too
Some setups just don't vibe well with Proxmox 9, especially when GPU passthrough is involved. And kernel regressions aren't off the table. One user on kernel 6.8.12-4 had issues even on 8.3, suggesting it might not just be Proxmox's fault.
Try disabling passthrough temporarily to see if behavior improves. Also keep an eye out for kernel patches on the Proxmox forums.
So… Should You Be Using Proxmox 9 Right Now?
Honestly? If stability matters to you (and you're not chasing new features), you might want to wait a bit. Proxmox 9's memory behavior feels like a regression, especially if you're coming from 8.4 where ballooning was actually useful.
That said, it's not all doom. Proxmox is open-source, and devs plus the community are already looking into it. There's a good chance things will smooth out over the next few updates.
You Can Win This RAM War
Proxmox 9 may be a bit wild with how it handles memory, but it's not unbeatable. With the right settings, a bit of tuning, and awareness of your hardware's quirks, you can still get solid, reliable performance.
Keep tabs on updates, check in with the community, and don't be afraid to experiment. Whether you're running a packed homelab or just a couple of VMs for dev work, there's still plenty of power in this platform — you just have to wrestle it into shape.