There’s a special category of problems that only show up when everything looks like it should just work.
This was one of them.
The goal wasn’t ambitious. Back up a Synology NAS to a remote NFS share using Hyper Backup. No clever tricks, no exotic setup. Just something simple and predictable.
Instead, I ran into one of those quiet limitations that turns a straightforward task into a dead end.
Problem#
Hyper Backup is pretty strict about where it writes data. Not a preference—more like a hard requirement: the destination has to be at the root of a volume.
On its own, that doesn’t sound unreasonable. Until you bring NFS into the picture.
Because Synology has its own opinion about that: NFS mounts always land inside a subfolder. No toggle, no workaround, no discussion.
Which leaves you stuck between two perfectly valid rules that just don’t agree:
- The backup tool insists on a root-level destination
- The system won’t let you mount anything at root
And suddenly, a basic backup setup turns into something you can’t actually build.
How I fixed it#
At some point it stopped making sense to fight Synology’s rules. Both sides were behaving exactly as designed; they just weren’t designed to work together.
So instead of forcing it, I added a thin layer in between.
The solution ended up being a small Docker container that acts as a bridge between rsync and NFS:
- It runs an rsync daemon
- It mounts the NFS export internally
- It writes whatever comes in straight to that mount
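The README describes the actual implementation; as a rough sketch of the idea (with hypothetical variable and path names, not the container’s real configuration), the entrypoint boils down to two steps: mount the export, then run rsyncd in the foreground:

```shell
#!/bin/sh
# Sketch of the bridge container's entrypoint -- illustrative only.
# NFS_SERVER, NFS_EXPORT, and /data are placeholder names.
set -e

# Mount the remote NFS export inside the container
# (this is why privileged mode is required, see the caveats below)
mkdir -p /data
mount -t nfs -o vers=4 "${NFS_SERVER}:${NFS_EXPORT}" /data

# Run the rsync daemon in the foreground so the container stays up;
# rsyncd's module config points at /data
exec rsync --daemon --no-detach --config=/etc/rsyncd.conf
```

Everything upstream of that mount is plain rsync; everything downstream is plain NFS.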
From Synology’s point of view, nothing changed. It’s just talking to a regular rsync target. No special configuration, no weird tweaks.
Behind the scenes, the container handles the translation and drops the data onto NFS.
What makes this work is the separation:
- Hyper Backup knows how to talk rsync, not NFS
- The container handles that translation
- NFS stays exactly what it is: the storage layer
No bending Hyper Backup into something it isn’t. Just giving it something it already understands.
Using It#
The container is available at https://github.com/h0bbel/rsync-nfs, and the README has all the details.
The container image is published and ready to use:
https://ghcr.io/h0bbel/rsync-nfs:latest
https://hub.docker.com/r/h0bbel/rsync-nfs
Run it, point Synology’s Hyper Backup at it using rsync as the backup method, and suddenly the impossible destination becomes just another module. I’m running it on Synology itself in Container Manager, but I’ve also tested running it externally without problems.
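In practice, getting it running is a single invocation. The flags below follow from the caveats later in this post (privileged mode, remapped rsync port); the environment variable names are illustrative, so check the README for the real ones:

```shell
# Hypothetical invocation -- variable names are placeholders.
# --privileged:   needed so the container can mount NFS
# -p 8873:873:    873 is already taken by Synology itself,
#                 so expose the daemon on a different external port
docker run -d \
  --name rsync-nfs \
  --privileged \
  -p 8873:873 \
  -e NFS_SERVER=192.168.1.50 \
  -e NFS_EXPORT=/export/backups \
  ghcr.io/h0bbel/rsync-nfs:latest
```

In the Hyper Backup destination setup you’d then point at the Docker host’s address with port 8873 and rsync as the backup type.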
Container Contents#
The container is intentionally minimal: the initial release is a total of 55.9 MB when extracted.
- Alpine-based image (3.23.3)
- rsync daemon frontend
- NFS mount handled at runtime
- Optional debug mode for visibility
- No unnecessary services or dependencies
It’s designed to do one thing well, and stay out of the way while doing it.
A Couple of Caveats#
- It requires privileged mode for NFS mounting
- In my testing, mounting with NFS 4 works when running on Synology, but NFS 4.1 does not
- If running on Synology, the container’s external port needs to be something other than the default rsync port 873, since that port is already in use by Synology itself (the port is configurable in the Hyper Backup destination setup)
- It currently has no authentication, neither for the rsync front end nor for the NFS mount. My setup doesn’t require it, but I might add it in a later version if there is a need for it
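To make the no-authentication caveat concrete: a minimal rsyncd module config for a setup like this could look as follows. The module name and path are made up for illustration; the shipped image has its own config:

```ini
# Hypothetical rsyncd.conf module -- names are placeholders.
# "backup" is the module Hyper Backup selects as its destination;
# /data is where the container mounted the NFS export.
[backup]
    path = /data
    read only = no
    uid = 0
    gid = 0
# Note the absence of "auth users" / "secrets file":
# the current version ships without rsync authentication.
```

Adding `auth users` and a `secrets file` to the module is where authentication would slot in if a later version grows it.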
Workaround or adapter layer? Both?#
This isn’t a workaround in the messy sense. It’s more of an adapter layer. Instead of forcing Synology to support remote NFS roots (which it won’t), we introduce something that fits neatly into both worlds. Sometimes the cleanest architecture isn’t the one that removes friction.
It’s the one that contains it.
Closing Thoughts#
Not every system integrates cleanly with every other system, and that’s okay. I’m surprised that I couldn’t back up directly to a mounted NFS export, but that’s just how it is.
The trick is recognizing when to bend the environment and when to insert a small, well-defined boundary that does the translation for you. In this case, a tiny rsync daemon turned out to be the missing piece.
And once it’s in place, everything just works the way I want it to.