There’s a certain kind of problem that doesn’t show up until everything should be working.
This was one of those.
What I wanted was simple: back up a Synology NAS to a remote NFS share using Synology Hyper Backup. No weird protocols. No duct tape. Just clean, boring infrastructure.
Instead, I ran straight into one of those invisible walls that make you question your life choices.
The Catch Nobody Mentions
Synology’s Hyper Backup has a rule. Not a guideline. A rule. The destination must live at the root of a volume. This seems reasonable, right up until you try to use NFS.
Because Synology has another rule: Remote NFS mounts are always placed inside a subfolder. Not optional and not configurable.
So now you have:
- A backup tool that refuses anything but root-level paths
- A filesystem that refuses to mount anything at root
And just like that, something trivial becomes impossible.
The Fix: Insert a Thin Layer
Instead of fighting the platform, I leaned into it.
The solution is a small Docker container that acts as a bridge:
- It runs an rsync daemon
- It mounts an NFS export internally
- It writes incoming rsync data directly to that mount
From Synology’s perspective, it’s just a standard rsync server. Nothing special. Nothing unusual. No hacks required.
Under the hood, it quietly forwards everything into NFS.
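Conceptually, the daemon side is nothing more than a standard rsyncd module whose path happens to be the internal NFS mount point. A minimal sketch of that idea (the module name and paths here are illustrative, not the project’s actual configuration):

```ini
# rsyncd.conf sketch -- module name and paths are illustrative.
# The NFS export is mounted inside the container (here at /mnt/nfs)
# before the daemon starts, so incoming rsync data lands directly
# on the remote storage.
[backup]
    path = /mnt/nfs
    read only = no
    uid = root
    gid = root
```

From the client’s point of view, a module like this is just addressed as `rsync://host:port/backup`; nothing about the path betrays that it is really NFS underneath.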
Why This Works
This approach works because it separates concerns cleanly:
- Synology talks rsync
- The container translates that into writes on NFS
- NFS remains the actual storage layer
No need to contort Synology into doing something it wasn’t designed to do.
Using It
The project is available at https://github.com/h0bbel/rsync-nfs, and the README has all the details.
The container image is published and ready to use:
https://ghcr.io/h0bbel/rsync-nfs:v0.5.0
Run it (I’m running it on Synology itself in Container Manager, but I’ve also tested running it externally without problems), point Synology’s Hyper Backup at it using rsync as the backup method, and suddenly the impossible destination becomes just another rsync module.
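As a rough sketch of what running it can look like with Docker Compose (the NFS-related environment variable names below are placeholders of mine, not the project’s documented options; check the README for the real ones):

```yaml
services:
  rsync-nfs:
    image: ghcr.io/h0bbel/rsync-nfs:v0.5.0
    privileged: true            # needed so the container can mount NFS
    ports:
      - "10873:873"             # don't publish on 873; Synology already uses it
    environment:
      NFS_SERVER: "nas.example.com"    # placeholder name
      NFS_EXPORT: "/export/backups"    # placeholder name
```

In Hyper Backup, the destination would then be rsync on port 10873 of whichever host runs the container.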
What’s Inside
The container is intentionally minimal; the initial release is a total of 55.9 MB when extracted.
- Alpine-based image
- rsync daemon frontend
- NFS mount handled at runtime
- Optional debug mode for visibility
- No unnecessary services or dependencies
It’s designed to do one thing well, and stay out of the way while doing it.
A Couple of Caveats
- It requires privileged mode for NFS mounting
- In my testing, mounting with NFS 4 works when running on Synology; NFS 4.1 does not.
- If running on Synology, the container’s external port needs to be something other than the default rsync port 873, since that port is already in use on Synology. (The port is configurable in the Hyper Backup destination setup.)
- It currently has no authentication, neither for the rsync front end nor for the NFS mount. My setup doesn’t require it, but I might add it in a later version if there is a need for it.
The Reality of Workarounds
This isn’t a workaround in the messy sense. It’s more of an adapter layer. Instead of forcing Synology to support remote NFS roots (which it won’t), we introduce something that fits neatly into both worlds. Sometimes the cleanest architecture isn’t the one that removes friction.
It’s the one that contains it.
Closing Thought
Not every system integrates cleanly with every other system, and that’s okay. I’m surprised that I couldn’t back up directly to a mounted NFS export, but that’s how it is.
The trick is recognizing when to bend the environment and when to insert a small, well-defined boundary that does the translation for you. In this case, a tiny rsync daemon turned out to be the missing piece.
And once it’s in place, everything just works.


