
Assigning IP Addresses to Synology Containers
How to assign static IP addresses to Docker containers running on a Synology NAS
Justin Wyne / January 11, 2025
Overview
I have a few Docker containers running on my Synology NAS at home for Plex, photo backups, Speedtest, Cloudflare, and Home Assistant.
A few days ago, I noticed an unidentified 1GB upload from my NAS to the internet on my Firewalla firewall. However, since all of the containers share the NAS's IP address, I couldn't tell which container was responsible for that traffic.
To resolve this issue, I need to assign static IP addresses to each container.
To do this, I first tried assigning static IP addresses through the Synology Container Manager UI by disabling IP Masquerade, but that didn't work.
Eventually I found a solution by manually configuring a macvlan network through SSH.
Setting up the network
First, you need to identify the network interface you want to use. SSH into the NAS and run ip link show to list your interfaces. In my case, that's eth2, my primary interface, since I'm using an add-in 10GbE network card.
Now, you can't use the eth0/eth2 interface directly for the macvlan, because it is currently managed as part of an Open vSwitch setup (ovs-system). This is common on Synology NAS devices, which use Open vSwitch to handle network bonding, VLANs, and other advanced networking, and it can interfere with the creation of macvlan networks.
To work around the Open vSwitch management and allow Docker to use a macvlan network, you'll need to use the underlying interface instead (ovs_eth2, for example).
The interface ovs_eth2 is the Open vSwitch representation of eth2. You can try creating the macvlan network on ovs_eth2.
$ ip link show
5: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master ovs-system state UP mode DEFAULT group default qlen 1000
    link/ether 90:09:d0:54:8f:3d brd ff:ff:ff:ff:ff:ff
6: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1
    link/ether ca:09:3b:63:65:bc brd ff:ff:ff:ff:ff:ff
7: ovs_eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1
    link/ether 90:09:d0:59:1b:74 brd ff:ff:ff:ff:ff:ff
8: ovs_eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1
    link/ether 90:09:d0:59:1b:75 brd ff:ff:ff:ff:ff:ff
9: ovs_eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1
    link/ether 90:09:d0:54:8f:3d brd ff:ff:ff:ff:ff:ff
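If you want to double-check the Open vSwitch relationship, and the ovs-vsctl tool is available on your DSM install, a command like the one below should list eth2 as a port on the ovs_eth2 bridge. This is just a sanity check, assuming ovs_eth2 is an Open vSwitch bridge on your unit as it is on mine.

# List the ports attached to the ovs_eth2 bridge; eth2 should appear in the output
sudo ovs-vsctl list-ports ovs_eth2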
Avoiding IP conflicts
Now we need to make sure the IP addresses assigned to containers don't overlap with the rest of the network. If they do, address collisions can occur and prevent devices from connecting to the network.
On my router, a Firewalla Gold, I updated the DHCP server to assign IPs from 192.168.86.1 to 192.168.86.191, leaving the remaining 64 addresses (192.168.86.192–255) for containers.
I verified the CIDR calculation using a CIDR calculator.
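If you'd rather check the math from a terminal than a website, Python's ipaddress module works as a quick sanity check. This is just an illustration: python3 isn't included on every DSM install, so run it wherever it's available, and note that the reserved block below (192.168.86.192/26) is my reading of the 64 leftover addresses; the macvlan ip-range created later (192.168.86.192/28) is the first 16 addresses of that block.

# The reserved block .192-.255 is 64 addresses, i.e. 192.168.86.192/26
python3 -c "import ipaddress; n = ipaddress.ip_network('192.168.86.192/26'); print(n.num_addresses, n[0], '-', n[-1])"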

I then applied the range on my router:

Then I created the macvlan network on the Synology NAS with a matching IP range. In my case, 192.168.86.192/28 is the CIDR notation for the first 16 addresses of the reserved block (192.168.86.192–207).
sudo docker network create -d macvlan \
  --subnet=192.168.86.0/24 \
  --gateway=192.168.86.1 \
  --ip-range=192.168.86.192/28 \
  -o parent=ovs_eth2 \
  macvlan_network
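To confirm Docker recorded the subnet, gateway, and IP range you intended, you can inspect the new network before attaching anything to it:

# Show the network's driver, parent interface, and IPAM configuration
sudo docker network inspect macvlan_network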
Now you can see this network in the Synology Container Manager UI.

You can verify the network is working by running an Alpine container and pinging your gateway.
docker run --rm -it \
  --network=macvlan_network \
  alpine ping 192.168.86.1
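You can also pin a specific address from the container range on a throwaway container to confirm static assignment works; 192.168.86.195 below is just an example address from my /28.

# Request a fixed address from the container range and print the interface config
docker run --rm -it \
  --network=macvlan_network \
  --ip=192.168.86.195 \
  alpine ip addr show eth0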
Migrating Containers
I found that live-migrating containers to the new network left residual, incorrect IP addresses in the container configuration. To resolve this, I had to stop the container, remove it, and recreate it attached to the new network.
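As a rough sketch, here is what that looked like for one container, using Plex as an example. The container name, volume path, and pinned address are placeholders for your own setup; only the --network and --ip flags are the parts that matter here.

# Stop and remove the existing container (any mounted volumes and config directories are kept)
sudo docker stop plex
sudo docker rm plex

# Recreate it attached to the macvlan network with a pinned address
sudo docker run -d --name plex \
  --network=macvlan_network \
  --ip=192.168.86.193 \
  -v /volume1/docker/plex:/config \
  plexinc/pms-docker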

After that, I was able to see the new IP address assigned to the container.

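If you prefer the command line to the Container Manager UI, docker inspect can print the address directly (replace plex with your container's name):

# Print the IP address assigned to the container on each attached network
sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' plex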
Conclusion
I hope this guide helps you assign static IP addresses to your Docker containers. Let me know in the comments below if you have any questions or need help with your setup.