Are you running a firewall like ufw with docker? You might be surprised to learn that your firewall is probably not doing anything to block unwanted internet traffic from reaching your docker services. Docker inserts its own iptables rules, which take precedence over the rules set by ufw. In this article, I will explain how to check whether the services running on your server are exposed, and how to protect them.

Check for exposed docker services

I usually begin articles like this one by explaining some history or back-story to provide context. But in this case, let's dive right into how to check whether your services are exposed remotely.

In this section we will use netstat and nmap to check for local processes that are listening for TCP connections and to scan ports. To install them:

sudo apt-get install net-tools nmap

Use netstat to print a list of processes that are actively listening for TCP connections:

sudo netstat -tlpn

Example results:

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0*               LISTEN      17021/docker-proxy
tcp        0      0*               LISTEN      17146/docker-proxy
tcp        0      0           0.0.0.0:*               LISTEN      17330/docker-proxy
tcp        0      0              0.0.0.0:*               LISTEN      651/sshd

From the above results, we can see that we have four services listening for TCP connections. "Local Address" is the IP address and port number on which the service is listening; requests addressed to that IP and port will be handled by the corresponding process.

  • "127.0.0.1" is the loopback address. Services bound to the loopback address are only reachable from the machine itself; they are not accessible remotely.
  • "0.0.0.0" means all interfaces. Services bound to this address are accessible remotely unless a firewall is blocking those requests.
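You can see this binding behavior for yourself with a small Python sketch. This is a toy demonstration of my own, not part of the article's setup: on Linux, 127.0.0.2 is also a loopback address, so it stands in for a "different interface" without needing a second machine:

```python
import socket

# A listener bound to 127.0.0.1 only accepts connections addressed
# to 127.0.0.1 - not to other addresses the host owns.
loopback_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback_only.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
loopback_only.listen()
port = loopback_only.getsockname()[1]

# Connecting to 127.0.0.1 succeeds.
socket.create_connection(("127.0.0.1", port), timeout=1).close()

# Connecting to 127.0.0.2 (also loopback on Linux) is refused,
# because the listener is not bound to that address.
try:
    socket.create_connection(("127.0.0.2", port), timeout=1).close()
    loopback_refused = False
except ConnectionRefusedError:
    loopback_refused = True
print("127.0.0.1-bound listener refused 127.0.0.2:", loopback_refused)

# A listener bound to 0.0.0.0 accepts connections on every address.
wildcard = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
wildcard.bind(("0.0.0.0", 0))
wildcard.listen()
socket.create_connection(("127.0.0.2", wildcard.getsockname()[1]), timeout=1).close()
print("0.0.0.0-bound listener accepted a connection on 127.0.0.2")
```

The wildcard listener is exactly what docker-proxy creates when you publish a port without specifying a bind address.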

Let's verify this by using nmap to scan for open ports, starting from the machine itself:

nmap -p 0-65535 localhost

Example results:

Starting Nmap 7.60 ( https://nmap.org ) at 2021-08-16 16:00 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.010s latency).
Not shown: 65532 closed ports
PORT      STATE SERVICE
22/tcp    open  ssh
8332/tcp  open  unknown
8333/tcp  open  bitcoin
5432/tcp  open  postgresql

Nmap done: 1 IP address (1 host up) scanned in 3.15 seconds

Here we see that all four service ports are open when scanning locally. But this doesn't tell us what we really want to know: are these ports exposed remotely?

To answer that, we need the system's LAN IP address. You can get this by using ifconfig:

ifconfig | grep -Po "inet 192.168.[^ ]+" | grep -Po "192.168.[^ ]+"

If your system is a VPS running in a cloud, then its LAN IP address might begin with "10." instead of "192.168.". Check the full output of ifconfig to view all of your system's network interfaces.
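If you'd rather script this step, here's a minimal Python sketch of the same idea (my own illustration, not from the article): connect() on a UDP socket sends no packets, it just asks the kernel which source address it would pick for an outbound route:

```python
import socket

def get_lan_ip():
    """Return the local IP the kernel would use for outbound traffic,
    or None if no route is available."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # UDP connect() transmits nothing; it only selects a route,
        # which in turn determines our own outbound (LAN) address.
        s.connect(("192.0.2.1", 80))  # TEST-NET-1 address, never reached
        return s.getsockname()[0]
    except OSError:
        return None
    finally:
        s.close()

print(get_lan_ip())
```

On a typical home network this prints a 192.168.x.x address; on a cloud VPS it may print a 10.x.x.x address instead.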

Now let's repeat the scan with the LAN IP address:

nmap -p 0-65535 192.168.XXX.XXX

Example results:

Starting Nmap 7.60 ( https://nmap.org ) at 2021-08-16 16:00 UTC
Nmap scan report for 192.168.XXX.XXX
Host is up (0.010s latency).
Not shown: 65534 closed ports
PORT      STATE SERVICE
22/tcp    open  ssh
5432/tcp  open  postgresql

Nmap done: 1 IP address (1 host up) scanned in 3.31 seconds

From the above we can see that the system has two ports open to remote TCP traffic. The first is port 22, which is used for SSH access. If we need to access the machine remotely via SSH, then this port should stay open.

The second port is for PostgreSQL, which very likely should not be exposed remotely. On this example server, we're running PostgreSQL in a docker container.

So what gives? Why is docker exposing this service remotely? The answer is because you told it to. Now let's fix it.

The Fix: Don't expose docker services remotely

Sounds simple, right?

Most users of docker don't realize that they are exposing their services remotely when they publish ports. For example, this command creates and runs a docker container:

docker run -p 3000:3000 <image>

The -p argument tells docker to "publish" port 3000 - i.e. create a listener on the host and forward requests on port 3000 to the new container. But this is insecure, because the default address that docker binds to is "0.0.0.0" - all interfaces!

These kinds of examples are all over the internet in tutorials, how-to articles, GitHub issues, stackoverflow answers, and more. Users have been trained to use docker in an insecure way.

The fix is actually quite simple. When publishing ports, tell docker to bind to "127.0.0.1" instead:

docker run -p 127.0.0.1:3000:3000 <image>

Now the service will not be exposed remotely.

You can find more details about using the -p, --publish arguments in the official documentation.

The docker + ufw problem: Unintuitive defaults

Based on the many articles, bugs, and issues about this common problem, it's safe to say that docker's default behavior is far from intuitive. One could even call the default behavior dangerous.

From an old issue which remains unfixed as of today:

Docker Network bypasses Firewall, no option to disable

Steps to reproduce the issue:

  1. Setup the system with a locked down firewall
  2. Create a set of docker containers with exposed ports
  3. Check the firewall; docker will by use "anywhere" as the source, thereby all containers are exposed to the public.

And the problem has recently attracted attention on Hacker News:

Hacker deleted all of NewsBlur’s Mongo data and is now holding the data hostage

NewsBlur's founder here. I'll attempt to explain what's happening.


It's been a great year of maintenance and I've enjoyed the fruits of Ansible + Docker for NewsBlur's 5 database servers (PostgreSQL, MongoDB, Redis, Elasticsearch, and soon ML models).


Turns out the ufw firewall I enabled and diligently kept on a strict allowlist with only my internal servers didn't work on a new server because of Docker. When I containerized MongoDB, Docker helpfully inserted an allow rule into iptables, opening up MongoDB to the world.

So what's going on here? Why is ufw ineffective at blocking traffic to services run inside docker containers?

Docker inserts its own iptables rules, and those rules take effect before ufw's. So the ufw rules that you think are protecting your docker services are not actually doing that.

If you're curious, you can print your system's iptables rules with the following command:

sudo iptables -S

You will notice both ufw and docker have inserted their own rules.

I don't claim to understand the deep, dark magic of iptables. So I won't even begin to try to explain it here.

What about --iptables=false?

The most popular solution to the docker + ufw problem is to configure the docker daemon with --iptables=false. This is a bad idea because it breaks docker's networking: containers lose out-bound internet access, and networking between containers stops working. So if you want docker to function properly, you will need to create and manage iptables rules manually. That doesn't sound like a viable long-term solution.

If you're really interested, you can have a look at the proposed solution in this stackoverflow answer. It looks like a lot of effort for not a lot of benefit. The much simpler solution is to just not expose your services: bind to "127.0.0.1" when publishing your service ports.

Are non-docker services exposed as well?

There's one big question that might be on your mind: are the services running outside of docker exposed too? Luckily, the answer appears to be no. You can verify this yourself. Create a TCP listener on port 12345 using netcat:

nc -l -k -p 12345

Leave this running and open a new, separate terminal window. If your ufw is enabled, then port 12345 should be blocked by default. Use nmap to perform a port scan on the individual port:

nmap -p 12345 192.168.XXX.XXX


Starting Nmap 7.70 ( https://nmap.org ) at 2021-08-16 16:00 UTC
Nmap scan report for 192.168.XXX.XXX
Host is up (0.00031s latency).

PORT      STATE SERVICE
12345/tcp open  netbus

Nmap done: 1 IP address (1 host up) scanned in 0.21 seconds

Whoops! Looks like ufw didn't block the port scan. Maybe it's because we're running the port scan locally? Let's try remotely. Run the following command from a different computer that's connected to the same LAN (router/wifi):

nmap -p 12345 192.168.XXX.XXX


Starting Nmap 7.60 ( https://nmap.org ) at 2021-08-16 16:00 UTC
Nmap scan report for 192.168.XXX.XXX
Host is up.

PORT      STATE    SERVICE
12345/tcp filtered netbus

Nmap done: 1 IP address (1 host up) scanned in 2.01 seconds

Phew! Looks like ufw is doing its job - at least for non-dockerized services.

Defense in-depth

Keep using ufw to protect your systems. But you shouldn't rely on a single line of defense to protect your services. If ufw was the only thing standing between your services and the public internet, then that was a mistake.

Use the networking tools above to check for exposed services on your systems.
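If you want to automate that check on Linux, here's a minimal sketch (my own illustration, not a tool from the article) that parses /proc/net/tcp and reports IPv4 TCP listeners bound to 0.0.0.0 - every port it flags deserves a second look:

```python
import socket

def wildcard_listeners(proc_path="/proc/net/tcp"):
    """Return ports of IPv4 TCP listeners bound to 0.0.0.0 (Linux only)."""
    LISTEN = "0A"  # TCP state code for LISTEN in /proc/net/tcp
    ports = set()
    with open(proc_path) as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            local_addr, state = fields[1], fields[3]
            ip_hex, port_hex = local_addr.split(":")
            # 00000000 is 0.0.0.0 in the file's hex encoding
            if state == LISTEN and ip_hex == "00000000":
                ports.add(int(port_hex, 16))
    return ports

# Demo: a wildcard listener is flagged, a loopback-bound one is not.
wild = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
wild.bind(("0.0.0.0", 0))
wild.listen()
loop = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loop.bind(("127.0.0.1", 0))
loop.listen()

found = wildcard_listeners()
print("wildcard port flagged:", wild.getsockname()[1] in found)
print("loopback port flagged:", loop.getsockname()[1] in found)
```

This only reads the same kernel table that netstat does, so docker-proxy listeners show up here just like any other process.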

If your system is a VPS at a cloud provider, then you should look at what firewall options they have available. In the case of DigitalOcean, it's possible to configure a firewall that protects your VPS ("droplet") from unwanted traffic. You can find this under Network > Firewalls. Create a new firewall, allow-list the ports that you want, and then add your droplets. It's that simple.

If your cloud provider doesn't offer a network-level firewall, you could use Cloudflare's network firewall service. This has negative privacy implications, since all of your traffic is routed through Cloudflare, but it could be a reasonable trade-off in your case.

If the system is on physical hardware to which you have physical access, then you might consider configuring a firewall on the router through which your system connects to the internet. If your router doesn't allow you to configure such a firewall, then it might be time to invest in better hardware.

That's it for this one. Good luck and stay safe!