I'm not sure if I have a fundamental misunderstanding of how this should work, or if there is a problem. I have the following setup:
Dokploy Server - doubles as Swarm Manager
app1 - Swarm Worker
app2 - Swarm Worker
I have connected the workers to the manager per the Dokploy instructions, and the "Cluster" page in Dokploy shows all the nodes correctly. Everything looks OK. I have deployed a Docker image to my Dokploy project, and Dokploy shows the image running on all 3 nodes. However, when I SSH into the hosts and run docker ps, I only see the container running on the manager node. On the worker nodes, I only see the Traefik container.
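In case it helps, these are the kinds of commands I've been running from the manager to check where Swarm actually scheduled things (the service name "myapp" here is just a placeholder for whatever Dokploy named my service):

```shell
# On the manager: confirm all three nodes joined and show Ready / Active
docker node ls

# List the services Swarm knows about, with their replica counts
docker service ls

# Show which node each task of the service was actually scheduled on
docker service ps myapp
```

From what I can tell, docker ps on a single host only shows containers local to that host, so docker service ps is the view that should matter for placement, but it still doesn't show tasks on the workers.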
The Cluster page also shows that both worker servers are running Dokploy, but when I try to visit them via http://ip:3000, nothing loads, and a netstat -tulpn | grep 3000 on the hosts shows nothing listening on that port.
What I'm trying to do is run one application replicated across multiple servers behind a load balancer. I thought this would be relatively simple, and exactly what Dokploy was designed for, but either that isn't the case or I'm doing something very wrong. Any advice is appreciated.
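For reference, my understanding of what this should look like in plain Swarm terms (I don't know exactly what Dokploy generates under the hood, and the service/image names below are placeholders) is a replicated service with no placement constraint pinning it to the manager, roughly:

```yaml
# Hypothetical Swarm stack fragment - names are placeholders, not my actual config
services:
  myapp:
    image: myorg/myapp:latest
    deploy:
      mode: replicated
      replicas: 3            # with 3 nodes, tasks should spread across them
      placement:
        constraints: []      # no node.role constraint, so workers are eligible
```

If Dokploy instead deploys the service with something like a `node.role == manager` constraint, that would explain why the containers only appear on the manager, but I haven't found a setting for this in the UI.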