Graphical Interfaces for Docker Swarm
In this article I will explore several options for graphical interfaces to manage a Docker Swarm cluster, and I will show how to install Portainer as a Swarm Service.
The ecosystem of graphical interfaces for Docker has been growing and improving quickly lately, and I believe it will continue to do so for the next few years.
In my opinion, the current state of the Docker ecosystem still requires working mostly with the CLI tools; they are mature and well documented, which is not yet true of the GUI tools available to manage a Swarm.
In this article I will analyze some of them, especially Portainer, Shipyard and Rancher. There are a few others, some of them interesting enough to take a further look at, like Panamax or Docker Cloud; overviews of these and other tools can be found in other articles online.
Portainer
Portainer is by far the easiest one to install: it does not require a separate container for the DB, and the interface is very fast and responsive, great for getting an overview of what is happening on the cluster.
The product is marketed as an easy way to manage Docker and Docker Swarm, but it has some notable downsides; first of all, it is still not able to launch containers and apps as a Swarm Service. In general, its Swarm integration is still far from what it should be to be really useful.
It also lacks local registry configuration and Active Directory integration, which are both very important features for our organization. There is news that they will offer a separate module for AD integration by the end of the year, but it will be a paid feature.
In general, I really like its simplicity and its interface, and I find it very useful for getting a fast overview of the status of the Swarm. Its container debugging possibilities are also great, letting you check the logs, access the shell and view performance graphs.
Shipyard
Shipyard is built to be a great UI for Docker Swarm; it has support for Swarm services, local registries and AD integration.
The problem with Shipyard is its installation, and not because it is difficult: they offer two install options, one based on a script (called automatic) and a manual one where you create six containers for the app, which is also not complicated. The problem I find with this approach is the lack of a process to install Shipyard as a Swarm Service. An application whose focus is being the management portal of your Swarm cluster needs to have HA capabilities.
I don’t understand why they automate the installation process with a script rather than a docker-compose file. I would also appreciate the possibility of replacing their DB container (it uses RethinkDB) with an external database service, maybe MySQL, so you could achieve data persistence using services like AWS RDS or just an external database.
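To make the idea concrete, here is a hypothetical sketch of what a compose-based install could look like: package the controller and its RethinkDB backend in one file, then deploy everything as a Swarm stack with a single command. The service definitions below are illustrative, not Shipyard’s official setup.

```shell
# Sketch only: a minimal compose file describing the UI and its database.
cat > shipyard-stack.yml <<'EOF'
version: "3"
services:
  rethinkdb:
    image: rethinkdb
  controller:
    image: shipyard/shipyard
    ports:
      - "8080:8080"
EOF

# On a Manager Node, one command would then bring the whole stack up:
# docker stack deploy -c shipyard-stack.yml shipyard
```

With this approach the whole application ships as one file, and the Swarm takes care of rescheduling the containers if a node fails.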
This is a complaint I have about nearly all the Swarm/Kubernetes tools I have examined.
In general, Shipyard is already pretty feature-rich, its interface is clear and responsive, and I like its focus on providing support for Docker Swarm.
Rancher
Rancher is a very powerful, well-made tool, and the most enterprise-grade of these three options. It has RBAC, AD integration and detailed audit logs. Its catalog of applications, and the way you can add new ones, is also the best of the three.
It supports Docker Swarm (recently added), Kubernetes and Rancher’s own orchestrator, called Cattle.
Rancher’s recommendation is to install it outside the Swarm, on an independent host, which is an idea I don’t like. I prefer to keep the architecture as simple as possible, especially when I think about tools like AWS CloudFormation, and I don’t want to have to work out how to add High Availability to a tool like this; it should come out of the box, or even better, use the Swarm cluster itself.
It also offers no instructions for installing it as a Swarm Service, and Docker Swarm support still appears to be in a testing/beta state in some parts of its website and documentation.
I think all three tools are great, and almost ready for me, but not yet. At this point I still think that managing and configuring your local registries, compose files and, in some cases, scripts with API calls is better done through the CLI than through these web interfaces.
I will certainly give these tools another chance in the near future, because I am sure they will keep improving at a fast pace, and it is likely that my organization will end up using one of them to administer our Swarm; just not yet.
I don’t understand why there is no clear and fast way to configure persistent storage in an external database; it is something I often do in the Dockerfiles I write, and for me it is a pretty elegant solution for running a service on the Swarm with High Availability.
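As a sketch of the pattern I mean: a stateless UI service that keeps all of its state in an external database can be replicated and rescheduled freely by the Swarm. Everything below is hypothetical; the image name and environment variables are placeholders, not a real product.

```shell
# Hypothetical example: a management UI whose configuration lives in an
# external database (for instance AWS RDS). Because no state is stored in
# the container, the Swarm can run several replicas and move them between
# nodes without losing anything.
docker service create --name example-ui \
  --replicas 2 \
  --publish 8080:8080 \
  -e DB_HOST=mydb.eu-west-1.rds.amazonaws.com \
  -e DB_USER=uiadmin \
  example/ui
```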
I am pretty sure many readers of this article will disagree with me, or maybe point me in the right direction. I welcome any different point of view in the comments.
Nevertheless, I will install Portainer as a service in my Docker Swarm; while I will not use it for management tasks, it is still a great tool for getting a graphical overview of the cluster and for debugging problems with individual containers.
Installing Portainer as a Swarm Service
The official Portainer installation instructions have recently added a way to install it as a Swarm Service. While I prefer this solution to running it without HA, it is worth mentioning that, since there is no way to point Portainer at external storage for its configuration, every time the service moves from one host to another the admin credentials will have to be reconfigured, and any configuration done inside the Portainer interface will be lost.
To install and start Portainer as a Swarm Service, connect to a Manager Node and run the following command:
docker service create --name portainer \
  --publish 9000:9000 \
  --constraint 'node.role == manager' \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  portainer/portainer \
  -H unix:///var/run/docker.sock
This creates a service called portainer, publishes its internal port 9000 to the outside, and constrains it to run on Manager Nodes only; this is because only Manager Nodes can give an overview of the state of the Swarm and make changes to it.
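One possible mitigation for the configuration-loss problem mentioned above, based on how Docker named volumes work rather than on an official Portainer recipe: pin the service to a single manager by hostname and mount a named volume on /data, the directory where Portainer keeps its state. The data then survives restarts as long as the task stays on that node; it still will not follow the service to another host.

```shell
# Variant of the install command (a sketch, not the official instructions):
# persist /data in a named volume and pin the task to one specific manager
# so the same volume is always found. Replace "manager-1" with the hostname
# of one of your Manager Nodes.
docker service create --name portainer \
  --publish 9000:9000 \
  --constraint 'node.role == manager' \
  --constraint 'node.hostname == manager-1' \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  --mount type=volume,src=portainer_data,dst=/data \
  portainer/portainer \
  -H unix:///var/run/docker.sock
```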
After that we can connect to it using the IP of any of the nodes (thanks to Swarm's routing mesh) on port 9000.
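Before opening the browser, it can be useful to confirm that the service converged and to see which manager the task landed on; the standard service inspection commands do this:

```shell
# List the service and check its replica count (should show 1/1):
docker service ls --filter name=portainer

# Show the task(s) of the service and the node each one is running on:
docker service ps portainer
```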