“What are you using it for?”
“To learn!” should be your stock response.
I purchased the ClusterHAT a while back as a scaled-down version of FrankenPi, although given the age of FrankenPi, the scale of the ClusterHAT has more to do with physical size than computing ability. To be honest, I suspect the ClusterHAT is at least as powerful, if I could only be bothered to run a benchmark on both projects to find out. Obviously there are literally hundreds of uses for a ClusterHAT and a multitude of ways to configure it; personally, I like Docker Swarm, mainly because it is so simple to install and easy to set up.
I wrote this mainly as a walkthrough for myself; there's no particular order to setting up storage and Docker, I just prefer to do it this way. If you choose to follow what I've written here, you do so at your own risk. This walkthrough comes with no support whatsoever: "If you break it, you get to keep all the pieces."
Setup shared storage device
Most of us have a USB storage device slung at the back of a drawer somewhere. You don't strictly need one, but shared storage will come in handy. To find out where the storage device appears in /dev, we need to run the lsblk command:
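The output below is illustrative (on my Pi the USB drive showed up as sda/sda1; your device names and sizes will differ):

```shell
lsblk
# NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
# sda           8:0    1  57.9G  0 disk            <- the USB drive (example)
# └─sda1        8:1    1  57.9G  0 part
# mmcblk0     179:0    0  29.7G  0 disk
# ├─mmcblk0p1 179:1    0   256M  0 part /boot
# └─mmcblk0p2 179:2    0  29.5G  0 part /
```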
I had already formatted my drive, but you may wish to use the mkfs.ext4 command. Just make doubly sure you are formatting the correct drive: "You wipe it, you lose it!"
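If you do need to format the drive, something like this will do it, assuming your partition is /dev/sda1 (substitute your own device from lsblk):

```shell
# DESTRUCTIVE: this erases everything on the partition!
sudo mkfs.ext4 /dev/sda1
```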
I generally use /media for mounting external drives, but you can place the folder anywhere on your filesystem. Make sure you use the same folder across all of your nodes!
Let’s go ahead and create the folder where you will mount the storage device.
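I'm using /media/Storage throughout this walkthrough; substitute your own path if you prefer:

```shell
sudo mkdir -p /media/Storage
```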
Now we need to run the blkid command so we can get the UUID of the drive. This will enable us to set up automatic mounting of the drive whenever the Pi is rebooted. The line you are looking for is the one for your storage partition, specifically its UUID and TYPE.
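For example (the UUID values below are made up; yours will be different):

```shell
sudo blkid
# /dev/mmcblk0p1: LABEL="boot" UUID="ABCD-1234" TYPE="vfat"
# /dev/sda1: UUID="a1b2c3d4-e5f6-7890-abcd-ef1234567890" TYPE="ext4"
```

The /dev/sda1 line (or wherever your drive is) is the one we want.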
Now we need to add the storage device to the bottom of your fstab.
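Something like this at the bottom of /etc/fstab (swap in the UUID you got from blkid; the mount options here are just sensible defaults):

```
UUID=a1b2c3d4-e5f6-7890-abcd-ef1234567890 /media/Storage ext4 defaults,noatime 0 2
```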
Now let’s install NFS server if you haven’t already done it.
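On Raspberry Pi OS that's:

```shell
sudo apt update
sudo apt install nfs-kernel-server -y
```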
Now we’ll need to edit /etc/exports and place the following at the bottom:
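Something like this (the subnet is an example matching the addresses used later in this walkthrough; adjust it to your own network):

```
/media/Storage 192.168.1.0/24(rw,sync,no_subtree_check)
```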
Next up we need to update the NFS server:
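Re-exporting picks up the new entry without restarting the service:

```shell
sudo exportfs -ra
# confirm the share is being exported:
sudo exportfs -v
```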
Now to add the storage device to each of the nodes (Pi Zeros). This is pretty much the same procedure we have already completed.
You will need to add a slightly different entry to the bottom of the fstab on each and every node (Pi Zero).
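The nodes mount the share over NFS from the controller rather than by UUID. Assuming the controller is reachable as pi0.local (adjust the hostname, or use its IP address), the entry looks like:

```
pi0.local:/media/Storage /media/Storage nfs defaults 0 0
```

Note that each node will also need the NFS client tools (`sudo apt install nfs-common`) and the same /media/Storage folder created first.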
Now let’s run:
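```shell
sudo mount -a
# check the share appeared:
df -h /media/Storage
```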
If you have any errors, double check your /etc/fstab files in the nodes and the /etc/exports file on the controller.
Next, create a text file inside the NFS mount directory /media/Storage to ensure that you can see it across all of the nodes (Pi Zeros). To confirm it's working, do:
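On the controller, then on each node (the filename is just an example):

```shell
# on the controller (pi0)
touch /media/Storage/hello.txt
# then on each node
ls /media/Storage
```

If hello.txt shows up on every node, the shared storage is working.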
Let’s install Docker
Starting with the ClusterHAT host (in my case pi0), I first like to make sure the system is up to date before I begin.
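```shell
sudo apt update && sudo apt full-upgrade -y
```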
Now we’ll fetch and install Docker.
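The easiest route on a Pi is Docker's official convenience script:

```shell
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```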
Now we’ll add (in my case) the user pi to the group Docker:
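```shell
sudo usermod -aG docker pi
# log out and back in (or reboot) for the group change to take effect
```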
You'll need to repeat this on all the nodes (Pi Zeros).
Now let's advertise your Host (Manager, main machine, whatever you like to call it).
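Assuming pi0's address is 192.168.1.18 (the address used later in this walkthrough; substitute your own):

```shell
docker swarm init --advertise-addr 192.168.1.18
```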
Docker Swarm needs a quorum, so let's add a couple more managers by generating a "join" token:
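```shell
docker swarm join-token manager
```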
This will output something similar to this:
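(The token below is a placeholder; use the one from your own output.)

```shell
# To add a manager to this swarm, run the following command:
docker swarm join --token SWMTKN-1-<your-manager-token> 192.168.1.18:2377
```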
Now we need to ssh into p1.local and paste the output into a terminal. Docker should report back that p1.local has joined as a manager. Then ssh into p2.local and repeat the process. We now have three managers (pi0, p1 and p2) forming our quorum.
Next we must create some workers:
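```shell
docker swarm join-token worker
```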
This will output something similar to this:
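(Again, the token below is a placeholder; use your own.)

```shell
docker swarm join --token SWMTKN-1-<your-worker-token> 192.168.1.18:2377
```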
ssh into p3.local and p4.local respectively and paste that output into a terminal on each node (Pi Zero).
Now nip back to the Host pi0 and see if everything is OK:
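```shell
docker node ls
# all five nodes should be listed with STATUS "Ready", and pi0, p1 and p2
# showing "Leader"/"Reachable" in the MANAGER STATUS column
```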
Let’s do something with it
Now you've built your lovely Docker Swarm, you'll want to run a service on it. You could install Visualizer, but it doesn't do a lot other than give you a visual overview of what containers and services you have. I like Portainer: not only does it give you full control, it also provides in-depth information, as well as including, yes, a visualizer.
On our Host machine (In my case pi0) do:
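Portainer publishes a ready-made stack file for Swarm; fetch it (this URL has changed between Portainer releases, so check Portainer's documentation if it 404s):

```shell
curl -L https://downloads.portainer.io/portainer-agent-stack.yml -o portainer-agent-stack.yml
```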
I'd love to talk to you about .yml, which is also used by Ansible for playbooks, but that's for another day.
Next we’ll deploy Portainer in our Swarm:
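```shell
docker stack deploy -c portainer-agent-stack.yml portainer
# check the services are starting:
docker service ls
```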
Wait a few moments for the service to propagate across your cluster (it's literally seconds) and then, in the browser of your choice, type the IP address of your host with port 9000 (in my case 192.168.1.18:9000). You will be asked to set an admin password and will then be logged in. It took a few moments for the information about the cluster to be probed by Portainer, but eventually we were happy bunnies.