This is a work-in-progress tutorial for work-in-progress tools. It’s not ready for use yet.
We try to assume as little as possible about the reader’s knowledge, but a basic understanding of unix is definitely required. Here is a description of the tools most readers will be introduced to here:
- vagga – a tool for setting up development environments. For this tutorial, we will use it for building container images. Similar tools: vagrant, docker-compose, otto, packer (in some sense).
- lithos – a container supervisor. It starts containers in the production environment. Unlike docker, it has no tools for building and fetching container images; we will use vagga and rsync for those tasks. Similar tools: docker, rocket, systemd-nspawn.
- cantal – a monitoring system, or rather a system for collecting statistics. Its main distinction is that it is decentralized. It stores data in memory and keeps only recent data. This makes it fast and highly available, which in turn allows making orchestration decisions based on the metrics. Another feature is built-in peer discovery. Similar tools: collectd, prometheus, graphite.
- verwalter – an orchestration system. It is highly scriptable and decentralized, meaning you can perform orchestration tasks even in a split-brain scenario; what the system can actually do in that case is up to you. The tool also includes text templates for rendering configuration for any external system that is part of the cluster. Similar tools: mesos, kubernetes.
Any tool can potentially be replaced by some other tool. Currently, the only hard dependency is that you need cantal to run verwalter.
Anyway, this combination provides good robustness, security, and ease of use. See Concepts for more details on how these tools rely on each other to provide these features.
Usually you start with a vagga container that works locally. There is a tutorial for building a container for a django application. We will skip this part and assume you have a working container. Please don’t skip this step even if you already have a development environment set up (but not containerized). It is important for the following reasons:
- You need to know all dependencies and their versions; it may happen that you don’t know the exact list of system dependencies, for example if you are using virtualenv.
- Vagga makes everything read-only by default, as does lithos. This serves as an additional check of which filesystem paths are writable by the application (hopefully you don’t have any).
- We’ll need the container for the next steps. We will base our deployment container on the development one (see below).
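As a rough sketch of what basing one container on another looks like, vagga provides a !Container setup step. The container names and setup steps below are illustrative assumptions, not configuration from this tutorial:

```yaml
# Hypothetical vagga.yaml fragment: a deployment container built on top
# of the development one. Names and setup steps are illustrative only.
containers:
  django:            # development container, assumed to already work
    setup:
    - !Ubuntu trusty
    - !Py3Requirements requirements.txt
  deploy:            # deployment container based on the development one
    setup:
    - !Container django
```

The deployment container inherits everything installed in the development one, so you only add (or strip) deployment-specific pieces on top.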
It’s also a good idea to add a check of whether your application needs a writable /tmp. Just add a volume to your vagga container config:

```yaml
containers:
  django:
    ...
    volumes:
      /tmp: !Empty
```

This makes /tmp read-only, so you can see errors when the application tries to write there, and you can either fix the application (preferred in my opinion) or enable a writable /tmp mount in lithos configs later on.
(TBD: we skip exact installation instructions for now, because we don’t have repositories online yet).
Verwalter (and cantal too) requires /etc/machine-id. If your system is run by systemd, then you already have this file. Otherwise, you can either use systemd-machine-id-setup from the systemd utilities, or just run a simpler script like uuidgen | sed s/-//g > /etc/machine-id. You must run the script once on every machine, and the file must never change. Don’t put the file in a virtual machine image such as an AMI: the system will malfunction if several machines have the same machine-id.
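A minimal sketch of the one-time generation described above. The TARGET variable and local file path are for illustration only, so the sketch can run without root; on a real machine the target is /etc/machine-id:

```shell
#!/bin/sh
# Sketch: generate a machine-id once, and never overwrite an existing one.
# TARGET defaults to a local file for illustration; really /etc/machine-id.
TARGET="${TARGET:-machine-id}"
if [ ! -s "$TARGET" ]; then
    # uuidgen comes from util-linux; fall back to the kernel's uuid source
    (uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid) | sed 's/-//g' > "$TARGET"
fi
```

Guarding on the file being non-empty is what makes this safe to run on every boot: an existing id is never regenerated.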
Here is a checklist:
- /etc/lithos/master.yaml (doc) – may be empty, but must be present
- /etc/lithos/sandboxes/APP_NAME.yaml (doc) – must be present for each application you want to deploy on the machine
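The checklist can be verified with a small script. This is a sketch with a hypothetical helper name; the config root argument exists only so the check can be exercised outside /etc:

```shell
#!/bin/sh
# Sketch: check that the lithos configs from the checklist exist.
# check_configs takes a config root (normally /etc/lithos) and an app name.
check_configs() {
    root="$1"
    app="$2"
    status=0
    if [ ! -e "$root/master.yaml" ]; then
        echo "missing: $root/master.yaml"
        status=1
    fi
    if [ ! -e "$root/sandboxes/$app.yaml" ]; then
        echo "missing: $root/sandboxes/$app.yaml"
        status=1
    fi
    return $status
}

# On a real machine you would run: check_configs /etc/lithos APP_NAME
```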
These configs are not generated by verwalter, for security reasons. For example, the sandbox config limits the directories on the host system that an application is able to read or write. We don’t want any application that can reach verwalter’s HTTP API to be able to change such fundamental constraints.
On the other hand, the reasons above don’t mean you can’t automate deploying these files. You can easily use ansible to upload them, or put them into a virtual machine image, such as an AMI.
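For instance, uploading the checklist files with ansible could look like the following sketch. The source paths and the application name are hypothetical; only the destination paths come from the checklist above:

```yaml
# Hypothetical ansible tasks uploading the lithos configs from the checklist.
- name: upload lithos master config
  copy:
    src: files/lithos/master.yaml
    dest: /etc/lithos/master.yaml

- name: upload sandbox config for the application
  copy:
    src: files/lithos/sandboxes/myapp.yaml
    dest: /etc/lithos/sandboxes/myapp.yaml
```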