Merge remote-tracking branch 'origin/main' into hubsite
commit 072d3b4037
13 changed files with 624 additions and 16 deletions

@@ -1,3 +1,5 @@

[](https://matrix.to/#/#mash-playbook:devture.com) [](https://liberapay.com/mother-of-all-self-hosting/donate)

# Mother-of-All-Self-Hosting Ansible playbook

**MASH** (**M**other-of-**A**ll-**S**elf-**H**osting) is an [Ansible](https://www.ansible.com/) playbook that helps you self-host services as [Docker](https://www.docker.com/) containers on your own server.
@@ -6,6 +8,8 @@ By running services in containers, we can have a predictable and up-to-date setu

This project is fairly new and only [supports a handful of services](docs/supported-services.md) so far, but will grow to support self-hosting a large number of [FOSS](https://en.wikipedia.org/wiki/Free_and_open-source_software) pieces of software.

[Installation](docs/README.md) (upgrades) and some maintenance tasks are automated using [Ansible](https://www.ansible.com/) (see [our Ansible guide](docs/ansible.md)).


## Supported services
@@ -33,7 +37,7 @@ When updating the playbook, refer to [the changelog](CHANGELOG.md) to catch up w

## Why create such a mega playbook?

-We used to maintain separate playbooks for various services (Matrix, Nextcloud, Gitea, Vaultwarden, PeerTube, ..). They re-used roles (for Postgres, Traefik, etc.), but were still hard to maintain due to the large duplication of effort.
+We used to maintain separate playbooks for various services ([Matrix](https://github.com/spantaleev/matrix-docker-ansible-deploy), [Nextcloud](https://github.com/spantaleev/nextcloud-docker-ansible-deploy), [Gitea](https://github.com/spantaleev/gitea-docker-ansible-deploy), [Gitlab](https://github.com/spantaleev/gitlab-docker-ansible-deploy), [Vaultwarden](https://github.com/spantaleev/vaultwarden-docker-ansible-deploy), [PeerTube](https://github.com/spantaleev/peertube-docker-ansible-deploy), ..). They re-used Ansible roles (for [Postgres](https://github.com/devture/com.devture.ansible.role.postgres), [Traefik](https://github.com/devture/com.devture.ansible.role.traefik), etc.), but were still hard to maintain due to the large duplication of effort.

Most of these playbooks hosted services which require a Postgres database, a Traefik reverse-proxy, a backup solution, etc. All of them needed to come with documentation, etc.
All these things need to be created and kept up-to-date in each and every playbook.
@@ -55,6 +59,9 @@ We're finding the need for a playbook which combines all of this into one, so th

Having one large playbook with all services does not necessarily mean you need to host everything on the same server though. Feel free to use as many servers as you see fit. While containers provide some level of isolation, it's still better to not put all your eggs in one basket and create a single point of failure.

All of the aforementioned playbooks have been absorbed into this one. See the [full list of supported services here](docs/supported-services.md).
The [Matrix playbook](https://github.com/spantaleev/matrix-docker-ansible-deploy) will remain separate, because it contains a huge number of components and will likely grow even more. It deserves to stand on its own.


## What's with the name?

docs/ansible.md — 121 lines — Normal file

@@ -0,0 +1,121 @@

# Running this playbook

This playbook is meant to be run using [Ansible](https://www.ansible.com/).

Ansible typically runs on your local computer and carries out tasks on a remote server.
If your local computer cannot run Ansible, you can also run Ansible on some server somewhere (including the server you wish to install to).


## Supported Ansible versions

To manually check which version of Ansible you're on, run: `ansible --version`.

For the **best experience**, we recommend getting the **latest version of Ansible available**.

We're not sure about the minimum version of Ansible that can run this playbook successfully.
The lowest version that we've confirmed (on 2022-11-26) to be working fine is `ansible-core` (`2.11.7`) combined with `ansible` (`4.10.0`).

If your distro ships with an Ansible version older than this, you may run into issues. Consider [Upgrading Ansible](#upgrading-ansible) or [using Ansible via Docker](#using-ansible-via-docker).


## Upgrading Ansible

Depending on your distribution, you may be able to upgrade Ansible in a few different ways:

- by using an additional repository (PPA, etc.), which provides newer Ansible versions. See instructions for [CentOS](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#installing-ansible-on-rhel-centos-or-fedora), [Debian](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#installing-ansible-on-debian), or [Ubuntu](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#installing-ansible-on-ubuntu) on the Ansible website.

- by removing the Ansible package (`yum remove ansible` or `apt-get remove ansible`) and installing via [pip](https://pip.pypa.io/en/stable/installation/) (`pip install ansible`).

If using the `pip` method, do note that the `ansible-playbook` binary may not be on the [`$PATH`](https://linuxconfig.org/linux-path-environment-variable), but in some more special location like `/usr/local/bin/ansible-playbook`. You may need to invoke it using the full path.
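
For example, the `pip` route described above could look roughly like this (a sketch — the final install location may differ on your system):

```bash
# Replace the distro package with a pip-installed Ansible
apt-get remove ansible        # or: yum remove ansible
pip install ansible

# If ansible-playbook isn't on your $PATH afterwards, call it via its full path:
/usr/local/bin/ansible-playbook --version
```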

**Note**: Both of the above methods are a bad way to run system software such as Ansible.
If you find yourself needing to resort to such hacks, please consider reporting a bug to your distribution and/or switching to a sane distribution, which provides up-to-date software.


## Using Ansible via Docker

Alternatively, you can run Ansible inside a Docker container (powered by the [devture/ansible](https://hub.docker.com/r/devture/ansible/) Docker image).

This ensures that you're using a very recent Ansible version, which is less likely to be incompatible with the playbook.

You can either [run Ansible in a container on the server itself](#running-ansible-in-a-container-on-the-server-itself) or [run Ansible in a container on another computer (not the server)](#running-ansible-in-a-container-on-another-computer-not-the-server).


### Running Ansible in a container on the server itself

To run Ansible in a (Docker) container on the server itself, you need to have a working Docker installation.
Docker is normally installed by the playbook, so this may be a bit of a chicken and egg problem. To solve it:

- you **either** need to install [Docker](services/ansible.md) manually first. Follow [the upstream instructions](https://docs.docker.com/engine/install/) for your distribution and consider setting `mash_playbook_docker_installation_enabled: false` in your `vars.yml` file, to prevent the playbook from installing Docker

- **or** you need to run the playbook in another way (e.g. [Running Ansible in a container on another computer (not the server)](#running-ansible-in-a-container-on-another-computer-not-the-server)) at least the first time around

Once you have a working Docker installation on the server, **clone the playbook** somewhere on the server and configure it as per usual (`inventory/hosts`, `inventory/host_vars/..`, etc.), as described in [configuring the playbook](configuring-playbook.md).

You would then need to add `ansible_connection=community.docker.nsenter` to the host line in `inventory/hosts`. This tells Ansible to connect to the "remote" machine by switching Linux namespaces with [nsenter](https://man7.org/linux/man-pages/man1/nsenter.1.html), instead of using SSH.
Alternatively, you can leave your `inventory/hosts` as is and specify the connection type in **each** `ansible-playbook` call you do later, like this: `ansible-playbook --connection=community.docker.nsenter ...`
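
A minimal sketch of what the host line might look like (the group name and hostname below are placeholders — keep whatever your `inventory/hosts` already contains and just append the connection variable):

```ini
[mash_servers]
mash.example.com ansible_connection=community.docker.nsenter
```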

Run this from the playbook's directory:

```bash
docker run -it --rm \
--privileged \
--pid=host \
-w /work \
-v `pwd`:/work \
--entrypoint=/bin/sh \
docker.io/devture/ansible:2.13.6-r0-3
```

Once you execute the above command, you'll be dropped into a `/work` directory inside a Docker container.
The `/work` directory contains the playbook's code.

First, consider running `git config --global --add safe.directory /work` to [resolve directory ownership issues](#resolve-directory-ownership-issues).

Finally, you can execute `ansible-playbook ...` (or `ansible-playbook --connection=community.docker.nsenter ...`) commands as per normal now.
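
For example, a full installation run could look like this (a sketch — the `setup.yml` file name is an assumption here; the tags match the ones used by the playbook's `justfile`. Follow the installation guide for the exact command):

```bash
# Run from the /work directory inside the container
ansible-playbook -i inventory/hosts setup.yml --tags=install-all,start
```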


### Running Ansible in a container on another computer (not the server)

Run this from the playbook's directory:

```bash
docker run -it --rm \
-w /work \
-v `pwd`:/work \
-v $HOME/.ssh/id_rsa:/root/.ssh/id_rsa:ro \
--entrypoint=/bin/sh \
docker.io/devture/ansible:2.13.6-r0-3
```

The above command tries to mount an SSH key (`$HOME/.ssh/id_rsa`) into the container (at `/root/.ssh/id_rsa`).
If your SSH key is at a different path (not in `$HOME/.ssh/id_rsa`), adjust that part.

Once you execute the above command, you'll be dropped into a `/work` directory inside a Docker container.
The `/work` directory contains the playbook's code.

First, consider running `git config --global --add safe.directory /work` to [resolve directory ownership issues](#resolve-directory-ownership-issues).

Finally, you can execute `ansible-playbook ...` commands as per normal now.


#### If you don't use SSH keys for authentication

If you don't use SSH keys for authentication, simply remove that whole line (`-v $HOME/.ssh/id_rsa:/root/.ssh/id_rsa:ro`).

To authenticate to your server using a password, you need to add a package. So, when you are in the shell of the Ansible Docker container (started by the `docker run -it ...` command above), run:

```bash
apk add sshpass
```

Then, to be asked for the password whenever running an `ansible-playbook` command, add `--ask-pass` to the arguments of the command.
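
For example (a sketch — the `setup.yml` file name is an assumption; use whatever command the installation guide tells you to run):

```bash
ansible-playbook -i inventory/hosts setup.yml --tags=install-all,start --ask-pass
```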


#### Resolve directory ownership issues

Because you're `root` in the container running Ansible and this likely differs from the owner (your regular user account) of the playbook directory outside of the container, certain playbook features which use `git` locally may report warnings such as:

> fatal: unsafe repository ('/work' is owned by someone else)
> To add an exception for this directory, call:
> git config --global --add safe.directory /work

These errors can be resolved by making `git` trust the playbook directory, by running `git config --global --add safe.directory /work`.

docs/services/aux.md — 7 lines — Normal file

@@ -0,0 +1,7 @@

# AUX

The [AUX](https://github.com/mother-of-all-self-hosting/ansible-role-aux) Ansible role can help you manage auxiliary files and directories on your server.

It's useful for when you'd like to use Ansible to drop additional files on the server.

Consult the role's documentation to learn more.
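
As an illustration only, dropping an extra file via `vars.yml` might look roughly like the sketch below — the variable and field names here are hypothetical; the role's README documents the actual ones:

```yaml
# Hypothetical sketch — check the AUX role's README for the real variable names
aux_file_definitions:
  - dest: /mash/aux/motd.txt
    content: |
      Welcome to this MASH-managed server!
    mode: "0644"
```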

docs/services/focalboard.md — 46 lines — Normal file

@@ -0,0 +1,46 @@

# Focalboard

[Focalboard](https://www.focalboard.com/) is an open source, self-hosted alternative to [Trello](https://trello.com/), [Notion](https://www.notion.so/), and [Asana](https://asana.com/).


## Dependencies

This service requires the following other services:

- a [Postgres](postgres.md) database
- a [Traefik](traefik.md) reverse-proxy server


## Configuration

To enable this service, add the following configuration to your `vars.yml` file and re-run the [installation](../installing.md) process:

```yaml
########################################################################
#                                                                      #
# focalboard                                                           #
#                                                                      #
########################################################################

focalboard_enabled: true

focalboard_hostname: mash.example.com
focalboard_path_prefix: /focalboard

########################################################################
#                                                                      #
# /focalboard                                                          #
#                                                                      #
########################################################################
```

In the example configuration above, we configure the service to be hosted at `https://mash.example.com/focalboard`.

You can remove the `focalboard_path_prefix` variable definition, to make it default to `/`, so that the service is served at `https://mash.example.com/`.


## Usage

After [installation](../installing.md), you can go to the Focalboard URL you've configured above.

You can use the signup page to register the first (administrator) user. The first signup is always allowed. Users after the first one need an invitation link to sign up.

docs/services/grafana.md — 96 lines — Normal file

@@ -0,0 +1,96 @@

# Grafana

[Grafana](https://grafana.com/) is an open and composable observability and data visualization platform, often used with [Prometheus](prometheus.md).


## Dependencies

This service requires the following other services:

- a [Traefik](traefik.md) reverse-proxy server


## Configuration

To enable this service, add the following configuration to your `vars.yml` file and re-run the [installation](../installing.md) process:

```yaml
########################################################################
#                                                                      #
# grafana                                                              #
#                                                                      #
########################################################################

grafana_enabled: true

grafana_hostname: mash.example.com
grafana_path_prefix: /grafana

grafana_default_admin_user: admin
# Generating a strong password (e.g. `pwgen -s 64 1`) is recommended
grafana_default_admin_password: ''

########################################################################
#                                                                      #
# /grafana                                                             #
#                                                                      #
########################################################################
```

In the example configuration above, we configure the service to be hosted at `https://mash.example.com/grafana`.

You can remove the `grafana_path_prefix` variable definition, to make it default to `/`, so that the service is served at `https://mash.example.com/`.

### Configuring data sources

Grafana is merely a visualization tool. It needs to pull data from a metrics (time-series) database, like [Prometheus](prometheus.md).

You can add multiple data sources to Grafana.

#### Integrating with a local Prometheus instance

If you're installing [Prometheus](prometheus.md) on the same server, you can hook Grafana to it over the container network with the following **additional** configuration:

```yaml
grafana_provisioning_datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: "http://{{ prometheus_identifier }}:9090"

# Prometheus runs in another container network, so we need to connect to it.
grafana_container_additional_networks_additional:
  - "{{ prometheus_container_network }}"
```

For connecting to a **remote** Prometheus instance, you may need to adjust the `url` accordingly and drop the container-network configuration (which only applies to a local instance).


### Integrating with Prometheus Node Exporter

If you've installed [Prometheus Node Exporter](prometheus-node-exporter.md) on any host (target) scraped by Prometheus, you may wish to install a dashboard for Prometheus Node Exporter.

The Prometheus Node Exporter role exposes a list of URLs containing dashboards (JSON files) in its `prometheus_node_exporter_dashboard_urls` variable.

You can add this **additional** configuration to make the Grafana service pull these dashboards:

```yaml
grafana_dashboard_download_urls: |
  {{
    prometheus_node_exporter_dashboard_urls
  }}
```


## Usage

After installation, you should be able to access your new Grafana instance at the configured URL (see above).

Going there, you'll be taken to the initial setup wizard, which will let you assign some passwords and other configuration.


## Recommended other services

Grafana is just a visualization tool, which requires pulling data from a metrics (time-series) database.

You may be interested in combining it with [Prometheus](prometheus.md).
docs/services/prometheus-blackbox-exporter.md — 34 lines — Normal file

@@ -0,0 +1,34 @@

# Prometheus Blackbox Exporter

This playbook can configure [Prometheus Blackbox Exporter](https://github.com/prometheus/blackbox_exporter).

## Configuration

To enable this service, add the following configuration to your `vars.yml` file and re-run the [installation](../installing.md) process:

```yaml
########################################################################
#                                                                      #
# prometheus-blackbox-exporter                                         #
#                                                                      #
########################################################################

prometheus_blackbox_exporter_enabled: true

# If you want to expose the Blackbox Exporter's probe endpoint, uncomment and adjust the following variables:

# prometheus_blackbox_exporter_hostname: mash.example.com
# prometheus_blackbox_exporter_path_prefix: /metrics/blackbox-exporter
# prometheus_blackbox_exporter_basicauth_user: your_username
# prometheus_blackbox_exporter_basicauth_password: your password

########################################################################
#                                                                      #
# /prometheus-blackbox-exporter                                        #
#                                                                      #
########################################################################
```

## Usage

After you've installed the Blackbox Exporter, your blackbox prober will be available at `mash.example.com/metrics/blackbox-exporter` with the Basic Auth credentials you've configured, provided a hostname and path prefix were set.
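
For example, you could query the probe endpoint like this (a sketch — it assumes the configured path prefix is stripped by the reverse-proxy and that the `http_2xx` module from Blackbox Exporter's default configuration is available):

```bash
curl -u 'your_username:your password' \
  'https://mash.example.com/metrics/blackbox-exporter/probe?target=https://example.com&module=http_2xx'
```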


@@ -1,7 +1,8 @@

-# Prometheus Node Expoter
+# Prometheus Node Exporter

This playbook can configure [Prometheus Node Exporter](https://github.com/prometheus/node_exporter).


## Configuration

To enable this service, add the following configuration to your `vars.yml` file and re-run the [installation](../installing.md) process:
@@ -14,10 +15,15 @@ To enable this service, add the following configuration to your `vars.yml` file

########################################################################

prometheus_node_exporter_enabled: true

-prometheus_node_exporter_hostname: mash.example.com
-prometheus_node_exporter_path_prefix: /metrics/node-exporter
-prometheus_node_exporter_basicauth_user: your_username
-prometheus_node_exporter_basicauth_password: your password

+# To expose the metrics publicly, enable and configure the lines below:
+# prometheus_node_exporter_hostname: mash.example.com
+# prometheus_node_exporter_path_prefix: /metrics/node-exporter
+
+# To protect the metrics with HTTP Basic Auth, enable and configure the lines below:
+# prometheus_node_exporter_basicauth_enabled: true
+# prometheus_node_exporter_basicauth_user: your_username
+# prometheus_node_exporter_basicauth_password: your password

########################################################################
#                                                                      #
@@ -26,6 +32,10 @@ prometheus_node_exporter_basicauth_password: your password

########################################################################
```

Unless you're scraping the Prometheus Node Exporter metrics from a local [Prometheus](prometheus.md) instance, as described in [Integrating with Prometheus Node Exporter](prometheus.md#integrating-with-prometheus-node-exporter), you will probably wish to expose the metrics publicly so that a remote Prometheus instance can fetch them.
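
On the remote Prometheus side (if that instance is not managed by this playbook), a scrape configuration along these lines could then pull the metrics — a sketch; adjust the hostname, path, and credentials to match what you configured above:

```yaml
scrape_configs:
  - job_name: mash-node-exporter
    scheme: https
    metrics_path: /metrics/node-exporter
    basic_auth:
      username: your_username
      password: your password
    static_configs:
      - targets:
          - mash.example.com
```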

## Usage

After you've installed the Node Exporter, your node stats will be available at `mash.example.com/metrics/node-exporter` with the Basic Auth credentials you've configured.

To integrate Prometheus Node Exporter with a [Prometheus](prometheus.md) instance, see the [Integrating with Prometheus Node Exporter](prometheus.md#integrating-with-prometheus-node-exporter) section of the documentation.

docs/services/prometheus.md — 77 lines — Normal file

@@ -0,0 +1,77 @@

# Prometheus

[Prometheus](https://prometheus.io/) is a metrics collection and alerting monitoring solution.


## Configuration

To enable this service, add the following configuration to your `vars.yml` file and re-run the [installation](../installing.md) process:

```yaml
########################################################################
#                                                                      #
# prometheus                                                           #
#                                                                      #
########################################################################

prometheus_enabled: true

########################################################################
#                                                                      #
# /prometheus                                                          #
#                                                                      #
########################################################################
```

By default, Prometheus is configured to scrape (collect metrics from) its own process. If you wish to disable this behavior, use `prometheus_self_process_scraper_enabled: false`.

To make Prometheus useful, you'll need to make it scrape one or more hosts by adjusting the configuration.


### Integrating with Prometheus Node Exporter

If you've installed [Prometheus Node Exporter](prometheus-node-exporter.md) on the same host, you can make Prometheus scrape its metrics with the following **additional configuration**:

```yaml
prometheus_self_node_scraper_enabled: true
prometheus_self_node_scraper_static_configs_target: "{{ prometheus_node_exporter_identifier }}:9100"

# node-exporter runs in another container network, so we need to connect to it.
prometheus_container_additional_networks:
  - "{{ prometheus_node_exporter_container_network }}"
```

To scrape a **remote** Prometheus Node Exporter instance, do not use `prometheus_self_node_scraper_*`, but rather follow the [Scraping any other exporter service](#scraping-any-other-exporter-service) guide below.


### Scraping any other exporter service

To inject your own scrape configuration, use the `prometheus_config_scrape_configs_additional` variable that's part of the [ansible-role-prometheus](https://github.com/mother-of-all-self-hosting/ansible-role-prometheus) Ansible role.

Example **additional** configuration:

```yaml
prometheus_config_scrape_configs_additional:
  - job_name: some_job
    metrics_path: /metrics
    scrape_interval: 120s
    scrape_timeout: 120s
    static_configs:
      - targets:
          - some-host:8080

  - job_name: another_job
    metrics_path: /metrics
    scrape_interval: 120s
    scrape_timeout: 120s
    static_configs:
      - targets:
          - another-host:8080
```

If you're scraping other services running in containers over the container network, make sure the Prometheus container is connected to their network by adjusting `prometheus_container_additional_networks` as demonstrated above for [Integrating with Prometheus Node Exporter](#integrating-with-prometheus-node-exporter).


## Recommended other services

To visualize your Prometheus metrics (time-series), you may wish to use a tool like [Grafana](grafana.md).


@@ -7,14 +7,18 @@

| [Docker Registry](https://docs.docker.com/registry/) | A container image distribution registry | [Link](services/docker-registry.md) |
| [Docker Registry Browser](https://github.com/klausmeyer/docker-registry-browser) | Web Interface for the Docker Registry HTTP API V2 written in Ruby on Rails | [Link](services/docker-registry-browser.md) |
| [Docker Registry Purger](https://github.com/devture/docker-registry-purger) | A small tool used for purging a private Docker Registry's old tags | [Link](services/docker-registry-purger.md) |
+| [Focalboard](https://www.focalboard.com/) | An open source, self-hosted alternative to [Trello](https://trello.com/), [Notion](https://www.notion.so/), and [Asana](https://asana.com/). | [Link](services/focalboard.md) |
| [Gitea](https://gitea.io/) | A painless self-hosted Git service. | [Link](services/gitea.md) |
+| [Grafana](https://grafana.com/) | An open and composable observability and data visualization platform, often used with [Prometheus](services/prometheus.md) | [Link](services/grafana.md) |
| [Hubsite](https://github.com/moan0s/hubsite) | A simple, static site that shows an overview of the available services | [Link](services/hubsite.md) |
| [Miniflux](https://miniflux.app/) | Minimalist and opinionated feed reader. | [Link](services/miniflux.md) |
| [Nextcloud](https://nextcloud.com/) | The most popular self-hosted collaboration solution for tens of millions of users at thousands of organizations across the globe. | [Link](services/nextcloud.md) |
| [PeerTube](https://joinpeertube.org/) | A tool for sharing online videos | [Link](services/peertube.md) |
-| [Prometheus Node Exporter](https://github.com/prometheus/node_exporter) | Exporter for machine metrics | [Link](services/prometheus-node-exporter.md) |
| [Postgres](https://www.postgresql.org) | A powerful, open source object-relational database system | [Link](services/postgres.md) |
| [Postgres Backup](https://github.com/prodrigestivill/docker-postgres-backup-local) | A solution for backing up PostgreSQL to local filesystem with periodic backups. | [Link](services/postgres-backup.md) |
+| [Prometheus](https://prometheus.io/) | A metrics collection and alerting monitoring solution | [Link](services/prometheus.md) |
+| [Prometheus Node Exporter](https://github.com/prometheus/node_exporter) | Exporter for machine metrics | [Link](services/prometheus-node-exporter.md) |
+| [Prometheus Blackbox Exporter](https://github.com/prometheus/blackbox_exporter) | Blackbox probing of HTTP/HTTPS/DNS/TCP/ICMP and gRPC endpoints | [Link](services/prometheus-blackbox-exporter.md) |
| [Radicale](https://radicale.org/) | A Free and Open-Source CalDAV and CardDAV Server (solution for hosting contacts and calendars) | [Link](services/radicale.md) |
| [Redmine](https://redmine.org/) | A flexible project management web application. | [Link](services/redmine.md) |
| [Redis](https://redis.io/) | An in-memory data store used by millions of developers as a database, cache, streaming engine, and message broker. | [Link](services/redis.md) |

@@ -35,4 +39,4 @@

| Name | Description |
| ------------------------------ | ------------------------------------- |
| [Garage](https://garagehq.deuxfleurs.fr/), by absorbing [garage-docker-ansible-deploy](https://github.com/moan0s/garage-docker-ansible-deploy) | Open-source distributed object storage service tailored for self-hosting |
| [Prometheus](https://prometheus.io/) | Monitoring system and time series database |


@@ -1,5 +1,25 @@

---

########################################################################
#                                                                      #
# aux                                                                  #
#                                                                      #
########################################################################

aux_directory_default_owner: "{{ mash_playbook_user_username }}"
aux_directory_default_group: "{{ mash_playbook_user_groupname }}"

aux_file_default_owner: "{{ mash_playbook_user_username }}"
aux_file_default_group: "{{ mash_playbook_user_groupname }}"

########################################################################
#                                                                      #
# /aux                                                                 #
#                                                                      #
########################################################################


########################################################################
#                                                                      #
# system/security                                                      #
@@ -65,8 +85,12 @@ devture_systemd_service_manager_services_list_auto: |

+
([{'name': (docker_registry_purger_identifier + '.timer'), 'priority': 3000, 'groups': ['mash', 'docker-registry-purger']}] if docker_registry_purger_enabled else [])
+
([{'name': (focalboard_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'focalboard']}] if focalboard_enabled else [])
+
([{'name': (gitea_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'gitea', 'gitea-server']}] if gitea_enabled else [])
+
([{'name': (grafana_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'grafana']}] if grafana_enabled else [])
+
([{'name': (nextcloud_identifier + '-server.service'), 'priority': 2000, 'groups': ['mash', 'nextcloud', 'nextcloud-server']}] if nextcloud_enabled else [])
+
([{'name': (nextcloud_identifier + '-cron.timer'), 'priority': 2500, 'groups': ['mash', 'nextcloud', 'nextcloud-cron']}] if nextcloud_enabled else [])
@@ -75,6 +99,10 @@ devture_systemd_service_manager_services_list_auto: |

+
([{'name': (peertube_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'peertube']}] if peertube_enabled else [])
+
([{'name': (prometheus_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'metrics', 'prometheus']}] if prometheus_enabled else [])
+
([{'name': (prometheus_blackbox_exporter_identifier + '.service'), 'priority': 500, 'groups': ['mash', 'metrics', 'prometheus-blackbox-exporter']}] if prometheus_blackbox_exporter_enabled else [])
+
([{'name': (prometheus_node_exporter_identifier + '.service'), 'priority': 500, 'groups': ['mash', 'metrics', 'prometheus-node-exporter']}] if prometheus_node_exporter_enabled else [])
+
([{'name': (radicale_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'radicale']}] if radicale_enabled else [])
@@ -124,6 +152,12 @@ devture_postgres_systemd_services_to_stop_for_maintenance_list: |

devture_postgres_managed_databases_auto: |
{{
([{
'name': focalboard_database_name,
'username': focalboard_database_username,
'password': focalboard_database_password,
}] if focalboard_enabled and focalboard_database_type == 'postgres' and focalboard_database_hostname == devture_postgres_identifier else [])
+
([{
'name': gitea_config_database_name,
'username': gitea_config_database_username,
@@ -460,6 +494,53 @@ docker_registry_purger_gid: "{{ mash_playbook_gid }}"


########################################################################
#                                                                      #
# focalboard                                                           #
#                                                                      #
########################################################################

focalboard_enabled: false

focalboard_identifier: "{{ mash_playbook_service_identifier_prefix }}focalboard"

focalboard_base_path: "{{ mash_playbook_base_path }}/focalboard"

focalboard_uid: "{{ mash_playbook_uid }}"
focalboard_gid: "{{ mash_playbook_gid }}"

focalboard_systemd_required_systemd_services_list: |
{{
(['docker.service'])
+
([devture_postgres_identifier ~ '.service'] if devture_postgres_enabled and focalboard_database_hostname == devture_postgres_identifier else [])
}}

focalboard_database_type: "{{ 'postgres' if devture_postgres_enabled else '' }}"
focalboard_database_hostname: "{{ devture_postgres_identifier if devture_postgres_enabled else '' }}"
focalboard_database_port: "{{ '5432' if devture_postgres_enabled else '' }}"
focalboard_database_password: "{{ '%s' | format(mash_playbook_generic_secret_key) | password_hash('sha512', 'db.focalboard', rounds=655555) | to_uuid }}"

focalboard_container_additional_networks: |
{{
([mash_playbook_reverse_proxyable_services_additional_network] if mash_playbook_reverse_proxyable_services_additional_network else [])
+
([devture_postgres_container_network] if devture_postgres_enabled and focalboard_database_hostname == devture_postgres_identifier else [])
}}

focalboard_container_labels_traefik_enabled: "{{ mash_playbook_traefik_labels_enabled }}"
focalboard_container_labels_traefik_docker_network: "{{ mash_playbook_reverse_proxyable_services_additional_network }}"
focalboard_container_labels_traefik_entrypoints: "{{ devture_traefik_entrypoint_primary }}"
focalboard_container_labels_traefik_tls_certResolver: "{{ devture_traefik_certResolver_primary }}"

########################################################################
#                                                                      #
# /focalboard                                                          #
#                                                                      #
########################################################################


########################################################################
#                                                                      #
# gitea                                                                #
@@ -507,6 +588,43 @@ gitea_config_database_password: "{{ '%s' | format(mash_playbook_generic_secret_k


########################################################################
#                                                                      #
# grafana                                                              #
#                                                                      #
########################################################################

grafana_enabled: false

grafana_identifier: "{{ mash_playbook_service_identifier_prefix }}grafana"

grafana_base_path: "{{ mash_playbook_base_path }}/grafana"

grafana_uid: "{{ mash_playbook_uid }}"
grafana_gid: "{{ mash_playbook_gid }}"

grafana_container_additional_networks: "{{ grafana_container_additional_networks_reverse_proxy + grafana_container_additional_networks_additional }}"

grafana_container_additional_networks_reverse_proxy: |
{{
([mash_playbook_reverse_proxyable_services_additional_network] if mash_playbook_reverse_proxyable_services_additional_network else [])
}}

grafana_container_additional_networks_additional: []

grafana_container_labels_traefik_enabled: "{{ mash_playbook_traefik_labels_enabled }}"
grafana_container_labels_traefik_docker_network: "{{ mash_playbook_reverse_proxyable_services_additional_network }}"
grafana_container_labels_traefik_entrypoints: "{{ devture_traefik_entrypoint_primary }}"
grafana_container_labels_traefik_tls_certResolver: "{{ devture_traefik_certResolver_primary }}"

########################################################################
#                                                                      #
# /grafana                                                             #
#                                                                      #
########################################################################


########################################################################
#                                                                      #
# miniflux                                                             #
@@ -653,6 +771,67 @@ peertube_systemd_required_services_list: |

########################################################################


########################################################################
#                                                                      #
# prometheus                                                           #
#                                                                      #
########################################################################

prometheus_enabled: false

prometheus_identifier: "{{ mash_playbook_service_identifier_prefix }}prometheus"

prometheus_base_path: "{{ mash_playbook_base_path }}/prometheus"

prometheus_uid: "{{ mash_playbook_uid }}"
prometheus_gid: "{{ mash_playbook_gid }}"

########################################################################
#                                                                      #
# /prometheus                                                          #
#                                                                      #
########################################################################


########################################################################
#                                                                      #
# prometheus_blackbox_exporter                                         #
#                                                                      #
########################################################################

prometheus_blackbox_exporter_enabled: false

prometheus_blackbox_exporter_identifier: "{{ mash_playbook_service_identifier_prefix }}prometheus-blackbox-exporter"

prometheus_blackbox_exporter_base_path: "{{ mash_playbook_base_path }}/prometheus-blackbox-exporter"

prometheus_blackbox_exporter_uid: "{{ mash_playbook_uid }}"
prometheus_blackbox_exporter_gid: "{{ mash_playbook_gid }}"

prometheus_blackbox_exporter_basicauth_enabled: "{{ prometheus_blackbox_exporter_container_labels_traefik_enabled }}"
prometheus_blackbox_exporter_basicauth_user: "{{ '%s' | format(mash_playbook_generic_secret_key) | password_hash('sha512', 'blackbox.user', rounds=655555) | to_uuid }}"
prometheus_blackbox_exporter_basicauth_password: "{{ '%s' | format(mash_playbook_generic_secret_key) | password_hash('sha512', 'blackbox.password', rounds=655555) | to_uuid }}"

prometheus_blackbox_exporter_container_additional_networks: |
{{
([mash_playbook_reverse_proxyable_services_additional_network] if mash_playbook_reverse_proxyable_services_additional_network else [])
}}

# Only enable Traefik labels if a hostname is set (indicating that this will be exposed publicly)
prometheus_blackbox_exporter_container_labels_traefik_enabled: "{{ mash_playbook_traefik_labels_enabled and prometheus_blackbox_exporter_hostname }}"
prometheus_blackbox_exporter_container_labels_traefik_docker_network: "{{ mash_playbook_reverse_proxyable_services_additional_network }}"
prometheus_blackbox_exporter_container_labels_traefik_entrypoints: "{{ devture_traefik_entrypoint_primary }}"
prometheus_blackbox_exporter_container_labels_traefik_tls_certResolver: "{{ devture_traefik_certResolver_primary }}"

########################################################################
#                                                                      #
# /prometheus_blackbox_exporter                                        #
#                                                                      #
########################################################################


########################################################################
#                                                                      #
# prometheus_node_exporter                                             #
@@ -668,7 +847,7 @@ prometheus_node_exporter_base_path: "{{ mash_playbook_base_path }}/prometheus-no

prometheus_node_exporter_uid: "{{ mash_playbook_uid }}"
prometheus_node_exporter_gid: "{{ mash_playbook_gid }}"

-prometheus_node_exporter_basicauth_enabled: true
+prometheus_node_exporter_basicauth_enabled: "{{ prometheus_node_exporter_container_labels_traefik_enabled }}"
prometheus_node_exporter_basicauth_user: "{{ '%s' | format(mash_playbook_generic_secret_key) | password_hash('sha512', 'node.user', rounds=655555) | to_uuid }}"
prometheus_node_exporter_basicauth_password: "{{ '%s' | format(mash_playbook_generic_secret_key) | password_hash('sha512', 'node.password', rounds=655555) | to_uuid }}"
@@ -677,7 +856,8 @@ prometheus_node_exporter_container_additional_networks: |

([mash_playbook_reverse_proxyable_services_additional_network] if mash_playbook_reverse_proxyable_services_additional_network else [])
}}

-prometheus_node_exporter_container_labels_traefik_enabled: "{{ mash_playbook_traefik_labels_enabled }}"
+# Only enable Traefik labels if a hostname is set (indicating that this will be exposed publicly)
+prometheus_node_exporter_container_labels_traefik_enabled: "{{ mash_playbook_traefik_labels_enabled and prometheus_node_exporter_hostname }}"
prometheus_node_exporter_container_labels_traefik_docker_network: "{{ mash_playbook_reverse_proxyable_services_additional_network }}"
prometheus_node_exporter_container_labels_traefik_entrypoints: "{{ devture_traefik_entrypoint_primary }}"
prometheus_node_exporter_container_labels_traefik_tls_certResolver: "{{ devture_traefik_certResolver_primary }}"

justfile — 4 changed lines

@@ -15,8 +15,8 @@ lint:

install-all *extra_args: (run-tags "install-all,start" extra_args)

# Runs installation tasks for a single service
-install-service service:
-    just --justfile {{ justfile() }} run --tags=install-{{ service }},start-group --extra-vars=group={{ service }}
+install-service service *extra_args:
+    just --justfile {{ justfile() }} run --tags=install-{{ service }},start-group --extra-vars=group={{ service }} {{ extra_args }}

# Runs the playbook with --tags=setup-all,start and optional arguments
setup-all *extra_args: (run-tags "setup-all,start" extra_args)
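
With this change, extra arguments can be forwarded to the underlying Ansible run, for example (a sketch — it assumes a `grafana` service group, as suggested by the group definitions in this commit):

```bash
# Install/start only the Grafana service, forwarding extra verbosity to ansible-playbook
just install-service grafana -v
```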


@@ -52,23 +52,33 @@
  version: v0.15.7-1

- src: git+https://gitlab.com/etke.cc/roles/miniflux.git
-  version: v2.0.43-0
+  version: v2.0.43-1

+- src: git+https://gitlab.com/etke.cc/roles/grafana.git
+  version: v9.4.3-0

- src: git+https://gitlab.com/etke.cc/roles/radicale.git
  version: v3.1.8.1-1

- src: git+https://gitlab.com/etke.cc/roles/uptime_kuma.git
-  version: v1.20.2-1
+  version: v1.21.0-0

- src: git+https://gitlab.com/etke.cc/roles/redis.git
  version: v7.0.9-0

- src: git+https://gitlab.com/etke.cc/roles/prometheus_node_exporter.git
-  version: v1.5.0-4
+  version: v1.5.0-6

+- src: git+https://gitlab.com/etke.cc/roles/prometheus_blackbox_exporter.git
+  version: v0.23.0-2

- src: git+https://gitlab.com/etke.cc/roles/redmine.git
  version: v5.0.5-1

+- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-aux.git
+  name: aux
+  version: v1.0.0-0

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-collabora-online.git
  name: collabora_online
  version: v22.05.12.1.1-0

@@ -85,9 +95,13 @@
  name: docker_registry_purger
  version: v1.0.0-0

+- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-focalboard.git
+  name: focalboard
+  version: v7.8.0-0

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-gitea.git
  name: gitea
-  version: v1.18.5-3
+  version: v1.19.0-0

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-nextcloud.git
  name: nextcloud

@@ -97,6 +111,10 @@
  name: peertube
  version: v5.0.1-0

+- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-prometheus.git
+  name: prometheus
+  version: v2.42.0-0

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-vaultwarden.git
  name: vaultwarden
  version: v1.27.0-2


@@ -60,8 +60,12 @@
- role: galaxy/docker_registry_browser
- role: galaxy/docker_registry_purger

+- role: galaxy/focalboard

- role: galaxy/gitea

+- role: galaxy/grafana

- role: galaxy/miniflux

- role: galaxy/hubsite

@@ -70,7 +74,9 @@

- role: galaxy/peertube

+- role: galaxy/prometheus
- role: galaxy/prometheus_node_exporter
+- role: galaxy/prometheus_blackbox_exporter

- role: galaxy/radicale

@@ -85,6 +91,8 @@
- role: galaxy/com.devture.ansible.role.woodpecker_ci_server
- role: galaxy/com.devture.ansible.role.woodpecker_ci_agent

+- role: galaxy/aux

- when: devture_systemd_service_manager_enabled | bool
  role: galaxy/com.devture.ansible.role.systemd_service_manager