Merge remote-tracking branch 'origin' into firezone

commit 3e5ac23f90

19 changed files with 1354 additions and 138 deletions
```
@@ -23,6 +23,10 @@ indent_size = 2
indent_style = space
indent_size = 2

[justfile]
indent_style = space
indent_size = 4

# Markdown Files
#
# Two spaces at the end of a line in Markdown mean "new line",
```
44 CHANGELOG.md
@@ -1,3 +1,47 @@

# 2023-03-26

## (Backward Compatibility Break) PeerTube is no longer wired to Redis automatically

As described in our [Redis](docs/services/redis.md) services docs, running a single instance of Redis to be used by multiple services is not a good practice.

For this reason, we're no longer auto-wiring PeerTube to Redis. If you're running other services (which may require Redis in the future) on the same host, it's recommended that you follow the [Creating a Redis instance dedicated to PeerTube](docs/services/peertube.md#creating-a-redis-instance-dedicated-to-peertube) documentation.

If you're only running PeerTube on a dedicated server (no other services that may need Redis), or you'd like to stick to what you've used until now (a single shared Redis instance), follow the [Using the shared Redis instance for PeerTube](docs/services/peertube.md#using-the-shared-redis-instance-for-peertube) documentation.


# 2023-03-25

## (Backward Compatibility Break) Docker no longer installed by default

The playbook used to install Docker and the Docker SDK for Python by default, unless you turned these off by setting `mash_playbook_docker_installation_enabled` and `devture_docker_sdk_for_python_installation_enabled` (respectively) to `false`.

From now on, both of these variables default to `false`. An empty inventory file will no longer install these components.

**Most** users will want to enable these, just like they would want to enable [Traefik](docs/services/traefik.md) and [Postgres](docs/services/postgres.md), so why default them to `false`? Because "**everything** is off by default - enable what you wish" is cleaner (you only ever add configuration) than "**some** things are on, **some** are off - toggle as you wish".

To enable these components, you need to explicitly add something like this to your `vars.yml` file:

```yaml
########################################################################
#                                                                      #
# Docker                                                               #
#                                                                      #
########################################################################

mash_playbook_docker_installation_enabled: true

devture_docker_sdk_for_python_installation_enabled: true

########################################################################
#                                                                      #
# /Docker                                                              #
#                                                                      #
########################################################################
```

Our [example vars.yml](examples/vars.yml) file has been updated, so that new hosts created based on it will have this configuration by default.


# 2023-03-15

## Initial release
210 docs/running-multiple-instances.md Normal file
@@ -0,0 +1,210 @@

## Running multiple instances of the same service on the same host

The way this playbook is structured, each Ansible role can only be invoked once and made to install one instance of the service it's responsible for.

If you need multiple instances (of whichever service), you'll need some workarounds as described below.

The example below focuses on hosting multiple [Redis](services/redis.md) instances, but you can apply it to hosting multiple instances or whole stacks of any kind.

Let's say you're managing a host called `mash.example.com` which installs both [PeerTube](services/peertube.md) and [NetBox](services/netbox.md). Both of these services require a [Redis](services/redis.md) instance. If you simply add `redis_enabled: true` to your `mash.example.com` host's `vars.yml` file, you'd get a single shared Redis instance (`mash-redis`). As described in our [Redis](services/redis.md) documentation, this is a security problem and potentially fragile, as both services may try to read/write the same data and conflict with one another.

We propose that you **don't** add `redis_enabled: true` to your main `mash.example.com` file, but do the following:

## Re-do your inventory to add supplementary hosts

Create multiple hosts in your inventory (`inventory/hosts`) which target the same server, like this:

```ini
[mash_servers]

[mash_servers:children]
mash_example_com

[mash_example_com]
mash.example.com-netbox-deps ansible_host=1.2.3.4
mash.example.com-peertube-deps ansible_host=1.2.3.4
mash.example.com ansible_host=1.2.3.4
```

This creates a new group (called `mash_example_com`) which groups all 3 hosts:

- (**new**) `mash.example.com-netbox-deps` - a new host, for your [NetBox](services/netbox.md) dependencies
- (**new**) `mash.example.com-peertube-deps` - a new host, for your [PeerTube](services/peertube.md) dependencies
- (old) `mash.example.com` - your regular inventory host

When running Ansible commands later on, you can use the `-l` flag to limit which hosts to run them against. Here are a few examples:

- `just install-all` - runs the [installation](installing.md) process on all hosts (3 hosts in this case)
- `just install-all -l mash_example_com` - runs the installation process on all hosts in the `mash_example_com` group (same 3 hosts as `just install-all` in this case)
- `just install-all -l mash.example.com-netbox-deps` - runs the installation process on the `mash.example.com-netbox-deps` host only

## Adjust the configuration of the supplementary hosts to use a new "namespace"

Multiple hosts targeting the same server as described above still cause conflicts, because services will use the same paths (e.g. `/mash/redis`) and service/container names (`mash-redis`) everywhere.

To avoid conflicts, adjust the `vars.yml` file for the new hosts (`mash.example.com-netbox-deps` and `mash.example.com-peertube-deps`) and set non-default, unique values for the `mash_playbook_service_identifier_prefix` and `mash_playbook_service_base_directory_name_prefix` variables. Examples below:

`inventory/host_vars/mash.example.com-netbox-deps/vars.yml`:

```yaml
---

########################################################################
#                                                                      #
# Playbook                                                             #
#                                                                      #
########################################################################

# Put a strong secret below, generated with `pwgen -s 64 1` or in another way.
# Various other secrets will be derived from this secret automatically.
mash_playbook_generic_secret_key: ''

# Override service names and directory path prefixes
mash_playbook_service_identifier_prefix: 'mash-netbox-'
mash_playbook_service_base_directory_name_prefix: 'netbox-'

########################################################################
#                                                                      #
# /Playbook                                                            #
#                                                                      #
########################################################################


########################################################################
#                                                                      #
# redis                                                                #
#                                                                      #
########################################################################

redis_enabled: true

########################################################################
#                                                                      #
# /redis                                                               #
#                                                                      #
########################################################################
```

`inventory/host_vars/mash.example.com-peertube-deps/vars.yml`:

```yaml
---

########################################################################
#                                                                      #
# Playbook                                                             #
#                                                                      #
########################################################################

# Put a strong secret below, generated with `pwgen -s 64 1` or in another way.
# Various other secrets will be derived from this secret automatically.
mash_playbook_generic_secret_key: ''

# Override service names and directory path prefixes
mash_playbook_service_identifier_prefix: 'mash-peertube-'
mash_playbook_service_base_directory_name_prefix: 'peertube-'

########################################################################
#                                                                      #
# /Playbook                                                            #
#                                                                      #
########################################################################


########################################################################
#                                                                      #
# redis                                                                #
#                                                                      #
########################################################################

redis_enabled: true

########################################################################
#                                                                      #
# /redis                                                               #
#                                                                      #
########################################################################
```

The above configuration will create **2** Redis instances:

- `mash-netbox-redis` with its base data path in `/mash/netbox-redis`
- `mash-peertube-redis` with its base data path in `/mash/peertube-redis`

These instances reuse the `mash` user and group and the `/mash` data path, but are not in conflict with each other.

## Adjust the configuration of the base host

Now that we've created separate Redis instances for both PeerTube and NetBox, we need to put them to use by editing the `vars.yml` file of the main host (the one that installs PeerTube and NetBox) to wire them to their Redis instances.

You'll need configuration (`inventory/host_vars/mash.example.com/vars.yml`) like this:

```yaml
########################################################################
#                                                                      #
# netbox                                                               #
#                                                                      #
########################################################################

netbox_enabled: true

# Other NetBox configuration here

# Point NetBox to its dedicated Redis instance
netbox_environment_variable_redis_host: mash-netbox-redis
netbox_environment_variable_redis_cache_host: mash-netbox-redis

# Make sure the NetBox service (mash-netbox.service) starts after its dedicated Redis service (mash-netbox-redis.service)
netbox_systemd_required_services_list_custom:
  - mash-netbox-redis.service

# Make sure the NetBox container is connected to the container network of its dedicated Redis service (mash-netbox-redis)
netbox_container_additional_networks_custom:
  - mash-netbox-redis

########################################################################
#                                                                      #
# /netbox                                                              #
#                                                                      #
########################################################################


########################################################################
#                                                                      #
# peertube                                                             #
#                                                                      #
########################################################################

# Other PeerTube configuration here

# Point PeerTube to its dedicated Redis instance
peertube_config_redis_hostname: mash-peertube-redis

# Make sure the PeerTube service (mash-peertube.service) starts after its dedicated Redis service (mash-peertube-redis.service)
peertube_systemd_required_services_list_custom:
  - "mash-peertube-redis.service"

# Make sure the PeerTube container is connected to the container network of its dedicated Redis service (mash-peertube-redis)
peertube_container_additional_networks_custom:
  - "mash-peertube-redis"

########################################################################
#                                                                      #
# /peertube                                                            #
#                                                                      #
########################################################################
```

## Questions & Answers

**Can't I just use the same Redis instance for multiple services?**

> You may or you may not. See the [Redis](services/redis.md) documentation for why you shouldn't do this.

**Can't I just create one host and a separate stack for each service** (e.g. Nextcloud + all dependencies on one inventory host; PeerTube + all dependencies on another inventory host; with both inventory hosts targeting the same server)?

> That's a possibility which is somewhat clean. The downside is that each "full stack" comes with its own Postgres database, which needs to be maintained and upgraded separately.
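For reference, the per-service-stack alternative mentioned in the answer above would use an inventory like the following. This is only a sketch: the `nextcloud.example.com` and `peertube.example.com` host names are hypothetical, and each of these hosts would still need its own unique `mash_playbook_service_identifier_prefix` / `mash_playbook_service_base_directory_name_prefix` values (as described earlier), since both target the same server.

```ini
[mash_servers]

[mash_servers:children]
mash_example_com

[mash_example_com]
; Two inventory hosts targeting the same server,
; each running a full stack (service + its own Postgres, Redis, etc.)
nextcloud.example.com ansible_host=1.2.3.4
peertube.example.com ansible_host=1.2.3.4
```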
85 docs/services/adguard-home.md Normal file
@@ -0,0 +1,85 @@

# AdGuard Home

[AdGuard Home](https://adguard.com/en/adguard-home/overview.html/) is a network-wide DNS software for blocking ads & tracking.

**Warning**: running a public DNS server is not advisable. You'd better install AdGuard Home in a trusted local network, or adjust its network interfaces and port exposure (via the variables in the [Networking](#networking) configuration section below) so that you don't expose your DNS server publicly to the whole world. If you're exposing your DNS server publicly, consider restricting who can use it by adjusting the **Allowed clients** setting in the **Access settings** section of **Settings** -> **DNS settings**.

## Dependencies

This service requires the following other services:

- a [Traefik](traefik.md) reverse-proxy server

## Configuration

To enable this service, add the following configuration to your `vars.yml` file and re-run the [installation](../installing.md) process:

```yaml
########################################################################
#                                                                      #
# adguard-home                                                         #
#                                                                      #
########################################################################

adguard_home_enabled: true

adguard_home_hostname: mash.example.com

# Hosting under a subpath sort of works, but is not ideal
# (see the URL section below for details).
# Consider using a dedicated hostname and removing the line below.
adguard_home_path_prefix: /adguard-home

########################################################################
#                                                                      #
# /adguard-home                                                        #
#                                                                      #
########################################################################
```

### URL

In the example configuration above, we configure the service to be hosted at `https://mash.example.com/adguard-home`.

You can remove the `adguard_home_path_prefix` variable definition, to make it default to `/`, so that the service is served at `https://mash.example.com/`.

When **hosting under a subpath**, you may hit [this bug](https://github.com/AdguardTeam/AdGuardHome/issues/5478), which causes these **annoyances**:

- upon initial usage, you will be redirected to `/install.html` and will need to manually adjust this URL to something like `/adguard-home/install.html` (depending on your `adguard_home_path_prefix`). After the installation wizard completes, you'll be redirected to `/index.html` incorrectly as well.
- every time you hit the homepage while not logged in, you will be redirected to `/login.html` and will need to manually adjust this URL to something like `/adguard-home/login.html` (depending on your `adguard_home_path_prefix`)

### Networking

By default, the following ports will be exposed by the container on **all network interfaces**:

- `53` over **TCP**, controlled by `adguard_home_container_dns_tcp_bind_port` - used for DNS over TCP
- `53` over **UDP**, controlled by `adguard_home_container_dns_udp_bind_port` - used for DNS over UDP

Docker automatically opens these ports in the server's firewall, so you **likely don't need to do anything**. If you use another firewall in front of the server, you may need to adjust it.

To expose these ports only on **some** network interfaces, you can use additional configuration like this:

```yaml
# Expose only on 192.168.1.15
adguard_home_container_dns_tcp_bind_port: '192.168.1.15:53'
adguard_home_container_dns_udp_bind_port: '192.168.1.15:53'
```
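The same variables accept any local address in the standard Docker `ip:port` publish format. As another sketch (a hypothetical setup, not from the original docs), you could restrict the DNS ports to the loopback interface, so that only software running on the host itself can query AdGuard Home:

```yaml
# Expose only on the loopback interface (reachable from this host only)
adguard_home_container_dns_tcp_bind_port: '127.0.0.1:53'
adguard_home_container_dns_udp_bind_port: '127.0.0.1:53'
```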
## Usage

After installation, you can go to the AdGuard Home URL, as defined in `adguard_home_hostname` and `adguard_home_path_prefix`.

As mentioned in the [URL](#url) section above, you may hit some annoyances when hosting under a subpath.

The first time you visit the AdGuard Home pages, you'll go through a setup wizard. **Make sure to set the HTTP port to `3000`**. This is the in-container port that our Traefik setup expects and uses for serving the install wizard to begin with. If you go with the default (`80`), the web UI will stop working after the installation wizard completes.

Things you should consider doing later:

- increasing the per-client Rate Limit (from the default of `20`) in the **DNS server configuration** section in **Settings** -> **DNS settings**
- enabling caching in the **DNS cache configuration** section in **Settings** -> **DNS settings**
- adding additional blocklists by discovering them on [Firebog](https://firebog.net/) or other sources and importing them from **Filters** -> **DNS blocklists**
- reading the AdGuard Home [README](https://github.com/AdguardTeam/AdGuardHome/blob/master/README.md) and [Wiki](https://github.com/AdguardTeam/AdGuardHome/wiki)
61 docs/services/keycloak.md Normal file
@@ -0,0 +1,61 @@

# Keycloak

[Keycloak](https://www.keycloak.org/) is an open source identity and access management solution.

**Warning**: this service is a new addition to the playbook. It may not fully work, or it may be configured in a suboptimal manner.

## Dependencies

This service requires the following other services:

- a [Postgres](postgres.md) database
- a [Traefik](traefik.md) reverse-proxy server

## Configuration

To enable this service, add the following configuration to your `vars.yml` file and re-run the [installation](../installing.md) process:

```yaml
########################################################################
#                                                                      #
# keycloak                                                             #
#                                                                      #
########################################################################

keycloak_enabled: true

keycloak_hostname: mash.example.com
keycloak_path_prefix: /keycloak

keycloak_environment_variable_keycloak_admin: your_username_here
# Generating a strong password (e.g. `pwgen -s 64 1`) is recommended
keycloak_environment_variable_keycloak_admin_password: ''

########################################################################
#                                                                      #
# /keycloak                                                            #
#                                                                      #
########################################################################
```

### URL

In the example configuration above, we configure the service to be hosted at `https://mash.example.com/keycloak`.

You can remove the `keycloak_path_prefix` variable definition, to make it default to `/`, so that the service is served at `https://mash.example.com/`.

### Authentication

On first start, the admin user account will be created as defined by the `keycloak_environment_variable_keycloak_admin` and `keycloak_environment_variable_keycloak_admin_password` variables.

On each subsequent start, Keycloak will attempt to create the user again and report a non-fatal error (Keycloak will continue running).

Changing the password variable later will not affect the existing user's password.

## Usage

After installation, you can go to the Keycloak URL, as defined in `keycloak_hostname` and `keycloak_path_prefix`, and log in as described in [Authentication](#authentication).

Follow the [Keycloak documentation](https://www.keycloak.org/documentation) or other guides to learn how to use Keycloak.
141 docs/services/navidrome.md Normal file
@@ -0,0 +1,141 @@

# Navidrome

[Navidrome](https://www.navidrome.org/) is a [Subsonic-API](http://www.subsonic.org/pages/api.jsp) compatible music server.

## Dependencies

This service requires the following other services:

- a [Traefik](traefik.md) reverse-proxy server

## Configuration

To enable this service, add the following configuration to your `vars.yml` file and re-run the [installation](../installing.md) process:

```yaml
########################################################################
#                                                                      #
# navidrome                                                            #
#                                                                      #
########################################################################

navidrome_enabled: true

navidrome_hostname: mash.example.com
navidrome_path_prefix: /navidrome

# By default, Navidrome will look at the /music directory for music files,
# controlled by the `navidrome_environment_variable_nd_musicfolder` variable.
#
# You'd need to mount some music directory into the Navidrome container, as shown below.
# The "Syncthing integration" section below may be relevant.
# navidrome_container_additional_volumes:
#   - type: bind
#     src: /on-host/path/to/music
#     dst: /music
#     options: readonly

########################################################################
#                                                                      #
# /navidrome                                                           #
#                                                                      #
########################################################################
```

### URL

In the example configuration above, we configure the service to be hosted at `https://mash.example.com/navidrome`.

You can remove the `navidrome_path_prefix` variable definition, to make it default to `/`, so that the service is served at `https://mash.example.com/`.

### Authentication

On first use (see [Usage](#usage) below), you'll be asked to create the first administrator user.

You can create additional users from the web UI after that.

### Syncthing integration

If you've got a [Syncthing](syncthing.md) service running, you can use it to synchronize your music directory onto the server and then mount it as read-only into the Navidrome container.

We recommend that you make use of the [aux](aux.md) role to create a shared directory like this:

```yaml
########################################################################
#                                                                      #
# aux                                                                  #
#                                                                      #
########################################################################

aux_directory_definitions:
  - dest: "{{ mash_playbook_base_path }}/storage"
  - dest: "{{ mash_playbook_base_path }}/storage/music"

########################################################################
#                                                                      #
# /aux                                                                 #
#                                                                      #
########################################################################
```

You can then mount this `{{ mash_playbook_base_path }}/storage/music` directory into the Syncthing container and synchronize it with some other computer:

```yaml
########################################################################
#                                                                      #
# syncthing                                                            #
#                                                                      #
########################################################################

# Other Syncthing configuration...

syncthing_container_additional_volumes:
  - type: bind
    src: "{{ mash_playbook_base_path }}/storage/music"
    dst: /music

########################################################################
#                                                                      #
# /syncthing                                                           #
#                                                                      #
########################################################################
```

Finally, mount the `{{ mash_playbook_base_path }}/storage/music` directory into the Navidrome container as read-only:

```yaml
########################################################################
#                                                                      #
# navidrome                                                            #
#                                                                      #
########################################################################

# Other Navidrome configuration...

navidrome_container_additional_volumes:
  - type: bind
    src: "{{ mash_playbook_base_path }}/storage/music"
    dst: /music
    options: readonly

########################################################################
#                                                                      #
# /navidrome                                                           #
#                                                                      #
########################################################################
```

## Usage

After installation, you can go to the Navidrome URL, as defined in `navidrome_hostname` and `navidrome_path_prefix`.

As mentioned in [Authentication](#authentication) above, you'll be asked to create the first administrator user the first time you open the web UI.

You can also connect various Subsonic-API-compatible [apps](https://www.navidrome.org/docs/overview/#apps) (desktop, web, mobile) to your Navidrome instance.

## Recommended other services

- [Syncthing](syncthing.md) - a continuous file synchronization program which synchronizes files between two or more computers in real time. See [Syncthing integration](#syncthing-integration) above.
211 docs/services/netbox.md Normal file
@ -0,0 +1,211 @@
|
|||
# NetBox
|
||||
|
||||
[NetBox](https://docs.netbox.dev/en/stable/) is an open-source web application that provides [IP address management (IPAM)](https://en.wikipedia.org/wiki/IP_address_management) and [data center infrastructure management (DCIM)](https://en.wikipedia.org/wiki/Data_center_management#Data_center_infrastructure_management) functionality.
|
||||
|
||||
|
||||
## Dependencies
|
||||
|
||||
This service requires the following other services:
|
||||
|
||||
- a [Postgres](postgres.md) database
|
||||
- a [Redis](redis.md) data-store, installation details [below](#redis)
|
||||
- a [Traefik](traefik.md) reverse-proxy server
|
||||
|
||||
|
||||
## Configuration
|
||||
|
||||
To enable this service, add the following configuration to your `vars.yml` file and re-run the [installation](../installing.md) process:
|
||||
|
||||
```yaml
|
||||
########################################################################
|
||||
# #
|
||||
# netbox #
|
||||
# #
|
||||
########################################################################
|
||||
|
||||
netbox_enabled: true
|
||||
|
||||
netbox_hostname: mash.example.com
|
||||
netbox_path_prefix: /netbox
|
||||
|
||||
# Put a strong secret below, generated with `pwgen -s 64 1` or in another way
|
||||
netbox_environment_variable_secret_key: ''
|
||||
|
||||
# The following superuser will be created upon launch.
|
||||
netbox_environment_variable_superuser_name: your_username_here
|
||||
netbox_environment_variable_superuser_email: your.email@example.com
|
||||
# Put a strong secret below, generated with `pwgen -s 64 1` or in another way.
|
||||
# Changing the password subsequently will not affect the user's password.
|
||||
netbox_environment_variable_superuser_password: ''
|
||||
|
||||
# Redis configuration, as described below
|
||||
|
||||
########################################################################
|
||||
# #
|
||||
# /netbox #
|
||||
# #
|
||||
########################################################################
|
||||
```
|
||||
|
||||
### URL
|
||||
|
||||
In the example configuration above, we configure the service to be hosted at `https://mash.example.com/netbox`.
|
||||
|
||||
You can remove the `netbox_path_prefix` variable definition, to make it default to `/`, so that the service is served at `https://mash.example.com/`.
|
||||
|
||||
|
||||
### Authentication
|
||||
|
||||
If `netbox_environment_variable_superuser_*` variables are specified, NetBox will try to create the user (if missing).
|
||||
|
||||
|
||||
### Redis
|
||||
|
||||
As described on the [Redis](redis.md) documentation page, if you're hosting additional services which require Redis on the same server, you'd better go for installing a separate Redis instance for each service. See [Creating a Redis instance dedicated to NetBox](#creating-a-redis-instance-dedicated-to-netbox).
|
||||
|
||||
If you're only running NetBox on this server and don't need to use Redis for anything else, you can [use a single Redis instance](#using-the-shared-redis-instance-for-netbox).
|
||||
|
||||
#### Using the shared Redis instance for NetBox
|
||||
|
||||
To install a single (non-dedicated) Redis instance (`mash-redis`) and hook NetBox to it, add the following **additional** configuration:

```yaml
########################################################################
#                                                                      #
# redis                                                                #
#                                                                      #
########################################################################

redis_enabled: true

########################################################################
#                                                                      #
# /redis                                                               #
#                                                                      #
########################################################################


########################################################################
#                                                                      #
# netbox                                                               #
#                                                                      #
########################################################################

# Base configuration as shown above

# Point NetBox to the shared Redis instance
netbox_config_redis_hostname: "{{ redis_identifier }}"

# Make sure the NetBox service (mash-netbox.service) starts after the shared Redis service (mash-redis.service)
netbox_systemd_required_services_list_custom:
  - "{{ redis_identifier }}.service"

# Make sure the NetBox container is connected to the container network of the shared Redis service (mash-redis)
netbox_container_additional_networks_custom:
  - "{{ redis_identifier }}"

########################################################################
#                                                                      #
# /netbox                                                              #
#                                                                      #
########################################################################
```

This will create a `mash-redis` Redis instance on this host.

This is only recommended if you won't be installing other services which require Redis. Alternatively, go for [Creating a Redis instance dedicated to NetBox](#creating-a-redis-instance-dedicated-to-netbox).


#### Creating a Redis instance dedicated to NetBox

The following instructions are based on the [Running multiple instances of the same service on the same host](../running-multiple-instances.md) documentation.

Adjust your `inventory/hosts` file as described in [Re-do your inventory to add supplementary hosts](../running-multiple-instances.md#re-do-your-inventory-to-add-supplementary-hosts), adding a new supplementary host (e.g. if `netbox.example.com` is your main one, create `netbox.example.com-deps`).

Then, create a new `vars.yml` file for the supplementary host, at `inventory/host_vars/netbox.example.com-deps/vars.yml`:

```yaml
---

########################################################################
#                                                                      #
# Playbook                                                             #
#                                                                      #
########################################################################

# Put a strong secret below, generated with `pwgen -s 64 1` or in another way
# Various other secrets will be derived from this secret automatically.
mash_playbook_generic_secret_key: ''

# Override service names and directory path prefixes
mash_playbook_service_identifier_prefix: 'mash-netbox-'
mash_playbook_service_base_directory_name_prefix: 'netbox-'

########################################################################
#                                                                      #
# /Playbook                                                            #
#                                                                      #
########################################################################


########################################################################
#                                                                      #
# redis                                                                #
#                                                                      #
########################################################################

redis_enabled: true

########################################################################
#                                                                      #
# /redis                                                               #
#                                                                      #
########################################################################
```

This will create a `mash-netbox-redis` instance on this host with its data in `/mash/netbox-redis`.

Then, adjust your main inventory host's variables file (`inventory/host_vars/netbox.example.com/vars.yml`) like this:

```yaml
########################################################################
#                                                                      #
# netbox                                                               #
#                                                                      #
########################################################################

# Base configuration as shown above

# Point NetBox to its dedicated Redis instance
netbox_environment_variable_redis_host: mash-netbox-redis
netbox_environment_variable_redis_cache_host: mash-netbox-redis

# Make sure the NetBox service (mash-netbox.service) starts after its dedicated Redis service (mash-netbox-redis.service)
netbox_systemd_required_services_list_custom:
  - "mash-netbox-redis.service"

# Make sure the NetBox container is connected to the container network of its dedicated Redis service (mash-netbox-redis)
netbox_container_additional_networks_custom:
  - "mash-netbox-redis"

########################################################################
#                                                                      #
# /netbox                                                              #
#                                                                      #
########################################################################
```


## Installation

If you've decided to install a dedicated Redis instance for NetBox, make sure to first do [installation](../installing.md) for the supplementary inventory host (e.g. `netbox.example.com-deps`), before running installation for the main one (e.g. `netbox.example.com`).


## Usage

After installation, you can go to the NetBox URL, as defined in `netbox_hostname` and `netbox_path_prefix`.

You can log in with the **username** (**not** email) and password specified in the `netbox_environment_variable_superuser*` variables.

@@ -8,7 +8,7 @@

This service requires the following other services:

- a [Postgres](postgres.md) database
- a [Redis](redis.md) data-store
- a [Redis](redis.md) data-store, installation details [below](#redis)
- a [Traefik](traefik.md) reverse-proxy server

@@ -47,6 +47,8 @@ peertube_config_root_user_initial_password: ''

# Then, replace the example IP range below, and re-run the playbook.
# peertube_trusted_proxies_values_custom: ["172.21.0.0/16"]

# Redis configuration, as described below

########################################################################
#                                                                      #
# /peertube                                                            #

@@ -58,6 +60,148 @@ In the example configuration above, we configure the service to be hosted at `ht

Hosting PeerTube under a subpath (by configuring the `peertube_path_prefix` variable) does not seem to be possible right now, due to PeerTube limitations.

### Redis

As described on the [Redis](redis.md) documentation page, if you're hosting additional services which require Redis on the same server, it's better to install a separate Redis instance for each service. See [Creating a Redis instance dedicated to PeerTube](#creating-a-redis-instance-dedicated-to-peertube).

If you're only running PeerTube on this server and don't need to use Redis for anything else, you can [use a single Redis instance](#using-the-shared-redis-instance-for-peertube).

#### Using the shared Redis instance for PeerTube

To install a single (non-dedicated) Redis instance (`mash-redis`) and hook PeerTube to it, add the following **additional** configuration:

```yaml
########################################################################
#                                                                      #
# redis                                                                #
#                                                                      #
########################################################################

redis_enabled: true

########################################################################
#                                                                      #
# /redis                                                               #
#                                                                      #
########################################################################


########################################################################
#                                                                      #
# peertube                                                             #
#                                                                      #
########################################################################

# Base configuration as shown above

# Point PeerTube to the shared Redis instance
peertube_config_redis_hostname: "{{ redis_identifier }}"

# Make sure the PeerTube service (mash-peertube.service) starts after the shared Redis service (mash-redis.service)
peertube_systemd_required_services_list_custom:
  - "{{ redis_identifier }}.service"

# Make sure the PeerTube container is connected to the container network of the shared Redis service (mash-redis)
peertube_container_additional_networks_custom:
  - "{{ redis_identifier }}"

########################################################################
#                                                                      #
# /peertube                                                            #
#                                                                      #
########################################################################
```

This will create a `mash-redis` Redis instance on this host.

This is only recommended if you won't be installing other services which require Redis. Alternatively, go for [Creating a Redis instance dedicated to PeerTube](#creating-a-redis-instance-dedicated-to-peertube).


#### Creating a Redis instance dedicated to PeerTube

The following instructions are based on the [Running multiple instances of the same service on the same host](../running-multiple-instances.md) documentation.

Adjust your `inventory/hosts` file as described in [Re-do your inventory to add supplementary hosts](../running-multiple-instances.md#re-do-your-inventory-to-add-supplementary-hosts), adding a new supplementary host (e.g. if `peertube.example.com` is your main one, create `peertube.example.com-deps`).

Then, create a new `vars.yml` file for the supplementary host, at `inventory/host_vars/peertube.example.com-deps/vars.yml`:

```yaml
---

########################################################################
#                                                                      #
# Playbook                                                             #
#                                                                      #
########################################################################

# Put a strong secret below, generated with `pwgen -s 64 1` or in another way
# Various other secrets will be derived from this secret automatically.
mash_playbook_generic_secret_key: ''

# Override service names and directory path prefixes
mash_playbook_service_identifier_prefix: 'mash-peertube-'
mash_playbook_service_base_directory_name_prefix: 'peertube-'

########################################################################
#                                                                      #
# /Playbook                                                            #
#                                                                      #
########################################################################


########################################################################
#                                                                      #
# redis                                                                #
#                                                                      #
########################################################################

redis_enabled: true

########################################################################
#                                                                      #
# /redis                                                               #
#                                                                      #
########################################################################
```

This will create a `mash-peertube-redis` instance on this host with its data in `/mash/peertube-redis`.

Then, adjust your main inventory host's variables file (`inventory/host_vars/peertube.example.com/vars.yml`) like this:

```yaml
########################################################################
#                                                                      #
# peertube                                                             #
#                                                                      #
########################################################################

# Base configuration as shown above

# Point PeerTube to its dedicated Redis instance
peertube_config_redis_hostname: mash-peertube-redis

# Make sure the PeerTube service (mash-peertube.service) starts after its dedicated Redis service (mash-peertube-redis.service)
peertube_systemd_required_services_list_custom:
  - "mash-peertube-redis.service"

# Make sure the PeerTube container is connected to the container network of its dedicated Redis service (mash-peertube-redis)
peertube_container_additional_networks_custom:
  - "mash-peertube-redis"

########################################################################
#                                                                      #
# /peertube                                                            #
#                                                                      #
########################################################################
```


## Installation

If you've decided to install a dedicated Redis instance for PeerTube, make sure to first do [installation](../installing.md) for the supplementary inventory host (e.g. `peertube.example.com-deps`), before running installation for the main one (e.g. `peertube.example.com`).


## Usage

@@ -68,6 +212,7 @@ You should then be able to log in with:

- username: `root`
- password: the password you've set in `peertube_config_root_user_initial_password` in `vars.yml`


## Adjusting the trusted reverse-proxy networks

If you go to **Administration** -> **System** -> **Debug** (`/admin/system/debug`), you'll notice that PeerTube reports a local IP instead of your own IP address.
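The trusted-proxies value is a CIDR network range. If you're unsure whether a given peer address falls inside the range you've configured, Python's standard `ipaddress` module offers a quick sanity check (the two addresses below are hypothetical examples; the CIDR matches the example `peertube_trusted_proxies_values_custom` value shown earlier):

```python
import ipaddress

# Example range from `peertube_trusted_proxies_values_custom` above.
trusted = ipaddress.ip_network("172.21.0.0/16")

print(ipaddress.ip_address("172.21.0.5") in trusted)   # a container-network address -> True
print(ipaddress.ip_address("203.0.113.7") in trusted)  # a public client address -> False
```

If the container network's addresses are inside the trusted range, PeerTube should start reporting the real client IPs instead of the proxy's.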
@@ -4,12 +4,19 @@

Some of the services installed by this playbook require a Redis data store.

Enabling the Redis database service will automatically wire all other services to use it.
**Warning**: Because Redis is not as flexible as [Postgres](postgres.md) when it comes to authentication and data separation, it's **recommended that you run separate Redis instances** (one for each service). Redis supports multiple databases and a [SELECT](https://redis.io/commands/select/) command for switching between them. However, **reusing the same Redis instance is not good enough**, because:

- if all services use the same Redis instance and database (id = 0), services may conflict with one another
- the number of databases is limited to [16 by default](https://github.com/redis/redis/blob/aa2403ca98f6a39b6acd8373f8de1a7ba75162d5/redis.conf#L376-L379), which may or may not be enough. With configuration changes, this is solvable.
- some services do not support switching the Redis database and always insist on using the default one (id = 0)
- Redis [does not support different authentication credentials for its different databases](https://stackoverflow.com/a/37262596), so each service can potentially read and modify other services' data

If you're only hosting a single service (like [PeerTube](peertube.md) or [NetBox](netbox.md)) on your server, you can get away with running a single instance. If you're hosting multiple services, you should prepare separate instances for each service.
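As a toy illustration of the first and last bullet points above, here is a minimal Python model of Redis logical databases (one dict per database id; the service names and key are made up for illustration). Two services sharing database 0 can silently overwrite each other's keys, and no per-database credential exists to prevent it:

```python
# Model each Redis logical database as a dict; 16 databases exist by default.
redis_databases = {db_id: {} for db_id in range(16)}

def set_key(db_id: int, key: str, value: str) -> None:
    """Stand-in for `SELECT db_id` followed by `SET key value`."""
    redis_databases[db_id][key] = value

# Both services default to database 0 and happen to use the same key name.
set_key(0, "session:abc", "peertube-data")
set_key(0, "session:abc", "netbox-data")  # silently clobbers PeerTube's value

print(redis_databases[0]["session:abc"])  # prints "netbox-data"
```

Separate Redis instances avoid this entirely: each service gets its own process, its own data directory, and its own network.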

## Configuration

To enable this service, add the following configuration to your `vars.yml` file and re-run the [installation](../installing.md) process:
To enable this service, add the following configuration to your `vars.yml` file and re-run the [installation](../installing.md) process to **host a single instance of the Redis service**:

```yaml
########################################################################

@@ -26,3 +33,5 @@ redis_enabled: true
#                                                                      #
########################################################################
```

To **host multiple instances of the Redis service**, follow the [Running multiple instances of the same service on the same host](../running-multiple-instances.md) documentation or the **Redis** section (if available) of the service you're installing.

39 docs/services/soft-serve.md Normal file

@@ -0,0 +1,39 @@

# Soft Serve

[Soft Serve](https://github.com/charmbracelet/soft-serve) is a tasty, self-hostable [Git](https://git-scm.com/) server for the command line.

## Configuration

To enable this service, add the following configuration to your `vars.yml` file and re-run the [installation](../installing.md) process:

```yaml
########################################################################
#                                                                      #
# soft-serve                                                           #
#                                                                      #
########################################################################

soft_serve_enabled: true

# The hostname of this system.
# It will be used for generating git clone URLs (e.g. ssh://mash.example.com/repository.git)
soft_serve_hostname: mash.example.com

# Expose Soft Serve's port. For git servers, the usual git-over-ssh port is 22.
soft_serve_container_bind_port: 2222

# This key will be able to authenticate with ANY user until you configure Soft Serve
soft_serve_initial_admin_key: YOUR PUBLIC SSH KEY HERE

########################################################################
#                                                                      #
# /soft-serve                                                          #
#                                                                      #
########################################################################
```

## Usage

After you've installed Soft Serve, you can `ssh your-user@mash.example.com -p 2222` with the SSH key defined in `soft_serve_initial_admin_key` to see its [TUI](https://en.wikipedia.org/wiki/Text-based_user_interface) and follow the instructions to configure Soft Serve further.

Note that you have to [finish the configuration yourself](https://github.com/charmbracelet/soft-serve#configuration); otherwise, the key set in `soft_serve_initial_admin_key` will continue to authenticate as ANY user, including admins.

@@ -25,12 +25,7 @@ system_swap_enabled: true

A swap file will be created in `/var/swap` (configured using the `system_swap_path` variable) and enabled in your `/etc/fstab` file.

By default, the swap file will have the following size:

- on systems with `<= 2GB` of RAM, swap file size = `total RAM * 2`
- on systems with `> 2GB` of RAM, swap file size = `1GB`

To avoid these calculations and set your own size explicitly, set the `system_swap_size` variable in megabytes, example (4gb):
By default, the swap file will be `1GB` in size, but you can set the `system_swap_size` variable (in megabytes) explicitly. For example, for 4GB:

```yaml
system_swap_size: 4096

@@ -3,6 +3,7 @@

| Name | Description | Documentation |
| ------------------------------ | ------------------------------------- | ------------- |
| [AUX](https://github.com/mother-of-all-self-hosting/ansible-role-aux) | Auxiliary file/directory management on your server via Ansible | [Link](services/aux.md) |
| [AdGuard Home](https://adguard.com/en/adguard-home/overview.html/) | A network-wide DNS software for blocking ads & tracking | [Link](services/adguard-home.md) |
| [Collabora Online](https://www.collaboraoffice.com/) | Your Private Office Suite In The Cloud | [Link](services/collabora-online.md) |
| [Docker](https://www.docker.com/) | Open-source software for deploying containerized applications | [Link](services/docker.md) |
| [Docker Registry](https://docs.docker.com/registry/) | A container image distribution registry | [Link](services/docker-registry.md) |

@@ -10,10 +11,13 @@

| [Docker Registry Purger](https://github.com/devture/docker-registry-purger) | A small tool used for purging a private Docker Registry's old tags | [Link](services/docker-registry-purger.md) |
| [Focalboard](https://www.focalboard.com/) | An open source, self-hosted alternative to [Trello](https://trello.com/), [Notion](https://www.notion.so/), and [Asana](https://asana.com/). | [Link](services/focalboard.md) |
| [Firezone](https://www.firezone.dev/) | A self-hosted VPN server (based on [WireGuard](https://en.wikipedia.org/wiki/WireGuard)) with a Web UI | [Link](services/firezone.md) |
| [Gitea](https://gitea.io/) | A painless self-hosted Git service. | [Link](services/gitea.md) |
| [Gitea](https://gitea.io/) | A painless self-hosted [Git](https://git-scm.com/) service. | [Link](services/gitea.md) |
| [Grafana](https://grafana.com/) | An open and composable observability and data visualization platform, often used with [Prometheus](services/prometheus.md) | [Link](services/grafana.md) |
| [Hubsite](https://github.com/moan0s/hubsite) | A simple, static site that shows an overview of the available services | [Link](services/hubsite.md) |
| [Keycloak](https://www.keycloak.org/) | An open source identity and access management solution. | [Link](services/keycloak.md) |
| [Miniflux](https://miniflux.app/) | Minimalist and opinionated feed reader. | [Link](services/miniflux.md) |
| [Navidrome](https://www.navidrome.org/) | [Subsonic-API](http://www.subsonic.org/pages/api.jsp) compatible music server | [Link](services/navidrome.md) |
| [NetBox](https://docs.netbox.dev/en/stable/) | Web application that provides [IP address management (IPAM)](https://en.wikipedia.org/wiki/IP_address_management) and [data center infrastructure management (DCIM)](https://en.wikipedia.org/wiki/Data_center_management#Data_center_infrastructure_management) functionality | [Link](services/netbox.md) |
| [Nextcloud](https://nextcloud.com/) | The most popular self-hosted collaboration solution for tens of millions of users at thousands of organizations across the globe. | [Link](services/nextcloud.md) |
| [PeerTube](https://joinpeertube.org/) | A tool for sharing online videos | [Link](services/peertube.md) |
| [Postgres](https://www.postgresql.org) | A powerful, open source object-relational database system | [Link](services/postgres.md) |

@@ -24,6 +28,7 @@

| [Radicale](https://radicale.org/) | A Free and Open-Source CalDAV and CardDAV Server (solution for hosting contacts and calendars) | [Link](services/radicale.md) |
| [Redmine](https://redmine.org/) | A flexible project management web application. | [Link](services/redmine.md) |
| [Redis](https://redis.io/) | An in-memory data store used by millions of developers as a database, cache, streaming engine, and message broker. | [Link](services/redis.md) |
| [Soft Serve](https://github.com/charmbracelet/soft-serve) | A tasty, self-hostable [Git](https://git-scm.com/) server for the command line | [Link](services/soft-serve.md) |
| [Syncthing](https://syncthing.net/) | A continuous file synchronization program which synchronizes files between two or more computers in real time | [Link](services/syncthing.md) |
| [Traefik](https://doc.traefik.io/traefik/) | A container-aware reverse-proxy server | [Link](services/traefik.md) |
| [Vaultwarden](https://github.com/dani-garcia/vaultwarden) | A lightweight unofficial and compatible implementation of the [Bitwarden](https://bitwarden.com/) password manager | [Link](services/vaultwarden.md) |

@@ -26,17 +26,13 @@ mash_playbook_generic_secret_key: ''
#                                                                      #
########################################################################

# Docker is installed by default.
#
# To disable Docker installation (in case you'd be installing Docker in another way),
# uncomment the line below:
# mash_playbook_docker_installation_enabled: false
# remove the line below.
mash_playbook_docker_installation_enabled: true

# Docker SDK for Python is installed by default.
#
# To disable Docker SDK for Python installation (in case you'd be installing the SDK in another way),
# uncomment the line below:
# devture_docker_sdk_for_python_installation_enabled: false
# remove the line below.
devture_docker_sdk_for_python_installation_enabled: true

########################################################################
#                                                                      #

@@ -63,6 +63,8 @@ system_swap_enabled: false

devture_systemd_service_manager_services_list_auto: |
  {{
    ([{'name': (adguard_home_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'adguard-home']}] if adguard_home_enabled else [])
    +
    ([{'name': (collabora_online_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'collabora-online']}] if collabora_online_enabled else [])
    +
    ([{'name': (devture_postgres_identifier + '.service'), 'priority': 500, 'groups': ['mash', 'postgres']}] if devture_postgres_enabled else [])

@@ -93,12 +95,22 @@ devture_systemd_service_manager_services_list_auto: |
    +
    ([{'name': (grafana_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'grafana']}] if grafana_enabled else [])
    +
    ([{'name': (keycloak_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'keycloak']}] if keycloak_enabled else [])
    +
    ([{'name': (miniflux_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'miniflux']}] if miniflux_enabled else [])
    +
    ([{'name': (navidrome_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'navidrome']}] if navidrome_enabled else [])
    +
    ([{'name': (netbox_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'netbox', 'netbox-server']}] if netbox_enabled else [])
    +
    ([{'name': (netbox_identifier + '-worker.service'), 'priority': 2500, 'groups': ['mash', 'netbox', 'netbox-worker']}] if netbox_enabled else [])
    +
    ([{'name': (netbox_identifier + '-housekeeping.service'), 'priority': 2500, 'groups': ['mash', 'netbox', 'netbox-housekeeping']}] if netbox_enabled else [])
    +
    ([{'name': (nextcloud_identifier + '-server.service'), 'priority': 2000, 'groups': ['mash', 'nextcloud', 'nextcloud-server']}] if nextcloud_enabled else [])
    +
    ([{'name': (nextcloud_identifier + '-cron.timer'), 'priority': 2500, 'groups': ['mash', 'nextcloud', 'nextcloud-cron']}] if nextcloud_enabled else [])
    +
    ([{'name': (miniflux_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'miniflux']}] if miniflux_enabled else [])
    +
    ([{'name': (peertube_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'peertube']}] if peertube_enabled else [])
    +
    ([{'name': (prometheus_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'metrics', 'prometheus']}] if prometheus_enabled else [])

@@ -113,6 +125,8 @@ devture_systemd_service_manager_services_list_auto: |
    +
    ([{'name': (redis_identifier + '.service'), 'priority': 750, 'groups': ['mash', 'redis']}] if redis_enabled else [])
    +
    ([{'name': (soft_serve_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'soft-serve']}] if soft_serve_enabled else [])
    +
    ([{'name': (syncthing_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'syncthing']}] if syncthing_enabled else [])
    +
    ([{'name': (vaultwarden_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'vaultwarden', 'vaultwarden-server']}] if vaultwarden_enabled else [])
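The `priority` numbers in the list above appear to control startup ordering: infrastructure services (Postgres at 500, Redis at 750) come before application services (2000), which come before their workers and timers (2500). A minimal sketch of such priority-based ordering, with the numbers taken from the list above (service names are illustrative):

```python
# Order services by ascending priority, mirroring the numbers used in
# devture_systemd_service_manager_services_list_auto.
services = [
    {"name": "mash-netbox-worker.service", "priority": 2500},
    {"name": "mash-netbox.service", "priority": 2000},
    {"name": "mash-postgres.service", "priority": 500},
    {"name": "mash-redis.service", "priority": 750},
]

start_order = [s["name"] for s in sorted(services, key=lambda s: s["priority"])]
print(start_order)
# ['mash-postgres.service', 'mash-redis.service', 'mash-netbox.service', 'mash-netbox-worker.service']
```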
@@ -142,7 +156,7 @@ devture_postgres_identifier: "{{ mash_playbook_service_identifier_prefix }}postg

devture_postgres_architecture: "{{ mash_playbook_architecture }}"

devture_postgres_base_path: "{{ mash_playbook_base_path }}/postgres"
devture_postgres_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}postgres"

devture_postgres_uid: "{{ mash_playbook_uid }}"
devture_postgres_gid: "{{ mash_playbook_gid }}"

@@ -174,6 +188,12 @@ devture_postgres_managed_databases_auto: |
      'password': devture_woodpecker_ci_server_database_datasource_password,
    }] if devture_woodpecker_ci_server_enabled else [])
    +
    ([{
      'name': keycloak_database_name,
      'username': keycloak_database_username,
      'password': keycloak_database_password,
    }] if keycloak_enabled and keycloak_database_type == 'postgres' and keycloak_database_hostname == devture_postgres_identifier else [])
    +
    ([{
      'name': miniflux_database_name,
      'username': miniflux_database_username,

@@ -186,6 +206,12 @@ devture_postgres_managed_databases_auto: |
      'password': redmine_database_password,
    }] if redmine_enabled else [])
    +
    ([{
      'name': netbox_database_name,
      'username': netbox_database_username,
      'password': netbox_database_password,
    }] if netbox_enabled else [])
    +
    ([{
      'name': nextcloud_database_name,
      'username': nextcloud_database_username,

@@ -231,7 +257,7 @@ devture_postgres_backup_identifier: "{{ mash_playbook_service_identifier_prefix

devture_postgres_backup_architecture: "{{ mash_playbook_architecture }}"

devture_postgres_backup_base_path: "{{ mash_playbook_base_path }}/postgres-backup"
devture_postgres_backup_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}postgres-backup"

devture_postgres_backup_systemd_required_services_list: |
  {{

@@ -273,9 +299,9 @@ devture_postgres_backup_databases: "{{ devture_postgres_managed_databases | map(
devture_playbook_state_preserver_uid: "{{ mash_playbook_uid }}"
devture_playbook_state_preserver_gid: "{{ mash_playbook_gid }}"

devture_playbook_state_preserver_vars_preservation_dst: "{{ mash_playbook_base_path }}/vars.yml"
devture_playbook_state_preserver_vars_preservation_dst: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}vars.yml"

devture_playbook_state_preserver_commit_hash_preservation_dst: "{{ mash_playbook_base_path }}/git_hash.yml"
devture_playbook_state_preserver_commit_hash_preservation_dst: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}git_hash.yml"

########################################################################
#                                                                      #

@@ -295,7 +321,7 @@ devture_container_socket_proxy_enabled: "{{ devture_traefik_enabled }}"

devture_container_socket_proxy_identifier: "{{ mash_playbook_service_identifier_prefix }}container-socket-proxy"

devture_container_socket_proxy_base_path: "{{ mash_playbook_base_path }}/container-socket-proxy"
devture_container_socket_proxy_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}container-socket-proxy"

devture_container_socket_proxy_uid: "{{ mash_playbook_uid }}"
devture_container_socket_proxy_gid: "{{ mash_playbook_gid }}"

@@ -321,7 +347,7 @@ devture_traefik_enabled: "{{ mash_playbook_reverse_proxy_type == 'playbook-manag

devture_traefik_identifier: "{{ mash_playbook_service_identifier_prefix }}traefik"

devture_traefik_base_path: "{{ mash_playbook_base_path }}/traefik"
devture_traefik_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}traefik"

devture_traefik_uid: "{{ mash_playbook_uid }}"
devture_traefik_gid: "{{ mash_playbook_gid }}"

@@ -354,9 +380,7 @@ devture_traefik_systemd_required_services_list: |
#                                                                      #
########################################################################

# To completely disable installing the Docker SDK for Python, use `devture_docker_sdk_for_python_installation_enabled: false`.

devture_docker_sdk_for_python_installation_enabled: true
devture_docker_sdk_for_python_installation_enabled: false

########################################################################
#                                                                      #

@ -382,6 +406,41 @@ devture_timesync_installation_enabled: false
|
|||
# #
|
||||
########################################################################
|
||||
|
||||
|
||||
########################################################################
# #
# adguard-home #
# #
########################################################################

adguard_home_enabled: false

adguard_home_identifier: "{{ mash_playbook_service_identifier_prefix }}adguard-home"

adguard_home_uid: "{{ mash_playbook_uid }}"
adguard_home_gid: "{{ mash_playbook_gid }}"

adguard_home_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}adguard-home"

adguard_home_container_additional_networks: |
  {{
    ([mash_playbook_reverse_proxyable_services_additional_network] if mash_playbook_reverse_proxyable_services_additional_network else [])
  }}

adguard_home_container_labels_traefik_enabled: "{{ mash_playbook_traefik_labels_enabled }}"
adguard_home_container_labels_traefik_docker_network: "{{ mash_playbook_reverse_proxyable_services_additional_network }}"
adguard_home_container_labels_traefik_entrypoints: "{{ devture_traefik_entrypoint_primary }}"
adguard_home_container_labels_traefik_tls_certResolver: "{{ devture_traefik_certResolver_primary }}"

########################################################################
# #
# /adguard-home #
# #
########################################################################
########################################################################
# #
# collabora-online #

@@ -392,7 +451,7 @@ collabora_online_enabled: false

collabora_online_identifier: "{{ mash_playbook_service_identifier_prefix }}collabora-online"

collabora_online_base_path: "{{ mash_playbook_base_path }}/collabora-online"
collabora_online_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}collabora-online"

collabora_online_uid: "{{ mash_playbook_uid }}"
collabora_online_gid: "{{ mash_playbook_gid }}"

@@ -425,7 +484,7 @@ docker_registry_enabled: false

docker_registry_identifier: "{{ mash_playbook_service_identifier_prefix }}docker-registry"

docker_registry_base_path: "{{ mash_playbook_base_path }}/docker-registry"
docker_registry_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}docker-registry"

docker_registry_uid: "{{ mash_playbook_uid }}"
docker_registry_gid: "{{ mash_playbook_gid }}"

@@ -458,7 +517,7 @@ docker_registry_browser_enabled: false

docker_registry_browser_identifier: "{{ mash_playbook_service_identifier_prefix }}docker-registry-browser"

docker_registry_browser_base_path: "{{ mash_playbook_base_path }}/docker-registry-browser"
docker_registry_browser_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}docker-registry-browser"

docker_registry_browser_uid: "{{ mash_playbook_uid }}"
docker_registry_browser_gid: "{{ mash_playbook_gid }}"

@@ -491,7 +550,7 @@ docker_registry_purger_enabled: false

docker_registry_purger_identifier: "{{ mash_playbook_service_identifier_prefix }}docker-registry-purger"

docker_registry_purger_base_path: "{{ mash_playbook_base_path }}/docker-registry-purger"
docker_registry_purger_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}docker-registry-purger"

docker_registry_purger_uid: "{{ mash_playbook_uid }}"
docker_registry_purger_gid: "{{ mash_playbook_gid }}"

@@ -514,7 +573,7 @@ focalboard_enabled: false

focalboard_identifier: "{{ mash_playbook_service_identifier_prefix }}focalboard"

focalboard_base_path: "{{ mash_playbook_base_path }}/focalboard"
focalboard_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}focalboard"

focalboard_uid: "{{ mash_playbook_uid }}"
focalboard_gid: "{{ mash_playbook_gid }}"

@@ -561,7 +620,7 @@ gitea_enabled: false

gitea_identifier: "{{ mash_playbook_service_identifier_prefix }}gitea"

gitea_base_path: "{{ mash_playbook_base_path }}/gitea"
gitea_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}gitea"

gitea_uid: "{{ mash_playbook_uid }}"
gitea_gid: "{{ mash_playbook_gid }}"

@@ -608,7 +667,7 @@ grafana_enabled: false

grafana_identifier: "{{ mash_playbook_service_identifier_prefix }}grafana"

grafana_base_path: "{{ mash_playbook_base_path }}/grafana"
grafana_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}grafana"

grafana_uid: "{{ mash_playbook_uid }}"
grafana_gid: "{{ mash_playbook_gid }}"

@@ -635,6 +694,50 @@ grafana_container_labels_traefik_tls_certResolver: "{{ devture_traefik_certResol
########################################################################
# #
# keycloak #
# #
########################################################################

keycloak_enabled: false

keycloak_identifier: "{{ mash_playbook_service_identifier_prefix }}keycloak"

keycloak_uid: "{{ mash_playbook_uid }}"
keycloak_gid: "{{ mash_playbook_gid }}"

keycloak_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}keycloak"

keycloak_systemd_required_systemd_services_list_auto: |
  {{
    ([devture_postgres_identifier ~ '.service'] if devture_postgres_enabled and keycloak_database_hostname == devture_postgres_identifier else [])
  }}

keycloak_container_additional_networks_auto: |
  {{
    ([mash_playbook_reverse_proxyable_services_additional_network] if mash_playbook_reverse_proxyable_services_additional_network else [])
    +
    ([devture_postgres_container_network] if devture_postgres_enabled and keycloak_database_hostname == devture_postgres_identifier and keycloak_container_network != devture_postgres_container_network else [])
  }}

keycloak_container_labels_traefik_enabled: "{{ mash_playbook_traefik_labels_enabled }}"
keycloak_container_labels_traefik_docker_network: "{{ mash_playbook_reverse_proxyable_services_additional_network }}"
keycloak_container_labels_traefik_entrypoints: "{{ devture_traefik_entrypoint_primary }}"
keycloak_container_labels_traefik_tls_certResolver: "{{ devture_traefik_certResolver_primary }}"

keycloak_database_hostname: "{{ devture_postgres_identifier if devture_postgres_enabled else '' }}"
keycloak_database_port: "{{ '5432' if devture_postgres_enabled else '' }}"
keycloak_database_password: "{{ '%s' | format(mash_playbook_generic_secret_key) | password_hash('sha512', 'db.keycloak', rounds=655555) | to_uuid }}"

########################################################################
# #
# /keycloak #
# #
########################################################################
########################################################################
# #
# miniflux #

@@ -645,7 +748,7 @@ miniflux_enabled: false

miniflux_identifier: "{{ mash_playbook_service_identifier_prefix }}miniflux"

miniflux_base_path: "{{ mash_playbook_base_path }}/miniflux"
miniflux_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}miniflux"

miniflux_uid: "{{ mash_playbook_uid }}"
miniflux_gid: "{{ mash_playbook_gid }}"

@@ -674,7 +777,40 @@ miniflux_database_password: "{{ '%s' | format(mash_playbook_generic_secret_key)

########################################################################
# #
# /miniflux #
# #
########################################################################
########################################################################
# #
# navidrome #
# #
########################################################################

navidrome_enabled: false

navidrome_identifier: "{{ mash_playbook_service_identifier_prefix }}navidrome"

navidrome_uid: "{{ mash_playbook_uid }}"
navidrome_gid: "{{ mash_playbook_gid }}"

navidrome_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}navidrome"

navidrome_container_additional_networks_auto: |
  {{
    ([mash_playbook_reverse_proxyable_services_additional_network] if mash_playbook_reverse_proxyable_services_additional_network else [])
  }}

navidrome_container_labels_traefik_enabled: "{{ mash_playbook_traefik_labels_enabled }}"
navidrome_container_labels_traefik_docker_network: "{{ mash_playbook_reverse_proxyable_services_additional_network }}"
navidrome_container_labels_traefik_entrypoints: "{{ devture_traefik_entrypoint_primary }}"
navidrome_container_labels_traefik_tls_certResolver: "{{ devture_traefik_certResolver_primary }}"

########################################################################
# #
# /navidrome #
# #
########################################################################
@@ -690,7 +826,7 @@ nextcloud_enabled: false

nextcloud_identifier: "{{ mash_playbook_service_identifier_prefix }}nextcloud"

nextcloud_base_path: "{{ mash_playbook_base_path }}/nextcloud"
nextcloud_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}nextcloud"

nextcloud_uid: "{{ mash_playbook_uid }}"
nextcloud_gid: "{{ mash_playbook_gid }}"

@@ -727,6 +863,52 @@ nextcloud_database_password: "{{ '%s' | format(mash_playbook_generic_secret_key)
########################################################################
# #
# netbox #
# #
########################################################################

netbox_enabled: false

netbox_identifier: "{{ mash_playbook_service_identifier_prefix }}netbox"

netbox_uid: "{{ mash_playbook_uid }}"
netbox_gid: "{{ mash_playbook_gid }}"

netbox_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}netbox"

netbox_systemd_required_services_list_auto: |
  {{
    ([devture_postgres_identifier ~ '.service'] if devture_postgres_enabled and netbox_database_hostname == devture_postgres_identifier else [])
  }}

netbox_container_additional_networks_auto: |
  {{
    (
      ([mash_playbook_reverse_proxyable_services_additional_network] if mash_playbook_reverse_proxyable_services_additional_network else [])
      +
      ([devture_postgres_container_network] if devture_postgres_enabled and netbox_database_hostname == devture_postgres_identifier and netbox_container_network != devture_postgres_container_network else [])
    ) | unique
  }}

netbox_container_labels_traefik_enabled: "{{ mash_playbook_traefik_labels_enabled }}"
netbox_container_labels_traefik_docker_network: "{{ mash_playbook_reverse_proxyable_services_additional_network }}"
netbox_container_labels_traefik_entrypoints: "{{ devture_traefik_entrypoint_primary }}"
netbox_container_labels_traefik_tls_certResolver: "{{ devture_traefik_certResolver_primary }}"

netbox_database_hostname: "{{ devture_postgres_identifier if devture_postgres_enabled else '' }}"
netbox_database_port: "{{ '5432' if devture_postgres_enabled else '' }}"
netbox_database_password: "{{ '%s' | format(mash_playbook_generic_secret_key) | password_hash('sha512', 'db.netbox', rounds=655555) | to_uuid }}"

########################################################################
# #
# /netbox #
# #
########################################################################
########################################################################
# #
# peertube #

@@ -737,19 +919,17 @@ peertube_enabled: false

peertube_identifier: "{{ mash_playbook_service_identifier_prefix }}peertube"

peertube_base_path: "{{ mash_playbook_base_path }}/peertube"
peertube_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}peertube"

peertube_uid: "{{ mash_playbook_uid }}"
peertube_gid: "{{ mash_playbook_gid }}"

peertube_container_additional_networks: |
peertube_container_additional_networks_auto: |
  {{
    (
      ([mash_playbook_reverse_proxyable_services_additional_network] if mash_playbook_reverse_proxyable_services_additional_network else [])
      +
      ([devture_postgres_container_network] if devture_postgres_enabled and peertube_config_database_hostname == devture_postgres_identifier and peertube_container_network != devture_postgres_container_network else [])
      +
      ([redis_container_network] if peertube_config_redis_hostname == redis_identifier else [])
    ) | unique
  }}

@@ -763,15 +943,9 @@ peertube_config_database_port: "{{ '5432' if devture_postgres_enabled else '' }}

peertube_config_database_username: peertube
peertube_config_database_password: "{{ '%s' | format(mash_playbook_generic_secret_key) | password_hash('sha512', 'db.peertube', rounds=655555) | to_uuid }}"

peertube_config_redis_hostname: "{{ redis_identifier if redis_enabled else '' }}"

peertube_systemd_required_services_list: |
peertube_systemd_required_services_list_auto: |
  {{
    (['docker.service'])
    +
    ([devture_postgres_identifier ~ '.service'] if devture_postgres_enabled and peertube_config_database_hostname == devture_postgres_identifier else [])
    +
    ([redis_identifier ~ '.service'] if redis_enabled and peertube_config_redis_hostname == redis_identifier else [])
  }}

########################################################################
|
@ -791,7 +965,7 @@ prometheus_enabled: false
|
|||
|
||||
prometheus_identifier: "{{ mash_playbook_service_identifier_prefix }}prometheus"
|
||||
|
||||
prometheus_base_path: "{{ mash_playbook_base_path }}/prometheus"
|
||||
prometheus_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}prometheus"
|
||||
|
||||
prometheus_uid: "{{ mash_playbook_uid }}"
|
||||
prometheus_gid: "{{ mash_playbook_gid }}"
|
||||
|
@ -814,7 +988,7 @@ prometheus_blackbox_exporter_enabled: false
|
|||
|
||||
prometheus_blackbox_exporter_identifier: "{{ mash_playbook_service_identifier_prefix }}prometheus-blackbox-exporter"
|
||||
|
||||
prometheus_blackbox_exporter_base_path: "{{ mash_playbook_base_path }}/prometheus-blackbox-exporter"
|
||||
prometheus_blackbox_exporter_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}prometheus-blackbox-exporter"
|
||||
|
||||
prometheus_blackbox_exporter_uid: "{{ mash_playbook_uid }}"
|
||||
prometheus_blackbox_exporter_gid: "{{ mash_playbook_gid }}"
|
||||
|
@ -852,7 +1026,7 @@ prometheus_node_exporter_enabled: false
|
|||
|
||||
prometheus_node_exporter_identifier: "{{ mash_playbook_service_identifier_prefix }}prometheus-node-exporter"
|
||||
|
||||
prometheus_node_exporter_base_path: "{{ mash_playbook_base_path }}/prometheus-node-exporter"
|
||||
prometheus_node_exporter_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}prometheus-node-exporter"
|
||||
|
||||
prometheus_node_exporter_uid: "{{ mash_playbook_uid }}"
|
||||
prometheus_node_exporter_gid: "{{ mash_playbook_gid }}"
|
||||
|
@ -901,7 +1075,7 @@ radicale_enabled: false
|
|||
|
||||
radicale_identifier: "{{ mash_playbook_service_identifier_prefix }}radicale"
|
||||
|
||||
radicale_base_path: "{{ mash_playbook_base_path }}/radicale"
|
||||
radicale_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}radicale"
|
||||
|
||||
radicale_uid: "{{ mash_playbook_uid }}"
|
||||
radicale_gid: "{{ mash_playbook_gid }}"
|
||||
|
@ -933,7 +1107,7 @@ redmine_enabled: false
|
|||
|
||||
redmine_identifier: "{{ mash_playbook_service_identifier_prefix }}redmine"
|
||||
|
||||
redmine_base_path: "{{ mash_playbook_base_path }}/redmine"
|
||||
redmine_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}redmine"
|
||||
|
||||
redmine_uid: "{{ mash_playbook_uid }}"
|
||||
redmine_gid: "{{ mash_playbook_gid }}"
|
||||
|
@ -983,7 +1157,7 @@ redis_enabled: false
|
|||
|
||||
redis_identifier: "{{ mash_playbook_service_identifier_prefix }}redis"
|
||||
|
||||
redis_base_path: "{{ mash_playbook_base_path }}/redis"
|
||||
redis_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}redis"
|
||||
|
||||
redis_uid: "{{ mash_playbook_uid }}"
|
||||
redis_gid: "{{ mash_playbook_gid }}"
|
||||
|
@ -996,6 +1170,30 @@ redis_gid: "{{ mash_playbook_gid }}"
|
|||
|
||||
|
||||
|
||||
|
||||
########################################################################
# #
# soft-serve #
# #
########################################################################

soft_serve_enabled: false

soft_serve_identifier: "{{ mash_playbook_service_identifier_prefix }}soft-serve"

soft_serve_uid: "{{ mash_playbook_uid }}"
soft_serve_gid: "{{ mash_playbook_gid }}"

soft_serve_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}soft-serve"

########################################################################
# #
# /soft-serve #
# #
########################################################################

########################################################################
# #
# syncthing #

@@ -1009,7 +1207,7 @@ syncthing_identifier: "{{ mash_playbook_service_identifier_prefix }}syncthing"

syncthing_uid: "{{ mash_playbook_uid }}"
syncthing_gid: "{{ mash_playbook_gid }}"

syncthing_base_path: "{{ mash_playbook_base_path }}/syncthing"
syncthing_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}syncthing"

syncthing_container_additional_networks: |
  {{
@@ -1042,7 +1240,7 @@ vaultwarden_identifier: "{{ mash_playbook_service_identifier_prefix }}vaultwarde

vaultwarden_uid: "{{ mash_playbook_uid }}"
vaultwarden_gid: "{{ mash_playbook_gid }}"

vaultwarden_base_path: "{{ mash_playbook_base_path }}/vaultwarden"
vaultwarden_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}vaultwarden"

vaultwarden_systemd_required_systemd_services_list: |
  {{

@@ -1086,7 +1284,7 @@ uptime_kuma_enabled: false

uptime_kuma_identifier: "{{ mash_playbook_service_identifier_prefix }}uptime-kuma"

uptime_kuma_base_path: "{{ mash_playbook_base_path }}/uptime-kuma"
uptime_kuma_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}uptime-kuma"

uptime_kuma_uid: "{{ mash_playbook_uid }}"
uptime_kuma_gid: "{{ mash_playbook_gid }}"

@@ -1122,7 +1320,7 @@ devture_woodpecker_ci_server_identifier: "{{ mash_playbook_service_identifier_pr

devture_woodpecker_ci_server_uid: "{{ mash_playbook_uid }}"
devture_woodpecker_ci_server_gid: "{{ mash_playbook_gid }}"

devture_woodpecker_ci_server_base_path: "{{ mash_playbook_base_path }}/woodpecker-ci/server"
devture_woodpecker_ci_server_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}woodpecker-ci/server"

devture_woodpecker_ci_server_systemd_required_systemd_services_list: |
  {{

@@ -1173,7 +1371,7 @@ devture_woodpecker_ci_agent_identifier: "{{ mash_playbook_service_identifier_pre

devture_woodpecker_ci_agent_uid: "{{ mash_playbook_uid }}"
devture_woodpecker_ci_agent_gid: "{{ mash_playbook_gid }}"

devture_woodpecker_ci_agent_base_path: "{{ mash_playbook_base_path }}/woodpecker-ci/agent"
devture_woodpecker_ci_agent_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}woodpecker-ci/agent"

devture_woodpecker_ci_agent_systemd_required_systemd_services_list: |
  {{

@@ -1209,7 +1407,7 @@ hubsite_enabled: false

hubsite_identifier: "{{ mash_playbook_service_identifier_prefix }}hubsite"

hubsite_base_path: "{{ mash_playbook_base_path }}/hubsite"
hubsite_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}hubsite"

hubsite_uid: "{{ mash_playbook_uid }}"
hubsite_gid: "{{ mash_playbook_gid }}"

@@ -1232,6 +1430,30 @@ hubsite_container_labels_traefik_tls_certResolver: "{{ devture_traefik_certResol
# Services
##########

# Adguard home
hubsite_service_adguard_home_enabled: "{{ adguard_home_enabled }}"
hubsite_service_adguard_home_name: Adguard Home
hubsite_service_adguard_home_url: "https://{{ adguard_home_hostname }}{{ adguard_home_path_prefix }}"
hubsite_service_adguard_home_logo_location: "{{ role_path }}/assets/shield.png"
hubsite_service_adguard_home_description: "A network-wide DNS software for blocking ads & tracking"
hubsite_service_adguard_home_priority: 1000

# Docker Registry Browser
hubsite_service_docker_registry_browser_enabled: "{{ docker_registry_browser_enabled }}"
hubsite_service_docker_registry_browser_name: Docker Registry Browser
hubsite_service_docker_registry_browser_url: "https://{{ docker_registry_browser_hostname }}{{ docker_registry_browser_path_prefix }}"
hubsite_service_docker_registry_browser_logo_location: "{{ role_path }}/assets/docker.png"
hubsite_service_docker_registry_browser_description: "Browse docker images"
hubsite_service_docker_registry_browser_priority: 1000

# Focalboard
hubsite_service_focalboard_enabled: "{{ focalboard_enabled }}"
hubsite_service_focalboard_name: Focalboard
hubsite_service_focalboard_url: "https://{{ focalboard_hostname }}{{ focalboard_path_prefix }}"
hubsite_service_focalboard_logo_location: "{{ role_path }}/assets/focalboard.png"
hubsite_service_focalboard_description: "An open source, self-hosted alternative to Trello, Notion, and Asana."
hubsite_service_focalboard_priority: 1000

# Gitea
hubsite_service_gitea_enabled: "{{ gitea_enabled }}"
hubsite_service_gitea_name: Gitea

@@ -1240,6 +1462,15 @@ hubsite_service_gitea_logo_location: "{{ role_path }}/assets/gitea.png"

hubsite_service_gitea_description: "A git service"
hubsite_service_gitea_priority: 1000

# Grafana
hubsite_service_grafana_enabled: "{{ grafana_enabled }}"
hubsite_service_grafana_name: Grafana
hubsite_service_grafana_url: "https://{{ grafana_hostname }}{{ grafana_path_prefix }}"
hubsite_service_grafana_logo_location: "{{ role_path }}/assets/grafana.png"
hubsite_service_grafana_description: "Check how your server is doing"
hubsite_service_grafana_priority: 1000

# Miniflux
hubsite_service_miniflux_enabled: "{{ miniflux_enabled }}"
hubsite_service_miniflux_name: Miniflux

@@ -1264,6 +1495,22 @@ hubsite_service_peertube_logo_location: "{{ role_path }}/assets/peertube.png"
hubsite_service_peertube_description: "Watch and upload videos"
hubsite_service_peertube_priority: 1000

# Radicale
hubsite_service_radicale_enabled: "{{ radicale_enabled }}"
hubsite_service_radicale_name: Radicale
hubsite_service_radicale_url: "https://{{ radicale_hostname }}{{ radicale_path_prefix }}"
hubsite_service_radicale_logo_location: "{{ role_path }}/assets/radicale.png"
hubsite_service_radicale_description: "Sync contacts and calendars"
hubsite_service_radicale_priority: 1000

# Syncthing
hubsite_service_syncthing_enabled: "{{ syncthing_enabled }}"
hubsite_service_syncthing_name: Syncthing
hubsite_service_syncthing_url: "https://{{ syncthing_hostname }}{{ syncthing_path_prefix }}"
hubsite_service_syncthing_logo_location: "{{ role_path }}/assets/syncthing.png"
hubsite_service_syncthing_description: "Sync your files"
hubsite_service_syncthing_priority: 1000

# Uptime Kuma
hubsite_service_uptime_kuma_enabled: "{{ uptime_kuma_enabled }}"
hubsite_service_uptime_kuma_name: Uptime Kuma

@@ -1281,19 +1528,39 @@ hubsite_service_vaultwarden_logo_location: "{{ role_path }}/assets/vaultwarden.p

hubsite_service_vaultwarden_description: "Securely access your passwords"
hubsite_service_vaultwarden_priority: 1000

# Woodpecker CI
hubsite_service_woodpecker_ci_enabled: "{{ devture_woodpecker_ci_server_enabled }}"
hubsite_service_woodpecker_ci_name: Woodpecker CI
hubsite_service_woodpecker_ci_url: "https://{{ devture_woodpecker_ci_server_hostname }}"
hubsite_service_woodpecker_ci_logo_location: "{{ role_path }}/assets/woodpecker.png"
hubsite_service_woodpecker_ci_description: "Check your CI"
hubsite_service_woodpecker_ci_priority: 1000

hubsite_service_list_auto: |
  {{
    ([{'name': hubsite_service_adguard_home_name, 'url': hubsite_service_adguard_home_url, 'logo_location': hubsite_service_adguard_home_logo_location, 'description': hubsite_service_adguard_home_description, 'priority': hubsite_service_adguard_home_priority}] if hubsite_service_adguard_home_enabled else [])
    +
    ([{'name': hubsite_service_focalboard_name, 'url': hubsite_service_focalboard_url, 'logo_location': hubsite_service_focalboard_logo_location, 'description': hubsite_service_focalboard_description, 'priority': hubsite_service_focalboard_priority}] if hubsite_service_focalboard_enabled else [])
    +
    ([{'name': hubsite_service_gitea_name, 'url': hubsite_service_gitea_url, 'logo_location': hubsite_service_gitea_logo_location, 'description': hubsite_service_gitea_description, 'priority': hubsite_service_gitea_priority}] if hubsite_service_gitea_enabled else [])
    +
    ([{'name': hubsite_service_grafana_name, 'url': hubsite_service_grafana_url, 'logo_location': hubsite_service_grafana_logo_location, 'description': hubsite_service_grafana_description, 'priority': hubsite_service_grafana_priority}] if hubsite_service_grafana_enabled else [])
    +
    ([{'name': hubsite_service_miniflux_name, 'url': hubsite_service_miniflux_url, 'logo_location': hubsite_service_miniflux_logo_location, 'description': hubsite_service_miniflux_description, 'priority': hubsite_service_miniflux_priority}] if hubsite_service_miniflux_enabled else [])
    +
    ([{'name': hubsite_service_nextcloud_name, 'url': hubsite_service_nextcloud_url, 'logo_location': hubsite_service_nextcloud_logo_location, 'description': hubsite_service_nextcloud_description, 'priority': hubsite_service_nextcloud_priority}] if hubsite_service_nextcloud_enabled else [])
    +
    ([{'name': hubsite_service_peertube_name, 'url': hubsite_service_peertube_url, 'logo_location': hubsite_service_peertube_logo_location, 'description': hubsite_service_peertube_description, 'priority': hubsite_service_peertube_priority}] if hubsite_service_peertube_enabled else [])
    +
    ([{'name': hubsite_service_radicale_name, 'url': hubsite_service_radicale_url, 'logo_location': hubsite_service_radicale_logo_location, 'description': hubsite_service_radicale_description, 'priority': hubsite_service_radicale_priority}] if hubsite_service_radicale_enabled else [])
    +
    ([{'name': hubsite_service_uptime_kuma_name, 'url': hubsite_service_uptime_kuma_url, 'logo_location': hubsite_service_uptime_kuma_logo_location, 'description': hubsite_service_uptime_kuma_description, 'priority': hubsite_service_uptime_kuma_priority}] if hubsite_service_uptime_kuma_enabled else [])
    +
    ([{'name': hubsite_service_syncthing_name, 'url': hubsite_service_syncthing_url, 'logo_location': hubsite_service_syncthing_logo_location, 'description': hubsite_service_syncthing_description, 'priority': hubsite_service_syncthing_priority}] if hubsite_service_syncthing_enabled else [])
    +
    ([{'name': hubsite_service_vaultwarden_name, 'url': hubsite_service_vaultwarden_url, 'logo_location': hubsite_service_vaultwarden_logo_location, 'description': hubsite_service_vaultwarden_description, 'priority': hubsite_service_vaultwarden_priority}] if hubsite_service_vaultwarden_enabled else [])
    +
    ([{'name': hubsite_service_woodpecker_ci_name, 'url': hubsite_service_woodpecker_ci_url, 'logo_location': hubsite_service_woodpecker_ci_logo_location, 'description': hubsite_service_woodpecker_ci_description, 'priority': hubsite_service_woodpecker_ci_priority}] if hubsite_service_woodpecker_ci_enabled else [])
  }}
########################################################################

@@ -1312,7 +1579,7 @@ firezone_enabled: false

firezone_identifier: "{{ mash_playbook_service_identifier_prefix }}firezone"

firezone_base_path: "{{ mash_playbook_base_path }}/firezone"
firezone_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}firezone"

firezone_uid: "{{ mash_playbook_uid }}"
firezone_gid: "{{ mash_playbook_gid }}"
justfile (30 lines changed)

@@ -1,44 +1,56 @@
# Shows help
|
||||
default:
|
||||
@just --list --justfile {{ justfile() }}
|
||||
@just --list --justfile {{ justfile() }}
|
||||
|
||||
# Pulls external Ansible roles
|
||||
roles:
|
||||
rm -rf roles/galaxy
|
||||
ansible-galaxy install -r requirements.yml -p roles/galaxy/ --force
|
||||
#!/usr/bin/env sh
|
||||
if [ -x "$(command -v agru)" ]; then
|
||||
agru
|
||||
else
|
||||
rm -rf roles/galaxy
|
||||
ansible-galaxy install -r requirements.yml -p roles/galaxy/ --force
|
||||
fi
|
||||
|
||||
# Updates requirements.yml if there are any new tags available. Requires agru
|
||||
update:
|
||||
@agru -u
|
||||
|
||||
# Runs ansible-lint against all roles in the playbook
|
||||
lint:
|
||||
ansible-lint
|
||||
ansible-lint
|
||||
|
||||
# Runs the playbook with --tags=install-all,start and optional arguments
install-all *extra_args: (run-tags "install-all,start" extra_args)

# Runs installation tasks for a single service
install-service service *extra_args:
-    just --justfile {{ justfile() }} run --tags=install-{{ service }},start-group --extra-vars=group={{ service }} {{ extra_args }}
+    just --justfile {{ justfile() }} run \
+        --tags=install-{{ service }},start-group \
+        --extra-vars=group={{ service }} \
+        --extra-vars=devture_systemd_service_manager_service_restart_mode=one-by-one {{ extra_args }}

# Runs the playbook with --tags=setup-all,start and optional arguments
setup-all *extra_args: (run-tags "setup-all,start" extra_args)

# Runs the playbook with the given list of arguments
run +extra_args:
    time ansible-playbook -i inventory/hosts setup.yml {{ extra_args }}

# Runs the playbook with the given list of comma-separated tags and optional arguments
run-tags tags *extra_args:
    just --justfile {{ justfile() }} run --tags={{ tags }} {{ extra_args }}

# Starts all services
start-all *extra_args: (run-tags "start-all" extra_args)

# Starts a specific service group
start-group group *extra_args:
    @just --justfile {{ justfile() }} run-tags start-group --extra-vars="group={{ group }}" {{ extra_args }}

# Stops all services
stop-all *extra_args: (run-tags "stop-all" extra_args)

# Stops a specific service group
stop-group group *extra_args:
    @just --justfile {{ justfile() }} run-tags stop-group --extra-vars="group={{ group }}" {{ extra_args }}
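The recipes above are invoked from the playbook directory; a few representative calls (a usage sketch — the `gitea` service name and `example.com` host below are illustrative, not taken from this diff):

```shell
# Install and start everything (equivalent to: just run-tags install-all,start)
just install-all

# Install a single service, restarting its systemd services one by one
just install-service gitea

# Pass extra arguments straight through to ansible-playbook
just setup-all --limit=example.com
```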
@@ -1,132 +1,107 @@
---
-- src: geerlingguy.docker
+- src: git+https://github.com/geerlingguy/ansible-role-docker
  version: 6.1.0
-- src: git+https://gitlab.com/etke.cc/roles/swap
-  version: 843a0222b76a5ec361b35f31bf4dc872b6d7d54e
-- src: git+https://gitlab.com/etke.cc/roles/ssh
+  name: geerlingguy.docker
+- src: git+https://gitlab.com/etke.cc/roles/swap.git
+  version: abfb18b6862108bbf24347500446203170324d7f
+- src: git+https://gitlab.com/etke.cc/roles/ssh.git
  version: 237adf859f9270db8a60e720bc4a58164806644e

-- src: git+https://gitlab.com/etke.cc/roles/fail2ban
+- src: git+https://gitlab.com/etke.cc/roles/fail2ban.git
  version: 09886730e8d3c061f22d1da4a542899063f97f0a

- src: git+https://github.com/devture/com.devture.ansible.role.docker_sdk_for_python.git
  version: 129c8590e106b83e6f4c259649a613c6279e937a

- src: git+https://github.com/devture/com.devture.ansible.role.playbook_help.git
  version: c1f40e82b4d6b072b6f0e885239322bdaaaf554f

- src: git+https://github.com/devture/com.devture.ansible.role.systemd_docker_base.git
  version: 327d2e17f5189ac2480d6012f58cf64a2b46efba

- src: git+https://github.com/devture/com.devture.ansible.role.timesync.git
  version: 3d5bb2976815958cdce3f368fa34fb51554f899b

- src: git+https://github.com/devture/com.devture.ansible.role.playbook_state_preserver.git
  version: ff2fd42e1c1a9e28e3312bbd725395f9c2fc7f16

- src: git+https://github.com/devture/com.devture.ansible.role.postgres.git
  version: 38764398bf82b06a1736c3bfedc71dfd229e4b52

- src: git+https://github.com/devture/com.devture.ansible.role.postgres_backup.git
  version: 8e9ec48a09284c84704d7a2dce17da35f181574d

- src: git+https://github.com/devture/com.devture.ansible.role.container_socket_proxy.git
  version: v0.1.1-1

- src: git+https://github.com/devture/com.devture.ansible.role.traefik.git
  version: v2.9.9-0

- src: git+https://github.com/devture/com.devture.ansible.role.systemd_service_manager.git
-  version: 6ccb88ac5fc27e1e70afcd48278ade4b564a9096
+  version: v1.0.0-0

- src: git+https://github.com/devture/com.devture.ansible.role.playbook_runtime_messages.git
  version: 9b4b088c62b528b73a9a7c93d3109b091dd42ec6

- src: git+https://github.com/devture/com.devture.ansible.role.woodpecker_ci_server.git
  version: v0.15.7-2

- src: git+https://github.com/devture/com.devture.ansible.role.woodpecker_ci_agent.git
  version: v0.15.7-1

- src: git+https://gitlab.com/etke.cc/roles/miniflux.git
  version: v2.0.43-2

- src: git+https://gitlab.com/etke.cc/roles/grafana.git
-  version: v9.4.7-0
+  version: v9.4.7-1

- src: git+https://gitlab.com/etke.cc/roles/radicale.git
  version: v3.1.8.1-2

- src: git+https://gitlab.com/etke.cc/roles/uptime_kuma.git
-  version: v1.21.0-0
+  version: v1.21.1-0

- src: git+https://gitlab.com/etke.cc/roles/redis.git
  version: v7.0.10-0

- src: git+https://gitlab.com/etke.cc/roles/prometheus_node_exporter.git
  version: v1.5.0-7

- src: git+https://gitlab.com/etke.cc/roles/prometheus_blackbox_exporter.git
  version: v0.23.0-3

- src: git+https://gitlab.com/etke.cc/roles/redmine.git
  version: v5.0.5-1

- src: git+https://gitlab.com/etke.cc/roles/soft_serve.git
  version: v0.4.7-0

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-adguard-home.git
  version: v0.107.26-0
  name: adguard_home

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-aux.git
+  version: v1.0.0-0
  name: aux
-  version: v1.0.0-0

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-collabora-online.git
-  name: collabora_online
  version: v22.05.12.1.1-0
+  name: collabora_online

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-docker-registry.git
-  name: docker_registry
  version: v2.8.1-1
+  name: docker_registry

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-docker-registry-browser.git
-  name: docker_registry_browser
  version: v1.6.0-0
+  name: docker_registry_browser

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-docker-registry-purger.git
-  name: docker_registry_purger
  version: v1.0.0-0
+  name: docker_registry_purger

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-focalboard.git
-  name: focalboard
  version: v7.8.0-0
+  name: focalboard

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-gitea.git
-  name: gitea
  version: v1.19.0-0
+  name: gitea

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-keycloak.git
  version: v21.0.1-1
  name: keycloak

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-navidrome.git
  version: v0.49.3-0
  name: navidrome

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-netbox.git
  version: v3.4.6-2.5.1-0
  name: netbox

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-nextcloud.git
-  name: nextcloud
  version: v26.0.0-0
+  name: nextcloud

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-peertube.git
+  version: v5.1.0-2
  name: peertube
-  version: v5.1.0-0

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-prometheus.git
-  name: prometheus
  version: v2.43.0-0
+  name: prometheus

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-syncthing.git
+  version: v1.23.2-1
  name: syncthing
-  version: v1.23.2-0

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-vaultwarden.git
+  version: v1.28.0-0
  name: vaultwarden
-  version: v1.27.0-2

- src: git+https://github.com/moan0s/hubsite.git
+  version: 6b20c472d36ce5765dc44675d42cce74cbcbd0fe
  name: hubsite
-  version: da6fed398a9dd0761db941cb903b53277c341cc6

- src: git+https://github.com/moan0s/role-firezone.git
-  name: firezone
  version: ac8564d5e11a75107ba93aec6427b83be824c30a
+  name: firezone
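After updating the pins above, the roles are fetched with ansible-galaxy; a sketch (the `roles/galaxy/` target path is an assumption inferred from the `galaxy/<name>` role references in `setup.yml`):

```shell
# Re-download all pinned roles, overwriting any stale checkouts
ansible-galaxy install -r requirements.yml -p roles/galaxy/ --force
```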

@@ -7,6 +7,8 @@ mash_playbook_identifier: mash
mash_playbook_user_username: "{{ mash_playbook_identifier }}"
mash_playbook_user_groupname: "{{ mash_playbook_identifier }}"

+mash_playbook_user_home: "{{ mash_playbook_base_path }}"
+
# By default, the playbook creates the user (`mash_playbook_user_username`)
# and group (`mash_playbook_user_groupname`) with a random id.
# To use a specific user/group id, override these variables.
@@ -17,10 +19,15 @@ mash_playbook_gid: ~
# You can put any string here, but generating a strong one is preferred (e.g. `pwgen -s 64 1`).
mash_playbook_generic_secret_key: ''

-# Controls the prefixed used for all service identifiers.
+# Controls the prefix used for all service identifiers.
# This affects systemd service names, container names, container networks, etc.
mash_playbook_service_identifier_prefix: "{{ mash_playbook_identifier }}-"

+# Controls the prefix of the base directory for all services.
+# Example: `/mash/{PREFIX}traefik`.
+# If `mash_playbook_identifier` is the default (mash), we intentionally use an empty prefix.
+mash_playbook_service_base_directory_name_prefix: "{{ '' if mash_playbook_identifier == 'mash' else (mash_playbook_identifier + '-') }}"
+
# Controls the base path where all services will be installed
mash_playbook_base_path: "/{{ mash_playbook_identifier }}"
mash_playbook_base_path_mode: "750"
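The effect of these prefixes is easiest to see with a concrete identifier; a sketch, where the `mash2` identifier stands in for a hypothetical second instance of the playbook:

```yaml
# vars.yml for a hypothetical host that overrides the identifier
mash_playbook_identifier: mash2

# Values then derived by the defaults above:
#   mash_playbook_base_path:                          "/mash2"
#   mash_playbook_service_identifier_prefix:          "mash2-"  # e.g. mash2-traefik.service
#   mash_playbook_service_base_directory_name_prefix: "mash2-"  # e.g. /mash2/mash2-traefik
# With the default identifier (mash), the directory prefix stays empty: /mash/traefik
```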
@@ -52,11 +59,11 @@ mash_playbook_architecture: "{{ 'amd64' if ansible_architecture == 'x86_64' else
# - no reverse-proxy will be installed
# - no port exposure will be done for any of the container services
# - it's up to you to expose the ports you want, etc.
-mash_playbook_reverse_proxy_type: playbook-managed-traefik
+mash_playbook_reverse_proxy_type: none

# Controls whether to install Docker or not
# Also see `devture_docker_sdk_for_python_installation_enabled`.
-mash_playbook_docker_installation_enabled: true
+mash_playbook_docker_installation_enabled: false

# Controls whether to attach Traefik labels to services.
# This is separate from `devture_traefik_enabled`, because you may wish to disable Traefik installation by the playbook,
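With both of these defaults now off, a host that wants the previous behavior opts back in explicitly; a sketch of the `vars.yml` overrides (enable only what the host actually needs):

```yaml
# Install Docker and the Docker SDK for Python
mash_playbook_docker_installation_enabled: true
devture_docker_sdk_for_python_installation_enabled: true

# Let the playbook manage a Traefik reverse-proxy for the services
mash_playbook_reverse_proxy_type: playbook-managed-traefik
```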
@@ -15,7 +15,7 @@
    uid: "{{ omit if mash_playbook_uid is none else mash_playbook_uid }}"
    state: present
    group: "{{ mash_playbook_user_groupname }}"
-    home: "{{ mash_playbook_base_path }}"
+    home: "{{ mash_playbook_user_home }}"
    create_home: false
    system: true
  register: mash_base_user_result
setup.yml
@@ -54,6 +54,8 @@
  - role: galaxy/com.devture.ansible.role.traefik

+  - role: galaxy/adguard_home
+
  - role: galaxy/collabora_online

  - role: galaxy/docker_registry

@@ -68,10 +70,16 @@
  - role: galaxy/grafana

+  - role: galaxy/keycloak
+
  - role: galaxy/miniflux

  - role: galaxy/hubsite

+  - role: galaxy/navidrome
+
+  - role: galaxy/netbox
+
  - role: galaxy/nextcloud

  - role: galaxy/peertube

@@ -86,6 +94,8 @@
  - role: galaxy/redis

+  - role: galaxy/soft_serve
+
  - role: galaxy/syncthing

  - role: galaxy/vaultwarden