Merge branch 'mother-of-all-self-hosting:main' into features/nextcloud-redis-support
This commit is contained in:
commit d91a6baf5c
18 changed files with 1329 additions and 100 deletions
@@ -23,6 +23,10 @@ indent_size = 2
indent_style = space
indent_size = 2

[justfile]
indent_style = space
indent_size = 4

# Markdown Files
#
# Two spaces at the end of a line in Markdown mean "new line",
24 CHANGELOG.md
@@ -1,3 +1,27 @@
# 2023-03-29

## (Backward Compatibility Break) Firezone database renamed

If you are running [Firezone](docs/services/firezone.md) with the default [Postgres](docs/services/postgres.md) integration, the playbook automatically created the database with the name `mash-firezone`.
To be consistent with how this playbook names databases for all other services, going forward we've changed the database name to just `firezone`. You will have to rename your database manually by running the following commands on your server:

1. Stop Firezone: `systemctl stop mash-firezone`
2. Run a Postgres `psql` shell: `/mash/postgres/bin/cli`
3. Execute this query: `ALTER DATABASE "mash-firezone" RENAME TO firezone;` and then quit the shell with `\q`

Then update the playbook (don't forget to run `just roles`), run `just install-all` and you should be good to go!

# 2023-03-26

## (Backward Compatibility Break) PeerTube is no longer wired to Redis automatically

As described in our [Redis](docs/services/redis.md) services docs, running a single instance of Redis to be used by multiple services is not a good practice.

For this reason, we're no longer auto-wiring PeerTube to Redis. If you're running other services (which may require Redis in the future) on the same host, it's recommended that you follow the [Creating a Redis instance dedicated to PeerTube](docs/services/peertube.md#creating-a-redis-instance-dedicated-to-peertube) documentation.

If you're only running PeerTube on a dedicated server (no other services that may need Redis) or you'd like to stick to what you've used until now (a single shared Redis instance), follow the [Using the shared Redis instance for PeerTube](docs/services/peertube.md#using-the-shared-redis-instance-for-peertube) documentation.


# 2023-03-25

## (Backward Compatibility Break) Docker no longer installed by default
210 docs/running-multiple-instances.md Normal file

@@ -0,0 +1,210 @@
## Running multiple instances of the same service on the same host

The way this playbook is structured, each Ansible role can only be invoked once and made to install one instance of the service it's responsible for.

If you need multiple instances (of whichever service), you'll need some workarounds as described below.

The example below focuses on hosting multiple [Redis](services/redis.md) instances, but you can apply it to hosting multiple instances or whole stacks of any kind.

Let's say you're managing a host called `mash.example.com` which installs both [PeerTube](services/peertube.md) and [NetBox](services/netbox.md). Both of these services require a [Redis](services/redis.md) instance. If you simply add `redis_enabled: true` to your `mash.example.com` host's `vars.yml` file, you'd get a Redis instance (`mash-redis`), but only one instance. As described in our [Redis](services/redis.md) documentation, this is a security problem and potentially fragile, as both services may try to read/write the same data and come into conflict with one another.

We propose that you **don't** add `redis_enabled: true` to your main `mash.example.com` host's `vars.yml` file, but do the following:

## Re-do your inventory to add supplementary hosts

Create multiple hosts in your inventory (`inventory/hosts`) which target the same server, like this:

```ini
[mash_servers]
[mash_servers:children]
mash_example_com

[mash_example_com]
mash.example.com-netbox-deps ansible_host=1.2.3.4
mash.example.com-peertube-deps ansible_host=1.2.3.4
mash.example.com ansible_host=1.2.3.4
```

This creates a new group (called `mash_example_com`) which groups all 3 hosts:

- (**new**) `mash.example.com-netbox-deps` - a new host, for your [NetBox](services/netbox.md) dependencies
- (**new**) `mash.example.com-peertube-deps` - a new host, for your [PeerTube](services/peertube.md) dependencies
- (old) `mash.example.com` - your regular inventory host

When running Ansible commands later on, you can use the `-l` flag to limit which hosts to run them against. Here are a few examples:

- `just install-all` - runs the [installation](installing.md) process on all hosts (3 hosts in this case)
- `just install-all -l mash_example_com` - runs the installation process on all hosts in the `mash_example_com` group (the same 3 hosts as `just install-all` in this case)
- `just install-all -l mash.example.com-netbox-deps` - runs the installation process on the `mash.example.com-netbox-deps` host only
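The `just` recipes are thin wrappers around `ansible-playbook` invocations, so the same limiting works when calling Ansible directly. A minimal sketch, assuming the playbook's entrypoint is `setup.yml` and that `install-all` maps to the `install-all,start` tags (check your own `justfile` for the exact recipe definitions):

```bash
# Run the installation process only against the NetBox dependencies host.
# The playbook file and tag names are assumptions - verify them in your justfile.
ansible-playbook -i inventory/hosts setup.yml \
  --tags=install-all,start \
  -l mash.example.com-netbox-deps
```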
## Adjust the configuration of the supplementary hosts to use a new "namespace"

Multiple hosts targeting the same server as described above still cause conflicts, because services will use the same paths (e.g. `/mash/redis`) and service/container names (`mash-redis`) everywhere.

To avoid conflicts, adjust the `vars.yml` file for the new hosts (`mash.example.com-netbox-deps` and `mash.example.com-peertube-deps`) and set non-default and unique values in the `mash_playbook_service_identifier_prefix` and `mash_playbook_service_base_directory_name_prefix` variables. Examples below:
`inventory/host_vars/mash.example.com-netbox-deps/vars.yml`:

```yaml
---

########################################################################
#                                                                      #
# Playbook                                                             #
#                                                                      #
########################################################################

# Put a strong secret below, generated with `pwgen -s 64 1` or in another way
# Various other secrets will be derived from this secret automatically.
mash_playbook_generic_secret_key: ''

# Override service names and directory path prefixes
mash_playbook_service_identifier_prefix: 'mash-netbox-'
mash_playbook_service_base_directory_name_prefix: 'netbox-'

########################################################################
#                                                                      #
# /Playbook                                                            #
#                                                                      #
########################################################################


########################################################################
#                                                                      #
# redis                                                                #
#                                                                      #
########################################################################

redis_enabled: true

########################################################################
#                                                                      #
# /redis                                                               #
#                                                                      #
########################################################################
```

`inventory/host_vars/mash.example.com-peertube-deps/vars.yml`:

```yaml
---

########################################################################
#                                                                      #
# Playbook                                                             #
#                                                                      #
########################################################################

# Put a strong secret below, generated with `pwgen -s 64 1` or in another way
# Various other secrets will be derived from this secret automatically.
mash_playbook_generic_secret_key: ''

# Override service names and directory path prefixes
mash_playbook_service_identifier_prefix: 'mash-peertube-'
mash_playbook_service_base_directory_name_prefix: 'peertube-'

########################################################################
#                                                                      #
# /Playbook                                                            #
#                                                                      #
########################################################################


########################################################################
#                                                                      #
# redis                                                                #
#                                                                      #
########################################################################

redis_enabled: true

########################################################################
#                                                                      #
# /redis                                                               #
#                                                                      #
########################################################################
```

The above configuration will create **2** Redis instances:

- `mash-netbox-redis` with its base data path in `/mash/netbox-redis`
- `mash-peertube-redis` with its base data path in `/mash/peertube-redis`

These instances reuse the `mash` user and group and the `/mash` data path, but are not in conflict with each other.
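After installing against both supplementary hosts, you can confirm that the two instances coexist without conflicting. A small sketch (the service and directory names follow from the prefixes configured above):

```bash
# Each instance gets its own systemd service and data directory:
systemctl status mash-netbox-redis mash-peertube-redis
ls /mash    # should list netbox-redis and peertube-redis side by side
```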
## Adjust the configuration of the base host

Now that we've created separate Redis instances for both PeerTube and NetBox, we need to put them to use by editing the `vars.yml` file of the main host (the one that installs PeerTube and NetBox) to wire them to their Redis instances.

You'll need configuration (`inventory/host_vars/mash.example.com/vars.yml`) like this:

```yaml
########################################################################
#                                                                      #
# netbox                                                               #
#                                                                      #
########################################################################

netbox_enabled: true

# Other NetBox configuration here

# Point NetBox to its dedicated Redis instance
netbox_environment_variable_redis_host: mash-netbox-redis
netbox_environment_variable_redis_cache_host: mash-netbox-redis

# Make sure the NetBox service (mash-netbox.service) starts after its dedicated Redis service (mash-netbox-redis.service)
netbox_systemd_required_services_list_custom:
  - mash-netbox-redis.service

# Make sure the NetBox container is connected to the container network of its dedicated Redis service (mash-netbox-redis)
netbox_container_additional_networks_custom:
  - mash-netbox-redis

########################################################################
#                                                                      #
# /netbox                                                              #
#                                                                      #
########################################################################


########################################################################
#                                                                      #
# peertube                                                             #
#                                                                      #
########################################################################

# Other PeerTube configuration here

# Point PeerTube to its dedicated Redis instance
peertube_config_redis_hostname: mash-peertube-redis

# Make sure the PeerTube service (mash-peertube.service) starts after its dedicated Redis service (mash-peertube-redis.service)
peertube_systemd_required_services_list_custom:
  - "mash-peertube-redis.service"

# Make sure the PeerTube container is connected to the container network of its dedicated Redis service (mash-peertube-redis)
peertube_container_additional_networks_custom:
  - "mash-peertube-redis"

########################################################################
#                                                                      #
# /peertube                                                            #
#                                                                      #
########################################################################
```

## Questions & Answers

**Can't I just use the same Redis instance for multiple services?**

> You may or you may not. See the [Redis](services/redis.md) documentation for why you shouldn't do this.

**Can't I just create one host and a separate stack for each service** (e.g. Nextcloud + all dependencies on one inventory host; PeerTube + all dependencies on another inventory host; with both inventory hosts targeting the same server)?

> That's a possibility which is somewhat clean. The downside is that each "full stack" comes with its own Postgres database, which needs to be maintained and upgraded separately.
@@ -76,3 +76,10 @@ After installation, you can go to the AdGuard Home URL, as defined in `adguard_h
As mentioned in the [URL](#url) section above, you may hit some annoyances when hosting under a subpath.

The first time you visit the AdGuard Home pages, you'll go through a setup wizard - **make sure to set the HTTP port to `3000`**. This is the in-container port that our Traefik setup expects and uses for serving the install wizard to begin with. If you go with the default (`80`), the web UI will stop working after the installation wizard completes.

Things you should consider doing later:

- increasing the per-client Rate Limit (from the default of `20`) in the **DNS server configuration** section in **Settings** -> **DNS Settings**
- enabling caching in the **DNS cache configuration** section in **Settings** -> **DNS Settings**
- adding additional blocklists by discovering them on [Firebog](https://firebog.net/) or other sources and importing them from **Filters** -> **DNS blocklists**
- reading the AdGuard Home [README](https://github.com/AdguardTeam/AdGuardHome/blob/master/README.md) and [Wiki](https://github.com/AdguardTeam/AdGuardHome/wiki)
@@ -1,6 +1,6 @@
# Firezone

[Firezone](https://www.firezone.dev/) is a self-hosted VPN server (based on [WireGuard](https://en.wikipedia.org/wiki/WireGuard)) with Web UI that this playbook can install, powered by the [moan0s/role-firezone](https://github.com/moan0s/role-firezone) Ansible role.
[Firezone](https://www.firezone.dev/) is a self-hosted VPN server (based on [WireGuard](https://en.wikipedia.org/wiki/WireGuard)) with Web UI that this playbook can install, powered by the [mother-of-all-self-hosting/ansible-role-firezone](https://github.com/mother-of-all-self-hosting/ansible-role-firezone) Ansible role.

## Configuration
91 docs/services/gotosocial.md Normal file

@@ -0,0 +1,91 @@
# GoToSocial

[GoToSocial](https://gotosocial.org/) is a self-hosted [ActivityPub](https://activitypub.rocks/) social network server that this playbook can install, powered by the [mother-of-all-self-hosting/ansible-role-gotosocial](https://github.com/mother-of-all-self-hosting/ansible-role-gotosocial) Ansible role.

## Configuration

To enable this service, add the following configuration to your `vars.yml` file and re-run the [installation](../installing.md) process:

```yaml
########################################################################
#                                                                      #
# gotosocial                                                           #
#                                                                      #
########################################################################

gotosocial_enabled: true


# Hostname that this server will be reachable at.
# DO NOT change this after your server has already run once, or you will break things!
# Examples: ["gts.example.org","some.server.com"]
gotosocial_hostname: 'social.example.org'

# Domain to use when federating profiles. It defaults to `gotosocial_hostname`, but you can change it when you want your server to be at
# e.g. `gotosocial_hostname: gts.example.org`, while you want the domain on accounts to be "example.org" because it looks better
# or is just shorter/easier to remember.
#
# Please read the appropriate section of the installation guide before you go messing around with this setting:
# https://docs.gotosocial.org/installation_guide/advanced/#can-i-host-my-instance-at-fediexampleorg-but-have-just-exampleorg-in-my-username
# gotosocial_account_domain: "example.org"

########################################################################
#                                                                      #
# /gotosocial                                                          #
#                                                                      #
########################################################################
```

After installation, you can use `just run-tags gotosocial-add-user --extra-vars=username=<username> --extra-vars=password=<password> --extra-vars=email=<email>`
to create a user. Change `gotosocial-add-user` to `gotosocial-add-admin` to create an admin account.
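For example, creating a regular user with hypothetical values substituted in:

```bash
# All values below are placeholders - substitute your own.
just run-tags gotosocial-add-user \
  --extra-vars=username=alice \
  --extra-vars=password='a-strong-password' \
  --extra-vars=email=alice@example.org
```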
### Usage

After [installing](../installing.md), you can visit the URL specified in `gotosocial_hostname` and should see your instance.
Start to customize it at `social.example.org/admin`.

Use the [GtS CLI Tool](https://docs.gotosocial.org/en/latest/admin/cli/) to do admin & maintenance tasks. E.g. use
```bash
docker exec -it mash-gotosocial /gotosocial/gotosocial admin account demote --username <username>
```
to demote a user from admin to normal user.

Refer to the [great official documentation](https://docs.gotosocial.org/en/latest/) for more information on GoToSocial.


## Migrate an existing instance

The following assumes you want to migrate from `serverA` to `serverB` (managed by mash); you just have to adjust the copy commands if you are on the same server.

Stop the initial instance on `serverA`:

```bash
serverA$ systemctl stop gotosocial
```

Dump the database (depending on your existing setup you might have to adjust this):

```bash
serverA$ pg_dump gotosocial > latest.sql
```

Copy the files to the new server:

```bash
serverA$ rsync -av -e "ssh" latest.sql root@serverB:/mash/gotosocial/
serverA$ rsync -av -e "ssh" data/* root@serverB:/mash/gotosocial/data/
```

Install (but don't start) the service and database on the server:

```bash
yourPC$ just run-tags install-all
yourPC$ just run-tags import-postgres --extra-vars=server_path_postgres_dump=/mash/gotosocial/latest.sql --extra-vars=postgres_default_import_database=mash-gotosocial
```

Start the services on the new server:

```bash
yourPC$ just run-tags start
```

Done 🥳
61 docs/services/keycloak.md Normal file

@@ -0,0 +1,61 @@
# Keycloak

[Keycloak](https://www.keycloak.org/) is an open source identity and access management solution.

**Warning**: this service is a new addition to the playbook. It may not fully work or may be configured in a suboptimal manner.


## Dependencies

This service requires the following other services:

- a [Postgres](postgres.md) database
- a [Traefik](traefik.md) reverse-proxy server


## Configuration

To enable this service, add the following configuration to your `vars.yml` file and re-run the [installation](../installing.md) process:

```yaml
########################################################################
#                                                                      #
# keycloak                                                             #
#                                                                      #
########################################################################

keycloak_enabled: true

keycloak_hostname: mash.example.com
keycloak_path_prefix: /keycloak

keycloak_environment_variable_keycloak_admin: your_username_here
# Generating a strong password (e.g. `pwgen -s 64 1`) is recommended
keycloak_environment_variable_keycloak_admin_password: ''

########################################################################
#                                                                      #
# /keycloak                                                            #
#                                                                      #
########################################################################
```

### URL

In the example configuration above, we configure the service to be hosted at `https://mash.example.com/keycloak`.

You can remove the `keycloak_path_prefix` variable definition, to make it default to `/`, so that the service is served at `https://mash.example.com/`.

### Authentication

On first start, the admin user account will be created as defined with the `keycloak_environment_variable_keycloak_admin` and `keycloak_environment_variable_keycloak_admin_password` variables.

On each start after that, Keycloak will attempt to create the user again and report a non-fatal error (Keycloak will continue running).

Changing the password variable subsequently will not affect the existing user's password.

## Usage

After installation, you can go to the Keycloak URL, as defined in `keycloak_hostname` and `keycloak_path_prefix`, and log in as described in [Authentication](#authentication).

Follow the [Keycloak documentation](https://www.keycloak.org/documentation) or other guides for learning how to use Keycloak.
141 docs/services/navidrome.md Normal file

@@ -0,0 +1,141 @@
# Navidrome

[Navidrome](https://www.navidrome.org/) is a [Subsonic-API](http://www.subsonic.org/pages/api.jsp) compatible music server.


## Dependencies

This service requires the following other services:

- a [Traefik](traefik.md) reverse-proxy server


## Configuration

To enable this service, add the following configuration to your `vars.yml` file and re-run the [installation](../installing.md) process:

```yaml
########################################################################
#                                                                      #
# navidrome                                                            #
#                                                                      #
########################################################################

navidrome_enabled: true

navidrome_hostname: mash.example.com
navidrome_path_prefix: /navidrome

# By default, Navidrome will look at the /music directory for music files,
# controlled by the `navidrome_environment_variable_nd_musicfolder` variable.
#
# You'd need to mount some music directory into the Navidrome container, like shown below.
# The "Syncthing integration" section below may be relevant.
# navidrome_container_additional_volumes:
#   - type: bind
#     src: /on-host/path/to/music
#     dst: /music
#     options: readonly

########################################################################
#                                                                      #
# /navidrome                                                           #
#                                                                      #
########################################################################
```

### URL

In the example configuration above, we configure the service to be hosted at `https://mash.example.com/navidrome`.

You can remove the `navidrome_path_prefix` variable definition, to make it default to `/`, so that the service is served at `https://mash.example.com/`.

### Authentication

On first use (see [Usage](#usage) below), you'll be asked to create the first administrator user.

You can create additional users from the web UI after that.

### Syncthing integration

If you've got a [Syncthing](syncthing.md) service running, you can use it to synchronize your music directory onto the server and then mount it as read-only into the Navidrome container.

We recommend that you make use of the [aux](aux.md) role to create some shared directory like this:

```yaml
########################################################################
#                                                                      #
# aux                                                                  #
#                                                                      #
########################################################################

aux_directory_definitions:
  - dest: "{{ mash_playbook_base_path }}/storage"
  - dest: "{{ mash_playbook_base_path }}/storage/music"

########################################################################
#                                                                      #
# /aux                                                                 #
#                                                                      #
########################################################################
```

You can then mount this `{{ mash_playbook_base_path }}/storage/music` directory into the Syncthing container and synchronize it with some other computer:

```yaml
########################################################################
#                                                                      #
# syncthing                                                            #
#                                                                      #
########################################################################

# Other Syncthing configuration..

syncthing_container_additional_volumes:
  - type: bind
    src: "{{ mash_playbook_base_path }}/storage/music"
    dst: /music

########################################################################
#                                                                      #
# /syncthing                                                           #
#                                                                      #
########################################################################
```

Finally, mount the `{{ mash_playbook_base_path }}/storage/music` directory into the Navidrome container as read-only:

```yaml
########################################################################
#                                                                      #
# navidrome                                                            #
#                                                                      #
########################################################################

# Other Navidrome configuration..

navidrome_container_additional_volumes:
  - type: bind
    src: "{{ mash_playbook_base_path }}/storage/music"
    dst: /music
    options: readonly

########################################################################
#                                                                      #
# /navidrome                                                           #
#                                                                      #
########################################################################
```

## Usage

After installation, you can go to the Navidrome URL, as defined in `navidrome_hostname` and `navidrome_path_prefix`.

As mentioned in [Authentication](#authentication) above, you'll be asked to create the first administrator user the first time you open the web UI.

You can also connect various Subsonic-API-compatible [apps](https://www.navidrome.org/docs/overview/#apps) (desktop, web, mobile) to your Navidrome instance.


## Recommended other services

- [Syncthing](syncthing.md) - a continuous file synchronization program which synchronizes files between two or more computers in real time. See [Syncthing integration](#syncthing-integration)
211 docs/services/netbox.md Normal file

@@ -0,0 +1,211 @@
# NetBox

[NetBox](https://docs.netbox.dev/en/stable/) is an open-source web application that provides [IP address management (IPAM)](https://en.wikipedia.org/wiki/IP_address_management) and [data center infrastructure management (DCIM)](https://en.wikipedia.org/wiki/Data_center_management#Data_center_infrastructure_management) functionality.


## Dependencies

This service requires the following other services:

- a [Postgres](postgres.md) database
- a [Redis](redis.md) data-store, installation details [below](#redis)
- a [Traefik](traefik.md) reverse-proxy server


## Configuration

To enable this service, add the following configuration to your `vars.yml` file and re-run the [installation](../installing.md) process:

```yaml
########################################################################
#                                                                      #
# netbox                                                               #
#                                                                      #
########################################################################

netbox_enabled: true

netbox_hostname: mash.example.com
netbox_path_prefix: /netbox

# Put a strong secret below, generated with `pwgen -s 64 1` or in another way
netbox_environment_variable_secret_key: ''

# The following superuser will be created upon launch.
netbox_environment_variable_superuser_name: your_username_here
netbox_environment_variable_superuser_email: your.email@example.com
# Put a strong secret below, generated with `pwgen -s 64 1` or in another way.
# Changing the password subsequently will not affect the user's password.
netbox_environment_variable_superuser_password: ''

# Redis configuration, as described below

########################################################################
#                                                                      #
# /netbox                                                              #
#                                                                      #
########################################################################
```

### URL

In the example configuration above, we configure the service to be hosted at `https://mash.example.com/netbox`.

You can remove the `netbox_path_prefix` variable definition, to make it default to `/`, so that the service is served at `https://mash.example.com/`.


### Authentication

If the `netbox_environment_variable_superuser_*` variables are specified, NetBox will try to create the user (if missing).


### Redis

As described on the [Redis](redis.md) documentation page, if you're hosting additional services which require Redis on the same server, it's better to install a separate Redis instance for each service. See [Creating a Redis instance dedicated to NetBox](#creating-a-redis-instance-dedicated-to-netbox).

If you're only running NetBox on this server and don't need to use Redis for anything else, you can [use a single Redis instance](#using-the-shared-redis-instance-for-netbox).

#### Using the shared Redis instance for NetBox

To install a single (non-dedicated) Redis instance (`mash-redis`) and hook NetBox to it, add the following **additional** configuration:

```yaml
########################################################################
#                                                                      #
# redis                                                                #
#                                                                      #
########################################################################

redis_enabled: true

########################################################################
#                                                                      #
# /redis                                                               #
#                                                                      #
########################################################################


########################################################################
#                                                                      #
# netbox                                                               #
#                                                                      #
########################################################################

# Base configuration as shown above

# Point NetBox to the shared Redis instance
netbox_environment_variable_redis_host: "{{ redis_identifier }}"
netbox_environment_variable_redis_cache_host: "{{ redis_identifier }}"

# Make sure the NetBox service (mash-netbox.service) starts after the shared Redis service (mash-redis.service)
netbox_systemd_required_services_list_custom:
  - "{{ redis_identifier }}.service"

# Make sure the NetBox container is connected to the container network of the shared Redis service (mash-redis)
netbox_container_additional_networks_custom:
  - "{{ redis_identifier }}"

########################################################################
#                                                                      #
# /netbox                                                              #
#                                                                      #
########################################################################
```

This will create a `mash-redis` Redis instance on this host.

This is only recommended if you won't be installing other services which require Redis. Alternatively, go for [Creating a Redis instance dedicated to NetBox](#creating-a-redis-instance-dedicated-to-netbox).


#### Creating a Redis instance dedicated to NetBox

The following instructions are based on the [Running multiple instances of the same service on the same host](../running-multiple-instances.md) documentation.

Adjust your `inventory/hosts` file as described in [Re-do your inventory to add supplementary hosts](../running-multiple-instances.md#re-do-your-inventory-to-add-supplementary-hosts), adding a new supplementary host (e.g. if `netbox.example.com` is your main one, create `netbox.example.com-deps`).

Then, create a new `vars.yml` file for the supplementary host.

`inventory/host_vars/netbox.example.com-deps/vars.yml`:

```yaml
---

########################################################################
#                                                                      #
# Playbook                                                             #
#                                                                      #
########################################################################

# Put a strong secret below, generated with `pwgen -s 64 1` or in another way
# Various other secrets will be derived from this secret automatically.
mash_playbook_generic_secret_key: ''

# Override service names and directory path prefixes
mash_playbook_service_identifier_prefix: 'mash-netbox-'
mash_playbook_service_base_directory_name_prefix: 'netbox-'

########################################################################
#                                                                      #
# /Playbook                                                            #
#                                                                      #
########################################################################


########################################################################
#                                                                      #
# redis                                                                #
#                                                                      #
########################################################################

redis_enabled: true

########################################################################
#                                                                      #
# /redis                                                               #
#                                                                      #
########################################################################
```

This will create a `mash-netbox-redis` instance on this host with its data in `/mash/netbox-redis`.

Then, adjust your main inventory host's variables file (`inventory/host_vars/netbox.example.com/vars.yml`) like this:

```yaml
########################################################################
#                                                                      #
# netbox                                                               #
#                                                                      #
########################################################################

# Base configuration as shown above


# Point NetBox to its dedicated Redis instance
netbox_environment_variable_redis_host: mash-netbox-redis
netbox_environment_variable_redis_cache_host: mash-netbox-redis

# Make sure the NetBox service (mash-netbox.service) starts after its dedicated Redis service (mash-netbox-redis.service)
netbox_systemd_required_services_list_custom:
  - "mash-netbox-redis.service"

# Make sure the NetBox container is connected to the container network of its dedicated Redis service (mash-netbox-redis)
netbox_container_additional_networks_custom:
  - "mash-netbox-redis"

########################################################################
#                                                                      #
# /netbox                                                              #
#                                                                      #
########################################################################
```


## Installation

If you've decided to install a dedicated Redis instance for NetBox, make sure to first do [installation](../installing.md) for the supplementary inventory host (e.g. `netbox.example.com-deps`), before running installation for the main one (e.g. `netbox.example.com`).


## Usage

After installation, you can go to the NetBox URL, as defined in `netbox_hostname` and `netbox_path_prefix`.

You can log in with the **username** (**not** email) and password specified in the `netbox_environment_variable_superuser_*` variables.
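If you ever need another superuser, NetBox's Django management CLI can create one. A hedged sketch, assuming the container is named `mash-netbox` and follows the upstream netbox-docker layout:

```bash
# Interactively create an additional superuser inside the NetBox container.
docker exec -it mash-netbox /opt/netbox/netbox/manage.py createsuperuser
```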
@@ -8,7 +8,7 @@
This service requires the following other services:

- a [Postgres](postgres.md) database
- a [Redis](redis.md) data-store
- a [Redis](redis.md) data-store, installation details [below](#redis)
- a [Traefik](traefik.md) reverse-proxy server


@@ -47,6 +47,8 @@ peertube_config_root_user_initial_password: ''
# Then, replace the example IP range below, and re-run the playbook.
# peertube_trusted_proxies_values_custom: ["172.21.0.0/16"]

# Redis configuration, as described below

########################################################################
#                                                                      #
# /peertube                                                            #
@@ -58,6 +60,148 @@ In the example configuration above, we configure the service to be hosted at `ht

Hosting PeerTube under a subpath (by configuring the `peertube_path_prefix` variable) does not seem to be possible right now, due to PeerTube limitations.

### Redis

As described on the [Redis](redis.md) documentation page, if you're hosting additional services which require Redis on the same server, it's better to install a separate Redis instance for each service. See [Creating a Redis instance dedicated to PeerTube](#creating-a-redis-instance-dedicated-to-peertube).

If you're only running PeerTube on this server and don't need to use Redis for anything else, you can [use a single Redis instance](#using-the-shared-redis-instance-for-peertube).

#### Using the shared Redis instance for PeerTube

To install a single (non-dedicated) Redis instance (`mash-redis`) and hook PeerTube to it, add the following **additional** configuration:

```yaml
########################################################################
#                                                                      #
# redis                                                                #
#                                                                      #
########################################################################

redis_enabled: true

########################################################################
#                                                                      #
# /redis                                                               #
#                                                                      #
########################################################################


########################################################################
#                                                                      #
# peertube                                                             #
#                                                                      #
########################################################################

# Base configuration as shown above

# Point PeerTube to the shared Redis instance
peertube_config_redis_hostname: "{{ redis_identifier }}"

# Make sure the PeerTube service (mash-peertube.service) starts after the shared Redis service (mash-redis.service)
peertube_systemd_required_services_list_custom:
  - "{{ redis_identifier }}.service"

# Make sure the PeerTube container is connected to the container network of the shared Redis service (mash-redis)
peertube_container_additional_networks_custom:
  - "{{ redis_identifier }}"

########################################################################
#                                                                      #
# /peertube                                                            #
#                                                                      #
########################################################################
```

This will create a `mash-redis` Redis instance on this host.

This is only recommended if you won't be installing other services which require Redis. Alternatively, go for [Creating a Redis instance dedicated to PeerTube](#creating-a-redis-instance-dedicated-to-peertube).


#### Creating a Redis instance dedicated to PeerTube

The following instructions are based on the [Running multiple instances of the same service on the same host](../running-multiple-instances.md) documentation.

Adjust your `inventory/hosts` file as described in [Re-do your inventory to add supplementary hosts](../running-multiple-instances.md#re-do-your-inventory-to-add-supplementary-hosts), adding a new supplementary host (e.g. if `peertube.example.com` is your main one, create `peertube.example.com-deps`).

Then, create a new `vars.yml` file for the supplementary host.

`inventory/host_vars/peertube.example.com-deps/vars.yml`:

```yaml
---

########################################################################
#                                                                      #
# Playbook                                                             #
#                                                                      #
########################################################################

# Put a strong secret below, generated with `pwgen -s 64 1` or in another way
# Various other secrets will be derived from this secret automatically.
mash_playbook_generic_secret_key: ''

# Override service names and directory path prefixes
mash_playbook_service_identifier_prefix: 'mash-peertube-'
mash_playbook_service_base_directory_name_prefix: 'peertube-'

########################################################################
#                                                                      #
# /Playbook                                                            #
#                                                                      #
########################################################################


########################################################################
#                                                                      #
# redis                                                                #
#                                                                      #
########################################################################

redis_enabled: true

########################################################################
#                                                                      #
# /redis                                                               #
#                                                                      #
########################################################################
```

This will create a `mash-peertube-redis` instance on this host with its data in `/mash/peertube-redis`.

Then, adjust your main inventory host's variables file (`inventory/host_vars/peertube.example.com/vars.yml`) like this:

```yaml
########################################################################
#                                                                      #
# peertube                                                             #
#                                                                      #
########################################################################

# Base configuration as shown above

# Point PeerTube to its dedicated Redis instance
peertube_config_redis_hostname: mash-peertube-redis

# Make sure the PeerTube service (mash-peertube.service) starts after its dedicated Redis service (mash-peertube-redis.service)
peertube_systemd_required_services_list_custom:
  - "mash-peertube-redis.service"

# Make sure the PeerTube container is connected to the container network of its dedicated Redis service (mash-peertube-redis)
peertube_container_additional_networks_custom:
  - "mash-peertube-redis"

########################################################################
#                                                                      #
# /peertube                                                            #
#                                                                      #
########################################################################
```


## Installation

If you've decided to install a dedicated Redis instance for PeerTube, make sure to first do [installation](../installing.md) for the supplementary inventory host (e.g. `peertube.example.com-deps`), before running installation for the main one (e.g. `peertube.example.com`).


## Usage
@@ -68,6 +212,7 @@ You should then be able to log in with:
- username: `root`
- password: the password you've set in `peertube_config_root_user_initial_password` in `vars.yml`


## Adjusting the trusted reverse-proxy networks

If you go to **Administration** -> **System** -> **Debug** (`/admin/system/debug`), you'll notice that PeerTube reports some local IP instead of your own IP address.
@@ -4,12 +4,19 @@

Some of the services installed by this playbook require a Redis data store.

Enabling the Redis database service will automatically wire all other services to use it.
**Warning**: Because Redis is not as flexible as [Postgres](postgres.md) when it comes to authentication and data separation, it's **recommended that you run separate Redis instances** (one for each service). Redis supports multiple databases and a [SELECT](https://redis.io/commands/select/) command for switching between them. However, **reusing the same Redis instance is not good enough** because:

- if all services use the same Redis instance and database (id = 0), services may conflict with one another
- the number of databases is limited to [16 by default](https://github.com/redis/redis/blob/aa2403ca98f6a39b6acd8373f8de1a7ba75162d5/redis.conf#L376-L379), which may or may not be enough. With configuration changes, this is solvable.
- some services do not support switching the Redis database and always insist on using the default one (id = 0)
- Redis [does not support different authentication credentials for its different databases](https://stackoverflow.com/a/37262596), so each service can potentially read and modify other services' data

If you're only hosting a single service (like [PeerTube](peertube.md) or [NetBox](netbox.md)) on your server, you can get away with running a single instance. If you're hosting multiple services, you should prepare separate instances for each service.
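To make the last point concrete, here is a minimal sketch of why database ids don't isolate services (it assumes a shared instance running in a container named `mash-redis`; adjust the name to your setup):

```bash
# The same credentials open every database id, so anything that can reach
# the instance can read (or modify) every other service's data:
docker exec -it mash-redis redis-cli -n 0 KEYS '*'   # keys of whichever service uses database 0
docker exec -it mash-redis redis-cli -n 1 KEYS '*'   # ...and of whichever service uses database 1
```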
## Configuration

To enable this service, add the following configuration to your `vars.yml` file and re-run the [installation](../installing.md) process:
To enable this service, add the following configuration to your `vars.yml` file and re-run the [installation](../installing.md) process to **host a single instance of the Redis service**:

```yaml
########################################################################
@@ -26,3 +33,5 @@ redis_enabled: true
#                                                                      #
########################################################################
```

To **host multiple instances of the Redis service**, follow the [Running multiple instances of the same service on the same host](../running-multiple-instances.md) documentation or the **Redis** section (if available) of the service you're installing.
39 docs/services/soft-serve.md Normal file

@@ -0,0 +1,39 @@
# Soft Serve

[Soft Serve](https://github.com/charmbracelet/soft-serve) is a tasty, self-hostable [Git](https://git-scm.com/) server for the command line.

## Configuration

To enable this service, add the following configuration to your `vars.yml` file and re-run the [installation](../installing.md) process:

```yaml
########################################################################
#                                                                      #
# soft-serve                                                           #
#                                                                      #
########################################################################

soft_serve_enabled: true

# The hostname of this system.
# It will be used for generating git clone URLs (e.g. ssh://mash.example.com/repository.git)
soft_serve_hostname: mash.example.com

# Expose Soft Serve's port. For git servers, the usual git-over-ssh port is 22
soft_serve_container_bind_port: 2222

# This key will be able to authenticate with ANY user until you configure Soft Serve
soft_serve_initial_admin_key: YOUR PUBLIC SSH KEY HERE

########################################################################
#                                                                      #
# /soft-serve                                                          #
#                                                                      #
########################################################################
```

## Usage

After you've installed Soft Serve, you can `ssh your-user@mash.example.com -p 2222` with the SSH key defined in `soft_serve_initial_admin_key` to see its [TUI](https://en.wikipedia.org/wiki/Text-based_user_interface) and follow the instructions to configure Soft Serve further.

Note that you have to [finish the configuration yourself](https://github.com/charmbracelet/soft-serve#configuration); otherwise, any user presenting the `soft_serve_initial_admin_key` key will be treated as an admin.
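Once configured, regular Git operations work over the same SSH port. For example (using the hostname and port from the configuration above; `repository` is a placeholder name):

```bash
# Clone a repository hosted on your Soft Serve instance.
git clone ssh://mash.example.com:2222/repository.git
```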
@@ -25,12 +25,7 @@ system_swap_enabled: true

A swap file will be created in `/var/swap` (configured using the `system_swap_path` variable) and enabled in your `/etc/fstab` file.

By default, the swap file will have the following size:

- on systems with `<= 2GB` of RAM, swap file size = `total RAM * 2`
- on systems with `> 2GB` of RAM, swap file size = `1GB`

To avoid these calculations and set your own size explicitly, set the `system_swap_size` variable in megabytes, example (4gb):
By default, the swap file will have a `1GB` size, but you can set the `system_swap_size` variable in megabytes, for example (4GB):

```yaml
system_swap_size: 4096
```
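After re-running the playbook, you can verify the swap configuration on the server with standard tools:

```bash
swapon --show   # lists active swap files/devices, including /var/swap
free -m         # shows total and used swap in megabytes
```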
@@ -11,10 +11,14 @@
| [Docker Registry Purger](https://github.com/devture/docker-registry-purger) | A small tool used for purging a private Docker Registry's old tags | [Link](services/docker-registry-purger.md) |
| [Focalboard](https://www.focalboard.com/) | An open source, self-hosted alternative to [Trello](https://trello.com/), [Notion](https://www.notion.so/), and [Asana](https://asana.com/). | [Link](services/focalboard.md) |
| [Firezone](https://www.firezone.dev/) | A self-hosted VPN server (based on [WireGuard](https://en.wikipedia.org/wiki/WireGuard)) with a Web UI | [Link](services/firezone.md) |
| [Gitea](https://gitea.io/) | A painless self-hosted Git service. | [Link](services/gitea.md) |
| [Gitea](https://gitea.io/) | A painless self-hosted [Git](https://git-scm.com/) service. | [Link](services/gitea.md) |
| [GoToSocial](https://gotosocial.org/) | A self-hosted [ActivityPub](https://activitypub.rocks/) social network server | [Link](services/gotosocial.md) |
| [Grafana](https://grafana.com/) | An open and composable observability and data visualization platform, often used with [Prometheus](services/prometheus.md) | [Link](services/grafana.md) |
| [Hubsite](https://github.com/moan0s/hubsite) | A simple, static site that shows an overview of the available services | [Link](services/hubsite.md) |
| [Keycloak](https://www.keycloak.org/) | An open source identity and access management solution. | [Link](services/keycloak.md) |
| [Miniflux](https://miniflux.app/) | Minimalist and opinionated feed reader. | [Link](services/miniflux.md) |
| [Navidrome](https://www.navidrome.org/) | A [Subsonic-API](http://www.subsonic.org/pages/api.jsp) compatible music server | [Link](services/navidrome.md) |
| [NetBox](https://docs.netbox.dev/en/stable/) | Web application that provides [IP address management (IPAM)](https://en.wikipedia.org/wiki/IP_address_management) and [data center infrastructure management (DCIM)](https://en.wikipedia.org/wiki/Data_center_management#Data_center_infrastructure_management) functionality | [Link](services/netbox.md) |
| [Nextcloud](https://nextcloud.com/) | The most popular self-hosted collaboration solution for tens of millions of users at thousands of organizations across the globe. | [Link](services/nextcloud.md) |
| [PeerTube](https://joinpeertube.org/) | A tool for sharing online videos | [Link](services/peertube.md) |
| [Postgres](https://www.postgresql.org) | A powerful, open source object-relational database system | [Link](services/postgres.md) |

@@ -25,6 +29,7 @@
| [Radicale](https://radicale.org/) | A Free and Open-Source CalDAV and CardDAV Server (solution for hosting contacts and calendars) | [Link](services/radicale.md) |
| [Redmine](https://redmine.org/) | A flexible project management web application. | [Link](services/redmine.md) |
| [Redis](https://redis.io/) | An in-memory data store used by millions of developers as a database, cache, streaming engine, and message broker. | [Link](services/redis.md) |
| [Soft Serve](https://github.com/charmbracelet/soft-serve) | A tasty, self-hostable [Git](https://git-scm.com/) server for the command line | [Link](services/soft-serve.md) |
| [Syncthing](https://syncthing.net/) | A continuous file synchronization program which synchronizes files between two or more computers in real time | [Link](services/syncthing.md) |
| [Traefik](https://doc.traefik.io/traefik/) | A container-aware reverse-proxy server | [Link](services/traefik.md) |
| [Vaultwarden](https://github.com/dani-garcia/vaultwarden) | A lightweight unofficial and compatible implementation of the [Bitwarden](https://bitwarden.com/) password manager | [Link](services/vaultwarden.md) |
@@ -93,14 +93,26 @@ devture_systemd_service_manager_services_list_auto: |
  +
  ([{'name': (gitea_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'gitea', 'gitea-server']}] if gitea_enabled else [])
  +
  ([{'name': (gotosocial_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'gotosocial']}] if gotosocial_enabled else [])
  +
  ([{'name': (grafana_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'grafana']}] if grafana_enabled else [])
  +
  ([{'name': (keycloak_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'keycloak']}] if keycloak_enabled else [])
  +
  ([{'name': (miniflux_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'miniflux']}] if miniflux_enabled else [])
  +
  ([{'name': (navidrome_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'navidrome']}] if navidrome_enabled else [])
  +
  ([{'name': (netbox_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'netbox', 'netbox-server']}] if netbox_enabled else [])
  +
  ([{'name': (netbox_identifier + '-worker.service'), 'priority': 2500, 'groups': ['mash', 'netbox', 'netbox-worker']}] if netbox_enabled else [])
  +
  ([{'name': (netbox_identifier + '-housekeeping.service'), 'priority': 2500, 'groups': ['mash', 'netbox', 'netbox-housekeeping']}] if netbox_enabled else [])
  +
  ([{'name': (nextcloud_identifier + '-server.service'), 'priority': 2000, 'groups': ['mash', 'nextcloud', 'nextcloud-server']}] if nextcloud_enabled else [])
  +
  ([{'name': (nextcloud_identifier + '-cron.timer'), 'priority': 2500, 'groups': ['mash', 'nextcloud', 'nextcloud-cron']}] if nextcloud_enabled else [])
  +
  ([{'name': (peertube_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'peertube']}] if peertube_enabled else [])
  +
  ([{'name': (prometheus_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'metrics', 'prometheus']}] if prometheus_enabled else [])

@@ -115,6 +127,8 @@ devture_systemd_service_manager_services_list_auto: |
  +
  ([{'name': (redis_identifier + '.service'), 'priority': 750, 'groups': ['mash', 'redis']}] if redis_enabled else [])
  +
  ([{'name': (soft_serve_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'soft-serve']}] if soft_serve_enabled else [])
  +
  ([{'name': (syncthing_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'syncthing']}] if syncthing_enabled else [])
  +
  ([{'name': (vaultwarden_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'vaultwarden', 'vaultwarden-server']}] if vaultwarden_enabled else [])
@ -176,6 +190,18 @@ devture_postgres_managed_databases_auto: |
|
|||
'password': devture_woodpecker_ci_server_database_datasource_password,
|
||||
}] if devture_woodpecker_ci_server_enabled else [])
|
||||
+
|
||||
([{
|
||||
'name': gotosocial_database_name,
|
||||
'username': gotosocial_database_username,
|
||||
'password': gotosocial_database_password,
|
||||
}] if gotosocial_enabled else [])
|
||||
+
|
||||
([{
|
||||
'name': keycloak_database_name,
|
||||
'username': keycloak_database_username,
|
||||
'password': keycloak_database_password,
|
||||
}] if keycloak_enabled and keycloak_database_type == 'postgres' and keycloak_database_hostname == devture_postgres_identifier else [])
|
||||
+
|
||||
([{
|
||||
'name': miniflux_database_name,
|
||||
'username': miniflux_database_username,
|
||||
|
@@ -188,6 +214,12 @@ devture_postgres_managed_databases_auto: |
      'password': redmine_database_password,
    }] if redmine_enabled else [])
    +
    ([{
      'name': netbox_database_name,
      'username': netbox_database_username,
      'password': netbox_database_password,
    }] if netbox_enabled else [])
    +
    ([{
      'name': nextcloud_database_name,
      'username': nextcloud_database_username,
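Every managed database is described by the same three-key dict. A sketch of the shape for a hypothetical `myapp` service, including the deterministic password derivation used throughout this file (`myapp_*` is illustrative):

```yaml
# Sketch only: myapp_* are hypothetical variables mirroring the real entries.
devture_postgres_managed_databases_auto: |
  {{
    ([{
      'name': myapp_database_name,
      'username': myapp_database_username,
      'password': myapp_database_password,
    }] if myapp_enabled else [])
  }}

# Passwords are derived from the playbook-wide secret with a per-service
# salt ('db.myapp' here), so they stay stable across runs without being stored.
myapp_database_password: "{{ '%s' | format(mash_playbook_generic_secret_key) | password_hash('sha512', 'db.myapp', rounds=655555) | to_uuid }}"
```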
@@ -670,6 +702,50 @@ grafana_container_labels_traefik_tls_certResolver: "{{ devture_traefik_certResol


########################################################################
#                                                                      #
# keycloak                                                             #
#                                                                      #
########################################################################

keycloak_enabled: false

keycloak_identifier: "{{ mash_playbook_service_identifier_prefix }}keycloak"

keycloak_uid: "{{ mash_playbook_uid }}"
keycloak_gid: "{{ mash_playbook_gid }}"

keycloak_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}keycloak"

keycloak_systemd_required_systemd_services_list_auto: |
  {{
    ([devture_postgres_identifier ~ '.service'] if devture_postgres_enabled and keycloak_database_hostname == devture_postgres_identifier else [])
  }}

keycloak_container_additional_networks_auto: |
  {{
    ([mash_playbook_reverse_proxyable_services_additional_network] if mash_playbook_reverse_proxyable_services_additional_network else [])
    +
    ([devture_postgres_container_network] if devture_postgres_enabled and keycloak_database_hostname == devture_postgres_identifier and keycloak_container_network != devture_postgres_container_network else [])
  }}

keycloak_container_labels_traefik_enabled: "{{ mash_playbook_traefik_labels_enabled }}"
keycloak_container_labels_traefik_docker_network: "{{ mash_playbook_reverse_proxyable_services_additional_network }}"
keycloak_container_labels_traefik_entrypoints: "{{ devture_traefik_entrypoint_primary }}"
keycloak_container_labels_traefik_tls_certResolver: "{{ devture_traefik_certResolver_primary }}"

keycloak_database_hostname: "{{ devture_postgres_identifier if devture_postgres_enabled else '' }}"
keycloak_database_port: "{{ '5432' if devture_postgres_enabled else '' }}"
keycloak_database_password: "{{ '%s' | format(mash_playbook_generic_secret_key) | password_hash('sha512', 'db.keycloak', rounds=655555) | to_uuid }}"

########################################################################
#                                                                      #
# /keycloak                                                            #
#                                                                      #
########################################################################

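The defaults above leave Keycloak disabled. A minimal sketch of what a host's `vars.yml` might add to turn it on (`keycloak_hostname` is an assumption here — verify variable names against the keycloak role's documentation):

```yaml
# Assumed minimal enablement; only keycloak_enabled is confirmed by this diff.
keycloak_enabled: true
keycloak_hostname: keycloak.example.com
```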
########################################################################
#                                                                      #
# miniflux                                                             #
@@ -709,7 +785,40 @@ miniflux_database_password: "{{ '%s' | format(mash_playbook_generic_secret_key)

########################################################################
#                                                                      #
# /miniflux                                                            #
#                                                                      #
########################################################################


########################################################################
#                                                                      #
# navidrome                                                            #
#                                                                      #
########################################################################

navidrome_enabled: false

navidrome_identifier: "{{ mash_playbook_service_identifier_prefix }}navidrome"

navidrome_uid: "{{ mash_playbook_uid }}"
navidrome_gid: "{{ mash_playbook_gid }}"

navidrome_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}navidrome"

navidrome_container_additional_networks_auto: |
  {{
    ([mash_playbook_reverse_proxyable_services_additional_network] if mash_playbook_reverse_proxyable_services_additional_network else [])
  }}

navidrome_container_labels_traefik_enabled: "{{ mash_playbook_traefik_labels_enabled }}"
navidrome_container_labels_traefik_docker_network: "{{ mash_playbook_reverse_proxyable_services_additional_network }}"
navidrome_container_labels_traefik_entrypoints: "{{ devture_traefik_entrypoint_primary }}"
navidrome_container_labels_traefik_tls_certResolver: "{{ devture_traefik_certResolver_primary }}"

########################################################################
#                                                                      #
# /navidrome                                                           #
#                                                                      #
########################################################################

@@ -732,8 +841,6 @@ nextcloud_gid: "{{ mash_playbook_gid }}"

nextcloud_systemd_required_services_list_auto: |
  {{
    (['docker.service'])
    +
    ([devture_postgres_identifier ~ '.service'] if devture_postgres_enabled and nextcloud_database_hostname == devture_postgres_identifier else [])
  }}
|
@ -764,6 +871,52 @@ nextcloud_database_password: "{{ '%s' | format(mash_playbook_generic_secret_key)
|
|||
|
||||
|
||||
|
||||
########################################################################
|
||||
# #
|
||||
# netbox #
|
||||
# #
|
||||
########################################################################
|
||||
|
||||
netbox_enabled: false
|
||||
|
||||
netbox_identifier: "{{ mash_playbook_service_identifier_prefix }}netbox"
|
||||
|
||||
netbox_uid: "{{ mash_playbook_uid }}"
|
||||
netbox_gid: "{{ mash_playbook_gid }}"
|
||||
|
||||
netbox_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}netbox"
|
||||
|
||||
netbox_systemd_required_services_list_auto: |
|
||||
{{
|
||||
([devture_postgres_identifier ~ '.service'] if devture_postgres_enabled and nextcloud_database_hostname == devture_postgres_identifier else [])
|
||||
}}
|
||||
|
||||
netbox_container_additional_networks_auto: |
|
||||
{{
|
||||
(
|
||||
([mash_playbook_reverse_proxyable_services_additional_network] if mash_playbook_reverse_proxyable_services_additional_network else [])
|
||||
+
|
||||
([devture_postgres_container_network] if devture_postgres_enabled and netbox_database_hostname == devture_postgres_identifier and netbox_container_network != devture_postgres_container_network else [])
|
||||
) | unique
|
||||
}}
|
||||
|
||||
netbox_container_labels_traefik_enabled: "{{ mash_playbook_traefik_labels_enabled }}"
|
||||
netbox_container_labels_traefik_docker_network: "{{ mash_playbook_reverse_proxyable_services_additional_network }}"
|
||||
netbox_container_labels_traefik_entrypoints: "{{ devture_traefik_entrypoint_primary }}"
|
||||
netbox_container_labels_traefik_tls_certResolver: "{{ devture_traefik_certResolver_primary }}"
|
||||
|
||||
netbox_database_hostname: "{{ devture_postgres_identifier if devture_postgres_enabled else '' }}"
|
||||
netbox_database_port: "{{ '5432' if devture_postgres_enabled else '' }}"
|
||||
netbox_database_password: "{{ '%s' | format(mash_playbook_generic_secret_key) | password_hash('sha512', 'db.netbox', rounds=655555) | to_uuid }}"
|
||||
|
||||
########################################################################
|
||||
# #
|
||||
# /netbox #
|
||||
# #
|
||||
########################################################################
|
||||
|
||||
|
||||
|
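Enabling NetBox schedules all three of its systemd units from the services list earlier in this file (server, worker, housekeeping). A minimal sketch for a host's `vars.yml` (`netbox_hostname` is an assumption — consult the netbox role's docs for required settings such as secrets):

```yaml
# Assumed minimal enablement; only netbox_enabled is confirmed by this diff.
netbox_enabled: true
netbox_hostname: netbox.example.com
```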
########################################################################
#                                                                      #
# peertube                                                             #
@@ -779,14 +932,12 @@ peertube_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base
peertube_uid: "{{ mash_playbook_uid }}"
peertube_gid: "{{ mash_playbook_gid }}"

peertube_container_additional_networks: |
peertube_container_additional_networks_auto: |
  {{
    (
      ([mash_playbook_reverse_proxyable_services_additional_network] if mash_playbook_reverse_proxyable_services_additional_network else [])
      +
      ([devture_postgres_container_network] if devture_postgres_enabled and peertube_config_database_hostname == devture_postgres_identifier and peertube_container_network != devture_postgres_container_network else [])
      +
      ([redis_container_network] if peertube_config_redis_hostname == redis_identifier else [])
    ) | unique
  }}

@@ -800,15 +951,9 @@ peertube_config_database_port: "{{ '5432' if devture_postgres_enabled else '' }}
peertube_config_database_username: peertube
peertube_config_database_password: "{{ '%s' | format(mash_playbook_generic_secret_key) | password_hash('sha512', 'db.peertube', rounds=655555) | to_uuid }}"

peertube_config_redis_hostname: "{{ redis_identifier if redis_enabled else '' }}"

peertube_systemd_required_services_list: |
peertube_systemd_required_services_list_auto: |
  {{
    (['docker.service'])
    +
    ([devture_postgres_identifier ~ '.service'] if devture_postgres_enabled and peertube_config_database_hostname == devture_postgres_identifier else [])
    +
    ([redis_identifier ~ '.service'] if redis_enabled and peertube_config_redis_hostname == redis_identifier else [])
  }}
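The automatic Redis wiring removed in this hunk can be re-created explicitly when running a Redis instance dedicated to PeerTube. A hedged sketch (the `mash-peertube-redis` identifier is an assumption; the supported recipe lives in docs/services/peertube.md):

```yaml
# Assumed identifier for a dedicated instance; see docs/services/peertube.md.
peertube_config_redis_hostname: mash-peertube-redis
```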

########################################################################
@@ -1033,6 +1178,30 @@ redis_gid: "{{ mash_playbook_gid }}"


########################################################################
#                                                                      #
# soft-serve                                                           #
#                                                                      #
########################################################################

soft_serve_enabled: false

soft_serve_identifier: "{{ mash_playbook_service_identifier_prefix }}soft-serve"

soft_serve_uid: "{{ mash_playbook_uid }}"
soft_serve_gid: "{{ mash_playbook_gid }}"

soft_serve_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}soft-serve"

########################################################################
#                                                                      #
# /soft-serve                                                          #
#                                                                      #
########################################################################

########################################################################
#                                                                      #
# syncthing                                                            #
@@ -1269,6 +1438,30 @@ hubsite_container_labels_traefik_tls_certResolver: "{{ devture_traefik_certResol
# Services
##########

# Adguard home
hubsite_service_adguard_home_enabled: "{{ adguard_home_enabled }}"
hubsite_service_adguard_home_name: Adguard Home
hubsite_service_adguard_home_url: "https://{{ adguard_home_hostname }}{{ adguard_home_path_prefix }}"
hubsite_service_adguard_home_logo_location: "{{ role_path }}/assets/shield.png"
hubsite_service_adguard_home_description: "A network-wide DNS software for blocking ads & tracking"
hubsite_service_adguard_home_priority: 1000

# Docker Registry Browser
hubsite_service_docker_registry_browser_enabled: "{{ docker_registry_browser_enabled }}"
hubsite_service_docker_registry_browser_name: Docker Registry Browser
hubsite_service_docker_registry_browser_url: "https://{{ docker_registry_browser_hostname }}{{ docker_registry_browser_path_prefix }}"
hubsite_service_docker_registry_browser_logo_location: "{{ role_path }}/assets/docker.png"
hubsite_service_docker_registry_browser_description: "Browse docker images"
hubsite_service_docker_registry_browser_priority: 1000

# Focalboard
hubsite_service_focalboard_enabled: "{{ focalboard_enabled }}"
hubsite_service_focalboard_name: Focalboard
hubsite_service_focalboard_url: "https://{{ focalboard_hostname }}{{ focalboard_path_prefix }}"
hubsite_service_focalboard_logo_location: "{{ role_path }}/assets/focalboard.png"
hubsite_service_focalboard_description: "An open source, self-hosted alternative to Trello, Notion, and Asana."
hubsite_service_focalboard_priority: 1000

# Gitea
hubsite_service_gitea_enabled: "{{ gitea_enabled }}"
hubsite_service_gitea_name: Gitea
@@ -1277,6 +1470,22 @@ hubsite_service_gitea_logo_location: "{{ role_path }}/assets/gitea.png"
hubsite_service_gitea_description: "A git service"
hubsite_service_gitea_priority: 1000

# GoToSocial
hubsite_service_gotosocial_enabled: "{{ gotosocial_enabled }}"
hubsite_service_gotosocial_name: GoToSocial
hubsite_service_gotosocial_url: "https://{{ gotosocial_hostname }}"
hubsite_service_gotosocial_logo_location: "{{ role_path }}/assets/gotosocial.png"
hubsite_service_gotosocial_description: "A fediverse server"
hubsite_service_gotosocial_priority: 1000

# Grafana
hubsite_service_grafana_enabled: "{{ grafana_enabled }}"
hubsite_service_grafana_name: Grafana
hubsite_service_grafana_url: "https://{{ grafana_hostname }}{{ grafana_path_prefix }}"
hubsite_service_grafana_logo_location: "{{ role_path }}/assets/grafana.png"
hubsite_service_grafana_description: "Check how your server is doing"
hubsite_service_grafana_priority: 1000

# Miniflux
hubsite_service_miniflux_enabled: "{{ miniflux_enabled }}"
hubsite_service_miniflux_name: Miniflux
@@ -1301,6 +1510,22 @@ hubsite_service_peertube_logo_location: "{{ role_path }}/assets/peertube.png"
hubsite_service_peertube_description: "Watch and upload videos"
hubsite_service_peertube_priority: 1000

# Radicale
hubsite_service_radicale_enabled: "{{ radicale_enabled }}"
hubsite_service_radicale_name: Radicale
hubsite_service_radicale_url: "https://{{ radicale_hostname }}{{ radicale_path_prefix }}"
hubsite_service_radicale_logo_location: "{{ role_path }}/assets/radicale.png"
hubsite_service_radicale_description: "Sync contacts and calendars"
hubsite_service_radicale_priority: 1000

# Syncthing
hubsite_service_syncthing_enabled: "{{ syncthing_enabled }}"
hubsite_service_syncthing_name: Syncthing
hubsite_service_syncthing_url: "https://{{ syncthing_hostname }}{{ syncthing_path_prefix }}"
hubsite_service_syncthing_logo_location: "{{ role_path }}/assets/syncthing.png"
hubsite_service_syncthing_description: "Sync your files"
hubsite_service_syncthing_priority: 1000

# Uptime Kuma
hubsite_service_uptime_kuma_enabled: "{{ uptime_kuma_enabled }}"
hubsite_service_uptime_kuma_name: Uptime Kuma
@@ -1318,19 +1543,41 @@ hubsite_service_vaultwarden_logo_location: "{{ role_path }}/assets/vaultwarden.p
hubsite_service_vaultwarden_description: "Securely access your passwords"
hubsite_service_vaultwarden_priority: 1000

# Woodpecker CI
hubsite_service_woodpecker_ci_enabled: "{{ devture_woodpecker_ci_server_enabled }}"
hubsite_service_woodpecker_ci_name: Woodpecker CI
hubsite_service_woodpecker_ci_url: "https://{{ devture_woodpecker_ci_server_hostname }}"
hubsite_service_woodpecker_ci_logo_location: "{{ role_path }}/assets/woodpecker.png"
hubsite_service_woodpecker_ci_description: "Check your CI"
hubsite_service_woodpecker_ci_priority: 1000

hubsite_service_list_auto: |
  {{
    ([{'name': hubsite_service_adguard_home_name, 'url': hubsite_service_adguard_home_url, 'logo_location': hubsite_service_adguard_home_logo_location, 'description': hubsite_service_adguard_home_description, 'priority': hubsite_service_adguard_home_priority}] if hubsite_service_adguard_home_enabled else [])
    +
    ([{'name': hubsite_service_focalboard_name, 'url': hubsite_service_focalboard_url, 'logo_location': hubsite_service_focalboard_logo_location, 'description': hubsite_service_focalboard_description, 'priority': hubsite_service_focalboard_priority}] if hubsite_service_focalboard_enabled else [])
    +
    ([{'name': hubsite_service_gitea_name, 'url': hubsite_service_gitea_url, 'logo_location': hubsite_service_gitea_logo_location, 'description': hubsite_service_gitea_description, 'priority': hubsite_service_gitea_priority}] if hubsite_service_gitea_enabled else [])
    +
    ([{'name': hubsite_service_gotosocial_name, 'url': hubsite_service_gotosocial_url, 'logo_location': hubsite_service_gotosocial_logo_location, 'description': hubsite_service_gotosocial_description, 'priority': hubsite_service_gotosocial_priority}] if hubsite_service_gotosocial_enabled else [])
    +
    ([{'name': hubsite_service_grafana_name, 'url': hubsite_service_grafana_url, 'logo_location': hubsite_service_grafana_logo_location, 'description': hubsite_service_grafana_description, 'priority': hubsite_service_grafana_priority}] if hubsite_service_grafana_enabled else [])
    +
    ([{'name': hubsite_service_miniflux_name, 'url': hubsite_service_miniflux_url, 'logo_location': hubsite_service_miniflux_logo_location, 'description': hubsite_service_miniflux_description, 'priority': hubsite_service_miniflux_priority}] if hubsite_service_miniflux_enabled else [])
    +
    ([{'name': hubsite_service_nextcloud_name, 'url': hubsite_service_nextcloud_url, 'logo_location': hubsite_service_nextcloud_logo_location, 'description': hubsite_service_nextcloud_description, 'priority': hubsite_service_nextcloud_priority}] if hubsite_service_nextcloud_enabled else [])
    +
    ([{'name': hubsite_service_peertube_name, 'url': hubsite_service_peertube_url, 'logo_location': hubsite_service_peertube_logo_location, 'description': hubsite_service_peertube_description, 'priority': hubsite_service_peertube_priority}] if hubsite_service_peertube_enabled else [])
    +
    ([{'name': hubsite_service_radicale_name, 'url': hubsite_service_radicale_url, 'logo_location': hubsite_service_radicale_logo_location, 'description': hubsite_service_radicale_description, 'priority': hubsite_service_radicale_priority}] if hubsite_service_radicale_enabled else [])
    +
    ([{'name': hubsite_service_uptime_kuma_name, 'url': hubsite_service_uptime_kuma_url, 'logo_location': hubsite_service_uptime_kuma_logo_location, 'description': hubsite_service_uptime_kuma_description, 'priority': hubsite_service_uptime_kuma_priority}] if hubsite_service_uptime_kuma_enabled else [])
    +
    ([{'name': hubsite_service_syncthing_name, 'url': hubsite_service_syncthing_url, 'logo_location': hubsite_service_syncthing_logo_location, 'description': hubsite_service_syncthing_description, 'priority': hubsite_service_syncthing_priority}] if hubsite_service_syncthing_enabled else [])
    +
    ([{'name': hubsite_service_vaultwarden_name, 'url': hubsite_service_vaultwarden_url, 'logo_location': hubsite_service_vaultwarden_logo_location, 'description': hubsite_service_vaultwarden_description, 'priority': hubsite_service_vaultwarden_priority}] if hubsite_service_vaultwarden_enabled else [])
    +
    ([{'name': hubsite_service_woodpecker_ci_name, 'url': hubsite_service_woodpecker_ci_url, 'logo_location': hubsite_service_woodpecker_ci_logo_location, 'description': hubsite_service_woodpecker_ci_description, 'priority': hubsite_service_woodpecker_ci_priority}] if hubsite_service_woodpecker_ci_enabled else [])
  }}

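Each enabled service contributes one dict with the same five keys to `hubsite_service_list_auto`. Rendered out, the Grafana term above evaluates to roughly this (illustration only, assembled from the variables defined earlier in this file):

```yaml
# What the Grafana term contributes when grafana_enabled is true.
- name: Grafana
  url: "https://{{ grafana_hostname }}{{ grafana_path_prefix }}"
  logo_location: "{{ role_path }}/assets/grafana.png"
  description: "Check how your server is doing"
  priority: 1000
```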

########################################################################
@@ -1357,7 +1604,6 @@ firezone_generic_secret: "{{ mash_playbook_generic_secret_key }}"

firezone_database_host: "{{ devture_postgres_identifier if devture_postgres_enabled else '' }}"
firezone_database_port: "{{ '5432' if devture_postgres_enabled else '' }}"
firezone_database_name: "{{ firezone_identifier }}"
firezone_database_password: "{{ '%s' | format(mash_playbook_generic_secret_key) | password_hash('sha512', 'fz.db.user', rounds=655555) | to_uuid }}"
firezone_database_user: "{{ firezone_identifier }}"
@@ -1385,3 +1631,49 @@ firezone_container_labels_traefik_tls_certResolver: "{{ devture_traefik_certReso
# /firezone                                                            #
#                                                                      #
########################################################################


########################################################################
#                                                                      #
# gotosocial                                                           #
#                                                                      #
########################################################################

gotosocial_enabled: false

gotosocial_identifier: "{{ mash_playbook_service_identifier_prefix }}gotosocial"

gotosocial_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}gotosocial"

gotosocial_uid: "{{ mash_playbook_uid }}"
gotosocial_gid: "{{ mash_playbook_gid }}"

gotosocial_database_host: "{{ devture_postgres_identifier if devture_postgres_enabled else '' }}"
gotosocial_database_port: "{{ '5432' if devture_postgres_enabled else '' }}"
gotosocial_database_password: "{{ '%s' | format(mash_playbook_generic_secret_key) | password_hash('sha512', 'db.gotosocial', rounds=655555) | to_uuid }}"
gotosocial_database_username: "{{ gotosocial_identifier }}"

gotosocial_systemd_required_services_list: |
  {{
    (['docker.service'])
    +
    ([devture_postgres_identifier ~ '.service'] if devture_postgres_enabled and gotosocial_database_host == devture_postgres_identifier else [])
  }}

gotosocial_container_additional_networks: |
  {{
    ([mash_playbook_reverse_proxyable_services_additional_network] if mash_playbook_reverse_proxyable_services_additional_network else [])
    +
    ([devture_postgres_container_network] if devture_postgres_enabled and gotosocial_database_host == devture_postgres_identifier and gotosocial_container_network != devture_postgres_container_network else [])
  }}

gotosocial_container_labels_traefik_enabled: "{{ mash_playbook_traefik_labels_enabled }}"
gotosocial_container_labels_traefik_docker_network: "{{ mash_playbook_reverse_proxyable_services_additional_network }}"
gotosocial_container_labels_traefik_entrypoints: "{{ devture_traefik_entrypoint_primary }}"
gotosocial_container_labels_traefik_tls_certResolver: "{{ devture_traefik_certResolver_primary }}"

########################################################################
#                                                                      #
# /gotosocial                                                          #
#                                                                      #
########################################################################

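GoToSocial ships disabled like the other new sections. A minimal sketch for a host's `vars.yml` (`gotosocial_hostname` is the variable referenced by the hubsite entry earlier in this file; any further required settings should be taken from the role's docs):

```yaml
gotosocial_enabled: true
# The hostname feeds the hubsite URL and the Traefik labels defined above.
gotosocial_hostname: social.example.com
```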
justfile | 30
@@ -1,44 +1,56 @@
# Shows help
default:
    @just --list --justfile {{ justfile() }}

# Pulls external Ansible roles
roles:
    rm -rf roles/galaxy
    ansible-galaxy install -r requirements.yml -p roles/galaxy/ --force
    #!/usr/bin/env sh
    if [ -x "$(command -v agru)" ]; then
        agru
    else
        rm -rf roles/galaxy
        ansible-galaxy install -r requirements.yml -p roles/galaxy/ --force
    fi

# Updates requirements.yml if there are any new tags available. Requires agru
update:
    @agru -u

# Runs ansible-lint against all roles in the playbook
lint:
    ansible-lint

# Runs the playbook with --tags=install-all,start and optional arguments
install-all *extra_args: (run-tags "install-all,start" extra_args)

# Runs installation tasks for a single service
install-service service *extra_args:
    just --justfile {{ justfile() }} run --tags=install-{{ service }},start-group --extra-vars=group={{ service }} {{ extra_args }}
    just --justfile {{ justfile() }} run \
    --tags=install-{{ service }},start-group \
    --extra-vars=group={{ service }} \
    --extra-vars=devture_systemd_service_manager_service_restart_mode=one-by-one {{ extra_args }}

# Runs the playbook with --tags=setup-all,start and optional arguments
setup-all *extra_args: (run-tags "setup-all,start" extra_args)

# Runs the playbook with the given list of arguments
run +extra_args:
    time ansible-playbook -i inventory/hosts setup.yml {{ extra_args }}

# Runs the playbook with the given list of comma-separated tags and optional arguments
run-tags tags *extra_args:
    just --justfile {{ justfile() }} run --tags={{ tags }} {{ extra_args }}

# Starts all services
start-all *extra_args: (run-tags "start-all" extra_args)

# Starts a specific service group
start-group group *extra_args:
    @just --justfile {{ justfile() }} run-tags start-group --extra-vars="group={{ group }}" {{ extra_args }}

# Stops all services
stop-all *extra_args: (run-tags "stop-all" extra_args)

# Stops a specific service group
stop-group group *extra_args:
    @just --justfile {{ justfile() }} run-tags stop-group --extra-vars="group={{ group }}" {{ extra_args }}
requirements.yml | 101
@@ -1,137 +1,110 @@
---

- src: git+https://github.com/geerlingguy/ansible-role-docker
  version: 6.1.0
  name: geerlingguy.docker

- src: git+https://gitlab.com/etke.cc/roles/swap
  version: 843a0222b76a5ec361b35f31bf4dc872b6d7d54e
- src: git+https://gitlab.com/etke.cc/roles/swap.git
  version: abfb18b6862108bbf24347500446203170324d7f

- src: git+https://gitlab.com/etke.cc/roles/ssh
- src: git+https://gitlab.com/etke.cc/roles/ssh.git
  version: 237adf859f9270db8a60e720bc4a58164806644e

- src: git+https://gitlab.com/etke.cc/roles/fail2ban
- src: git+https://gitlab.com/etke.cc/roles/fail2ban.git
  version: 09886730e8d3c061f22d1da4a542899063f97f0a

- src: git+https://github.com/devture/com.devture.ansible.role.docker_sdk_for_python.git
  version: 129c8590e106b83e6f4c259649a613c6279e937a

- src: git+https://github.com/devture/com.devture.ansible.role.playbook_help.git
  version: c1f40e82b4d6b072b6f0e885239322bdaaaf554f

- src: git+https://github.com/devture/com.devture.ansible.role.systemd_docker_base.git
  version: 327d2e17f5189ac2480d6012f58cf64a2b46efba

- src: git+https://github.com/devture/com.devture.ansible.role.timesync.git
  version: 3d5bb2976815958cdce3f368fa34fb51554f899b

- src: git+https://github.com/devture/com.devture.ansible.role.playbook_state_preserver.git
  version: ff2fd42e1c1a9e28e3312bbd725395f9c2fc7f16

- src: git+https://github.com/devture/com.devture.ansible.role.postgres.git
  version: 38764398bf82b06a1736c3bfedc71dfd229e4b52

- src: git+https://github.com/devture/com.devture.ansible.role.postgres_backup.git
  version: 8e9ec48a09284c84704d7a2dce17da35f181574d

- src: git+https://github.com/devture/com.devture.ansible.role.container_socket_proxy.git
  version: v0.1.1-1

- src: git+https://github.com/devture/com.devture.ansible.role.traefik.git
  version: v2.9.9-0

- src: git+https://github.com/devture/com.devture.ansible.role.systemd_service_manager.git
  version: 6ccb88ac5fc27e1e70afcd48278ade4b564a9096
  version: v1.0.0-0

- src: git+https://github.com/devture/com.devture.ansible.role.playbook_runtime_messages.git
  version: 9b4b088c62b528b73a9a7c93d3109b091dd42ec6

- src: git+https://github.com/devture/com.devture.ansible.role.woodpecker_ci_server.git
  version: v0.15.7-2

- src: git+https://github.com/devture/com.devture.ansible.role.woodpecker_ci_agent.git
  version: v0.15.7-1

- src: git+https://gitlab.com/etke.cc/roles/miniflux.git
  version: v2.0.43-2

- src: git+https://gitlab.com/etke.cc/roles/grafana.git
  version: v9.4.7-0
  version: v9.4.7-1

- src: git+https://gitlab.com/etke.cc/roles/radicale.git
  version: v3.1.8.1-2

- src: git+https://gitlab.com/etke.cc/roles/uptime_kuma.git
  version: v1.21.0-0
  version: v1.21.1-0

- src: git+https://gitlab.com/etke.cc/roles/redis.git
  version: v7.0.10-0

- src: git+https://gitlab.com/etke.cc/roles/prometheus_node_exporter.git
  version: v1.5.0-7

- src: git+https://gitlab.com/etke.cc/roles/prometheus_blackbox_exporter.git
  version: v0.23.0-3

- src: git+https://gitlab.com/etke.cc/roles/redmine.git
  version: v5.0.5-1

- src: git+https://gitlab.com/etke.cc/roles/soft_serve.git
  version: v0.4.7-0

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-adguard-home.git
  version: v0.107.26-0
  name: adguard_home

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-aux.git
  version: v1.0.0-0
  name: aux

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-collabora-online.git
  version: v22.05.12.1.1-0
  name: collabora_online

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-docker-registry.git
  version: v2.8.1-1
  name: docker_registry

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-docker-registry-browser.git
  version: v1.6.0-0
  name: docker_registry_browser

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-docker-registry-purger.git
  version: v1.0.0-0
  name: docker_registry_purger

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-focalboard.git
  version: v7.8.0-0
  name: focalboard

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-gitea.git
  version: v1.19.0-0
  name: gitea

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-keycloak.git
  version: v21.0.1-1
  name: keycloak

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-navidrome.git
  version: v0.49.3-1
  name: navidrome

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-netbox.git
  version: v3.4.6-2.5.1-0
  name: netbox

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-nextcloud.git
  version: v26.0.0-0
  version: v26.0.0-1
  name: nextcloud

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-peertube.git
  version: v5.1.0-0
  version: v5.1.0-2
  name: peertube

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-prometheus.git
  version: v2.43.0-0
  name: prometheus

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-syncthing.git
  version: v1.23.2-0
  version: v1.23.2-1
  name: syncthing

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-vaultwarden.git
  version: v1.27.0-2
  version: v1.28.0-0
  name: vaultwarden

- src: git+https://github.com/moan0s/hubsite.git
  version: 6b20c472d36ce5765dc44675d42cce74cbcbd0fe
  version: da6fed398a9dd0761db941cb903b53277c341cc6
  name: hubsite

- src: git+https://github.com/moan0s/role-firezone.git
  version: 3a2a1e4c6b484b643a847941937a80d0efd86d6c
  version: ac8564d5e11a75107ba93aec6427b83be824c30a
  name: firezone

- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-gotosocial.git
  version: d608eb330af28b75d3e4881b2e8c09af64d078f1
  name: gotosocial
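Role entries all follow the same three-key shape, which both `ansible-galaxy` and `agru` (via `just roles` / `just update`) consume. A sketch of pinning an additional role (the URL and name below are hypothetical):

```yaml
# Hypothetical entry; pin either a release tag or a commit hash in `version`.
- src: git+https://github.com/example/ansible-role-myapp.git
  version: v1.0.0-0
  name: myapp
```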
setup.yml | 10
@@ -68,12 +68,20 @@
    - role: galaxy/gitea

    - role: galaxy/gotosocial

    - role: galaxy/grafana

    - role: galaxy/keycloak

    - role: galaxy/miniflux

    - role: galaxy/hubsite

    - role: galaxy/navidrome

    - role: galaxy/netbox

    - role: galaxy/nextcloud

    - role: galaxy/peertube

@@ -88,6 +96,8 @@
    - role: galaxy/redis

    - role: galaxy/soft_serve

    - role: galaxy/syncthing

    - role: galaxy/vaultwarden