Self Host Blocks

Building blocks for self-hosting with batteries included.


The goal of SHB (Self Host Blocks) is to lower the barrier to entry for self-hosting. SHB provides opinionated building blocks that fit together to self-host any service you'd want. Some common services are provided out of the box.

To achieve this, SHB uses the full power of NixOS modules. Indeed, each building block and each service is a NixOS module and uses the modules defined in Nixpkgs.

Each building block defines a part of what a self-hosted app should provide, for example HTTPS access through a subdomain or Single Sign-On. The goal of SHB is to make sure those blocks all fit together, whichever actual implementation you choose. For example, the subdomain access could be provided by Caddy or Nginx. This is achieved by defining an explicit contract for each block and validating that contract with NixOS VM integration tests.

One important goal of SHB is to be the smallest possible layer of code on top of what is available in nixpkgs: the minimum necessary to make the packages available there conform with the contracts. This way, there is less chance of breakage when nixpkgs gets updated.

SHB provides out-of-the-box implementations of those blocks.

SHB also provides services that integrate with those blocks out of the box. Progress is detailed in the Supported Features section.

Caution: Although I am using everything in this repo for my personal production server, this is really just a one-person effort for now and there are most certainly bugs that I haven't discovered yet.


Supported Features

Currently supported services and features are:

  • Authelia as SSO provider.
    • Export metrics to Prometheus.
  • LDAP server through lldap, which provides a nice Web UI.
    • Administrative UI only accessible from local network.
  • Backup with Restic or BorgBackup
    • UI for backups.
    • Export metrics to Prometheus.
    • Alert when backups fail or are not done on time.
  • Reverse Proxy with Nginx.
    • Export metrics to Prometheus.
    • Log slow requests.
    • SSL support.
    • Backup support.
  • Monitoring through Prometheus and Grafana.
    • Export systemd services status.
    • Provide out of the box dashboards and alerts for common tasks.
    • LDAP auth.
    • SSO auth.
  • Vaultwarden
    • UI only accessible for vaultwarden_user LDAP group.
    • /admin only accessible for vaultwarden_admin LDAP group.
    • [WIP] True SSO support, see dani-garcia/vaultwarden/issues/246. For now, Authelia protects access to the UI but you still need to log in to Vaultwarden afterwards, so two logins are required.
  • Nextcloud
    • LDAP auth, which unfortunately needs to be configured manually for now.
      • Declarative setup.
    • SSO auth.
      • Declarative setup.
    • Backup support.
    • Optional tracing debug.
    • Export traces to Prometheus.
    • Export metrics to Prometheus.
  • Home Assistant.
    • Export metrics to Prometheus.
    • LDAP auth through homeassistant_user LDAP group.
    • SSO auth.
    • Backup support.
  • Jellyfin
    • Export metrics to Prometheus.
    • LDAP auth through jellyfin_user and jellyfin_admin LDAP groups.
    • SSO auth.
    • Backup support.
  • Hledger
    • Export metrics to Prometheus.
    • LDAP auth through hledger_user LDAP group.
    • SSO auth.
    • Backup support.
  • Database Postgres
    • Slow log monitoring.
    • Export metrics to Prometheus.
  • VPN tunnel
  • Arr suite
    • SSO auth (one account for all users).
    • VPN support.
  • Mount webdav folders
  • Gitea to deploy
  • Scrutiny to monitor hard drive health
    • Export metrics to Prometheus.
  • QoL
    • Unit tests for modules.
    • Running in CI.
    • Integration tests with real nodes.
    • Self published documentation for options.
    • Examples for all building blocks.

Usage

The following snippet shows how to deploy to a machine (here machine2) using Colmena:

{
  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";

    selfhostblocks.url = "github:ibizaman/selfhostblocks";
  };

  outputs = { self, nixpkgs, selfhostblocks }:
    let
      system = "x86_64-linux";
    in {
      colmena = {
        meta = {
          nixpkgs = import nixpkgs { inherit system; };
          nodeNixpkgs = {
            # Deploy machine2 with the nixpkgs version pinned by selfhostblocks.
            machine2 = import selfhostblocks.inputs.nixpkgs { inherit system; };
          };
        };

        machine1 = ...;

        machine2 = { ... }: {
          imports = [
            selfhostblocks.nixosModules.${system}.default
          ];
        };
      };
    };
}
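
With a flake like the one above, deployment goes through Colmena, for example (the same command appears in the Tips section below):

$ nix run nixpkgs#colmena -- apply

Colmena also accepts a node selector, for example --on machine2, to restrict the deployment to a single host.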

More information is provided in the manual (see below).

Manual

The (WIP) complete manual can be found at shb.skarabox.com. The information in this README will be slowly moved over there.

For now, the best way to understand how to use these modules is to read their code and see how they are used in the provided services and in the demos. Below are also a few examples taken from my personal usage of selfhostblocks.

Add SSL configuration

This is pretty much a prerequisite for all services.

shb.ssl = {
  enable = true;
  domain = "example.com";
  adminEmail = "me@example.com";
  sopsFile = ./secrets/linode.yaml;
  dnsProvider = "linode";
};

The configuration above assumes you own the example.com domain and the DNS is managed by Linode.

The sops file must be in the following format:

acme: |-
    LINODE_HTTP_TIMEOUT=10
    LINODE_POLLING_INTERVAL=10
    LINODE_PROPAGATION_TIMEOUT=240
    LINODE_TOKEN=XYZ...    
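
If you manage secrets with sops, one way to create or edit this file is with the sops CLI. This is a sketch, assuming a .sops.yaml at the repository root already lists your age or PGP keys:

$ nix run nixpkgs#sops -- secrets/linode.yaml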

For now, Linode is the only supported DNS provider as it's the one I'm using. I intend to make this module more generic so you can easily use another provider that selfhostblocks does not support yet. You can also skip setting the shb.ssl options entirely and roll your own; feel free to look at ssl.nix for inspiration.

Add LDAP and Authelia services

These too are prerequisites for other services. Not all services support LDAP and SSO just yet, but I'm working on that.

shb.ldap = {
  enable = true;
  domain = "example.com";
  subdomain = "ldap";
  ldapPort = 3890;
  httpPort = 17170;
  dcdomain = "dc=example,dc=com";
  sopsFile = ./secrets/ldap.yaml;
  localNetworkIPRange = "192.168.1.0/24";
};

shb.authelia = {
  enable = true;
  domain = "example.com";
  subdomain = "authelia";

  ldapEndpoint = "ldap://127.0.0.1:${builtins.toString config.shb.ldap.ldapPort}";
  dcdomain = config.shb.ldap.dcdomain;

  smtpHost = "smtp.mailgun.org";
  smtpPort = 587;
  smtpUsername = "postmaster@mg.example.com";

  secrets = {
    jwtSecretFile = config.sops.secrets."authelia/jwt_secret".path;
    ldapAdminPasswordFile = config.sops.secrets."authelia/ldap_admin_password".path;
    sessionSecretFile = config.sops.secrets."authelia/session_secret".path;
    notifierSMTPPasswordFile = config.sops.secrets."authelia/smtp_password".path;
    storageEncryptionKeyFile = config.sops.secrets."authelia/storage_encryption_key".path;
    identityProvidersOIDCHMACSecretFile = config.sops.secrets."authelia/hmac_secret".path;
    identityProvidersOIDCIssuerPrivateKeyFile = config.sops.secrets."authelia/private_key".path;
  };
};
sops.secrets."authelia/jwt_secret" = {
  sopsFile = ./secrets/authelia.yaml;
  mode = "0400";
  owner = config.shb.authelia.autheliaUser;
  restartUnits = [ "authelia.service" ];
};
sops.secrets."authelia/ldap_admin_password" = {
  sopsFile = ./secrets/authelia.yaml;
  mode = "0400";
  owner = config.shb.authelia.autheliaUser;
  restartUnits = [ "authelia.service" ];
};
sops.secrets."authelia/session_secret" = {
  sopsFile = ./secrets/authelia.yaml;
  mode = "0400";
  owner = config.shb.authelia.autheliaUser;
  restartUnits = [ "authelia.service" ];
};
sops.secrets."authelia/smtp_password" = {
  sopsFile = ./secrets/authelia.yaml;
  mode = "0400";
  owner = config.shb.authelia.autheliaUser;
  restartUnits = [ "authelia.service" ];
};
sops.secrets."authelia/storage_encryption_key" = {
  sopsFile = ./secrets/authelia.yaml;
  mode = "0400";
  owner = config.shb.authelia.autheliaUser;
  restartUnits = [ "authelia.service" ];
};
sops.secrets."authelia/hmac_secret" = {
  sopsFile = ./secrets/authelia.yaml;
  mode = "0400";
  owner = config.shb.authelia.autheliaUser;
  restartUnits = [ "authelia.service" ];
};
sops.secrets."authelia/private_key" = {
  sopsFile = ./secrets/authelia.yaml;
  mode = "0400";
  owner = config.shb.authelia.autheliaUser;
  restartUnits = [ "authelia.service" ];
};
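
The seven sops.secrets blocks above differ only in the secret name. If you prefer, the repetition can be factored out; here is a sketch using lib.genAttrs, assuming lib is available from the module arguments, equivalent to the declarations above:

sops.secrets = lib.genAttrs [
  "authelia/jwt_secret"
  "authelia/ldap_admin_password"
  "authelia/session_secret"
  "authelia/smtp_password"
  "authelia/storage_encryption_key"
  "authelia/hmac_secret"
  "authelia/private_key"
] (_: {
  # Same settings for every Authelia secret.
  sopsFile = ./secrets/authelia.yaml;
  mode = "0400";
  owner = config.shb.authelia.autheliaUser;
  restartUnits = [ "authelia.service" ];
});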

This sets up lldap under https://ldap.example.com and authelia under https://authelia.example.com.

The lldap sops file must be in the following format:

lldap:
    user_password: XXX...
    jwt_secret: YYY...

You can format the Authelia sops file as you wish since you can give the path to every secret independently. For completeness, here's the format expected by the snippet above:

authelia:
    ldap_admin_password: AAA...
    smtp_password: BBB...
    jwt_secret: CCC...
    storage_encryption_key: DDD...
    session_secret: EEE...
    hmac_secret: GGG...
    private_key: |
        -----BEGIN PRIVATE KEY-----
        MII...MDQ=
        -----END PRIVATE KEY-----        
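
Each of these secrets can be generated with the Generate random secret command from the Tips section below, except private_key, which is the RSA private key used by Authelia's OIDC issuer. A sketch for generating one with openssl (check the Authelia documentation for the exact requirements):

$ nix run nixpkgs#openssl -- genrsa -out private_key.pem 4096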

See the ldap.nix and authelia.nix modules for more info.

Backup folders

See the manual.

Deploy the full Grafana, Prometheus and Loki suite

See the manual.

Set up network tunnel with VPN and Proxy

shb.vpn.nordvpnus = {
  enable = true;
  # Only "nordvpn" supported for now.
  provider = "nordvpn";
  dev = "tun1";
  # Must be unique per VPN instance.
  routingNumber = 10;
  # Change to the one you want to connect to
  remoteServerIP = "1.2.3.4";
  sopsFile = ./secrets/vpn.yaml;
  proxyPort = 12000;
};

This sets up a tunnel interface tun1 that connects to the VPN provider, here NordVPN. Also, if the proxyPort option is not null, this will spin up a tinyproxy instance that listens on the given port and redirects all traffic through that VPN.

$ curl 'https://api.ipify.org?format=json'
{"ip":"107.21.107.115"}

$ curl --interface tun1 'https://api.ipify.org?format=json'
{"ip":"46.12.123.113"}

$ curl --proxy 127.0.0.1:12000 'https://api.ipify.org?format=json'
{"ip":"46.12.123.113"}

Provided Services

The services above are those I am using myself. I intend to add more.

For now, the best way to understand how to use these services is to read their code and see how they are used in the demos. Below are also a few examples taken from my personal usage of selfhostblocks.

Common Options

Some common options are provided for all services.

  • enable (bool). Set to true to deploy and run the service.
  • subdomain (string). Subdomain under which to serve the service.
  • domain (string). Domain under which to serve the service.

Some other common options are the following. I am not satisfied with how those are expressed, so they will most certainly change.

  • LDAP and OIDC options for SSO, authentication and authorization.
  • Secrets.
  • Backups.

Note that for backups, every service exposes which directories should be backed up; you merely choose when those backups take place and where they are stored.
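
Putting the common options together, a minimal service declaration looks like the following sketch, where myservice is a placeholder and not an actual SHB module:

shb.myservice = {
  # Deploy and run the service.
  enable = true;
  # Serve it under the myservice.example.com subdomain.
  subdomain = "myservice";
  domain = "example.com";
};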

Deploy an hledger Instance with LDAP and SSO support

shb.hledger = {
  enable = true;
  subdomain = "hledger";
  domain = "example.com";
  authEndpoint = "https://authelia.example.com";
  localNetworkIPRange = "192.168.1.0/24";
};
shb.backup.instances.hledger = # Same as the examples above

This will set up:

  • The nginx reverse proxy listening for requests to the hledger.example.com domain.
  • Backup of everything.
  • Login restricted to users of the hledger_user LDAP group.
  • All the required databases and secrets.

See hledger.nix module for more details.

Deploy a Jellyfin instance with LDAP and SSO support

shb.jellyfin = {
  enable = true;
  domain = "example.com";
  subdomain = "jellyfin";

  sopsFile = ./secrets/jellyfin.yaml;
  ldapHost = "127.0.0.1";
  ldapPort = 3890;
  dcdomain = config.shb.ldap.dcdomain;
  authEndpoint = "https://${config.shb.authelia.subdomain}.${config.shb.authelia.domain}";
  oidcClientID = "jellyfin";
  oidcUserGroup = "jellyfin_user";
  oidcAdminUserGroup = "jellyfin_admin";
};
shb.backup.instances.jellyfin = # Same as the examples above

This sets up, as usual:

  • The nginx reverse proxy listening for requests to the jellyfin.example.com domain.
  • Backup of everything.
  • Login restricted to users of the jellyfin_user or jellyfin_admin LDAP groups.
  • All the required databases and secrets.

The sops file format is:

jellyfin:
    ldap_password: XXX...
    sso_secret: YYY...

Although the configuration of the LDAP and SSO plugins is done declaratively in the Jellyfin preStart step, they still need to be installed manually at the moment.

See jellyfin.nix module for more details.

Deploy a Home Assistant instance with LDAP support

SSO support is WIP.

shb.home-assistant = {
  enable = true;
  subdomain = "ha";
  inherit domain;
  ldapEndpoint = "http://127.0.0.1:${builtins.toString config.shb.ldap.httpPort}";
  backupCfg = # Same as the examples above
  sopsFile = ./secrets/homeassistant.yaml;
};
services.home-assistant = {
  extraComponents = [
    "backup"
    "esphome"
    "jellyfin"
    "kodi"
    "wyoming"
    "zha"
  ];
};
services.wyoming.piper.servers = {
  "fr" = {
    enable = true;
    voice = "fr-siwis-medium";
    uri = "tcp://0.0.0.0:10200";
    speaker = 0;
  };
};
services.wyoming.faster-whisper.servers = {
  "tiny-fr" = {
    enable = true;
    model = "medium-int8";
    language = "fr";
    uri = "tcp://0.0.0.0:10300";
    device = "cpu";
  };
};

This sets up everything needed to have a Home Assistant instance available under ha.example.com. It also shows how to run a Piper and a Whisper server for text-to-speech and speech-to-text respectively. The integrations must still be set up in the web UI.

The sops file must be in the following format:

home-assistant: |
    country: "US"
    latitude_home: "0.01234567890123"
    longitude_home: "-0.01234567890123"    

Demos

Demos that start and deploy a service on a Virtual Machine on your computer are located under the demo folder. These show the onboarding experience you would get if you deployed one of the services on your own server.

Community

All issues and PRs are welcome. For substantial PRs, please open an issue first to discuss the details.

Come hang out in the Matrix channel. :)

Along the way, I made quite a few changes to the underlying nixpkgs modules I'm using. I intend to upstream as many of those changes to nixpkgs as makes sense.

Tips

Run tests

Run all tests:

$ nix build .#checks.${system}.all
# or
$ nix flake check
# or
$ nix run github:Mic92/nix-fast-build -- --skip-cached --flake ".#checks.$(nix eval --raw --impure --expr builtins.currentSystem)"
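
Here ${system} stands for your platform; for example, on x86_64-linux the first command becomes:

$ nix build .#checks.x86_64-linux.all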

Run one group of tests:

$ nix build .#checks.${system}.modules
$ nix build .#checks.${system}.vm_postgresql_peerAuth

Run one VM test interactively:

$ nix run .#checks.${system}.vm_postgresql_peerAuth.driverInteractive

When you get to the shell, run either start_all() or test_script(). The former just starts all the VMs and services, after which you can introspect them. The latter also starts the VMs if they are not running yet and then runs the test script.
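
For example, a typical interactive session could look like the following sketch, assuming the test defines a node named server:

>>> start_all()
>>> server.succeed("systemctl is-active postgresql")
>>> server.shell_interact()  # drop into a shell on the VM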

Upload test results to CI

GitHub Actions runners now have hardware acceleration, so running the tests there is not slow anymore. If needed, the test results can still be pushed to Cachix so they can be reused in CI.

After running the nix-fast-build command from the previous section, run:

$ find . -type l -name "result-vm_*" | xargs readlink | nix run nixpkgs#cachix -- push selfhostblocks

Deploy using colmena

$ nix run nixpkgs#colmena -- apply

Use a local version of selfhostblocks

This works with any flake input you have. Either change the .url field directly in your flake.nix:

selfhostblocks.url = "/home/me/projects/selfhostblocks";

Or override on the command line:

$ nix flake lock --override-input selfhostblocks ../selfhostblocks

I usually combine the override snippet above with deploying:

$ nix flake lock --override-input selfhostblocks ../selfhostblocks && nix run nixpkgs#colmena -- apply

Diff changes

First, you must know what to compare: the nix store path of what is already deployed and the path of what you will deploy.

What is deployed

To know what is deployed, either just stash the changes you made and run build:

$ nix run nixpkgs#colmena -- build
...
Built "/nix/store/yyw9rgn8v5jrn4657vwpg01ydq0hazgx-nixos-system-baryum-23.11pre-git"

Or ask the target machine:

$ nix run nixpkgs#colmena -- exec -v readlink -f /run/current-system
baryum | /nix/store/77n1hwhgmr9z0x3gs8z2g6cfx8gkr4nm-nixos-system-baryum-23.11pre-git

What will get deployed

Assuming you made some changes, then instead of deploying with apply, just build:

$ nix run nixpkgs#colmena -- build
...
Built "/nix/store/16n1klx5cxkjpqhrdf0k12npx3vn5042-nixos-system-baryum-23.11pre-git"

Get the full diff

With nix-diff:

$ nix run nixpkgs#nix-diff -- \
  /nix/store/yyw9rgn8v5jrn4657vwpg01ydq0hazgx-nixos-system-baryum-23.11pre-git \
  /nix/store/16n1klx5cxkjpqhrdf0k12npx3vn5042-nixos-system-baryum-23.11pre-git \
  --color always | less

Get version bumps

A nice summary of version changes can be produced with:

$ nix run nixpkgs#nvd -- diff \
  /nix/store/yyw9rgn8v5jrn4657vwpg01ydq0hazgx-nixos-system-baryum-23.11pre-git \
  /nix/store/16n1klx5cxkjpqhrdf0k12npx3vn5042-nixos-system-baryum-23.11pre-git

Generate random secret

$ nix run nixpkgs#openssl -- rand -hex 64

TODOs

  • Add examples that set up services in a VM.
  • Do not depend on sops.
  • Add more options to avoid hardcoding stuff.
  • Make sure nginx gets reloaded when SSL certs get updated.
  • Better backup story by taking optional LVM or ZFS snapshot before backing up.
  • Many more tests.
  • Tests deploying to real nodes.
  • DNS must be more configurable.
  • Fix tests on nix-darwin.

While creating NixOS tests:

While creating an XML config generator for Radarr:

License

I'm following the Nextcloud license, which is AGPLv3. See this article from the FSF explaining what this license adds on top of the GPL.