Compare commits

...

19 commits

Author SHA1 Message Date
fbfbf97fc3
Add to list of systems built
Some checks failed
/ dev-shell (push) Successful in 26s
/ rust-packages (push) Successful in 32s
/ terraform-providers (push) Successful in 55s
/ check (push) Successful in 1m21s
/ systems (push) Has been cancelled
2025-07-15 08:33:57 +02:00
7cad695983
Build authentik as well
All checks were successful
/ dev-shell (push) Successful in 26s
/ rust-packages (push) Successful in 33s
/ terraform-providers (push) Successful in 58s
/ check (push) Successful in 2m0s
/ systems (push) Successful in 1m56s
2025-07-15 08:30:57 +02:00
b83cfce0af
Make openstack servers be a server
All checks were successful
/ dev-shell (push) Successful in 26s
/ check (push) Successful in 3m26s
/ rust-packages (push) Successful in 31s
/ terraform-providers (push) Successful in 56s
/ systems (push) Successful in 3m31s
2025-07-15 08:25:43 +02:00
b0c972f5b3
Restructure conditionals on desktop in homes
Some checks failed
/ terraform-providers (push) Successful in 30s
/ systems (push) Failing after 1m42s
/ dev-shell (push) Successful in 26s
/ rust-packages (push) Successful in 32s
/ check (push) Failing after 1m11s
2025-07-15 08:22:59 +02:00
80af3c16e5
Fix wrong config path
Some checks failed
/ systems (push) Failing after 23s
/ terraform-providers (push) Successful in 1m37s
/ rust-packages (push) Successful in 33s
/ dev-shell (push) Successful in 24s
/ check (push) Failing after 59s
2025-07-15 08:16:39 +02:00
5826c78a68
More fixes
Some checks failed
/ check (push) Failing after 1m29s
/ systems (push) Failing after 22s
/ terraform-providers (push) Successful in 4m6s
/ rust-packages (push) Successful in 1m55s
/ dev-shell (push) Successful in 1m39s
2025-07-15 08:13:26 +02:00
f8a0434e2b
Fix some minor issues
Some checks failed
/ check (push) Failing after 49s
/ dev-shell (push) Successful in 26s
/ rust-packages (push) Successful in 32s
/ systems (push) Failing after 19s
/ terraform-providers (push) Successful in 29s
2025-07-14 23:41:25 +02:00
8cd2737aca
Begin moving openbao and authentik server to new setup
Some checks failed
/ rust-packages (push) Successful in 2m45s
/ systems (push) Failing after 1m40s
/ terraform-providers (push) Successful in 4m2s
/ dev-shell (push) Successful in 54s
/ check (push) Failing after 1m31s
2025-07-14 23:34:02 +02:00
a996ba3083
Add hetzner user-data url
Some checks failed
/ dev-shell (push) Successful in 58s
/ rust-packages (push) Successful in 4m28s
/ check (push) Failing after 5m11s
/ terraform-providers (push) Successful in 12m9s
/ systems (push) Successful in 9m47s
2025-07-13 00:58:40 +02:00
e360abdf4b
Begin adding services to the monitoring stack 2025-07-13 00:51:31 +02:00
32ece6eb43
Begin creating monitoring.kaareskovgaard.net
All checks were successful
/ dev-shell (push) Successful in 21s
/ rust-packages (push) Successful in 27s
/ terraform-providers (push) Successful in 23s
/ check (push) Successful in 1m4s
/ systems (push) Successful in 1m58s
2025-07-11 12:40:45 +02:00
c402ada8f7
Get basic nginx and acme setup working
All checks were successful
/ dev-shell (push) Successful in 1m18s
/ rust-packages (push) Successful in 2m54s
/ check (push) Successful in 3m21s
/ terraform-providers (push) Successful in 9m33s
/ systems (push) Successful in 8m34s
This should enable DNS-01 ACME for all khs openstack servers,
removing the pain of setting up ACME for those servers.

Note that this might not be needed much anymore, as I should be able
to reach them over IPv6, but for peace of mind this enables ACME
trivially, including for non-HTTPS workloads and for servers without
open ports.

Also note that there is currently a global UniFi firewall rule in place
allowing ports 80 and 443 to my own servers over IPv6. I'd like to remove
this and have Nix configure firewall rules for each server individually,
as requested in the setup.
2025-07-11 00:38:31 +02:00
365b16c380
Begin working on nginx setup
Some checks failed
/ dev-shell (push) Successful in 20s
/ check (push) Failing after 32s
/ rust-packages (push) Failing after 29s
/ terraform-providers (push) Successful in 21s
/ systems (push) Failing after 14s
2025-07-10 21:42:33 +02:00
12ab4ce918
Attempt to fix compilation error on Macos
Some checks failed
/ rust-packages (push) Failing after 1m2s
/ dev-shell (push) Successful in 53s
/ check (push) Failing after 1m17s
/ terraform-providers (push) Successful in 21s
/ systems (push) Successful in 1m38s
2025-07-10 20:57:01 +02:00
459b45ccc5
Get openstack working again
Some checks failed
/ systems (push) Successful in 8m26s
/ dev-shell (push) Successful in 2m4s
/ rust-packages (push) Successful in 5m2s
/ terraform-providers (push) Successful in 10m59s
/ check (push) Failing after 6m4s
Also the first instance of getting a server up with a working certificate
right away, via cloud user data.
2025-07-10 00:51:28 +02:00
608d758f30
Begin testing bootstrapping of vault authentication
However, the nixos-install script fails on khs openstack,
as the system won't boot up after installation due to it
being unable to locate the root disk. I am not sure which disk
it ends up finding.
2025-07-09 23:53:42 +02:00
f7d4bef46c
Make some more changes to machine setup
Some checks failed
/ rust-packages (push) Successful in 1m22s
/ terraform-providers (push) Successful in 3m22s
/ check (push) Failing after 39s
/ dev-shell (push) Successful in 1m10s
Work done as part of an attempt to
create a small monitoring server
2025-07-09 15:12:11 +02:00
89d410cb6c
Fix some test issues
All checks were successful
/ dev-shell (push) Successful in 1m3s
/ rust-packages (push) Successful in 1m8s
/ check (push) Successful in 2m54s
/ terraform-providers (push) Successful in 13s
2025-07-08 23:47:54 +02:00
84bf6d0350
Convert some older nixos-system code 2025-07-08 23:43:17 +02:00
111 changed files with 20896 additions and 909 deletions


@@ -25,4 +25,17 @@ jobs:
     steps:
       - uses: actions/checkout@v4
       - run: |
+          nix build --no-link '.#packages.x86_64-linux.terraform-provider-cloudflare'
+          nix build --no-link '.#packages.x86_64-linux.terraform-provider-hcloud'
+          nix build --no-link '.#packages.x86_64-linux.terraform-provider-openstack'
           nix build --no-link '.#packages.x86_64-linux.terraform-provider-unifi'
+          nix build --no-link '.#packages.x86_64-linux.terraform-provider-vault'
+  systems:
+    runs-on: cache.kaareskovgaard.net
+    steps:
+      - uses: actions/checkout@v4
+      - run: |
+          nix build --no-link '.#nixosConfigurations."desktop.kaareskovgaard.net".config.system.build.toplevel'
+          nix build --no-link '.#nixosConfigurations."desktop.kaareskovgaard.net".config.system.build.vm'
+          nix build --no-link '.#nixosConfigurations."monitoring.kaareskovgaard.net".config.system.build.toplevel'
+          nix build --no-link '.#nixosConfigurations."security.kaareskovgaard.net".config.system.build.toplevel'
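
The `systems` job above lists each host's `toplevel` and `vm` output by hand. As a hedged alternative (a sketch assuming every host is x86_64-linux and reachable via `self.nixosConfigurations`, which this diff does not show), the same coverage could be expressed as flake checks:

```nix
# Sketch only: expose each NixOS host's toplevel as a flake check so that a single
# `nix flake check` (or one CI step) builds every system. Assumes all hosts are
# x86_64-linux; mixed architectures would need a per-system split.
{ self, ... }:
{
  checks.x86_64-linux = builtins.mapAttrs (
    _name: host: host.config.system.build.toplevel
  ) self.nixosConfigurations;
}
```

New hosts would then be picked up automatically instead of requiring another `nix build` line in the workflow.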

.gitignore (vendored), 1 line changed

@@ -2,3 +2,4 @@
 result/
 .DS_Store
 rust/target
+*.qcow2


@@ -8,32 +8,26 @@ When running on a desktop machine, simply running `nixos-install` as per usual s
 ## Servers
-To provision the cloud resources needed, the following can be run:
+To provision the cloud resources needed, and install NixOS, the following can be run:
 ```bash
 nix run '.#create-instance' -- <hostname>
 ```
-This will run the `provision.pre` terraform code to ensure the cloud resources are created as needed, on either hetzner or openstack. It should also select the appropriate secrets backend to fetch secrets from. In general every server should use `vault` (OpenBAO) as the backend, except for the server hosting OpenBAO.
+This will run the `provision.pre` terraform code to ensure the cloud resources are created as needed, on either hetzner or openstack. It should also select the appropriate secrets backend to fetch secrets from. In general every server should use `vault` (OpenBAO) as the backend, except for the server hosting OpenBAO. Then it will install NixOS.
-Once the instance has been created it will _not_ run NixOS, but rather something like Debian, which can then be provisioned into a NixOS installation. Run the following command to enroll NixOS on the instance:
+When making changes to eg. the approle needed, and needing to provision the instance again (but not installing NixOS again, as that won't work), run:
 ```bash
-nix run '.#inxos-install' -- <hostname>
+nix run '.#provision-instance' -- <hostname>
 ```
-<details>
-<summary>NOTE</summary>
-If you're creating and destroying instances on the same host name and have DNS caching trouble, you can run the following to connect using an IP address:
+To update the NixOS config on an instance:
 ```bash
-nix run '.#nixos-install' -- <hostname> <ip>
+nix run '.#update-instance` -- <hostname>
 ```
-</details>
-TODO: Here should be some guidance on how to transfer RoleID/SecretID to the server, as well as running the post provisioning scripts for the servers that need it.
 To delete the resources again run:
 ```bash
@@ -42,7 +36,7 @@ nix run '.#destroy-instance' -- <hostname>
 ## Secrets
-To transfer the secrets needed for OpenTofu from Bitwarden to OpenBAO run:
+To transfer the secrets needed for OpenTofu from Bitwarden to OpenBAO/Vault run:
 ```bash
 nix run '.#bitwarden-to-vault'
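
For context on how commands such as `nix run '.#create-instance'` resolve, below is a minimal, hypothetical flake-app wiring. The shell body and names are illustrative assumptions; the actual repository generates these outputs through flake-base/snowfall-lib rather than by hand.

```nix
# Hypothetical sketch of a runnable flake app; not this repository's real implementation.
{
  inputs.nixpkgs.url = "github:nixos/nixpkgs/nixos-25.05";
  outputs =
    { nixpkgs, ... }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in
    {
      apps.x86_64-linux.create-instance = {
        type = "app";
        program = nixpkgs.lib.getExe (
          pkgs.writeShellApplication {
            name = "create-instance";
            runtimeInputs = [ pkgs.opentofu ];
            text = ''
              hostname="$1"
              # Placeholder for the real steps: run the provision.pre OpenTofu config,
              # then hand off to nixos-anywhere to install NixOS on the new instance.
              echo "provisioning $hostname"
            '';
          }
        );
      };
    };
}
```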

BIN assets/wallpaper.jpg (new file, 1.4 MiB; binary content not shown)

BIN desktop.qcow2 (new file; binary content not shown)

flake.lock (generated), 641 lines changed

@ -16,35 +16,148 @@
"type": "github" "type": "github"
} }
}, },
"bats-assert": { "authentik-nix": {
"flake": false, "inputs": {
"authentik-src": "authentik-src",
"flake-compat": "flake-compat",
"flake-parts": [
"flake-parts"
],
"flake-utils": [
"flake-utils"
],
"napalm": "napalm",
"nixpkgs": [
"nixpkgs"
],
"pyproject-build-systems": "pyproject-build-systems",
"pyproject-nix": "pyproject-nix",
"systems": [
"systems"
],
"uv2nix": "uv2nix"
},
"locked": { "locked": {
"lastModified": 1636059754, "lastModified": 1751033152,
"narHash": "sha256-ewME0l27ZqfmAwJO4h5biTALc9bDLv7Bl3ftBzBuZwk=", "narHash": "sha256-0ANu9OLQJszcEyvnfDB7G957uqskZwCrTzRXz/yfAmE=",
"owner": "bats-core", "owner": "nix-community",
"repo": "bats-assert", "repo": "authentik-nix",
"rev": "34551b1d7f8c7b677c1a66fc0ac140d6223409e5", "rev": "1a4d6a5dd6fef39b99eb7ea4db79c5d5c7d7f1bf",
"type": "github" "type": "github"
}, },
"original": { "original": {
"owner": "bats-core", "owner": "nix-community",
"repo": "bats-assert", "repo": "authentik-nix",
"type": "github" "type": "github"
} }
}, },
"bats-support": { "authentik-src": {
"flake": false, "flake": false,
"locked": { "locked": {
"lastModified": 1548869839, "lastModified": 1751031262,
"narHash": "sha256-Gr4ntadr42F2Ks8Pte2D4wNDbijhujuoJi4OPZnTAZU=", "narHash": "sha256-SNgRMQUjL3DTlWkMyRMan+pY1FfIV+DMeq5BiTM0N0k=",
"owner": "bats-core", "owner": "goauthentik",
"repo": "bats-support", "repo": "authentik",
"rev": "d140a65044b2d6810381935ae7f0c94c7023c8c3", "rev": "b34665fabd8d938d81ce871a4e86ca528c5f253b",
"type": "github" "type": "github"
}, },
"original": { "original": {
"owner": "bats-core", "owner": "goauthentik",
"repo": "bats-support", "ref": "version/2025.4.3",
"repo": "authentik",
"type": "github"
}
},
"base16": {
"inputs": {
"fromYaml": "fromYaml"
},
"locked": {
"lastModified": 1746562888,
"narHash": "sha256-YgNJQyB5dQiwavdDFBMNKk1wyS77AtdgDk/VtU6wEaI=",
"owner": "SenchoPens",
"repo": "base16.nix",
"rev": "806a1777a5db2a1ef9d5d6f493ef2381047f2b89",
"type": "github"
},
"original": {
"owner": "SenchoPens",
"repo": "base16.nix",
"type": "github"
}
},
"base16-fish": {
"flake": false,
"locked": {
"lastModified": 1622559957,
"narHash": "sha256-PebymhVYbL8trDVVXxCvZgc0S5VxI7I1Hv4RMSquTpA=",
"owner": "tomyun",
"repo": "base16-fish",
"rev": "2f6dd973a9075dabccd26f1cded09508180bf5fe",
"type": "github"
},
"original": {
"owner": "tomyun",
"repo": "base16-fish",
"type": "github"
}
},
"base16-helix": {
"flake": false,
"locked": {
"lastModified": 1748408240,
"narHash": "sha256-9M2b1rMyMzJK0eusea0x3lyh3mu5nMeEDSc4RZkGm+g=",
"owner": "tinted-theming",
"repo": "base16-helix",
"rev": "6c711ab1a9db6f51e2f6887cc3345530b33e152e",
"type": "github"
},
"original": {
"owner": "tinted-theming",
"repo": "base16-helix",
"type": "github"
}
},
"base16-vim": {
"flake": false,
"locked": {
"lastModified": 1732806396,
"narHash": "sha256-e0bpPySdJf0F68Ndanwm+KWHgQiZ0s7liLhvJSWDNsA=",
"owner": "tinted-theming",
"repo": "base16-vim",
"rev": "577fe8125d74ff456cf942c733a85d769afe58b7",
"type": "github"
},
"original": {
"owner": "tinted-theming",
"repo": "base16-vim",
"rev": "577fe8125d74ff456cf942c733a85d769afe58b7",
"type": "github"
}
},
"cosmic-manager": {
"inputs": {
"flake-parts": [
"flake-parts"
],
"home-manager": [
"home-manager"
],
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1744387566,
"narHash": "sha256-O39zTv7LdRgr4Hw38d+eQG2LYpP75rw2XKqTGV5qzgs=",
"owner": "HeitorAugustoLN",
"repo": "cosmic-manager",
"rev": "52d3fdd080a9dd4639948687682a68282fbf0314",
"type": "github"
},
"original": {
"owner": "HeitorAugustoLN",
"repo": "cosmic-manager",
"type": "github" "type": "github"
} }
}, },
@ -70,11 +183,11 @@
] ]
}, },
"locked": { "locked": {
"lastModified": 1751607816, "lastModified": 1751854533,
"narHash": "sha256-5PtrwjqCIJ4DKQhzYdm8RFePBuwb+yTzjV52wWoGSt4=", "narHash": "sha256-U/OQFplExOR1jazZY4KkaQkJqOl59xlh21HP9mI79Vc=",
"owner": "nix-community", "owner": "nix-community",
"repo": "disko", "repo": "disko",
"rev": "da6109c917b48abc1f76dd5c9bf3901c8c80f662", "rev": "16b74a1e304197248a1bc663280f2548dbfcae3c",
"type": "github" "type": "github"
}, },
"original": { "original": {
@ -83,25 +196,19 @@
"type": "github" "type": "github"
} }
}, },
"disko_2": { "firefox-gnome-theme": {
"inputs": { "flake": false,
"nixpkgs": [
"nixos-anywhere",
"nixpkgs"
]
},
"locked": { "locked": {
"lastModified": 1748225455, "lastModified": 1748383148,
"narHash": "sha256-AzlJCKaM4wbEyEpV3I/PUq5mHnib2ryEy32c+qfj6xk=", "narHash": "sha256-pGvD/RGuuPf/4oogsfeRaeMm6ipUIznI2QSILKjKzeA=",
"owner": "nix-community", "owner": "rafaelmardojai",
"repo": "disko", "repo": "firefox-gnome-theme",
"rev": "a894f2811e1ee8d10c50560551e50d6ab3c392ba", "rev": "4eb2714fbed2b80e234312611a947d6cb7d70caf",
"type": "github" "type": "github"
}, },
"original": { "original": {
"owner": "nix-community", "owner": "rafaelmardojai",
"ref": "master", "repo": "firefox-gnome-theme",
"repo": "disko",
"type": "github" "type": "github"
} }
}, },
@ -111,7 +218,9 @@
"nixpkgs" "nixpkgs"
], ],
"snowfall-lib": "snowfall-lib", "snowfall-lib": "snowfall-lib",
"treefmt-nix": "treefmt-nix" "treefmt-nix": [
"treefmt-nix"
]
}, },
"locked": { "locked": {
"lastModified": 1751834884, "lastModified": 1751834884,
@ -128,6 +237,22 @@
} }
}, },
"flake-compat": { "flake-compat": {
"flake": false,
"locked": {
"lastModified": 1747046372,
"narHash": "sha256-CIVLLkVgvHYbgI2UpXvIIBJ12HWgX+fjA8Xf8PUmqCY=",
"owner": "edolstra",
"repo": "flake-compat",
"rev": "9100a0f413b0c601e0533d1d94ffd501ce2e7885",
"type": "github"
},
"original": {
"owner": "edolstra",
"repo": "flake-compat",
"type": "github"
}
},
"flake-compat_2": {
"flake": false, "flake": false,
"locked": { "locked": {
"lastModified": 1650374568, "lastModified": 1650374568,
@ -145,38 +270,14 @@
}, },
"flake-parts": { "flake-parts": {
"inputs": { "inputs": {
"nixpkgs-lib": [ "nixpkgs-lib": "nixpkgs-lib"
"nixos-anywhere",
"nixpkgs"
]
}, },
"locked": { "locked": {
"lastModified": 1743550720, "lastModified": 1751413152,
"narHash": "sha256-hIshGgKZCgWh6AYJpJmRgFdR3WUbkY04o82X05xqQiY=", "narHash": "sha256-Tyw1RjYEsp5scoigs1384gIg6e0GoBVjms4aXFfRssQ=",
"owner": "hercules-ci", "owner": "hercules-ci",
"repo": "flake-parts", "repo": "flake-parts",
"rev": "c621e8422220273271f52058f618c94e405bb0f5", "rev": "77826244401ea9de6e3bac47c2db46005e1f30b5",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "flake-parts",
"type": "github"
}
},
"flake-parts_2": {
"inputs": {
"nixpkgs-lib": [
"terranix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1736143030,
"narHash": "sha256-+hu54pAoLDEZT9pjHlqL9DNzWz0NbUn8NEAHP7PQPzU=",
"owner": "hercules-ci",
"repo": "flake-parts",
"rev": "b905f6fc23a9051a6e1b741e1438dbfc0634c6de",
"type": "github" "type": "github"
}, },
"original": { "original": {
@ -223,12 +324,17 @@
} }
}, },
"flake-utils_2": { "flake-utils_2": {
"inputs": {
"systems": [
"systems"
]
},
"locked": { "locked": {
"lastModified": 1634851050, "lastModified": 1731533236,
"narHash": "sha256-N83GlSGPJJdcqhUxSCS/WwW5pksYf3VP1M13cDRTSVA=", "narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
"owner": "numtide", "owner": "numtide",
"repo": "flake-utils", "repo": "flake-utils",
"rev": "c91f3de5adaf1de973b797ef7485e441a65b8935", "rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
"type": "github" "type": "github"
}, },
"original": { "original": {
@ -237,18 +343,36 @@
"type": "github" "type": "github"
} }
}, },
"flake-utils_3": { "fromYaml": {
"flake": false,
"locked": { "locked": {
"lastModified": 1634851050, "lastModified": 1731966426,
"narHash": "sha256-N83GlSGPJJdcqhUxSCS/WwW5pksYf3VP1M13cDRTSVA=", "narHash": "sha256-lq95WydhbUTWig/JpqiB7oViTcHFP8Lv41IGtayokA8=",
"owner": "numtide", "owner": "SenchoPens",
"repo": "flake-utils", "repo": "fromYaml",
"rev": "c91f3de5adaf1de973b797ef7485e441a65b8935", "rev": "106af9e2f715e2d828df706c386a685698f3223b",
"type": "github" "type": "github"
}, },
"original": { "original": {
"owner": "numtide", "owner": "SenchoPens",
"repo": "flake-utils", "repo": "fromYaml",
"type": "github"
}
},
"gnome-shell": {
"flake": false,
"locked": {
"lastModified": 1744584021,
"narHash": "sha256-0RJ4mJzf+klKF4Fuoc8VN8dpQQtZnKksFmR2jhWE1Ew=",
"owner": "GNOME",
"repo": "gnome-shell",
"rev": "52c517c8f6c199a1d6f5118fae500ef69ea845ae",
"type": "github"
},
"original": {
"owner": "GNOME",
"ref": "48.1",
"repo": "gnome-shell",
"type": "github" "type": "github"
} }
}, },
@ -273,6 +397,32 @@
"type": "github" "type": "github"
} }
}, },
"napalm": {
"inputs": {
"flake-utils": [
"authentik-nix",
"flake-utils"
],
"nixpkgs": [
"authentik-nix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1725806412,
"narHash": "sha256-lGZjkjds0p924QEhm/r0BhAxbHBJE1xMOldB/HmQH04=",
"owner": "willibutz",
"repo": "napalm",
"rev": "b492440d9e64ae20736d3bec5c7715ffcbde83f5",
"type": "github"
},
"original": {
"owner": "willibutz",
"ref": "avoid-foldl-stack-overflow",
"repo": "napalm",
"type": "github"
}
},
"nix-vm-test": { "nix-vm-test": {
"inputs": { "inputs": {
"nixpkgs": [ "nixpkgs": [
@ -296,15 +446,21 @@
}, },
"nixos-anywhere": { "nixos-anywhere": {
"inputs": { "inputs": {
"disko": "disko_2", "disko": [
"flake-parts": "flake-parts", "disko"
],
"flake-parts": [
"flake-parts"
],
"nix-vm-test": "nix-vm-test", "nix-vm-test": "nix-vm-test",
"nixos-images": "nixos-images", "nixos-images": "nixos-images",
"nixos-stable": "nixos-stable", "nixos-stable": "nixos-stable",
"nixpkgs": [ "nixpkgs": [
"nixpkgs" "nixpkgs"
], ],
"treefmt-nix": "treefmt-nix_2" "treefmt-nix": [
"treefmt-nix"
]
}, },
"locked": { "locked": {
"lastModified": 1749105224, "lastModified": 1749105224,
@ -364,11 +520,11 @@
}, },
"nixpkgs": { "nixpkgs": {
"locked": { "locked": {
"lastModified": 1751582995, "lastModified": 1751943650,
"narHash": "sha256-u7ubvtxdTnFPpV27AHpgoKn7qHuE7sgWgza/1oj5nzA=", "narHash": "sha256-7orTnNqkGGru8Je6Un6mq1T8YVVU/O5kyW4+f9C1mZQ=",
"owner": "nixos", "owner": "nixos",
"repo": "nixpkgs", "repo": "nixpkgs",
"rev": "7a732ed41ca0dd64b4b71b563ab9805a80a7d693", "rev": "88983d4b665fb491861005137ce2b11a9f89f203",
"type": "github" "type": "github"
}, },
"original": { "original": {
@ -378,33 +534,116 @@
"type": "github" "type": "github"
} }
}, },
"nixpkgs_2": { "nixpkgs-lib": {
"locked": { "locked": {
"lastModified": 1636273007, "lastModified": 1751159883,
"narHash": "sha256-eb6HcZNacO9vIP/KcJ5CoCRYSGfD+VxzYs2cCafEo4Y=", "narHash": "sha256-urW/Ylk9FIfvXfliA1ywh75yszAbiTEVgpPeinFyVZo=",
"owner": "nixos", "owner": "nix-community",
"repo": "nixpkgs", "repo": "nixpkgs.lib",
"rev": "c69c6533c820c55c3f1d924b399d8f6925a1e41a", "rev": "14a40a1d7fb9afa4739275ac642ed7301a9ba1ab",
"type": "github" "type": "github"
}, },
"original": { "original": {
"owner": "nixos", "owner": "nix-community",
"repo": "nixpkgs", "repo": "nixpkgs.lib",
"type": "github"
}
},
"nur": {
"inputs": {
"flake-parts": [
"stylix",
"flake-parts"
],
"nixpkgs": [
"stylix",
"nixpkgs"
],
"treefmt-nix": "treefmt-nix"
},
"locked": {
"lastModified": 1751320053,
"narHash": "sha256-3m6RMw0FbbaUUa01PNaMLoO7D99aBClmY5ed9V3vz+0=",
"owner": "nix-community",
"repo": "NUR",
"rev": "cbde1735782f9c2bb2c63d5e05fba171a14a4670",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "NUR",
"type": "github"
}
},
"pyproject-build-systems": {
"inputs": {
"nixpkgs": [
"authentik-nix",
"nixpkgs"
],
"pyproject-nix": [
"authentik-nix",
"pyproject-nix"
],
"uv2nix": [
"authentik-nix",
"uv2nix"
]
},
"locked": {
"lastModified": 1749519371,
"narHash": "sha256-UJONN7mA2stweZCoRcry2aa1XTTBL0AfUOY84Lmqhos=",
"owner": "pyproject-nix",
"repo": "build-system-pkgs",
"rev": "7c06967eca687f3482624250428cc12f43c92523",
"type": "github"
},
"original": {
"owner": "pyproject-nix",
"repo": "build-system-pkgs",
"type": "github"
}
},
"pyproject-nix": {
"inputs": {
"nixpkgs": [
"authentik-nix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1750499893,
"narHash": "sha256-ThKBd8XSvITAh2JqU7enOp8AfKeQgf9u7zYC41cnBE4=",
"owner": "pyproject-nix",
"repo": "pyproject.nix",
"rev": "e824458bd917b44bf4c38795dea2650336b2f55d",
"type": "github"
},
"original": {
"owner": "pyproject-nix",
"repo": "pyproject.nix",
"type": "github" "type": "github"
} }
}, },
"root": { "root": {
"inputs": { "inputs": {
"advisory-db": "advisory-db", "advisory-db": "advisory-db",
"authentik-nix": "authentik-nix",
"cosmic-manager": "cosmic-manager",
"crane": "crane", "crane": "crane",
"disko": "disko", "disko": "disko",
"flake-base": "flake-base", "flake-base": "flake-base",
"flake-parts": "flake-parts",
"flake-utils": "flake-utils_2",
"home-manager": "home-manager", "home-manager": "home-manager",
"nixos-anywhere": "nixos-anywhere", "nixos-anywhere": "nixos-anywhere",
"nixpkgs": "nixpkgs", "nixpkgs": "nixpkgs",
"rust-overlay": "rust-overlay", "rust-overlay": "rust-overlay",
"stylix": "stylix",
"systems": "systems_2",
"terranix": "terranix", "terranix": "terranix",
"terranix-hcloud": "terranix-hcloud" "terranix-hcloud": "terranix-hcloud",
"treefmt-nix": "treefmt-nix_2"
} }
}, },
"rust-overlay": { "rust-overlay": {
@ -414,11 +653,11 @@
] ]
}, },
"locked": { "locked": {
"lastModified": 1751769931, "lastModified": 1752028888,
"narHash": "sha256-QR2Rp/41NkA5YxcpvZEKD1S2QE1Pb9U415aK8M/4tJc=", "narHash": "sha256-LRj3/PUpII6taWOrX1w/OeI6f1ncND02PP/kEHvPCqU=",
"owner": "oxalica", "owner": "oxalica",
"repo": "rust-overlay", "repo": "rust-overlay",
"rev": "3ac4f630e375177ea8317e22f5c804156de177e8", "rev": "a0f1c656e053463b47639234b151a05e4441bb19",
"type": "github" "type": "github"
}, },
"original": { "original": {
@ -429,7 +668,7 @@
}, },
"snowfall-lib": { "snowfall-lib": {
"inputs": { "inputs": {
"flake-compat": "flake-compat", "flake-compat": "flake-compat_2",
"flake-utils-plus": "flake-utils-plus", "flake-utils-plus": "flake-utils-plus",
"nixpkgs": [ "nixpkgs": [
"flake-base", "flake-base",
@ -450,6 +689,45 @@
"type": "github" "type": "github"
} }
}, },
"stylix": {
"inputs": {
"base16": "base16",
"base16-fish": "base16-fish",
"base16-helix": "base16-helix",
"base16-vim": "base16-vim",
"firefox-gnome-theme": "firefox-gnome-theme",
"flake-parts": [
"flake-parts"
],
"gnome-shell": "gnome-shell",
"nixpkgs": [
"nixpkgs"
],
"nur": "nur",
"systems": [
"systems"
],
"tinted-foot": "tinted-foot",
"tinted-kitty": "tinted-kitty",
"tinted-schemes": "tinted-schemes",
"tinted-tmux": "tinted-tmux",
"tinted-zed": "tinted-zed"
},
"locked": {
"lastModified": 1752084754,
"narHash": "sha256-JorlZGCWxlYV01lXmUuDeKOZoLPdoN3fAKJv8YIuavs=",
"owner": "nix-community",
"repo": "stylix",
"rev": "2df042576646d012d15637f43d6075995e785ce3",
"type": "github"
},
"original": {
"owner": "nix-community",
"ref": "release-25.05",
"repo": "stylix",
"type": "github"
}
},
"systems": { "systems": {
"locked": { "locked": {
"lastModified": 1681028828, "lastModified": 1681028828,
@ -480,13 +758,30 @@
"type": "github" "type": "github"
} }
}, },
"systems_3": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"terranix": { "terranix": {
"inputs": { "inputs": {
"flake-parts": "flake-parts_2", "flake-parts": [
"flake-parts"
],
"nixpkgs": [ "nixpkgs": [
"nixpkgs" "nixpkgs"
], ],
"systems": "systems_2" "systems": "systems_3"
}, },
"locked": { "locked": {
"lastModified": 1749381683, "lastModified": 1749381683,
@ -502,28 +797,17 @@
"type": "github" "type": "github"
} }
}, },
"terranix-examples": {
"locked": {
"lastModified": 1633465925,
"narHash": "sha256-BfXRW1ZHpK5jh5CVcw7eFpGsWE1CyVxL8R+V7uXemaU=",
"owner": "terranix",
"repo": "terranix-examples",
"rev": "70bf5d5a1ad4eabef1e4e71c1eb101021decd5a4",
"type": "github"
},
"original": {
"owner": "terranix",
"repo": "terranix-examples",
"type": "github"
}
},
"terranix-hcloud": { "terranix-hcloud": {
"inputs": { "inputs": {
"flake-utils": "flake-utils_2", "flake-utils": [
"flake-utils"
],
"nixpkgs": [ "nixpkgs": [
"nixpkgs" "nixpkgs"
], ],
"terranix": "terranix_2" "terranix": [
"terranix"
]
}, },
"locked": { "locked": {
"lastModified": 1745572802, "lastModified": 1745572802,
@ -539,42 +823,101 @@
"type": "github" "type": "github"
} }
}, },
"terranix_2": { "tinted-foot": {
"inputs": { "flake": false,
"bats-assert": "bats-assert",
"bats-support": "bats-support",
"flake-utils": "flake-utils_3",
"nixpkgs": "nixpkgs_2",
"terranix-examples": "terranix-examples"
},
"locked": { "locked": {
"lastModified": 1636274023, "lastModified": 1726913040,
"narHash": "sha256-HDiyJGgyDUoLnpL8N+wDm3cM/vEfYYc/p4N1kKH/kLk=", "narHash": "sha256-+eDZPkw7efMNUf3/Pv0EmsidqdwNJ1TaOum6k7lngDQ=",
"owner": "terranix", "owner": "tinted-theming",
"repo": "terranix", "repo": "tinted-foot",
"rev": "342ec8490bc948c8589414eb89f26b265cbfd62a", "rev": "fd1b924b6c45c3e4465e8a849e67ea82933fcbe4",
"type": "github" "type": "github"
}, },
"original": { "original": {
"owner": "terranix", "owner": "tinted-theming",
"ref": "develop", "repo": "tinted-foot",
"repo": "terranix", "rev": "fd1b924b6c45c3e4465e8a849e67ea82933fcbe4",
"type": "github"
}
},
"tinted-kitty": {
"flake": false,
"locked": {
"lastModified": 1735730497,
"narHash": "sha256-4KtB+FiUzIeK/4aHCKce3V9HwRvYaxX+F1edUrfgzb8=",
"owner": "tinted-theming",
"repo": "tinted-kitty",
"rev": "de6f888497f2c6b2279361bfc790f164bfd0f3fa",
"type": "github"
},
"original": {
"owner": "tinted-theming",
"repo": "tinted-kitty",
"type": "github"
}
},
"tinted-schemes": {
"flake": false,
"locked": {
"lastModified": 1750770351,
"narHash": "sha256-LI+BnRoFNRa2ffbe3dcuIRYAUcGklBx0+EcFxlHj0SY=",
"owner": "tinted-theming",
"repo": "schemes",
"rev": "5a775c6ffd6e6125947b393872cde95867d85a2a",
"type": "github"
},
"original": {
"owner": "tinted-theming",
"repo": "schemes",
"type": "github"
}
},
"tinted-tmux": {
"flake": false,
"locked": {
"lastModified": 1751159871,
"narHash": "sha256-UOHBN1fgHIEzvPmdNMHaDvdRMgLmEJh2hNmDrp3d3LE=",
"owner": "tinted-theming",
"repo": "tinted-tmux",
"rev": "bded5e24407cec9d01bd47a317d15b9223a1546c",
"type": "github"
},
"original": {
"owner": "tinted-theming",
"repo": "tinted-tmux",
"type": "github"
}
},
"tinted-zed": {
"flake": false,
"locked": {
"lastModified": 1751158968,
"narHash": "sha256-ksOyv7D3SRRtebpXxgpG4TK8gZSKFc4TIZpR+C98jX8=",
"owner": "tinted-theming",
"repo": "base16-zed",
"rev": "86a470d94204f7652b906ab0d378e4231a5b3384",
"type": "github"
},
"original": {
"owner": "tinted-theming",
"repo": "base16-zed",
"type": "github" "type": "github"
} }
}, },
"treefmt-nix": { "treefmt-nix": {
"inputs": { "inputs": {
"nixpkgs": [ "nixpkgs": [
"flake-base", "stylix",
"nur",
"nixpkgs" "nixpkgs"
] ]
}, },
"locked": { "locked": {
"lastModified": 1750931469, "lastModified": 1733222881,
"narHash": "sha256-0IEdQB1nS+uViQw4k3VGUXntjkDp7aAlqcxdewb/hAc=", "narHash": "sha256-JIPcz1PrpXUCbaccEnrcUS8jjEb/1vJbZz5KkobyFdM=",
"owner": "numtide", "owner": "numtide",
"repo": "treefmt-nix", "repo": "treefmt-nix",
"rev": "ac8e6f32e11e9c7f153823abc3ab007f2a65d3e1", "rev": "49717b5af6f80172275d47a418c9719a31a78b53",
"type": "github" "type": "github"
}, },
"original": { "original": {
@ -586,16 +929,15 @@
"treefmt-nix_2": { "treefmt-nix_2": {
"inputs": { "inputs": {
"nixpkgs": [ "nixpkgs": [
"nixos-anywhere",
"nixpkgs" "nixpkgs"
] ]
}, },
"locked": { "locked": {
"lastModified": 1748243702, "lastModified": 1752055615,
"narHash": "sha256-9YzfeN8CB6SzNPyPm2XjRRqSixDopTapaRsnTpXUEY8=", "narHash": "sha256-19m7P4O/Aw/6+CzncWMAJu89JaKeMh3aMle1CNQSIwM=",
"owner": "numtide", "owner": "numtide",
"repo": "treefmt-nix", "repo": "treefmt-nix",
"rev": "1f3f7b784643d488ba4bf315638b2b0a4c5fb007", "rev": "c9d477b5d5bd7f26adddd3f96cfd6a904768d4f9",
"type": "github" "type": "github"
}, },
"original": { "original": {
@ -603,6 +945,31 @@
"repo": "treefmt-nix", "repo": "treefmt-nix",
"type": "github" "type": "github"
} }
},
"uv2nix": {
"inputs": {
"nixpkgs": [
"authentik-nix",
"nixpkgs"
],
"pyproject-nix": [
"authentik-nix",
"pyproject-nix"
]
},
"locked": {
"lastModified": 1750987094,
"narHash": "sha256-GujDElxLgYatnNvuL1U6qd18lcuG6anJMjpfYRScV08=",
"owner": "pyproject-nix",
"repo": "uv2nix",
"rev": "4b703d851b61e664a70238711a8ff0efa1aa2f52",
"type": "github"
},
"original": {
"owner": "pyproject-nix",
"repo": "uv2nix",
"type": "github"
}
} }
}, },
"root": "root", "root": "root",

flake.nix, 141 lines changed

@ -2,26 +2,61 @@
description = "A very basic flake"; description = "A very basic flake";
inputs = { inputs = {
authentik-nix = {
url = "github:nix-community/authentik-nix";
inputs = {
flake-utils.follows = "flake-utils";
nixpkgs.follows = "nixpkgs";
flake-parts.follows = "flake-parts";
systems.follows = "systems";
};
};
nixpkgs.url = "github:nixos/nixpkgs/nixos-25.05"; nixpkgs.url = "github:nixos/nixpkgs/nixos-25.05";
flake-base = { flake-base = {
url = "git+https://khs.codes/nix/flake-base"; url = "git+https://khs.codes/nix/flake-base";
inputs.nixpkgs.follows = "nixpkgs"; inputs = {
nixpkgs.follows = "nixpkgs";
treefmt-nix.follows = "treefmt-nix";
};
};
flake-utils = {
url = "github:numtide/flake-utils";
inputs = {
systems.follows = "systems";
};
};
flake-parts = {
url = "github:hercules-ci/flake-parts";
}; };
disko = { disko = {
url = "github:nix-community/disko"; url = "github:nix-community/disko";
inputs.nixpkgs.follows = "nixpkgs"; inputs = {
nixpkgs.follows = "nixpkgs";
};
}; };
terranix = { terranix = {
url = "github:terranix/terranix"; url = "github:terranix/terranix";
inputs.nixpkgs.follows = "nixpkgs"; inputs = {
nixpkgs.follows = "nixpkgs";
flake-parts.follows = "flake-parts";
};
}; };
home-manager = { home-manager = {
url = "github:nix-community/home-manager/release-25.05"; url = "github:nix-community/home-manager/release-25.05";
inputs.nixpkgs.follows = "nixpkgs"; inputs = {
nixpkgs.follows = "nixpkgs";
};
}; };
terranix-hcloud = { terranix-hcloud = {
url = "github:terranix/terranix-hcloud"; url = "github:terranix/terranix-hcloud";
inputs.nixpkgs.follows = "nixpkgs"; inputs = {
flake-utils.follows = "flake-utils";
nixpkgs.follows = "nixpkgs";
terranix.follows = "terranix";
};
};
systems = {
url = "github:nix-systems/default";
}; };
crane.url = "github:ipetkov/crane"; crane.url = "github:ipetkov/crane";
advisory-db = { advisory-db = {
@ -34,10 +69,35 @@
nixpkgs.follows = "nixpkgs"; nixpkgs.follows = "nixpkgs";
}; };
}; };
treefmt-nix = {
url = "github:numtide/treefmt-nix";
inputs = {
nixpkgs.follows = "nixpkgs";
};
};
nixos-anywhere = { nixos-anywhere = {
url = "github:nix-community/nixos-anywhere/1.11.0"; url = "github:nix-community/nixos-anywhere/1.11.0";
inputs = { inputs = {
nixpkgs.follows = "nixpkgs"; nixpkgs.follows = "nixpkgs";
flake-parts.follows = "flake-parts";
treefmt-nix.follows = "treefmt-nix";
disko.follows = "disko";
};
};
stylix = {
url = "github:nix-community/stylix/release-25.05";
inputs = {
nixpkgs.follows = "nixpkgs";
flake-parts.follows = "flake-parts";
systems.follows = "systems";
};
};
cosmic-manager = {
url = "github:HeitorAugustoLN/cosmic-manager";
inputs = {
nixpkgs.follows = "nixpkgs";
home-manager.follows = "home-manager";
flake-parts.follows = "flake-parts";
}; };
}; };
}; };
@ -45,22 +105,27 @@
outputs = outputs =
inputs@{ self, ... }: inputs@{ self, ... }:
let let
dirsInPath = inputNixosModules = [
path: inputs.disko.nixosModules.disko
let inputs.stylix.nixosModules.stylix
files = builtins.readDir path; inputs.authentik-nix.nixosModules.default
dirs = inputs.nixpkgs.lib.filterAttrs (name: kind: kind == "directory") files; ];
in inputHomeModules = [
builtins.attrNames dirs; inputs.cosmic-manager.homeManagerModules.cosmic-manager
profileArgs = { inherit self; }; ];
profileNames = dirsInPath ./nix/profiles; allowUnfreePackages = [
nixosModules = dirsInPath ./nix/modules/nixos; "spotify"
inputModules = [ inputs.disko.nixosModules.disko ]; "google-chrome"
];
in in
(inputs.flake-base.lib.mkFlake { (inputs.flake-base.lib.mkFlake {
inherit inputs; inherit inputs;
src = ./.; src = ./.;
systems.modules.nixos = inputModules; channels-config = {
allowUnfreePredicate = pkg: builtins.elem (inputs.nixpkgs.lib.getName pkg) allowUnfreePackages;
};
systems.modules.nixos = inputNixosModules;
homes.modules = inputHomeModules;
snowfall = { snowfall = {
root = ./nix; root = ./nix;
namespace = "khscodes"; namespace = "khscodes";
@ -69,44 +134,14 @@
name = "Machines"; name = "Machines";
}; };
}; };
modules.nixos = {
default =
{
imports = builtins.map (m: self.nixosModules.${m}) nixosModules ++ inputModules;
}
// (builtins.listToAttrs (
builtins.map (n: {
name = n;
value = (import ./nix/profiles/${n} profileArgs);
}) profileNames
));
};
overlays = [ inputs.rust-overlay.overlays.default ]; overlays = [ inputs.rust-overlay.overlays.default ];
}) })
// { // {
terranixModules.cloudflare = import ./nix/modules/terranix/cloudflare { terranixModules.cloudflare = import ./nix/modules/terranix/cloudflare;
inherit inputs; terranixModules.hcloud = import ./nix/modules/terranix/hcloud;
khscodesLib = inputs.self.lib; terranixModules.vault = import ./nix/modules/terranix/vault;
}; terranixModules.s3 = import ./nix/modules/terranix/s3;
terranixModules.hcloud = import ./nix/modules/terranix/hcloud { terranixModules.openstack = import ./nix/modules/terranix/openstack;
inherit inputs; terranixModules.unifi = import ./nix/modules/terranix/unifi;
khscodesLib = inputs.self.lib;
};
terranixModules.openbao = import ./nix/modules/terranix/openbao {
inherit inputs;
khscodesLib = inputs.self.lib;
};
terranixModules.s3 = import ./nix/modules/terranix/s3 {
inherit inputs;
khscodesLib = inputs.self.lib;
};
terranixModules.openstack = import ./nix/modules/terranix/openstack {
inherit inputs;
khscodesLib = inputs.self.lib;
};
terranixModules.unifi = import ./nix/modules/terranix/unifi {
inherit inputs;
khscodesLib = inputs.self.lib;
};
}; };
} }


@ -12,7 +12,10 @@ pkgs.nixosTest {
{ ... }: { ... }:
{ {
imports = [ imports = [
inputs.self.nixosModules.default inputs.self.nixosModules.hetzner
inputs.self.nixosModules.systemd-boot
inputs.self.nixosModules."virtualisation/qemu-guest"
inputs.disko.nixosModules.disko
sharedModule sharedModule
]; ];
khscodes.hetzner = { khscodes.hetzner = {


@ -0,0 +1,7 @@
{
...
}:
{
khscodes.khs.enable = true;
khscodes.khs.shell.oh-my-posh.enable = true;
}


@ -0,0 +1,7 @@
{
...
}:
{
khscodes.khs.enable = true;
khscodes.khs.shell.oh-my-posh.enable = true;
}


@ -0,0 +1,8 @@
{
...
}:
{
khscodes.khs.enable = true;
khscodes.khs.shell.oh-my-posh.enable = true;
imports = [ ./desktop.nix ];
}


@ -0,0 +1,14 @@
{
pkgs,
lib,
config,
...
}:
{
imports = [ ./linux-desktop.nix ];
home.packages = lib.mkIf config.khscodes.desktop.enable [
pkgs.bitwarden-cli
pkgs.nerd-fonts.inconsolata
pkgs.google-chrome
];
}


@ -0,0 +1,9 @@
{
pkgs,
config,
lib,
...
}:
{
home.packages = lib.mkIf config.khscodes.desktop.enable [ pkgs.spotify ];
}


@ -1,4 +0,0 @@
{
snowfallorg.user.name = "khs";
home.stateVersion = "25.05";
}


@ -0,0 +1,35 @@
{ ... }:
{
disko-root-bios =
{
diskName,
device,
bootPartName ? "boot",
rootPartName ? "root",
}:
{
devices.disk = {
"${diskName}" = {
inherit device;
type = "disk";
content = {
type = "gpt";
partitions = {
${bootPartName} = {
size = "1M";
type = "EF02"; # for grub MBR
};
${rootPartName} = {
size = "100%";
content = {
type = "filesystem";
format = "ext4";
mountpoint = "/";
};
};
};
};
};
};
};
}
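
A usage sketch for the helper above, assuming snowfall-lib exposes it under `lib.khscodes` (the exact attribute path and the `/dev/sda` device name are illustrative assumptions):

```nix
# Sketch: wiring the helper's return value (an attrset with `devices.disk.<name>`)
# into the disko options for a host.
{ lib, ... }:
{
  disko = lib.khscodes.disko-root-bios {
    diskName = "main";
    device = "/dev/sda";
  };
}
```

This yields a GPT disk with the 1M BIOS boot partition grub needs for MBR booting and an ext4 root mounted at `/`.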


@ -0,0 +1,6 @@
{ lib, ... }:
{
options.khscodes.desktop = {
enable = lib.mkEnableOption "Generic setting other modules can use to enable/disable stuff when used on desktops";
};
}
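
A minimal sketch of a host's home configuration opting in, so that the `lib.mkIf config.khscodes.desktop.enable` guards used by the package modules earlier in this diff take effect (the file placement is an assumption):

```nix
# Sketch: enable the generic desktop flag for one home configuration.
{ ... }:
{
  khscodes.desktop.enable = true;
}
```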


@ -0,0 +1,113 @@
{
lib,
pkgs,
config,
system,
...
}:
let
isDarwin = lib.strings.hasSuffix "-darwin" system;
isLinux = lib.strings.hasSuffix "-linux" system;
shell = pkgs.bashInteractive;
shellArgs = [
"-c"
(lib.getExe pkgs.zellij)
];
in
{
config = lib.mkIf (config.khscodes.khs.enable && config.khscodes.desktop.enable) {
programs.alacritty = {
enable = true;
settings = {
terminal =
{
shell = {
program = "${shell}${shell.shellPath}";
args = shellArgs;
};
}
// lib.attrsets.optionalAttrs isDarwin {
osc52 = "CopyPaste";
};
scrolling = {
history = 100000;
};
window = {
padding = {
x = 2;
y = 0;
};
};
bell = {
animation = "EaseOutExpo";
};
hints = {
enabled = [
{
regex = "(ipfs:|ipns:|magnet:|mailto:|gemini:|gopher:|https:|http:|news:|file:|git:|ssh:|ftp:)[^\\u0000-\\u001F\\u007F-\\u009F<>\"\\\\s{-}\\\\^`]+";
command = if isLinux then "xdg-open" else "open";
post_processing = true;
mouse = {
enabled = true;
mods = "Control";
};
}
];
};
window = {
option_as_alt = "OnlyLeft";
};
env = {
TERM = "xterm-256color";
};
keyboard.bindings =
[
{
key = "T";
mods = "Control|Shift";
action = "SpawnNewInstance";
}
{
key = "W";
mods = "Control|Shift";
action = "Quit";
}
{
key = "Plus";
mods = "Control";
action = "IncreaseFontSize";
}
{
key = "Minus";
mods = "Control";
action = "DecreaseFontSize";
}
{
key = "Key0";
mods = "Control";
action = "ResetFontSize";
}
{
key = "C";
mods = "Super";
action = "None";
}
]
++ lib.lists.optionals isDarwin [
{
key = "N";
mods = "Command";
action = "SpawnNewInstance";
}
# Allow zellij to receive the keys, to make copy/pasting work in darwin
{
key = "C";
mods = "Command";
action = "ReceiveChar";
}
];
};
};
stylix.targets.alacritty.enable = true;
};
}


@ -0,0 +1 @@
{ }


@ -0,0 +1,17 @@
{ lib, config, ... }:
let
cfg = config.khscodes.khs;
in
{
options.khscodes.khs = {
enable = lib.mkEnableOption "Enables the settings for KHS";
};
config = lib.mkIf cfg.enable {
snowfallorg.user.name = "khs";
home.sessionVariables = {
EMAIL = "kaare@kaareskovgaard.net";
};
home.stateVersion = "25.05";
};
}


@ -0,0 +1,120 @@
{
config,
inputs,
lib,
...
}:
let
cosmicLib = import "${inputs.cosmic-manager}/lib" { inherit lib; };
mkRON = cosmicLib.mkRON;
accent = mkRON "optional" {
red = mkRON "raw" "0.3882353";
green = mkRON "raw" "0.8156863";
blue = mkRON "raw" "0.8745098";
};
active_hint = 1;
corner_radii = {
radius_0 = mkRON "tuple" [
0.0
0.0
0.0
0.0
];
radius_xs = mkRON "tuple" [
2.0
2.0
2.0
2.0
];
radius_s = mkRON "tuple" [
8.0
8.0
8.0
8.0
];
radius_m = mkRON "tuple" [
8.0
8.0
8.0
8.0
];
radius_l = mkRON "tuple" [
8.0
8.0
8.0
8.0
];
radius_xl = mkRON "tuple" [
8.0
8.0
8.0
8.0
];
};
gaps = mkRON "tuple" [
0
1
];
in
{
config = lib.mkIf (config.khscodes.desktop.enable && config.khscodes.khs.enable) {
wayland.desktopManager.cosmic = {
enable = true;
applets = {
# This is the "dock"
app-list = {
settings = {
enable_drag_source = false;
favorites = [
"com.system76.CosmicFiles"
"thunderbird"
"Google-chrome"
"Code"
"Alacritty"
"com.system76.CosmicSettings"
"Spotify"
"steam"
];
filter_top_levels = mkRON "optional" null;
};
};
};
appearance = {
toolkit = {
interface_density = mkRON "enum" "Standard";
monospace_font = {
family = config.stylix.fonts.monospace.name;
stretch = mkRON "enum" "Normal";
style = mkRON "enum" "Normal";
weight = mkRON "enum" "Normal";
};
};
theme = {
light = {
inherit
accent
active_hint
corner_radii
gaps
;
};
dark = {
inherit
accent
active_hint
corner_radii
gaps
;
};
};
};
compositor = {
active_hint = true;
autotile = true;
autotile_behavior = mkRON "enum" "PerWorkspace";
cursor_follows_focus = false;
focus_follows_cursor = false;
};
};
};
}


@ -0,0 +1,9 @@
{ config, lib, ... }:
{
config = lib.mkIf config.khscodes.khs.enable {
programs.bash = {
enable = true;
shellAliases = config.khscodes.khs.shell.aliases;
};
};
}


@ -0,0 +1,12 @@
{ lib, config, ... }:
{
config = lib.mkIf (config.khscodes.khs.enable && config.khscodes.desktop.enable) {
programs.carapace = {
enable = true;
enableBashIntegration = true;
enableZshIntegration = true;
enableFishIntegration = true;
enableNushellIntegration = true;
};
};
}


@ -0,0 +1,18 @@
{ lib, ... }:
{
options.khscodes.khs.shell = {
aliases = lib.mkOption {
type = lib.types.attrsOf lib.types.str;
description = "Shell aliases to be copied to different shells";
default = {
add = "git add";
commit = "git commit";
st = "git status";
push = "git push";
puff = "git puff";
pull = "git pull";
purr = "git purr";
};
};
};
}
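
Because the option is typed `attrsOf str`, per-host definitions merge with the defaults above rather than replacing them. A sketch of extending the set (the added aliases are illustrative):

```nix
# Sketch: host- or user-specific aliases merged on top of the shared defaults.
{ ... }:
{
  khscodes.khs.shell.aliases = {
    co = "git checkout";
    lg = "git log --oneline --graph";
  };
}
```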


@ -0,0 +1,10 @@
{ config, lib, ... }:
{
config = lib.mkIf config.khscodes.khs.enable {
programs.fish = {
enable = true;
shellAliases = config.khscodes.khs.shell.aliases;
shellInit = "set fish_greeting";
};
};
}


@ -0,0 +1,22 @@
{
lib,
config,
...
}:
let
aliases = config.khscodes.khs.shell.aliases;
in
{
config = lib.mkIf config.khscodes.khs.enable {
programs.nushell = {
enable = true;
shellAliases = aliases;
extraConfig = ''
$env.config = {
show_banner: false
}
'';
environmentVariables = config.home.sessionVariables;
};
};
}


@ -0,0 +1,158 @@
{ config, lib, ... }:
let
cfg = config.khscodes.khs.shell.oh-my-posh;
unicodeChar = code: builtins.fromJSON ''"\u${code}"'';
powerline_symbol = unicodeChar "e0b0";
rpowerline_symbol = unicodeChar "e0b2";
colors = config.lib.stylix.colors.withHashtag;
bright-yellow = config.lib.stylix.colors.yellow or config.lib.stylix.colors.base0a;
bright-yellow-hashtag = "#${bright-yellow}";
segment_style = {
style = "diamond";
leading_diamond = "";
trailing_diamond = powerline_symbol;
};
rsegment_style = {
style = "diamond";
leading_diamond = rpowerline_symbol;
trailing_diamond = "";
};
in
{
options.khscodes.khs.shell.oh-my-posh = {
enable = lib.mkEnableOption "Enables oh my posh khs setup";
};
config = lib.mkIf cfg.enable {
programs.oh-my-posh = {
enable = true;
enableBashIntegration = true;
enableZshIntegration = true;
enableFishIntegration = true;
enableNushellIntegration = true;
useTheme = null;
settings = {
"$schema" = "https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/main/themes/schema.json";
"console_title_template" = "{{ .PWD }} @ {{ .HostName }}";
"blocks" = [
{
"alignment" = "left";
"type" = "prompt";
"segments" = [
{
type = "shell";
style = "diamond";
leading_diamond = "";
trailing_diamond = powerline_symbol;
"template" = "{{ .Name }}";
"background" = colors.base00;
"foreground" = colors.blue;
"background_templates" = [ "{{ if eq .Name \"🐠\" }}${colors.base00}{{ end }}" ];
properties = {
mapped_shell_names = {
"nushell" = "nu";
"fish" = "🐟";
"bash" = "$_";
"zsh" = "%_";
};
};
}
(
{
"type" = "session";
"background" = colors.red;
"foreground" = colors.base07;
"template" = "{{ if .SSHSession }} {{ .HostName }} {{ end }}";
}
// segment_style
)
(
{
"type" = "path";
"background" = colors.blue;
"foreground" = colors.base00;
"template" = " {{ .Path }} ";
"properties" = {
"style" = "full";
};
}
// segment_style
)
(
{
"type" = "git";
"background" = colors.green;
"foreground" = colors.base00;
"background_templates" = [
"{{ if or (.Working.Changed) (.Staging.Changed) }}${colors.yellow}{{ end }}"
];
"template" =
"{{ if .Detached }} {{ trunc 7 .Commit.Sha }}{{ else }}{{ .UpstreamIcon }} {{ .Ref }}{{ end }}{{ if .Merge }}|merge{{ end }}{{ if .Rebase }}|rebase{{ end }}{{ if .CherryPick }}|cherrypick{{ end }}{{ if .Ahead}}{{ .Ahead }}{{ end }}{{ if .Behind }}{{ .Behind }}{{ end }}{{ if .Working.Changed}}{{ end }}{{ if .Staging.Changed }}{{ end }}{{ if .StashCount }} 󰺿{{ end }} ";
"properties" = {
"fetch_status" = true;
"untracked_modes" = {
"/Users/user/Projects/oh-my-posh/" = "no";
};
fetch_upstream_icon = true;
upstream = {
git_icon = "";
};
"source" = "cli";
};
}
// segment_style
)
];
}
{
alignment = "right";
type = "rprompt";
"segments" = [
(
{
type = "status";
background = colors.base01;
background_templates = [ "{{ if .Error }}${colors.red}{{ end }}" ];
foreground = colors.green;
foreground_templates = [ "{{ if .Error}}${bright-yellow-hashtag}{{ end }}" ];
template = " {{ if .Error }} {{ .Code }}{{ else }}{{ end }} ";
properties = {
always_enabled = true;
};
}
// rsegment_style
)
(
{
type = "executiontime";
foreground = colors.base00;
background = colors.yellow;
template = " {{ .FormattedMs }} ";
properties = {
always_enabled = false;
threshold = 3000;
style = "round";
};
}
// rsegment_style
)
# Rendering this screws up spacing of the beginning of the prompt
(
{
type = "nix-shell";
background = colors.blue;
foreground = colors.base00;
template = " {{ .Type }} ";
}
// rsegment_style
)
];
}
];
"terminal_background" = colors.base00;
"disable_notice" = true;
"final_space" = true;
"version" = 2;
};
};
};
}


@ -0,0 +1,505 @@
{
lib,
config,
pkgs,
system,
...
}:
let
isDarwin = lib.strings.hasSuffix "-darwin" system;
in
{
config = lib.mkIf (config.khscodes.khs.enable && config.khscodes.desktop.enable) {
# In built styles look off to me. And when alacritty is themed,
# this appears to not be needed.
stylix.targets.zellij.enable = false;
programs.zellij = {
enable = true;
settings = {
default_shell = lib.getExe pkgs.fish;
copy_on_select = false;
mouse_mode = true;
show_startup_tips = false;
scroll_buffer_size = 100000;
support_kitty_keyboard_protocol = true;
"keybinds clear-defaults=true" = {
normal = lib.attrsets.optionalAttrs isDarwin {
"bind \"Super c\"" = {
Copy = [ ];
};
};
locked = {
"bind \"Ctrl g\"" = {
SwitchToMode = "Normal";
};
};
resize = {
"bind \"Ctrl n\"" = {
SwitchToMode = "Normal";
};
"bind \"h\" \"Left\"" = {
Resize = "Increase Left";
};
"bind \"j\" \"Down\"" = {
Resize = "Increase Down";
};
"bind \"k\" \"Up\"" = {
Resize = "Increase Up";
};
"bind \"l\" \"Right\"" = {
Resize = "Increase Right";
};
"bind \"H\"" = {
Resize = "Decrease Left";
};
"bind \"J\"" = {
Resize = "Decrease Down";
};
"bind \"K\"" = {
Resize = "Decrease Up";
};
"bind \"L\"" = {
Resize = "Decrease Right";
};
"bind \"=\" \"+\"" = {
Resize = "Increase";
};
"bind \"-\"" = {
Resize = "Decrease";
};
};
pane = {
"bind \"Ctrl\ p\"" = {
SwitchToMode = "Normal";
};
"bind \"h\" \"Left\"" = {
MoveFocus = "Left";
};
"bind \"l\" \"Right\"" = {
MoveFocus = "Right";
};
"bind \"j\" \"Down\"" = {
MoveFocus = "Down";
};
"bind \"k\" \"Up\"" = {
MoveFocus = "Up";
};
"bind \"p\"" = {
SwitchFocus = [ ];
};
"bind \"n\"" = {
NewPane = [ ];
SwitchToMode = "Normal";
};
"bind \"d\"" = {
NewPane = "Down";
SwitchToMode = "Normal";
};
"bind \"r\"" = {
NewPane = "Right";
SwitchToMode = "Normal";
};
"bind \"x\"" = {
CloseFocus = [ ];
SwitchToMode = "Normal";
};
"bind \"f\"" = {
ToggleFocusFullscreen = [ ];
SwitchToMode = "Normal";
};
"bind \"z\"" = {
TogglePaneFrames = [ ];
SwitchToMode = "Normal";
};
"bind \"w\"" = {
ToggleFloatingPanes = [ ];
SwitchToMode = "Normal";
};
"bind \"e\"" = {
TogglePaneEmbedOrFloating = [ ];
SwitchToMode = "Normal";
};
"bind \"c\"" = {
SwitchToMode = "RenamePane";
PaneNameInput = 0;
};
};
move = {
"bind \"Ctrl h\"" = {
SwitchToMode = "Normal";
};
"bind \"n\" \"Tab\"" = {
MovePane = [ ];
};
"bind \"p\"" = {
MovePaneBackwards = [ ];
};
"bind \"h\" \"Left\"" = {
MovePane = "Left";
};
"bind \"j\" \"Down\"" = {
MovePane = "Down";
};
"bind \"k\" \"Up\"" = {
MovePane = "Up";
};
"bind \"l\" \"Right\"" = {
MovePane = "Right";
};
};
tab = {
"bind \"Alt t\"" = {
SwitchToMode = "Normal";
};
"bind \"r\"" = {
SwitchToMode = "RenameTab";
TabNameInput = 0;
};
"bind \"h\" \"Left\" \"Up\" \"k\"" = {
GoToPreviousTab = [ ];
};
"bind \"l\" \"Right\" \"Down\" \"j\"" = {
GoToNextTab = [ ];
};
"bind \"n\"" = {
NewTab = [ ];
SwitchToMode = "Normal";
};
"bind \"x\"" = {
CloseTab = [ ];
SwitchToMode = "Normal";
};
"bind \"s\"" = {
ToggleActiveSyncTab = [ ];
SwitchToMode = "Normal";
};
"bind \"1\"" = {
GoToTab = 1;
SwitchToMode = "Normal";
};
"bind \"2\"" = {
GoToTab = 2;
SwitchToMode = "Normal";
};
"bind \"3\"" = {
GoToTab = 3;
SwitchToMode = "Normal";
};
"bind \"4\"" = {
GoToTab = 4;
SwitchToMode = "Normal";
};
"bind \"5\"" = {
GoToTab = 5;
SwitchToMode = "Normal";
};
"bind \"6\"" = {
GoToTab = 6;
SwitchToMode = "Normal";
};
"bind \"7\"" = {
GoToTab = 7;
SwitchToMode = "Normal";
};
"bind \"8\"" = {
GoToTab = 8;
SwitchToMode = "Normal";
};
"bind \"9\"" = {
GoToTab = 9;
SwitchToMode = "Normal";
};
"bind \"Tab\"" = {
ToggleTab = [ ];
};
};
scroll = {
"bind \"Ctrl s\"" = {
SwitchToMode = "Normal";
};
"bind \"e\"" = {
EditScrollback = [ ];
SwitchToMode = "Normal";
};
"bind \"s\"" = {
SwitchToMode = "EnterSearch";
SearchInput = 0;
};
"bind \"Ctrl c\"" = {
ScrollToBottom = [ ];
SwitchToMode = "Normal";
};
"bind \"j\" \"Down\"" = {
ScrollDown = [ ];
};
"bind \"k\" \"Up\"" = {
ScrollUp = [ ];
};
"bind \"Ctrl f\" \"PageDown\" \"Right\" \"l\"" = {
PageScrollDown = [ ];
};
"bind \"Ctrl b\" \"PageUp\" \"Left\" \"h\"" = {
PageScrollUp = [ ];
};
"bind \"d\"" = {
HalfPageScrollDown = [ ];
};
"bind \"u\"" = {
HalfPageScrollUp = [ ];
};
};
search = {
"bind \"Ctrl s\"" = {
SwitchToMode = "Normal";
};
"bind \"Ctrl c\"" = {
ScrollToBottom = [ ];
SwitchToMode = "Normal";
};
"bind \"j\" \"Down\"" = {
ScrollDown = [ ];
};
"bind \"k\" \"Up\"" = {
ScrollUp = [ ];
};
"bind \"Ctrl f\" \"PageDown\" \"Right\" \"l\"" = {
PageScrollDown = [ ];
};
"bind \"Ctrl b\" \"PageUp\" \"Left\" \"h\"" = {
PageScrollUp = [ ];
};
"bind \"d\"" = {
HalfPageScrollDown = [ ];
};
"bind \"u\"" = {
HalfPageScrollUp = [ ];
};
"bind \"n\"" = {
Search = "down";
};
"bind \"p\"" = {
Search = "up";
};
"bind \"c\"" = {
SearchToggleOption = "CaseSensitivity";
};
"bind \"w\"" = {
SearchToggleOption = "Wrap";
};
"bind \"o\"" = {
SearchToggleOption = "WholeWord";
};
};
entersearch = {
"bind \"Ctrl c\" \"Esc\"" = {
SwitchToMode = "Scroll";
};
"bind \"Enter\"" = {
SwitchToMode = "Search";
};
};
renametab = {
"bind \"Ctrl c\"" = {
SwitchToMode = "Normal";
};
"bind \"Esc\"" = {
UndoRenameTab = [ ];
SwitchToMode = "Tab";
};
};
renamepane = {
"bind \"Ctrl c\"" = {
SwitchToMode = "Normal";
};
"bind \"Esc\"" = {
UndoRenamePane = [ ];
SwitchToMode = "Pane";
};
};
session = {
"unbind \"Ctrl o\"" = [ ];
"bind \"Ctrl q\"" = {
SwitchToMode = "Normal";
};
"bind \"Ctrl s\"" = {
SwitchToMode = "Scroll";
};
"bind \"d\"" = {
Detach = [ ];
};
};
tmux = {
"bind \"[\"" = {
SwitchToMode = "Scroll";
};
"bind \"Ctrl b\"" = {
Write = 2;
SwitchToMode = "Normal";
};
"bind \"\\\"\"" = {
NewPane = "Down";
SwitchToMode = "Normal";
};
"bind \"%\"" = {
NewPane = "Right";
SwitchToMode = "Normal";
};
"bind \"z\"" = {
ToggleFocusFullscreen = [ ];
SwitchToMode = "Normal";
};
"bind \"c\"" = {
NewTab = [ ];
SwitchToMode = "Normal";
};
"bind \",\"" = {
SwitchToMode = "RenameTab";
};
"bind \"p\"" = {
GoToPreviousTab = [ ];
SwitchToMode = "Normal";
};
"bind \"n\"" = {
GoToNextTab = [ ];
SwitchToMode = "Normal";
};
"bind \"Left\"" = {
MoveFocus = "Left";
SwitchToMode = "Normal";
};
"bind \"Right\"" = {
MoveFocus = "Right";
SwitchToMode = "Normal";
};
"bind \"Down\"" = {
MoveFocus = "Down";
SwitchToMode = "Normal";
};
"bind \"Up\"" = {
MoveFocus = "Up";
SwitchToMode = "Normal";
};
"bind \"h\"" = {
MoveFocus = "Left";
SwitchToMode = "Normal";
};
"bind \"l\"" = {
MoveFocus = "Right";
SwitchToMode = "Normal";
};
"bind \"j\"" = {
MoveFocus = "Down";
SwitchToMode = "Normal";
};
"bind \"k\"" = {
MoveFocus = "Up";
SwitchToMode = "Normal";
};
"bind \"o\"" = {
FocusNextPane = [ ];
};
"bind \"d\"" = {
Detach = [ ];
};
"bind \"Space\"" = {
NextSwapLayout = [ ];
};
"bind \"x\"" = {
CloseFocus = [ ];
SwitchToMode = "Normal";
};
};
"shared_except \"locked\"" = {
"bind \"Ctrl g\"" = {
SwitchToMode = "Locked";
};
"bind \"Alt n\"" = {
NewPane = [ ];
};
"bind \"Alt h\" \"Alt Left\"" = {
MoveFocusOrTab = "Left";
};
"bind \"Alt l\" \"Alt Right\"" = {
MoveFocusOrTab = "Right";
};
"bind \"Alt j\" \"Alt Down\"" = {
MoveFocus = "Down";
};
"bind \"Alt k\" \"Alt Up\"" = {
MoveFocus = "Up";
};
"bind \"Alt =\" \"Alt +\"" = {
Resize = "Increase";
};
"bind \"Alt -\"" = {
Resize = "Decrease";
};
"bind \"Alt [\"" = {
PreviousSwapLayout = [ ];
};
"bind \"Alt ]\"" = {
NextSwapLayout = [ ];
};
};
"shared_except \"normal\" \"locked\"" = {
"bind \"Enter\" \"Esc\"" = {
SwitchToMode = "Normal";
};
};
"shared_except \"pane\" \"locked\"" = {
"bind \"Ctrl p\"" = {
SwitchToMode = "Pane";
};
};
"shared_except \"resize\" \"locked\"" = {
"bind \"Ctrl n\"" = {
SwitchToMode = "Resize";
};
};
"shared_except \"scroll\" \"locked\"" = {
"bind \"Ctrl s\"" = {
SwitchToMode = "Scroll";
};
};
"shared_except \"session\" \"locked\"" = {
unbind = "Ctrl o";
"bind \"Ctrl q\"" = {
SwitchToMode = "Session";
};
};
"shared_except \"tab\" \"locked\"" = {
"bind \"Alt t\"" = {
SwitchToMode = "Tab";
};
};
"shared_except \"move\" \"locked\"" = {
"bind \"Ctrl h\"" = {
SwitchToMode = "Move";
};
};
"shared_except \"tmux\" \"locked\"" = {
"bind \"Ctrl b\"" = {
SwitchToMode = "Tmux";
};
};
};
plugins = {
tab-bar = {
path = "tab-bar";
};
status-bar = {
path = "status-bar";
};
strider = {
path = "strider";
};
compact-bar = {
path = "compact-bar";
};
};
simplified-ui = false;
pane_frames = false;
};
};
};
}


@ -0,0 +1,9 @@
{ config, lib, ... }:
{
config = lib.mkIf config.khscodes.khs.enable {
programs.zsh = {
enable = true;
shellAliases = config.khscodes.khs.shell.aliases;
};
};
}


@ -1,29 +0,0 @@
{
config,
lib,
...
}:
let
cfg = config.khscodes.fqdn;
in
{
options.khscodes.fqdn = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = "Sets the FQDN of the machine. This is a prerequisite for many modules to be used";
};
config = lib.mkIf (cfg != null) (
let
hostname = builtins.head (lib.strings.splitString "." cfg);
domain = if hostname == cfg then null else (lib.strings.removePrefix "${hostname}." cfg);
in
{
networking.hostName = lib.mkForce hostname;
networking.domain = lib.mkForce domain;
boot.kernel.sysctl = {
"kernel.hostname" = cfg;
};
}
);
}
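
This module is removed in favor of `khscodes.networking.fqdn` and `khscodes.networking.aliases`, which the instance modules below now consume. The replacement module itself is not part of this capture; the following is a hypothetical reconstruction based only on how those options are used below:

```nix
# Hypothetical reconstruction; the real khscodes.networking module is not shown in this diff.
{ lib, ... }:
{
  options.khscodes.networking = {
    fqdn = lib.mkOption {
      type = lib.types.nullOr lib.types.str;
      default = null;
      description = "FQDN of the machine, consumed by the cloud instance modules.";
    };
    aliases = lib.mkOption {
      type = lib.types.listOf lib.types.str;
      default = [ ];
      description = "Additional DNS names that should resolve to this machine.";
    };
  };
}
```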


@ -6,7 +6,8 @@
}: }:
let let
cfg = config.khscodes.infrastructure.hetzner-instance; cfg = config.khscodes.infrastructure.hetzner-instance;
fqdn = config.khscodes.fqdn; fqdn = config.khscodes.networking.fqdn;
provisioningUserData = config.khscodes.infrastructure.provisioning.instanceUserData;
firewallTcpRules = lib.lists.map (p: { firewallTcpRules = lib.lists.map (p: {
direction = "in"; direction = "in";
protocol = "tcp"; protocol = "tcp";
@ -52,7 +53,7 @@ in
dnsNames = lib.mkOption { dnsNames = lib.mkOption {
type = lib.types.listOf lib.types.str; type = lib.types.listOf lib.types.str;
description = "DNS names for the server"; description = "DNS names for the server";
default = [ fqdn ]; default = lib.lists.unique ([ fqdn ] ++ config.khscodes.networking.aliases);
}; };
bucket = { bucket = {
key = lib.mkOption { key = lib.mkOption {
@ -61,14 +62,6 @@ in
default = "${fqdn}.tfstate"; default = "${fqdn}.tfstate";
}; };
}; };
secretsSource = lib.mkOption {
type = lib.types.enum [
"bitwarden"
"vault"
];
description = "Whether to load opentofu secrets from Bitwarden or Vault";
default = "vault";
};
datacenter = lib.mkOption { datacenter = lib.mkOption {
type = lib.types.str; type = lib.types.str;
description = "The Hetzner datacenter to create a server in"; description = "The Hetzner datacenter to create a server in";
@ -158,26 +151,23 @@ in
inherit labels; inherit labels;
name = fqdn; name = fqdn;
initial_image = "debian-12"; initial_image = "debian-12";
rdns = fqdn; rdns = lib.mkIf cfg.mapRdns fqdn;
ssh_keys = [ config.khscodes.hcloud.output.data.ssh_key.khs.id ]; ssh_keys = [ config.khscodes.hcloud.output.data.ssh_key.khs.id ];
user_data = provisioningUserData;
}; };
khscodes.cloudflare = { khscodes.cloudflare = {
enable = true; enable = true;
dns = { dns = {
enable = true; enable = true;
zone_name = tldFromFqdn fqdn; zone_name = tldFromFqdn fqdn;
aRecords = [ aRecords = lib.lists.map (d: {
{ fqdn = d;
inherit fqdn; content = config.khscodes.hcloud.output.server.compute.ipv4_address;
content = config.khscodes.hcloud.output.server.compute.ipv4_address; }) cfg.dnsNames;
} aaaaRecords = lib.lists.map (d: {
]; fqdn = d;
aaaaRecords = [ content = config.khscodes.hcloud.output.server.compute.ipv6_address;
{ }) cfg.dnsNames;
inherit fqdn;
content = config.khscodes.hcloud.output.server.compute.ipv6_address;
}
];
}; };
}; };
resource.hcloud_firewall.fw = lib.mkIf firewallEnable { resource.hcloud_firewall.fw = lib.mkIf firewallEnable {
@ -205,19 +195,14 @@ in
{ {
assertions = [ assertions = [
{ {
assertion = config.khscodes.fqdn != null; assertion = config.khscodes.networking.fqdn != null;
message = "Must set config.khscodes.fqdn when using opentofu"; message = "Must set config.khscodes.networking.fqdn when using opentofu";
} }
]; ];
khscodes.services.read-vault-auth-from-userdata.url = "http://169.254.169.254/latest/user-data";
khscodes.infrastructure.provisioning.pre = { khscodes.infrastructure.provisioning.pre = {
modules = modules; modules = modules;
secretsSource = cfg.secretsSource;
endpoints = [
"aws"
"cloudflare"
"hcloud"
];
}; };
} }
); );

View file

@ -6,7 +6,8 @@
}: }:
let let
cfg = config.khscodes.infrastructure.khs-openstack-instance; cfg = config.khscodes.infrastructure.khs-openstack-instance;
fqdn = config.khscodes.fqdn; fqdn = config.khscodes.networking.fqdn;
provisioningUserData = config.khscodes.infrastructure.provisioning.instanceUserData;
firewallTcpRules = lib.lists.flatten ( firewallTcpRules = lib.lists.flatten (
lib.lists.map (p: [ lib.lists.map (p: [
{ {
@ -74,7 +75,9 @@ in
dnsNames = lib.mkOption { dnsNames = lib.mkOption {
type = lib.types.listOf lib.types.str; type = lib.types.listOf lib.types.str;
description = "DNS names for the instance"; description = "DNS names for the instance";
default = [ fqdn ]; default = lib.lists.unique (
[ config.khscodes.networking.fqdn ] ++ config.khscodes.networking.aliases
);
}; };
bucket = { bucket = {
key = lib.mkOption { key = lib.mkOption {
@ -83,14 +86,6 @@ in
default = "${fqdn}.tfstate"; default = "${fqdn}.tfstate";
}; };
}; };
secretsSource = lib.mkOption {
type = lib.types.enum [
"bitwarden"
"vault"
];
description = "Whether to load opentofu secrets from Bitwarden or Vault";
default = "vault";
};
flavor = lib.mkOption { flavor = lib.mkOption {
type = lib.types.nullOr lib.types.str; type = lib.types.nullOr lib.types.str;
description = "The server type to create"; description = "The server type to create";
@ -101,65 +96,17 @@ in
description = "SSH key for the server (this only applies to the initial creation, deploying NixOS will render this key useless). Changing this will recreate the instance"; description = "SSH key for the server (this only applies to the initial creation, deploying NixOS will render this key useless). Changing this will recreate the instance";
default = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCqY0FHnWFKfLG2yfgr4qka5sR9CK+EMAhzlHUkaQyWHTKD+G0/vC/fNPyL1VV3Dxc/ajxGuPzVE+mBMoyxazL3EtuCDOVvHJ5CR+MUSEckg/DDwcGHqy6rC8BvVVpTAVL04ByQdwFnpE1qNSBaQLkxaFVdtriGKkgMkc7+UNeYX/bv7yn+APqfP1a3xr6wdkSSdO8x4N2jsSygOIMx10hLyCV4Ueu7Kp8Ww4rGY8j5o7lKJhbgfItBfSOuQHdppHVF/GKYRhdnK6Y2fZVYbhq4KipUtclbZ6O/VYd8/sOO98+LMm7cOX+K35PQjUpYgcoNy5+Sw3CNS/NHn4JvOtTaUEYP7fK6c9LhMULOO3T7Cm6TMdiFjUKHkyG+s2Mu/LXJJoilw571zwuh6chkeitW8+Ht7k0aPV96kNEvTdoXwLhBifVEaChlAsLAzSUjUq+YYCiXVk0VIXCZQWKj8LoVNTmaqDksWwbcT64fw/FpVC0N18WHbKcFUEIW/O4spJMa30CQwf9FeqpoWoaF1oRClCSDPvX0AauCu0JcmRinz1/JmlXljnXWbSfm20/V+WyvktlI0wTD0cdpNuSasT9vS77YfJ8nutcWWZKSkCj4R4uHeCNpDTX5YXzapy7FxpM9ANCXLIvoGX7Yafba2Po+er7SSsUIY1AsnBBr8ZoDVw=="; default = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCqY0FHnWFKfLG2yfgr4qka5sR9CK+EMAhzlHUkaQyWHTKD+G0/vC/fNPyL1VV3Dxc/ajxGuPzVE+mBMoyxazL3EtuCDOVvHJ5CR+MUSEckg/DDwcGHqy6rC8BvVVpTAVL04ByQdwFnpE1qNSBaQLkxaFVdtriGKkgMkc7+UNeYX/bv7yn+APqfP1a3xr6wdkSSdO8x4N2jsSygOIMx10hLyCV4Ueu7Kp8Ww4rGY8j5o7lKJhbgfItBfSOuQHdppHVF/GKYRhdnK6Y2fZVYbhq4KipUtclbZ6O/VYd8/sOO98+LMm7cOX+K35PQjUpYgcoNy5+Sw3CNS/NHn4JvOtTaUEYP7fK6c9LhMULOO3T7Cm6TMdiFjUKHkyG+s2Mu/LXJJoilw571zwuh6chkeitW8+Ht7k0aPV96kNEvTdoXwLhBifVEaChlAsLAzSUjUq+YYCiXVk0VIXCZQWKj8LoVNTmaqDksWwbcT64fw/FpVC0N18WHbKcFUEIW/O4spJMa30CQwf9FeqpoWoaF1oRClCSDPvX0AauCu0JcmRinz1/JmlXljnXWbSfm20/V+WyvktlI0wTD0cdpNuSasT9vS77YfJ8nutcWWZKSkCj4R4uHeCNpDTX5YXzapy7FxpM9ANCXLIvoGX7Yafba2Po+er7SSsUIY1AsnBBr8ZoDVw==";
}; };
dns = {
mapIpv4Address = lib.mkOption {
type = lib.types.bool;
description = "Also add the IPv4 address to DNS";
default = false;
};
};
extraFirewallRules = lib.mkOption { extraFirewallRules = lib.mkOption {
type = lib.types.listOf lib.types.attrs; type = lib.types.listOf lib.types.attrs;
description = "Extra firewall rules added to the instance"; description = "Extra firewall rules added to the instance";
default = [ default = [ ];
{
direction = "egress";
ethertype = "IPv4";
protocol = "tcp";
port = 80;
remote_subnet = "0.0.0.0/0";
}
{
direction = "egress";
ethertype = "IPv6";
protocol = "tcp";
port = 80;
remote_subnet = "::/0";
}
{
direction = "egress";
ethertype = "IPv4";
protocol = "tcp";
port = 443;
remote_subnet = "0.0.0.0/0";
}
{
direction = "egress";
ethertype = "IPv6";
protocol = "tcp";
port = 443;
remote_subnet = "::/0";
}
{
direction = "egress";
ethertype = "IPv4";
protocol = "udp";
port = 443;
remote_subnet = "0.0.0.0/0";
}
{
direction = "egress";
ethertype = "IPv6";
protocol = "udp";
port = 443;
remote_subnet = "::/0";
}
{
direction = "egress";
ethertype = "IPv4";
protocol = "icmp";
remote_subnet = "0.0.0.0/0";
}
{
direction = "egress";
ethertype = "IPv6";
protocol = "icmp";
remote_subnet = "::/0";
}
];
}; };
}; };
config = lib.mkIf cfg.enable ( config = lib.mkIf cfg.enable (
@ -188,6 +135,7 @@ in
flavor = cfg.flavor; flavor = cfg.flavor;
ssh_public_key = cfg.ssh_key; ssh_public_key = cfg.ssh_key;
firewall_rules = firewallRules; firewall_rules = firewallRules;
user_data = provisioningUserData;
}; };
khscodes.unifi.enable = true; khscodes.unifi.enable = true;
khscodes.unifi.static_route.compute = { khscodes.unifi.static_route.compute = {
@ -201,18 +149,16 @@ in
dns = { dns = {
enable = true; enable = true;
zone_name = tldFromFqdn fqdn; zone_name = tldFromFqdn fqdn;
aRecords = [ aRecords = lib.mkIf cfg.dns.mapIpv4Address (
{ lib.lists.map (d: {
inherit fqdn; fqdn = d;
content = config.khscodes.openstack.output.compute_instance.compute.ipv4_address; content = config.khscodes.openstack.output.compute_instance.compute.ipv4_address;
} }) cfg.dnsNames
]; );
aaaaRecords = [ aaaaRecords = lib.lists.map (d: {
{ fqdn = d;
inherit fqdn; content = config.khscodes.openstack.output.compute_instance.compute.ipv6_address;
content = config.khscodes.openstack.output.compute_instance.compute.ipv6_address; }) cfg.dnsNames;
}
];
}; };
}; };
output.ipv4_address = { output.ipv4_address = {
@ -232,21 +178,23 @@ in
{ {
assertions = [ assertions = [
{ {
assertion = config.khscodes.fqdn != null; assertion = config.khscodes.networking.fqdn != null;
message = "Must set config.khscodes.fqdn when using opentofu"; message = "Must set config.khscodes.networking.fqdn when using opentofu";
} }
]; ];
khscodes.services.openssh = {
enable = true;
hostCertificate = {
enable = true;
};
};
khscodes.services.read-vault-auth-from-userdata.url = "http://169.254.169.254/openstack/2012-08-10/user_data";
# khs openstack hosted servers cannot use http-01 challenges (or maybe they can through ipv6?),
# so enable dns-01.
khscodes.security.acme.dns01Enabled = true;
khscodes.infrastructure.provisioning = { khscodes.infrastructure.provisioning = {
pre = { pre = {
modules = modules; modules = modules;
secretsSource = cfg.secretsSource;
endpoints = [
"aws"
"cloudflare"
"openstack"
"unifi"
];
}; };
preImageUsername = "debian"; preImageUsername = "debian";
}; };

View file

@ -0,0 +1,9 @@
{ lib, ... }:
{
options.khscodes.infrastructure.openbao = {
domain = lib.mkOption {
type = lib.types.str;
default = "vault.kaareskovgaard.net";
};
};
}

View file

@ -21,7 +21,73 @@ let
description = "Where to get the secrets for the provisioning from"; description = "Where to get the secrets for the provisioning from";
default = "vault"; default = "vault";
}; };
endpoints = lib.mkOption { };
usesEndpoint =
search: endpoint: config:
if lib.strings.hasInfix search (builtins.readFile config) then [ endpoint ] else [ ];
endpointsMaps = [
{
search = "cloudflare/cloudflare";
endpoint = "cloudflare";
}
{
search = "terraform-provider-openstack/openstack";
endpoint = "openstack";
}
{
search = "paultyng/unifi";
endpoint = "unifi";
}
{
search = "hashicorp/vault";
endpoint = "vault";
}
{
search = ".r2.cloudflarestorage.com";
endpoint = "aws";
}
];
endpointsUsed =
config:
if config == null then
[ ]
else
lib.lists.flatten (lib.lists.map (c: usesEndpoint c.search c.endpoint config) endpointsMaps);
preConfig =
if lib.lists.length cfg.pre.modules > 0 then
inputs.terranix.lib.terranixConfiguration {
system = pkgs.hostPlatform.system;
modules = cfg.pre.modules;
extraArgs = { inherit lib inputs; };
}
else
null;
preEndpoints = endpointsUsed preConfig;
postConfig =
if lib.lists.length cfg.post.modules > 0 then
inputs.terranix.lib.terranixConfiguration {
system = pkgs.hostPlatform.system;
modules = cfg.post.modules;
extraArgs = { inherit lib inputs; };
}
else
null;
postEndpoints = endpointsUsed postConfig;
in
{
options.khscodes.infrastructure.provisioning = {
pre = provisioning;
post = provisioning;
instanceUserData = lib.mkOption {
type = lib.types.str;
description = "User data that should be added to the instance during provisioning";
default = "";
};
preConfig = lib.mkOption {
type = lib.types.nullOr lib.types.path;
description = "The generated config for the pre provisioning, if any was specified";
};
preEndpoints = lib.mkOption {
type = lib.types.listOf ( type = lib.types.listOf (
lib.types.enum [ lib.types.enum [
"openstack" "openstack"
@ -29,21 +95,13 @@ let
"unifi" "unifi"
"hcloud" "hcloud"
"cloudflare" "cloudflare"
"vault"
"authentik"
] ]
); );
description = "Needed endpoints to be used during provisioning"; description = "Needed endpoints to be used during provisioning";
default = [ ]; default = [ ];
}; };
};
in
{
options.khscodes.infrastructure.provisioning = {
pre = provisioning;
post = provisioning;
preConfig = lib.mkOption {
type = lib.types.nullOr lib.types.path;
description = "The generated config for the pre provisioning, if any was specified";
};
preImageUsername = lib.mkOption { preImageUsername = lib.mkOption {
type = lib.types.str; type = lib.types.str;
description = "The username for the image being deployed before being swapped for NixOS"; description = "The username for the image being deployed before being swapped for NixOS";
@ -53,24 +111,27 @@ in
type = lib.types.nullOr lib.types.path; type = lib.types.nullOr lib.types.path;
description = "The generated config for the post provisioning, if any was specified"; description = "The generated config for the post provisioning, if any was specified";
}; };
postEndpoints = lib.mkOption {
type = lib.types.listOf (
lib.types.enum [
"openstack"
"aws"
"unifi"
"hcloud"
"cloudflare"
"vault"
"authentik"
]
);
description = "Needed endpoints to be used during provisioning";
default = [ ];
};
}; };
config = { config = {
khscodes.infrastructure.provisioning.preConfig = khscodes.infrastructure.provisioning.preConfig = preConfig;
if lib.lists.length cfg.pre.modules > 0 then khscodes.infrastructure.provisioning.preEndpoints = preEndpoints;
inputs.terranix.lib.terranixConfiguration { khscodes.infrastructure.provisioning.postConfig = postConfig;
system = pkgs.hostPlatform.system; khscodes.infrastructure.provisioning.postEndpoints = postEndpoints;
modules = cfg.pre.modules;
}
else
null;
khscodes.infrastructure.provisioning.postConfig =
if lib.lists.length cfg.post.modules > 0 then
inputs.terranix.lib.terranixConfiguration {
system = pkgs.hostPlatform.system;
modules = cfg.post.modules;
}
else
null;
}; };
} }
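A rough, hypothetical sketch of the substring-based endpoint detection above, assuming the nixpkgs lib is in scope; the rendered config below is made up, only the search string comes from endpointsMaps:

# Hypothetical illustration of how endpointsUsed infers providers from the rendered config.
let
  # Stand-in for the terranix-rendered pre-provisioning config file.
  renderedConfig = builtins.toFile "config.tf.json" ''
    { "terraform": { "required_providers": { "cloudflare": { "source": "cloudflare/cloudflare" } } } }
  '';
  usesEndpoint =
    search: endpoint: config:
    if lib.strings.hasInfix search (builtins.readFile config) then [ endpoint ] else [ ];
in
usesEndpoint "cloudflare/cloudflare" "cloudflare" renderedConfig
# => [ "cloudflare" ], so preEndpoints picks up the Cloudflare endpoint without listing it by hand.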

View file

@ -0,0 +1,83 @@
{
config,
lib,
...
}:
let
cfg = config.khscodes.infrastructure.vault-loki-sender;
fqdn = config.khscodes.networking.fqdn;
vaultRoleName = config.khscodes.infrastructure.vault-server-approle.role_name;
client_key = "/var/lib/alloy/loki.key";
client_cert = "/var/lib/alloy/loki.cert";
in
{
options.khscodes.infrastructure.vault-loki-sender = {
enable = lib.mkEnableOption "Configures the server approle to allow sending data to loki";
terranixBackendName = lib.mkOption {
type = lib.types.str;
description = "This should only be configured for the server hosting loki, to allow setting up dependencies in terraform";
default = "loki-mtls";
};
};
config = lib.mkIf cfg.enable {
khscodes.infrastructure.vault-server-approle = {
enable = true;
policy = {
"loki-mtls" = {
capabilities = [ "read" ];
};
"loki-mtls/issue/${fqdn}" = {
capabilities = [
"create"
"update"
];
};
};
stageModules = [
(
{ ... }:
{
khscodes.vault.pki_secret_backend_role."${vaultRoleName}-loki" = {
name = vaultRoleName;
backend = cfg.terranixBackendName;
allowed_domains = [ fqdn ];
allow_bare_domains = true;
enforce_hostnames = true;
server_flag = false;
client_flag = true;
};
}
)
];
};
khscodes.services.vault-agent.templates = [
{
contents = ''
{{- with pkiCert "loki-mtls/issue/${fqdn}" "common_name=${fqdn}" -}}
{{ .Key }}
{{ .Cert }}
{{ .CA }}
{{ .Key | writeToFile "${client_key}" "${config.khscodes.services.alloy.user}" "${config.khscodes.services.alloy.group}" "0600" }}
{{ .Cert | writeToFile "${client_cert}" "${config.khscodes.services.alloy.user}" "${config.khscodes.services.alloy.group}" "0644" }}
{{- end -}}
'';
destination = "/var/lib/alloy/cache.key";
owner = "alloy";
group = "alloy";
perms = "0600";
reloadOrRestartUnits = [ "alloy.service" ];
}
];
khscodes.services.alloy = {
enable = true;
environment = {
LOKI_CLIENT_KEY = client_key;
LOKI_CLIENT_CERT = client_cert;
};
};
environment.etc."alloy/loki.alloy" = {
source = ./loki.alloy;
};
};
}

View file

@ -0,0 +1,78 @@
// Collect logs from systemd journal for node_exporter integration
loki.source.journal "logs_integrations_integrations_node_exporter_journal_scrape" {
// Only collect logs from the last 24 hours
max_age = "24h0m0s"
// Apply relabeling rules to the logs
relabel_rules = discovery.relabel.logs_integrations_integrations_node_exporter_journal_scrape.rules
// Send logs to the local Loki instance
forward_to = [loki.write.local.receiver]
}
// Define which log files to collect for node_exporter
local.file_match "logs_integrations_integrations_node_exporter_direct_scrape" {
path_targets = [{
// Target localhost for log collection
__address__ = "localhost",
// Collect standard system logs
__path__ = "/var/log/{syslog,messages,*.log}",
// Add instance label with hostname
instance = constants.hostname,
// Add job label for logs
job = "integrations/node_exporter",
}]
}
// Define relabeling rules for systemd journal logs
discovery.relabel "logs_integrations_integrations_node_exporter_journal_scrape" {
targets = []
rule {
// Extract systemd unit information into a label
source_labels = ["__journal__systemd_unit"]
target_label = "unit"
}
rule {
// Extract boot ID information into a label
source_labels = ["__journal__boot_id"]
target_label = "boot_id"
}
rule {
// Extract transport information into a label
source_labels = ["__journal__transport"]
target_label = "transport"
}
rule {
// Extract log priority into a level label
source_labels = ["__journal_priority_keyword"]
target_label = "level"
}
rule {
// Set the instance label to the hostname of the machine
target_label = "instance"
replacement = constants.hostname
}
}
// Collect logs from files for node_exporter
loki.source.file "logs_integrations_integrations_node_exporter_direct_scrape" {
// Use targets defined in local.file_match
targets = local.file_match.logs_integrations_integrations_node_exporter_direct_scrape.targets
// Send logs to the local Loki instance
forward_to = [loki.write.local.receiver]
}
// Define where to send logs for storage
loki.write "local" {
endpoint {
// Send logs to the central Loki instance at loki.kaareskovgaard.net
url = "https://loki.kaareskovgaard.net/loki/api/v1/push"
tls_config {
cert_file = sys.env("LOKI_CLIENT_CERT")
key_file = sys.env("LOKI_CLIENT_KEY")
}
}
}

View file

@ -0,0 +1,83 @@
{
config,
lib,
...
}:
let
cfg = config.khscodes.infrastructure.vault-prometheus-sender;
fqdn = config.khscodes.networking.fqdn;
vaultRoleName = config.khscodes.infrastructure.vault-server-approle.role_name;
client_key = "/var/lib/alloy/prometheus.key";
client_cert = "/var/lib/alloy/prometheus.cert";
in
{
options.khscodes.infrastructure.vault-prometheus-sender = {
enable = lib.mkEnableOption "Configures the server approle to allow sending data to prometheus";
terranixBackendName = lib.mkOption {
type = lib.types.str;
description = "This should only be configured for the server hosting vault, to allow setting up dependencies in terraform";
default = "prometheus-mtls";
};
};
config = lib.mkIf cfg.enable {
khscodes.infrastructure.vault-server-approle = {
enable = true;
policy = {
"prometheus-mtls" = {
capabilities = [ "read" ];
};
"prometheus-mtls/issue/${fqdn}" = {
capabilities = [
"create"
"update"
];
};
};
stageModules = [
(
{ ... }:
{
khscodes.vault.pki_secret_backend_role."${vaultRoleName}-prometheus" = {
name = vaultRoleName;
backend = cfg.terranixBackendName;
allowed_domains = [ fqdn ];
allow_bare_domains = true;
enforce_hostnames = true;
server_flag = false;
client_flag = true;
};
}
)
];
};
khscodes.services.vault-agent.templates = [
{
contents = ''
{{- with pkiCert "prometheus-mtls/issue/${fqdn}" "common_name=${fqdn}" -}}
{{ .Key }}
{{ .Cert }}
{{ .CA }}
{{ .Key | writeToFile "${client_key}" "${config.khscodes.services.alloy.user}" "${config.khscodes.services.alloy.group}" "0600" }}
{{ .Cert | writeToFile "${client_cert}" "${config.khscodes.services.alloy.user}" "${config.khscodes.services.alloy.group}" "0644" }}
{{- end -}}
'';
destination = "/var/lib/alloy/cache.key";
owner = "alloy";
group = "alloy";
perms = "0600";
reloadOrRestartUnits = [ "alloy.service" ];
}
];
khscodes.services.alloy = {
enable = true;
environment = {
PROMETHEUS_CLIENT_KEY = client_key;
PROMETHEUS_CLIENT_CERT = client_cert;
};
};
environment.etc."alloy/prometheus.alloy" = {
source = ./prometheus.alloy;
};
};
}
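A minimal, hypothetical host snippet for the two sender modules above; enabling them pulls in the approle, vault-agent template and alloy wiring shown earlier:

# Hypothetical host configuration; option names follow the modules defined above.
{
  khscodes.infrastructure.vault-loki-sender.enable = true;
  khscodes.infrastructure.vault-prometheus-sender.enable = true;
  # Both imply khscodes.infrastructure.vault-server-approle.enable = true and add
  # read/issue policies for the host FQDN on the loki-mtls and prometheus-mtls backends.
}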

View file

@ -0,0 +1,72 @@
// This block relabels metrics coming from node_exporter to add standard labels
discovery.relabel "integrations_node_exporter" {
targets = prometheus.exporter.unix.integrations_node_exporter.targets
rule {
// Set the instance label to the hostname of the machine
target_label = "instance"
replacement = constants.hostname
}
rule {
// Set a standard job name for all node_exporter metrics
target_label = "job"
replacement = "integrations/node_exporter"
}
}
//
// Configure the node_exporter integration to collect system metrics
prometheus.exporter.unix "integrations_node_exporter" {
// Disable unnecessary collectors to reduce overhead
disable_collectors = ["ipvs", "btrfs", "infiniband", "xfs", "zfs"]
enable_collectors = ["meminfo"]
filesystem {
// Exclude filesystem types that aren't relevant for monitoring
fs_types_exclude = "^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|tmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$"
// Exclude mount points that aren't relevant for monitoring
mount_points_exclude = "^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+)($|/)"
// Timeout for filesystem operations
mount_timeout = "5s"
}
netclass {
// Ignore virtual and container network interfaces
ignored_devices = "^(veth.*|cali.*|[a-f0-9]{15})$"
}
netdev {
// Exclude virtual and container network interfaces from device metrics
device_exclude = "^(veth.*|cali.*|[a-f0-9]{15})$"
}
}
// Define how to scrape metrics from the node_exporter
prometheus.scrape "integrations_node_exporter" {
scrape_interval = "15s"
// Use the targets with labels from the discovery.relabel component
targets = discovery.relabel.integrations_node_exporter.output
// Forward the scraped metrics to the OpenTelemetry receiver below
forward_to = [otelcol.receiver.prometheus.default.receiver]
}
otelcol.receiver.prometheus "default" {
output {
metrics = [otelcol.exporter.otlphttp.default.input]
}
}
// Define where to send the metrics for storage
otelcol.exporter.otlphttp "default" {
client {
endpoint = "https://prometheus.kaareskovgaard.net/api/v1/otlp/"
tls {
cert_file = sys.env("PROMETHEUS_CLIENT_CERT")
key_file = sys.env("PROMETHEUS_CLIENT_KEY")
}
}
encoding = "proto"
}

View file

@ -0,0 +1,141 @@
{
config,
lib,
inputs,
...
}:
let
cfg = config.khscodes.infrastructure.vault-server-approle;
vaultDomain = config.khscodes.infrastructure.openbao.domain;
in
{
options.khscodes.infrastructure.vault-server-approle = {
enable = lib.mkEnableOption "Enables creating an OpenBAO role for the server";
stage = lib.mkOption {
type = lib.types.enum [
"pre"
"post"
];
description = "The provisioning stage that should include the provisioning. This should be pre for every server except the OpenBAO server itself";
default = "pre";
};
path = lib.mkOption {
type = lib.types.str;
default = "approle";
description = "Sets the path, as a terraform expression, for the approle to get created in. Not useful for most instances, but useful when doing bootstrapping, to establish a dependency.";
};
role_name = lib.mkOption {
type = lib.types.str;
description = "Name of the role being created";
default = config.networking.fqdnOrHostName;
};
policy = lib.mkOption {
type = lib.types.attrsOf (
lib.khscodes.mkSubmodule {
options = {
capabilities = lib.mkOption {
type = lib.types.listOf (
lib.types.enum [
"create"
"update"
"patch"
"read"
"delete"
"list"
]
);
};
};
description = "Vault role policy";
}
);
};
stageModules = lib.mkOption {
type = lib.types.listOf lib.types.anything;
description = "Extra modules to add to the configured stage";
default = [ ];
};
};
config = lib.mkIf cfg.enable {
khscodes.services.read-vault-auth-from-userdata.enable = cfg.stage == "pre";
khscodes.services.vault-agent.enable = true;
khscodes.infrastructure.provisioning.${cfg.stage} = {
modules = [
(
{ config, lib, ... }:
{
imports = [ inputs.self.terranixModules.vault ];
output = lib.mkIf (cfg.stage == "post") {
role-id = {
value = config.khscodes.vault.output.approle_auth_backend_role.${cfg.role_name}.role_id;
sensitive = false;
};
secret-id-wrapped = {
value =
config.khscodes.vault.output.approle_auth_backend_role_secret_id.${cfg.role_name}.wrapping_token;
sensitive = true;
};
};
khscodes.vault = {
enable = true;
domain = vaultDomain;
approle_auth_backend_role.${cfg.role_name} = {
backend = cfg.path;
role_name = cfg.role_name;
# Secret IDs never expire, to allow vault agent to restart without issues.
# TODO: Look into doing this in a better way going forward, such that this won't
# be an issue under normal circumstances, but vault-agents (or instances)
# being offline for long periods of time should invalidate the secret id's.
secret_id_ttl = 0;
secret_id_num_uses = 0;
token_ttl = 20 * 60;
token_max_ttl = 30 * 60;
token_policies = [ cfg.role_name ];
};
approle_auth_backend_role_secret_id.${cfg.role_name} = {
backend = cfg.path;
# Not hardcoding the role name here, as reading it like this will create a dependency
# on the role being created first, which is needed.
role_name = config.khscodes.vault.output.approle_auth_backend_role.${cfg.role_name}.role_name;
# Should only be 5-10 mins once done testing
wrapping_ttl = 5 * 60;
# This should simply mean that we never attempt to recreate the secret id, as we don't want a rerun of the
# provisioning to invalidate the existing secret id, nor recreate the entire server.
with_wrapped_accessor = true;
lifecycle = {
ignore_changes = [
"num_uses"
"ttl"
];
};
};
policy.${cfg.role_name} = {
name = cfg.role_name;
policy = lib.strings.concatStringsSep "\n\n" (
lib.lists.map (
{ name, value }:
''
path "${name}" {
capabilities = ${builtins.toJSON value.capabilities}
}
''
) (lib.attrsToList cfg.policy)
);
};
};
}
)
] ++ cfg.stageModules;
};
# I can only provide the user data if the stage is pre (along with the instance creation)
# Also I should probably find a way of injecting this in a nicer way than this mess.
khscodes.infrastructure.provisioning.instanceUserData = lib.mkIf (cfg.stage == "pre") ''
{
"VAULT_ROLE_ID": "''${ vault_approle_auth_backend_role.${lib.khscodes.sanitize-terraform-name cfg.role_name}.role_id }",
"VAULT_SECRET_ID_WRAPPED": "''${ vault_approle_auth_backend_role_secret_id.${lib.khscodes.sanitize-terraform-name cfg.role_name}.wrapping_token }"
}
'';
};
}
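A sketch of how a host might consume the approle module above; the secret path is illustrative, not one defined in this repository:

# Hypothetical consumer of khscodes.infrastructure.vault-server-approle.
{
  khscodes.infrastructure.vault-server-approle = {
    enable = true;
    policy."opentofu/data/example-secret" = {
      capabilities = [ "read" ];
    };
  };
  # During the pre stage this provisions the approle, a wrapped secret id and the
  # policy document, and injects VAULT_ROLE_ID / VAULT_SECRET_ID_WRAPPED as user data.
}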

View file

@ -0,0 +1,25 @@
{ config, lib, ... }:
let
cfg = config.khscodes.machine;
in
rec {
options.khscodes.machine = {
type = lib.mkOption {
type = lib.types.enum [
"server"
"desktop"
];
description = "The kind of machine that is running";
};
};
config = {
home-manager.sharedModules = [
{
inherit options;
config = {
khscodes.desktop.enable = cfg.type == "desktop";
};
}
];
};
}

View file

@ -0,0 +1,39 @@
{
config,
lib,
...
}:
let
cfg = config.khscodes.networking;
in
{
options.khscodes.networking = {
fqdn = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = "Sets the FQDN of the machine. This is a prerequisite for many modules to be used";
};
aliases = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = [ ];
};
};
config = lib.mkIf (cfg.fqdn != null) (
let
hostname = builtins.head (lib.strings.splitString "." cfg.fqdn);
domain = if hostname == cfg.fqdn then null else (lib.strings.removePrefix "${hostname}." cfg.fqdn);
in
{
networking.hostName = lib.mkForce hostname;
networking.domain = lib.mkForce domain;
networking.fqdn = cfg.fqdn;
# Add the name of the server to the ssh host certificate domains, but let other configs enable getting the host certificates.
khscodes.services.openssh.hostCertificate.hostNames = lib.lists.unique (
[ cfg.fqdn ] ++ cfg.aliases
);
boot.kernel.sysctl = {
"kernel.hostname" = cfg.fqdn;
};
}
);
}
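For example (hypothetical host names), a machine definition using the options above might look like this:

# Hypothetical machine definition.
{
  khscodes.networking = {
    fqdn = "example.kaareskovgaard.net";
    aliases = [ "alias.kaareskovgaard.net" ];
  };
  # networking.hostName becomes "example", networking.domain "kaareskovgaard.net",
  # and both names are added to the SSH host certificate host names.
}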

View file

@ -0,0 +1,15 @@
{ config, lib, ... }:
let
cfg = config.khscodes.nix;
in
{
options.khscodes.nix = {
nix-community.enable = lib.mkEnableOption "Enables nix-community substituters";
};
config = {
nix.settings = lib.mkIf cfg.nix-community.enable {
substituters = [ "https://nix-community.cachix.org" ];
trusted-public-keys = [ "nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs=" ];
};
};
}

View file

@ -16,10 +16,14 @@ in
}; };
}; };
config = lib.mkIf cfg.enable { config = lib.mkIf cfg.enable {
disko = lib.khscodes.disko-root-lvm-bios { disko = lib.mkDefault (
device = "/dev/sda"; lib.khscodes.disko-root-lvm-bios {
diskName = cfg.diskName; device = "/dev/sda";
}; diskName = cfg.diskName;
}
);
# Leaving tempAddresses at its default breaks outbound IPv6 on the instance, so disable temporary addresses.
networking.tempAddresses = "disabled";
boot.loader.grub.efiSupport = false; boot.loader.grub.efiSupport = false;
boot.loader.timeout = 1; boot.loader.timeout = 1;
khscodes.virtualisation.qemu-guest.enable = true; khscodes.virtualisation.qemu-guest.enable = true;

View file

@ -1 +0,0 @@
{ pkgs, ... }: { }

View file

@ -0,0 +1,55 @@
{
config,
lib,
inputs,
pkgs,
...
}:
let
cfg = config.khscodes.os.auto-update;
upgradePath = "/var/lib/system-upgrade";
upgradeVersion = "/var/lib/system-upgrade.version";
prepareUpgrade = pkgs.writeShellApplication {
runtimeInputs = [
pkgs.uutils-coreutils-noprefix
pkgs.nix
];
name = "nixos-prepare-upgrade";
text = ''
current_version=""
if [[ -f ${upgradeVersion} ]]; then
current_version="$(cat ${upgradeVersion})"
fi
if [[ "$current_version" != "${inputs.self.outPath}" ]]; then
rm -rf ${upgradePath}
cp -r ${inputs.self.outPath} ${upgradePath}
# Files copied out of the nix store are read-only; make them writable so flake.lock can be updated.
chmod -R u+w ${upgradePath}
echo -n ${inputs.self.outPath} > ${upgradeVersion}
fi
cd ${upgradePath}
NIX_CONFIG="extra-experimental-features=flake nix-command" nix flake update
'';
};
in
{
options.khscodes.os.auto-update = {
enable = lib.mkEnableOption "Enables automatic OS updates";
dates = "02:00";
randomizedDelaySec = "45min";
};
config = lib.mkIf cfg.enable {
system.autoUpgrade = {
enable = true;
flake = upgradePath;
dates = "02:00";
randomizedDelaySec = "45min";
};
systemd.services.nixos-upgrade-prepare-flake = {
wantedBy = [ "nixos-upgrade.service" ];
before = [ "nixos-upgrade.service" ];
serviceConfig = {
Type = "oneshot";
ExecStart = lib.getExe prepareUpgrade;
};
};
};
}
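A one-line sketch of enabling the auto-update module above on a host:

# Hypothetical usage; the module then runs nixos-upgrade against the copied flake.
{
  khscodes.os.auto-update.enable = true;
}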

View file

@ -0,0 +1,59 @@
{ config, lib, ... }:
let
cfg = config.khscodes.security.acme;
vaultAgentCredentialsFile = "/var/lib/vault-agent/acme/cloudflare-api-token";
cloudflareSecret = "opentofu/data/cloudflare";
acmeServicesToRestart = lib.lists.map (a: "acme-${a}.service") (
lib.attrsets.attrNames config.security.acme.certs
);
in
{
options.khscodes.security.acme = {
enable = lib.mkEnableOption "Enables acme";
dns01Enabled = lib.mkOption {
type = lib.types.bool;
description = "Whether to use DNS01 instead of http-01 challenges. This will make the approle gain policy to retrieve the needed cloudflare secrets to manage dns.";
default = config.khscodes.infrastructure.khs-openstack-instance.enable;
};
};
config = lib.mkIf cfg.enable {
security.acme = {
acceptTerms = true;
defaults =
{
email = "kaare@kaareskovgaard.net";
}
// lib.attrsets.optionalAttrs cfg.dns01Enabled {
dnsProvider = "cloudflare";
dnsResolver = null;
credentialsFile = vaultAgentCredentialsFile;
};
};
khscodes.infrastructure.vault-server-approle = {
enable = true;
policy = {
"${cloudflareSecret}" = {
capabilities = [ "read" ];
};
};
};
khscodes.services.vault-agent = lib.mkIf (cfg.dns01Enabled && acmeServicesToRestart != [ ]) {
enable = true;
templates = [
{
contents = ''
{{- with secret "${cloudflareSecret}" -}}
CLOUDFLARE_DNS_API_TOKEN={{ .Data.data.TF_VAR_cloudflare_token }}
CLOUDFLARE_DNS_EMAIL={{ .Data.data.TF_VAR_cloudflare_email }}
{{- end -}}
'';
destination = vaultAgentCredentialsFile;
perms = "0600";
owner = "acme";
group = "acme";
restartUnits = acmeServicesToRestart;
}
];
};
};
}
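A hedged sketch of a host enabling ACME with DNS-01 through the module above:

# Hypothetical host configuration.
{
  khscodes.security.acme = {
    enable = true;
    dns01Enabled = true;
  };
  # The vault-agent template above then renders the Cloudflare credentials file
  # consumed by the acme-*.service units and restarts them when it changes.
}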

View file

@ -0,0 +1,31 @@
{
config,
lib,
pkgs,
...
}:
let
cfg = config.khscodes.security.yubikey;
in
{
options.khscodes.security.yubikey = {
enable = lib.mkOption {
type = lib.types.bool;
default = false;
};
};
config = lib.mkIf cfg.enable {
services.pcscd.enable = true;
services.udev.packages = [ pkgs.yubikey-personalization ];
environment.systemPackages = [
pkgs.yubikey-manager
pkgs.yubico-piv-tool
];
programs.gnupg.agent = {
enable = true;
enableSSHSupport = true;
};
};
}

View file

@ -0,0 +1,39 @@
{ config, lib, ... }:
let
cfg = config.khscodes.services.alloy;
in
{
options.khscodes.services.alloy = {
enable = lib.mkEnableOption "Enables alloy";
user = lib.mkOption {
type = lib.types.str;
default = "alloy";
};
group = lib.mkOption {
type = lib.types.str;
default = "alloy";
};
environment = lib.mkOption {
type = lib.types.attrsOf lib.types.str;
default = { };
};
};
config = lib.mkIf cfg.enable {
services.alloy.enable = true;
systemd.services.alloy = {
serviceConfig = {
DynamicUser = lib.mkForce false;
User = "${cfg.user}";
Group = "${cfg.group}";
};
environment = cfg.environment;
};
users.users.${cfg.user} = {
description = "Alloy service user";
isSystemUser = true;
group = cfg.group;
};
users.groups.${cfg.group} = { };
};
}

View file

@ -1,8 +0,0 @@
{ ... }:
{ }
# let
# modules = lib.khscodes.dirsInPath ./.;
# in
# {
# imports = lib.lists.map (d: import d args) modules;
# }

View file

@ -0,0 +1,223 @@
{
config,
lib,
pkgs,
modulesPath,
...
}:
let
cfg = config.khscodes.services.nginx;
locationOptions = import "${modulesPath}/services/web-servers/nginx/location-options.nix" {
inherit lib config;
};
vhostOption = lib.khscodes.mkSubmodule {
description = "nginx vhost";
options = {
acme = lib.mkOption {
description = "If a simple certificate for the virtual host name itself is not desired auto configured, then set this option. If set to a string it will be used as `useAcmeHost` from NixOS nginx service configuration. Otherwise set to the acme submodule and configure the desired certificate that way";
type = lib.types.nullOr (
lib.types.oneOf [
lib.types.str
(lib.khscodes.mkSubmodule {
description = "acme certificate";
options = {
domains = lib.mkOption {
type = lib.types.listOf lib.types.str;
description = "Domain names the certificate should be requested for, should include the virtual host itself";
};
};
})
]
);
default = null;
};
globalRedirect = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = "If set, all requests for this host are redirected (defaults to 301, configurable with redirectCode) to the given hostname.";
};
redirectCode = lib.mkOption {
type = lib.types.int;
default = 301;
description = "HTTP status used by globalRedirect and forceSSL. Possible usecases include temporary (302, 307) redirects, keeping the request method and body (307, 308), or explicitly resetting the method to GET (303). See https://developer.mozilla.org/en-US/docs/Web/HTTP/Redirections.";
};
mtls = lib.mkOption {
type = lib.types.nullOr (
lib.khscodes.mkSubmodule {
options = {
verify = lib.mkOption {
type = lib.types.enum [
"optional"
"on"
];
default = "on";
};
certificate = lib.mkOption {
type = lib.types.str;
description = "Path to the certificate to verify client certificates against";
};
};
description = "Nginx MTLS settings";
}
);
default = null;
};
extraConfig = lib.mkOption {
type = lib.types.lines;
description = "Extra configuration to inject into the generated nginx config";
default = '''';
};
locations = lib.mkOption {
type = lib.types.attrsOf (
lib.khscodes.mkSubmodule {
description = "nginx virtual host location";
options = locationOptions.options;
}
);
default = { };
};
};
};
dns01Enabled = config.khscodes.security.acme.dns01Enabled;
useAcmeConfiguration = lib.attrsets.foldlAttrs (
acc: name: item:
acc || (item.acme != null && lib.attrsets.isAttrs item.acme)
) false cfg.virtualHosts;
modernSslAppendedHttpConfig =
if cfg.sslConfiguration == "modern" then
''
ssl_ecdh_curve X25519:prime256v1:secp384r1;
''
else
'''';
in
{
options.khscodes.services.nginx = {
enable = lib.mkEnableOption "Enables nginx";
sslConfiguration = lib.mkOption {
type = lib.types.enum [
"modern"
"intermediate"
];
description = ''
Which SSL configuration to generate, using https://ssl-config.mozilla.org/#server=nginx&version=1.28.0&config=modern&openssl=3.4.1&guideline=5.7 as a baseline.
The generated config is not guaranteed to follow that template exactly. In general, modern is preferred; intermediate should only be used if there is a specific reason to do so.
Do note that intermediate requires generating large dhparams, which can take hours to complete.
TODO: Look into OCSP stapling.
'';
default = "modern";
};
virtualHosts = lib.mkOption {
type = lib.types.attrsOf vhostOption;
description = "Virtual hosts settings";
default = { };
};
};
config = lib.mkIf cfg.enable {
assertions = [
{
assertion = !useAcmeConfiguration || dns01Enabled;
message = "Cannot use `config.khscodes.services.nginx.virtualHosts.<name>.acme = {}` without setting config.khscodes.security.acme.dns01Enabled";
}
];
khscodes.networking.aliases = lib.attrsets.attrNames cfg.virtualHosts;
khscodes.security.acme.enable = true;
security.dhparams = lib.mkIf (cfg.sslConfiguration == "intermediate") {
enable = true;
params."nginx" = {
bits = 4096;
};
};
services.nginx = {
enable = true;
package = lib.mkDefault pkgs.nginxStable;
sslDhparam = lib.mkIf (
cfg.sslConfiguration == "intermediate"
) "${config.security.dhparams.params."nginx".path}"; # DHParams only used when using the ciphers of intermediate
sslProtocols = lib.mkIf (cfg.sslConfiguration == "modern") "TLSv1.3"; # The default matches intermediate
sslCiphers = lib.mkIf (cfg.sslConfiguration == "modern") null;
recommendedTlsSettings = lib.mkDefault true;
recommendedGzipSettings = lib.mkDefault true;
recommendedOptimisation = lib.mkDefault true;
recommendedZstdSettings = lib.mkDefault true;
recommendedProxySettings = lib.mkDefault true;
appendHttpConfig = ''
map $scheme $hsts_header {
https "max-age=63072000; preload";
}
add_header Strict-Transport-Security $hsts_header;
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
${modernSslAppendedHttpConfig}
'';
virtualHosts = lib.attrsets.mapAttrs (
name: value:
let
mtls =
if value.mtls != null then
''
ssl_client_certificate ${value.mtls.certificate};
ssl_verify_client ${value.mtls.verify};
''
else
'''';
extraConfig = ''
${mtls}
${value.extraConfig}
'';
in
{
inherit (value)
locations
globalRedirect
redirectCode
;
inherit extraConfig;
forceSSL = true;
enableACME = value.acme == null && !dns01Enabled;
useACMEHost =
if lib.strings.isString value.acme then
value.acme
else if lib.attrsets.isAttrs value.acme || dns01Enabled then
name
else
null;
}
) cfg.virtualHosts;
};
networking.firewall.allowedTCPPorts = [
80
443
];
networking.firewall.allowedUDPPorts = [ 443 ];
users.users.nginx.extraGroups = lib.lists.optional dns01Enabled "acme";
security.acme.certs = lib.mkIf dns01Enabled (
lib.attrsets.foldlAttrs (
acc: name: value:
(
acc
// (lib.attrsets.optionalAttrs
(lib.attrsets.isAttrs value.acme || (dns01Enabled && !lib.strings.isString value.acme))
{
"${name}" =
if value.acme == null then
{
domain = name;
reloadServices = [ "nginx" ];
}
else
{
domain = lib.lists.head value.acme.domains;
extraDomainNames = lib.lists.tail value.acme.domains;
reloadServices = [ "nginx" ];
};
}
)
)
) { } cfg.virtualHosts
);
};
}
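A sketch of the vhost options above in use; domain names and the upstream port are made up, and the attrset form of acme requires dns01Enabled per the assertion:

# Hypothetical virtual host definitions.
{
  khscodes.services.nginx = {
    enable = true;
    virtualHosts = {
      "app.example.net" = {
        acme = {
          domains = [
            "app.example.net"
            "www.app.example.net"
          ];
        };
        locations."/".proxyPass = "http://127.0.0.1:8080";
      };
      "old.example.net" = {
        globalRedirect = "app.example.net";
        redirectCode = 308;
      };
    };
  };
}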

View file

@ -5,16 +5,85 @@ in
{ {
options.khscodes.services.openssh = { options.khscodes.services.openssh = {
enable = lib.mkEnableOption "Enables openssh service for the instance"; enable = lib.mkEnableOption "Enables openssh service for the instance";
}; hostCertificate = {
enable = lib.mkEnableOption "Enables getting host certificates from OpenBAO";
config = lib.mkIf cfg.enable { path = lib.mkOption {
services.openssh = { type = lib.types.str;
enable = true; default = "ssh-host";
settings = { };
PasswordAuthentication = false; hostNames = lib.mkOption {
PermitRootLogin = "no"; type = lib.types.listOf lib.types.str;
KbdInteractiveAuthentication = false; description = "The list of host names to get certificates for";
default = [ ];
}; };
}; };
}; };
config = lib.mkIf cfg.enable (
let
certificateNames = lib.lists.unique cfg.hostCertificate.hostNames;
hostCertificateEnable = cfg.hostCertificate.enable && cfg.hostCertificate.hostNames != [ ];
vaultRoleName = config.khscodes.infrastructure.vault-server-approle.role_name;
fqdn = config.networking.fqdnOrHostName;
sshHostBackend = "ssh-host";
in
{
services.openssh = {
enable = true;
settings = {
PasswordAuthentication = false;
PermitRootLogin = "no";
KbdInteractiveAuthentication = false;
};
extraConfig = lib.mkIf hostCertificateEnable ''
HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub
'';
};
khscodes.infrastructure.vault-server-approle = {
enable = true;
policy."${sshHostBackend}/sign/${vaultRoleName}" = {
capabilities = [
"read"
"update"
"create"
];
};
stageModules = [
{
khscodes.vault.ssh_secret_backend_role.${vaultRoleName} = {
name = fqdn;
backend = cfg.hostCertificate.path;
key_type = "ca";
allow_host_certificates = true;
allow_bare_domains = true;
allowed_domains = certificateNames;
allowed_user_key_config = [
{
type = "ed25519";
lengths = [ 0 ];
}
];
};
}
];
};
khscodes.services.vault-agent = lib.mkIf hostCertificateEnable {
enable = true;
templates = [
{
contents = ''
{{- $public_key := file "/etc/ssh/ssh_host_ed25519_key.pub" -}}
{{- $public_key = printf "public_key=%s" $public_key -}}
{{- with secret "ssh-host/sign/${fqdn}" "cert_type=host" $public_key "valid_principals=${lib.strings.concatStringsSep "," certificateNames}" -}}
{{ .Data.signed_key }}
{{- end -}}
'';
destination = "/etc/ssh/ssh_host_ed25519_key-cert.pub";
perms = "0644";
restartUnits = [ "sshd.service" ];
}
];
};
}
);
} }
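A short, hypothetical example of requesting an SSH host certificate through the module above:

# Hypothetical host configuration.
{
  khscodes.services.openssh = {
    enable = true;
    hostCertificate.enable = true;
  };
  # Host names come from khscodes.networking (fqdn plus aliases); vault-agent signs
  # /etc/ssh/ssh_host_ed25519_key.pub and writes the -cert.pub file next to it.
}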

View file

@ -0,0 +1,77 @@
{
config,
lib,
pkgs,
...
}:
let
cfg = config.khscodes.services.read-vault-auth-from-userdata;
in
{
options.khscodes.services.read-vault-auth-from-userdata = {
enable = lib.mkEnableOption "Enables reading vault auth information from instance userdata";
url = lib.mkOption {
type = lib.types.str;
description = "URL to retrieve instance metadata from";
};
};
config = lib.mkIf (cfg.enable && config.khscodes.services.vault-agent.enable) (
let
vault_addr = lib.escapeShellArg config.khscodes.services.vault-agent.vault.address;
secretIdFilePath = lib.escapeShellArg config.khscodes.services.vault-agent.vault.secretIdFilePath;
roleIdFilePath = lib.escapeShellArg config.khscodes.services.vault-agent.vault.roleIdFilePath;
cacheFilePath = lib.escapeShellArg "${config.khscodes.services.vault-agent.vault.secretIdFilePath}.wrapped";
in
{
systemd.services."read-vault-auth-from-userdata" = {
enable = true;
wantedBy = [ "multi-user.target" ];
wants = [ "network-online.target" ];
after = [ "network-online.target" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = lib.getExe (
pkgs.writeShellApplication {
name = "read-vault-auth-from-userdata";
runtimeInputs = [
pkgs.curl
pkgs.jq
pkgs.openbao
pkgs.getent
pkgs.systemd
];
text = ''
userdata="$(curl ${lib.escapeShellArg cfg.url})"
role_id="$(echo "$userdata" | jq --raw-output '.VAULT_ROLE_ID')"
secret_id_wrapped="$(echo "$userdata" | jq --raw-output '.VAULT_SECRET_ID_WRAPPED')"
if [[ -f ${cacheFilePath} ]]; then
cache_key="$(cat ${cacheFilePath})"
if [[ "$secret_id_wrapped" == "$cache_key" ]]; then
echo "Secret id matched last used value, exiting program"
exit 0
fi
fi
secret_id="$(BAO_ADDR=${vault_addr} bao unwrap -field=secret_id "$secret_id_wrapped")"
mkdir -p "$(dirname ${secretIdFilePath})"
mkdir -p "$(dirname ${roleIdFilePath})"
echo -n "$role_id" > ${roleIdFilePath}
echo -n "$secret_id" > ${secretIdFilePath}
chown root:root ${secretIdFilePath}
chmod 0600 ${secretIdFilePath}
chown root:root ${roleIdFilePath}
chmod 0600 ${roleIdFilePath}
echo -n "$secret_id_wrapped" > ${cacheFilePath}
chmod 0600 ${cacheFilePath}
chown root:root ${cacheFilePath}
echo "Role id and secret id copied, restarting vault-agent"
systemctl restart vault-agent-openbao.service
'';
}
);
};
};
}
);
}
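The metadata URL differs per provider; the instance modules earlier in this change set it roughly like this (URLs copied from those modules):

# Hetzner Cloud instances:
{
  khscodes.services.read-vault-auth-from-userdata.url = "http://169.254.169.254/latest/user-data";
}
# khs OpenStack instances instead use:
# khscodes.services.read-vault-auth-from-userdata.url = "http://169.254.169.254/openstack/2012-08-10/user_data";
# The service itself is switched on by vault-server-approle when its stage is "pre".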

View file

@ -0,0 +1,176 @@
{
lib,
config,
pkgs,
...
}:
let
cfg = config.khscodes.services.vault-agent;
mkSubmodule =
{
options,
description,
}:
lib.types.submoduleWith {
description = description;
shorthandOnlyDefinesConfig = true;
modules = lib.toList { inherit options; };
};
restartUnits =
svcs:
lib.strings.concatStringsSep "\n" (
lib.lists.map (svc: "systemctl restart ${lib.escapeShellArg svc}") svcs
);
reloadOrRestartUnits =
svcs:
lib.strings.concatStringsSep "\n" (
lib.lists.map (svc: "systemctl reload-or-restart ${lib.escapeShellArg svc}") svcs
);
mapTemplate =
template:
let
command = lib.getExe (
pkgs.writeShellApplication {
name = "restart-command";
runtimeInputs = [ pkgs.systemd ];
text = ''
chown ${lib.escapeShellArg template.owner}:${lib.escapeShellArg template.group} ${lib.escapeShellArg template.destination}
${restartUnits template.restartUnits}
${reloadOrRestartUnits template.reloadOrRestartUnits}
${template.exec}
'';
meta = {
mainProgram = "restart-command";
};
}
);
in
{
inherit (template) destination perms contents;
exec = {
command = command;
};
};
settings = {
vault = {
address = cfg.vault.address;
};
auto_auth = {
method = [
{
type = "approle";
config = {
mount_path = "auth/approle";
role_id_file_path = cfg.vault.roleIdFilePath;
secret_id_file_path = cfg.vault.secretIdFilePath;
remove_secret_id_file_after_reading = false;
};
}
];
};
template_config = {
exit_on_retry_failure = true;
static_secret_render_interval = "60m";
max_connections_per_host = 10;
leases_renewal_threshold = 0.5;
};
template = lib.mkIf (cfg.templates != [ ]) (lib.lists.map mapTemplate cfg.templates);
};
unitsDependsOnAgent = lib.lists.unique (
lib.lists.flatten (lib.lists.map (t: t.restartUnits ++ t.reloadOrRestartUnits) cfg.templates)
);
in
{
options.khscodes.services.vault-agent = {
enable = lib.mkEnableOption "Enables the OpenBAO agent";
package = lib.mkOption {
type = lib.types.package;
default = pkgs.openbao;
defaultText = "pkgs.openbao";
};
vault = {
address = lib.mkOption {
type = lib.types.str;
description = "Address of the Vault/OpenBAO service";
default = "https://${config.khscodes.infrastructure.openbao.domain}";
};
roleIdFilePath = lib.mkOption {
type = lib.types.str;
description = "Location of the role id";
default = "/var/lib/vault-agent/role-id";
};
secretIdFilePath = lib.mkOption {
type = lib.types.str;
description = "Location of the secret id";
default = "/var/lib/vault-agent/secret-id";
};
};
templates = lib.mkOption {
default = [ ];
type = lib.types.listOf (mkSubmodule {
description = "List of templates to render";
options = {
contents = lib.mkOption {
type = lib.types.str;
description = "Contents of the template (.ctmpl)";
};
destination = lib.mkOption {
type = lib.types.str;
description = "Destination file for the template";
};
restartUnits = lib.mkOption {
type = lib.types.listOf lib.types.str;
description = "List of systemd units to restart when template changes";
default = [ ];
};
reloadOrRestartUnits = lib.mkOption {
type = lib.types.listOf lib.types.str;
description = "List of systemd units to reload-or-restart when template changes";
default = [ ];
};
perms = lib.mkOption {
type = lib.types.str;
description = "Permissions of the generated file, by default will only be readable by root";
default = "0600";
};
owner = lib.mkOption {
type = lib.types.str;
description = "Owner (user) of the generated file";
default = "root";
};
group = lib.mkOption {
type = lib.types.str;
description = "Group of the generated file";
default = "root";
};
exec = lib.mkOption {
type = lib.types.lines;
default = '''';
description = "Command to execute when template renders new data";
};
};
});
};
};
config = lib.mkIf cfg.enable {
services.vault-agent.instances.openbao = {
inherit settings;
enable = true;
package = cfg.package;
user = "root";
group = "root";
};
systemd.services."vault-agent-openbao" = {
before = unitsDependsOnAgent;
wantedBy = unitsDependsOnAgent;
unitConfig = {
ConditionPathExists = [
cfg.vault.secretIdFilePath
cfg.vault.roleIdFilePath
];
};
};
};
}
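A hypothetical template entry for the agent module above; the secret path and unit name are placeholders:

# Hypothetical vault-agent template.
{
  khscodes.services.vault-agent = {
    enable = true;
    templates = [
      {
        contents = ''
          {{- with secret "opentofu/data/example" -}}
          API_TOKEN={{ .Data.data.token }}
          {{- end -}}
        '';
        destination = "/var/lib/example/env";
        perms = "0600";
        reloadOrRestartUnits = [ "example.service" ];
      }
    ];
  };
}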

View file

@ -0,0 +1,21 @@
{ lib, config, ... }:
let
cfg = config.khscodes.users.khs;
in
{
options.khscodes.users.khs = {
enable = lib.mkEnableOption "Enables settings for the khs user. This should be used in conjunction with homes";
};
config = lib.mkIf cfg.enable {
snowfallorg.users.khs.admin = true;
users.users.khs = {
# TODO: What should I do wrt. ensuring the passwords are consistent?
# Maybe set them through OpenBAO and some service?
initialPassword = "changeme";
openssh.authorizedKeys.keys = [
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCqY0FHnWFKfLG2yfgr4qka5sR9CK+EMAhzlHUkaQyWHTKD+G0/vC/fNPyL1VV3Dxc/ajxGuPzVE+mBMoyxazL3EtuCDOVvHJ5CR+MUSEckg/DDwcGHqy6rC8BvVVpTAVL04ByQdwFnpE1qNSBaQLkxaFVdtriGKkgMkc7+UNeYX/bv7yn+APqfP1a3xr6wdkSSdO8x4N2jsSygOIMx10hLyCV4Ueu7Kp8Ww4rGY8j5o7lKJhbgfItBfSOuQHdppHVF/GKYRhdnK6Y2fZVYbhq4KipUtclbZ6O/VYd8/sOO98+LMm7cOX+K35PQjUpYgcoNy5+Sw3CNS/NHn4JvOtTaUEYP7fK6c9LhMULOO3T7Cm6TMdiFjUKHkyG+s2Mu/LXJJoilw571zwuh6chkeitW8+Ht7k0aPV96kNEvTdoXwLhBifVEaChlAsLAzSUjUq+YYCiXVk0VIXCZQWKj8LoVNTmaqDksWwbcT64fw/FpVC0N18WHbKcFUEIW/O4spJMa30CQwf9FeqpoWoaF1oRClCSDPvX0AauCu0JcmRinz1/JmlXljnXWbSfm20/V+WyvktlI0wTD0cdpNuSasT9vS77YfJ8nutcWWZKSkCj4R4uHeCNpDTX5YXzapy7FxpM9ANCXLIvoGX7Yafba2Po+er7SSsUIY1AsnBBr8ZoDVw=="
];
};
};
}

View file

@ -1,10 +0,0 @@
# TODO: Why is this needed just for this directory?
# In the other directories this will create the modules twice.
# Perhaps because there's only a single sub directory here?
args@{ lib, ... }:
let
modules = lib.khscodes.dirsInPath ./.;
in
{
imports = lib.lists.map (d: import d args) modules;
}

View file

@ -1,16 +1,47 @@
{ {
config, config,
lib, lib,
modulesPath,
... ...
}: }:
let let
cfg = config.khscodes.virtualisation.qemu-guest; cfg = config.khscodes.virtualisation.qemu-guest;
rng = "-device virtio-rng-pci,rng=rng0 -object rng-random,id=rng0,filename=/dev/urandom";
spice = [
"-spice disable-ticketing=on,gl=on,unix=on,addr=/tmp/spice.sock"
"-device virtio-serial-pci"
"-chardev socket,id=agent0,path=vm.sock,server=on,wait=off"
"-device virtserialport,chardev=agent0,name=org.qemu.guest_agent.0"
"-chardev spicevmc,id=vdagent0,name=vdagent"
"-device virtserialport,chardev=vdagent0,name=com.redhat.spice.0"
"-chardev spiceport,id=webdav0,name=org.spice-space.webdav.0"
"-device virtserialport,chardev=webdav0,name=org.spice-space.webdav.0"
];
in in
{ {
options.khscodes.virtualisation.qemu-guest = { options.khscodes.virtualisation.qemu-guest = {
enable = lib.mkEnableOption "Configures machine with NixOS profile for qemu guest"; enable = lib.mkEnableOption "Configures machine with NixOS profile for qemu guest";
enableWhenVmTarget = lib.mkEnableOption "Enables some enhancement settings when building as a vm";
}; };
config = lib.mkIf cfg.enable (import "${modulesPath}/profiles/qemu-guest.nix" { }); imports = [ ./profile.nix ];
config = lib.mkIf cfg.enable {
services.qemuGuest.enable = true;
virtualisation = lib.mkIf cfg.enableWhenVmTarget {
vmVariant = {
khscodes.virtualisation.qemu-guest.enable = true;
services.spice-vdagentd.enable = true;
virtualisation = {
memorySize = 1024 * 8;
qemu = {
options = [
"-smp 8"
"-vga none -device virtio-gpu-gl,hostmem=2G,blob=true,venus=true"
rng
] ++ spice;
};
};
};
};
};
} }

View file

@ -0,0 +1,12 @@
{
config,
lib,
modulesPath,
...
}:
let
cfg = config.khscodes.virtualisation.qemu-guest;
in
{
config = lib.mkIf cfg.enable (import "${modulesPath}/profiles/qemu-guest.nix" { });
}

View file

@ -1,4 +1,3 @@
{ inputs, khscodesLib }:
{ config, lib, ... }: { config, lib, ... }:
let let
cfg = config.khscodes.cloudflare; cfg = config.khscodes.cloudflare;
@ -13,7 +12,7 @@ let
"@" "@"
else else
fqdn; fqdn;
dnsARecordModule = khscodesLib.mkSubmodule { dnsARecordModule = lib.khscodes.mkSubmodule {
description = "Module for defining dns A/AAAA record"; description = "Module for defining dns A/AAAA record";
options = { options = {
fqdn = lib.mkOption { fqdn = lib.mkOption {
@ -36,7 +35,7 @@ let
}; };
}; };
}; };
dnsTxtRecordModule = khscodesLib.mkSubmodule { dnsTxtRecordModule = lib.khscodes.mkSubmodule {
description = "Module for defining dns TXT record"; description = "Module for defining dns TXT record";
options = { options = {
fqdn = lib.mkOption { fqdn = lib.mkOption {
@ -54,7 +53,7 @@ let
}; };
}; };
}; };
dnsMxRecordModule = khscodesLib.mkSubmodule { dnsMxRecordModule = lib.khscodes.mkSubmodule {
description = "Module for defining dns MX record"; description = "Module for defining dns MX record";
options = { options = {
fqdn = lib.mkOption { fqdn = lib.mkOption {
@ -126,7 +125,7 @@ in
resource.cloudflare_record = lib.attrsets.optionalAttrs cfg.dns.enable ( resource.cloudflare_record = lib.attrsets.optionalAttrs cfg.dns.enable (
lib.listToAttrs ( lib.listToAttrs (
(lib.lists.map (record: { (lib.lists.map (record: {
name = "${khscodesLib.sanitize-terraform-name record.fqdn}_a"; name = "${lib.khscodes.sanitize-terraform-name record.fqdn}_a";
value = { value = {
inherit (record) content ttl proxied; inherit (record) content ttl proxied;
name = nameFromFQDNAndZone record.fqdn cfg.dns.zone_name; name = nameFromFQDNAndZone record.fqdn cfg.dns.zone_name;
@ -136,7 +135,7 @@ in
}; };
}) cfg.dns.aRecords) }) cfg.dns.aRecords)
++ (lib.lists.map (record: { ++ (lib.lists.map (record: {
name = "${khscodesLib.sanitize-terraform-name record.fqdn}_aaaa"; name = "${lib.khscodes.sanitize-terraform-name record.fqdn}_aaaa";
value = { value = {
inherit (record) content ttl proxied; inherit (record) content ttl proxied;
name = nameFromFQDNAndZone record.fqdn cfg.dns.zone_name; name = nameFromFQDNAndZone record.fqdn cfg.dns.zone_name;
@ -146,7 +145,7 @@ in
}; };
}) cfg.dns.aaaaRecords) }) cfg.dns.aaaaRecords)
++ (lib.lists.map (record: { ++ (lib.lists.map (record: {
name = "${khscodesLib.sanitize-terraform-name record.fqdn}_txt"; name = "${lib.khscodes.sanitize-terraform-name record.fqdn}_txt";
value = { value = {
inherit (record) content ttl; inherit (record) content ttl;
name = nameFromFQDNAndZone record.fqdn cfg.dns.zone_name; name = nameFromFQDNAndZone record.fqdn cfg.dns.zone_name;
@ -156,7 +155,7 @@ in
}; };
}) cfg.dns.txtRecords) }) cfg.dns.txtRecords)
++ (lib.lists.map (record: { ++ (lib.lists.map (record: {
name = "${khscodesLib.sanitize-terraform-name record.fqdn}_mx"; name = "${lib.khscodes.sanitize-terraform-name record.fqdn}_mx";
value = { value = {
inherit (record) content priority; inherit (record) content priority;
name = nameFromFQDNAndZone record.fqdn cfg.dns.zone_name; name = nameFromFQDNAndZone record.fqdn cfg.dns.zone_name;

View file

@ -1,5 +1,9 @@
{ inputs, khscodesLib }: {
{ config, lib, ... }: config,
lib,
inputs,
...
}:
let let
cfg = config.khscodes.hcloud; cfg = config.khscodes.hcloud;
serversWithRdns = lib.filterAttrs (_: value: value.rdns != null) cfg.server; serversWithRdns = lib.filterAttrs (_: value: value.rdns != null) cfg.server;
@ -9,7 +13,7 @@ let
lib.map ( lib.map (
{ name, value }: { name, value }:
let let
sanitizedName = khscodesLib.sanitize-terraform-name name; sanitizedName = lib.khscodes.sanitize-terraform-name name;
in in
{ {
name = sanitizedName; name = sanitizedName;
@ -20,7 +24,7 @@ let
} }
) (lib.attrsToList list) ) (lib.attrsToList list)
); );
hcloudServerModule = khscodesLib.mkSubmodule { hcloudServerModule = lib.khscodes.mkSubmodule {
description = "Module for defining hcloud server"; description = "Module for defining hcloud server";
options = { options = {
name = lib.mkOption { name = lib.mkOption {
@ -51,9 +55,14 @@ let
default = null; default = null;
description = "FQDN to map rDNS to"; description = "FQDN to map rDNS to";
}; };
user_data = lib.mkOption {
type = lib.types.str;
default = "";
description = "User data for the instance";
};
}; };
}; };
hcloudDataSshKeys = khscodesLib.mkSubmodule { hcloudDataSshKeys = lib.khscodes.mkSubmodule {
description = "SSH Keys"; description = "SSH Keys";
options = { options = {
name = lib.mkOption { name = lib.mkOption {
@ -83,7 +92,7 @@ in
}; };
imports = [ imports = [
inputs.terranix-hcloud.terranixModules.hcloud inputs.terranix-hcloud.terranixModules.hcloud
(import ./output.nix { inherit inputs khscodesLib; }) ./output.nix
]; ];
config = lib.mkIf cfg.enable { config = lib.mkIf cfg.enable {
@ -106,6 +115,7 @@ in
ipv6_enabled = true; ipv6_enabled = true;
ipv6 = "\${ hcloud_primary_ip.${name}_ipv6.id }"; ipv6 = "\${ hcloud_primary_ip.${name}_ipv6.id }";
}; };
user_data = builtins.toJSON value.user_data;
lifecycle = { lifecycle = {
ignore_changes = [ ignore_changes = [
"ssh_keys" "ssh_keys"
@ -119,7 +129,7 @@ in
(lib.mapAttrs' ( (lib.mapAttrs' (
name: value: name: value:
let let
sanitizedName = khscodesLib.sanitize-terraform-name name; sanitizedName = lib.khscodes.sanitize-terraform-name name;
in in
{ {
name = "${sanitizedName}_ipv4"; name = "${sanitizedName}_ipv4";
@ -136,7 +146,7 @@ in
// (lib.mapAttrs' ( // (lib.mapAttrs' (
name: value: name: value:
let let
sanitizedName = khscodesLib.sanitize-terraform-name name; sanitizedName = lib.khscodes.sanitize-terraform-name name;
in in
{ {
name = "${sanitizedName}_ipv6"; name = "${sanitizedName}_ipv6";
@ -154,7 +164,7 @@ in
(lib.mapAttrs' ( (lib.mapAttrs' (
name: value: name: value:
let let
sanitizedName = khscodesLib.sanitize-terraform-name name; sanitizedName = lib.khscodes.sanitize-terraform-name name;
in in
{ {
name = "${sanitizedName}_ipv4"; name = "${sanitizedName}_ipv4";
@ -168,7 +178,7 @@ in
// (lib.mapAttrs' ( // (lib.mapAttrs' (
name: value: name: value:
let let
sanitizedName = khscodesLib.sanitize-terraform-name name; sanitizedName = lib.khscodes.sanitize-terraform-name name;
in in
{ {
name = "${sanitizedName}_ipv6"; name = "${sanitizedName}_ipv6";

View file

@ -1,8 +1,7 @@
{ khscodesLib, ... }:
{ config, lib, ... }: { config, lib, ... }:
let let
cfg = config.khscodes.hcloud; cfg = config.khscodes.hcloud;
hcloudOutputServerModule = khscodesLib.mkSubmodule { hcloudOutputServerModule = lib.khscodes.mkSubmodule {
description = "Module defined when a corresponding server has been defined"; description = "Module defined when a corresponding server has been defined";
options = { options = {
id = lib.mkOption { id = lib.mkOption {
@ -19,7 +18,7 @@ let
}; };
}; };
}; };
hcloudDataOutputSshKeyModule = khscodesLib.mkSubmodule { hcloudDataOutputSshKeyModule = lib.khscodes.mkSubmodule {
description = "Module defined when a corresponding ssh key has ben retrieved"; description = "Module defined when a corresponding ssh key has ben retrieved";
options = { options = {
id = lib.mkOption { id = lib.mkOption {
@ -47,7 +46,7 @@ in
name: value: name: value:
( (
let let
sanitizedName = khscodesLib.sanitize-terraform-name name; sanitizedName = lib.khscodes.sanitize-terraform-name name;
in in
{ {
id = "\${ hcloud_server.${sanitizedName}.id }"; id = "\${ hcloud_server.${sanitizedName}.id }";
@ -59,7 +58,7 @@ in
khscodes.hcloud.output.data.ssh_key = lib.attrsets.mapAttrs ( khscodes.hcloud.output.data.ssh_key = lib.attrsets.mapAttrs (
name: _: name: _:
let let
sanitizedName = khscodesLib.sanitize-terraform-name name; sanitizedName = lib.khscodes.sanitize-terraform-name name;
in in
{ {
id = "\${ data.hcloud_ssh_key.${sanitizedName}.id }"; id = "\${ data.hcloud_ssh_key.${sanitizedName}.id }";

View file

@ -1,32 +0,0 @@
{ khscodesLib, inputs }:
{ lib, config, ... }:
let
cfg = config.khscodes.openbao;
modules = [
./output.nix
./vault_mount.nix
];
in
{
options.khscodes.openbao = {
enable = lib.mkEnableOption "Enables the openbao provider";
};
imports = lib.lists.map (m: import m { inherit khscodesLib inputs; }) modules;
config = lib.mkIf cfg.enable {
provider.vault = {
address = "https://auth.kaareskovgaard.net";
};
terraform.required_providers.vault = {
source = "hashicorp/vault";
version = "5.0.0";
};
resource.vault_mount = lib.mapAttrs' (
name: value: {
name = khscodesLib.sanitize-terraform-name name;
value = value;
}
);
};
}

View file

@ -1,10 +0,0 @@
{ khscodesLib, ... }:
{ config, lib, ... }:
let
cfg = config.khscodes.openbao;
in
{
options.khscodes.openbao = { };
config = {
};
}

View file

@ -1,45 +0,0 @@
{ khscodesLib, ... }:
{ lib, config, ... }:
let
cfg = config.khscodes.openbao;
in
{
options.khscodes.openbao = {
vault_ssh_secret_backend_ca = lib.mkOption {
type = lib.types.attrsOf (
khscodesLib.mkSubmodule {
options = {
backend = lib.mkOption {
type = lib.types.str;
description = "Path of the backend mount";
};
generate_signing_key = lib.mkOption {
type = lib.types.bool;
description = "Generate a signing key on the server";
};
key_type = lib.mkOption {
type = lib.types.str;
description = "The type of the signing key to use/generate";
};
};
description = "vault_ssh_secret_backend_ca";
}
);
};
};
config = lib.mkIf cfg.enable {
provider.vault = {
address = "https://auth.kaareskovgaard.net";
};
terraform.required_providers.vault = {
source = "hashicorp/vault";
version = "5.0.0";
};
resource.vault_ssh_secret_backend_ca = lib.mapAttrs' (
name: value: {
name = khscodesLib.sanitize-terraform-name name;
value = value;
}
);
};
}

View file

@ -1,52 +0,0 @@
{ khscodesLib, ... }:
{ lib, config, ... }:
let
cfg = config.khscodes.openbao;
in
{
options.khscodes.openbao = {
vault_mount = lib.mkOption {
type = lib.types.attrsOf (
khscodesLib.mkSubmodule {
options = {
type = lib.mkOption {
type = lib.types.str;
description = "Type of mount";
};
path = lib.mkOption {
type = lib.types.str;
description = "Path of the mount";
default = null;
};
default_lease_ttl_seconds = lib.mkOption {
type = lib.types.int;
description = "Default lease ttl in seconds";
default = null;
};
max_lease_ttl_seconds = lib.mkOption {
type = lib.types.int;
description = "Max lease ttl in seconds";
default = null;
};
};
description = "vault_mount";
}
);
};
};
config = lib.mkIf cfg.enable {
provider.vault = {
address = "https://auth.kaareskovgaard.net";
};
terraform.required_providers.vault = {
source = "hashicorp/vault";
version = "5.0.0";
};
resource.vault_mount = lib.mapAttrs' (
name: value: {
name = khscodesLib.sanitize-terraform-name name;
value = value;
}
);
};
}

View file

@ -1,11 +1,11 @@
- { khscodesLib, inputs }:
- { lib, config, ... }:
+ {
+ lib,
+ config,
+ ...
+ }:
let
cfg = config.khscodes.openstack;
- modules = [
- ./output.nix
- ];
- firewallRuleModule = khscodesLib.mkSubmodule {
+ firewallRuleModule = lib.khscodes.mkSubmodule {
description = "Firewall rule";
options = {
direction = lib.mkOption {
@ -53,7 +53,7 @@ let
port_range_min = rule.port;
port_range_max = rule.port;
});
- openstackComputeInstance = khscodesLib.mkSubmodule {
+ openstackComputeInstance = lib.khscodes.mkSubmodule {
description = "Openstack compute instance";
options = {
name = lib.mkOption {
@ -85,6 +85,11 @@ let
"1.0.0.1"
];
};
+ user_data = lib.mkOption {
+ type = lib.types.str;
+ default = "";
+ description = "User data for the instance";
+ };
volume_size = lib.mkOption {
type = lib.types.int;
description = "Size of the root volume, in gigabytes";
@ -127,7 +132,7 @@ in
};
};
- imports = lib.lists.map (m: import m { inherit khscodesLib inputs; }) modules;
+ imports = [ ./output.nix ];
config = lib.mkIf cfg.enable {
terraform.required_providers.openstack = {
@ -169,7 +174,7 @@ in
data.openstack_compute_flavor_v2 = lib.mapAttrs' (
name: value:
let
- sanitizedName = khscodesLib.sanitize-terraform-name name;
+ sanitizedName = lib.khscodes.sanitize-terraform-name name;
in
{
name = sanitizedName;
@ -183,7 +188,7 @@ in
data.openstack_images_image_v2 = lib.mapAttrs' (
name: value:
let
- sanitizedName = khscodesLib.sanitize-terraform-name name;
+ sanitizedName = lib.khscodes.sanitize-terraform-name name;
in
{
name = sanitizedName;
@ -209,12 +214,12 @@ in
resource.openstack_compute_keypair_v2 = lib.mapAttrs' (
name: value:
let
- sanitizedName = khscodesLib.sanitize-terraform-name name;
+ sanitizedName = lib.khscodes.sanitize-terraform-name name;
in
{
name = sanitizedName;
value = {
- name = khscodesLib.sanitize-terraform-name value.name;
+ name = lib.khscodes.sanitize-terraform-name value.name;
public_key = value.ssh_public_key;
};
}
@ -224,7 +229,7 @@ in
resource.openstack_networking_router_v2 = lib.mapAttrs' (
name: value:
let
- sanitizedName = khscodesLib.sanitize-terraform-name name;
+ sanitizedName = lib.khscodes.sanitize-terraform-name name;
in
{
name = sanitizedName;
@ -240,7 +245,7 @@ in
resource.openstack_networking_network_v2 = lib.mapAttrs' (
name: value:
let
- sanitizedName = khscodesLib.sanitize-terraform-name name;
+ sanitizedName = lib.khscodes.sanitize-terraform-name name;
in
{
name = sanitizedName;
@ -256,7 +261,7 @@ in
(lib.mapAttrs' (
name: value:
let
- sanitizedName = khscodesLib.sanitize-terraform-name name;
+ sanitizedName = lib.khscodes.sanitize-terraform-name name;
in
{
name = "${sanitizedName}_ip4";
@ -273,7 +278,7 @@ in
// (lib.mapAttrs' (
name: value:
let
- sanitizedName = khscodesLib.sanitize-terraform-name name;
+ sanitizedName = lib.khscodes.sanitize-terraform-name name;
in
{
name = "${sanitizedName}_ip6";
@ -295,7 +300,7 @@ in
(lib.mapAttrs' (
name: value:
let
- sanitizedName = khscodesLib.sanitize-terraform-name name;
+ sanitizedName = lib.khscodes.sanitize-terraform-name name;
in
{
name = "${sanitizedName}_ip4";
@ -308,7 +313,7 @@ in
// (lib.mapAttrs' (
name: value:
let
- sanitizedName = khscodesLib.sanitize-terraform-name name;
+ sanitizedName = lib.khscodes.sanitize-terraform-name name;
in
{
name = "${sanitizedName}_ip6";
@ -323,7 +328,7 @@ in
resource.openstack_networking_floatingip_v2 = lib.mapAttrs' (
name: value:
let
- sanitizedName = khscodesLib.sanitize-terraform-name name;
+ sanitizedName = lib.khscodes.sanitize-terraform-name name;
in
{
name = sanitizedName;
@ -338,7 +343,7 @@ in
resource.openstack_blockstorage_volume_v3 = lib.mapAttrs' (
name: value:
let
- sanitizedName = khscodesLib.sanitize-terraform-name name;
+ sanitizedName = lib.khscodes.sanitize-terraform-name name;
in
{
name = sanitizedName;
@ -355,7 +360,7 @@ in
resource.openstack_networking_secgroup_v2 = lib.mapAttrs' (
name: value:
let
- sanitizedName = khscodesLib.sanitize-terraform-name name;
+ sanitizedName = lib.khscodes.sanitize-terraform-name name;
in
{
name = sanitizedName;
@ -372,7 +377,7 @@ in
lib.lists.map (
{ name, value }:
let
- sanitizedName = khscodesLib.sanitize-terraform-name name;
+ sanitizedName = lib.khscodes.sanitize-terraform-name name;
in
lib.listToAttrs (
lib.lists.map (
@ -382,7 +387,7 @@ in
if rule.protocol == "icmp" then "icmp" else "${rule.protocol}_${builtins.toString rule.port}";
in
{
- name = "${sanitizedName}_${rule.direction}_${rule.ethertype}_${protocol}_${khscodesLib.sanitize-terraform-name rule.remote_subnet}";
+ name = "${sanitizedName}_${rule.direction}_${rule.ethertype}_${protocol}_${lib.khscodes.sanitize-terraform-name rule.remote_subnet}";
value = mapFirewallRule "\${ resource.openstack_networking_secgroup_v2.${sanitizedName}.id }" rule;
}
) value.firewall_rules
@ -395,7 +400,7 @@ in
data.openstack_networking_port_v2 = lib.mapAttrs' (
name: value:
let
- sanitizedName = khscodesLib.sanitize-terraform-name name;
+ sanitizedName = lib.khscodes.sanitize-terraform-name name;
in
{
name = sanitizedName;
@ -408,7 +413,7 @@ in
resource.openstack_compute_instance_v2 = lib.mapAttrs' (
name: value:
let
- sanitizedName = khscodesLib.sanitize-terraform-name name;
+ sanitizedName = lib.khscodes.sanitize-terraform-name name;
in
{
name = sanitizedName;
@ -432,6 +437,7 @@ in
uuid = "\${ openstack_networking_network_v2.${sanitizedName}.id }";
}
];
+ user_data = value.user_data;
};
}
) cfg.compute_instance;
@ -440,7 +446,7 @@ in
resource.openstack_networking_floatingip_associate_v2 = lib.mapAttrs' (
name: value:
let
- sanitizedName = khscodesLib.sanitize-terraform-name name;
+ sanitizedName = lib.khscodes.sanitize-terraform-name name;
in
{
name = sanitizedName;

View file

@ -1,8 +1,7 @@
- { khscodesLib, ... }:
{ config, lib, ... }:
let
cfg = config.khscodes.openstack;
- openstackOutputInstanceModule = khscodesLib.mkSubmodule {
+ openstackOutputInstanceModule = lib.khscodes.mkSubmodule {
description = "Instance output";
options = {
id = lib.mkOption {
@ -41,7 +40,7 @@ in
name: value:
(
let
- sanitizedName = khscodesLib.sanitize-terraform-name name;
+ sanitizedName = lib.khscodes.sanitize-terraform-name name;
in
{
id = "\${ openstack_compute_instance_v2.${sanitizedName}.id }";

View file

@ -1,4 +1,3 @@
- { ... }:
{ lib, config, ... }:
let
cfg = config.khscodes.s3;

View file

@ -1,11 +1,7 @@
- { khscodesLib, inputs }:
{ lib, config, ... }:
let
cfg = config.khscodes.unifi;
- modules = [
- ./output.nix
- ];
- unifiStaticRouteModule = khscodesLib.mkSubmodule {
+ unifiStaticRouteModule = lib.khscodes.mkSubmodule {
description = "Unifi static route";
options = {
network = lib.mkOption {
@ -36,7 +32,7 @@ in
};
};
- imports = lib.lists.map (m: import m { inherit khscodesLib inputs; }) modules;
+ imports = [ ./output.nix ];
config = lib.mkIf cfg.enable {
terraform.required_providers.unifi = {
@ -50,7 +46,7 @@ in
resource.unifi_static_route = lib.mapAttrs' (
name: value:
let
- sanitizedName = khscodesLib.sanitize-terraform-name name;
+ sanitizedName = lib.khscodes.sanitize-terraform-name name;
in
{
name = sanitizedName;

View file

@ -1,5 +1,4 @@
- { khscodesLib, ... }:
- { config, lib, ... }:
+ { config, ... }:
let
cfg = config.khscodes.unifi;
in

View file

@ -0,0 +1,119 @@
{ lib, config, ... }:
let
cfg = config.khscodes.vault;
in
{
options.khscodes.vault = {
approle_auth_backend_role = lib.mkOption {
type = lib.types.attrsOf (
lib.khscodes.mkSubmodule {
options = {
backend = lib.mkOption {
type = lib.types.str;
description = "Path of the backend";
default = "approle";
};
role_name = lib.mkOption {
type = lib.types.str;
description = "Name of the role";
};
secret_id_ttl = lib.mkOption {
type = lib.types.int;
description = "TTL for the secret id, in seconds";
};
secret_id_num_uses = lib.mkOption {
type = lib.types.int;
description = "Maximum number of uses per secret id";
};
token_ttl = lib.mkOption {
type = lib.types.int;
description = "TTL for the tokens issued, in seconds";
};
token_max_ttl = lib.mkOption {
type = lib.types.int;
description = "Max TTL for the tokens issued, in seconds";
};
token_policies = lib.mkOption {
type = lib.types.listOf lib.types.str;
description = "Policies attached to the backend role";
};
};
description = "vault_approle_auth_backend_role";
}
);
description = "Defines an app backend role";
default = { };
};
approle_auth_backend_role_secret_id = lib.mkOption {
type = lib.types.attrsOf (
lib.khscodes.mkSubmodule {
options = {
backend = lib.mkOption {
type = lib.types.str;
description = "Path of the backend";
default = "approle";
};
role_name = lib.mkOption {
type = lib.types.str;
description = "NThe name of the role to create the SecretID for";
};
cidr_list = lib.mkOption {
type = lib.types.listOf lib.types.str;
description = "If set, specifies blocks of IP addresses which can perform the login operation using this SecretID";
default = [ ];
};
metadata = lib.mkOption {
type = lib.types.attrsOf lib.types.str;
description = "Metadata associated with tokens issued by this secret";
default = { };
};
num_uses = lib.mkOption {
type = lib.types.int;
description = "Number of uses for the secret id";
default = 300;
};
wrapping_ttl = lib.mkOption {
type = lib.types.nullOr lib.types.int;
description = "If set, the SecretID response will be response-wrapped and available for the duration specified. Only a single unwrapping of the token is allowed.";
default = null;
};
with_wrapped_accessor = lib.mkOption {
type = lib.types.bool;
description = "Set to `true` to use the wrapped secret-id accessor as the resource ID. If `false` (default value), a fresh secret ID will be regenerated whenever the wrapping token is expired or invalidated through unwrapping.";
default = false;
};
lifecycle.ignore_changes = lib.mkOption {
type = lib.types.listOf lib.types.str;
description = "Ignores changes to the following properties when rerunning the terraform script";
default = [ ];
};
};
description = "vault_approle_auth_backend_role_secret_id";
}
);
description = "Defines an app backend role secret id";
default = { };
};
};
config = lib.mkIf cfg.enable {
resource.vault_approle_auth_backend_role = lib.mapAttrs' (name: value: {
name = lib.khscodes.sanitize-terraform-name name;
value = value;
}) cfg.approle_auth_backend_role;
resource.vault_approle_auth_backend_role_secret_id = lib.mapAttrs' (name: value: {
name = lib.khscodes.sanitize-terraform-name name;
value = {
inherit (value)
backend
role_name
cidr_list
wrapping_ttl
num_uses
with_wrapped_accessor
lifecycle
;
metadata = if value.metadata != null then builtins.toJSON value.metadata else null;
};
}) cfg.approle_auth_backend_role_secret_id;
};
}

View file

@ -0,0 +1,51 @@
{ lib, config, ... }:
let
cfg = config.khscodes.vault;
in
{
options.khscodes.vault = {
enable = lib.mkEnableOption "Enables the openbao provider";
domain = lib.mkOption {
type = lib.types.str;
};
policy = lib.mkOption {
type = lib.types.attrsOf (
lib.khscodes.mkSubmodule {
options = {
name = lib.mkOption {
type = lib.types.str;
description = "Name of the policy";
};
policy = lib.mkOption {
type = lib.types.lines;
description = "The policy";
};
};
description = "vault_policy";
}
);
};
};
imports = [
./approle_auth_backend.nix
./output.nix
./mount.nix
./ssh_secret_backend.nix
./pki_secret_backend.nix
];
config = lib.mkIf cfg.enable {
provider.vault = {
address = "https://${cfg.domain}";
};
terraform.required_providers.vault = {
source = "hashicorp/vault";
version = "5.0.0";
};
resource.vault_policy = lib.mapAttrs' (name: value: {
name = lib.khscodes.sanitize-terraform-name name;
value = value;
}) cfg.policy;
};
}

View file

@ -0,0 +1,54 @@
{ lib, config, ... }:
let
cfg = config.khscodes.vault;
in
{
options.khscodes.vault = {
mount = lib.mkOption {
type = lib.types.attrsOf (
lib.khscodes.mkSubmodule {
options = {
type = lib.mkOption {
type = lib.types.str;
description = "Type of mount";
};
path = lib.mkOption {
type = lib.types.nullOr lib.types.str;
description = "Path of the mount";
default = null;
};
default_lease_ttl_seconds = lib.mkOption {
type = lib.types.nullOr lib.types.int;
description = "Default lease ttl in seconds";
default = null;
};
max_lease_ttl_seconds = lib.mkOption {
type = lib.types.nullOr lib.types.int;
description = "Max lease ttl in seconds";
default = null;
};
options = lib.mkOption {
type = lib.types.nullOr lib.types.attrs;
description = "Options for the mount";
default = null;
};
description = lib.mkOption {
type = lib.types.nullOr lib.types.str;
description = "Usage description for the mount";
default = null;
};
};
description = "vault_mount";
}
);
description = "Defines a vault mount";
default = { };
};
};
config = lib.mkIf cfg.enable {
resource.vault_mount = lib.mapAttrs' (name: value: {
name = lib.khscodes.sanitize-terraform-name name;
value = value;
}) cfg.mount;
};
}

View file

@ -0,0 +1,83 @@
{ config, lib, ... }:
let
cfg = config.khscodes.vault;
in
{
options.khscodes.vault = {
output = {
approle_auth_backend_role = lib.mkOption {
type = lib.types.attrsOf (
lib.khscodes.mkSubmodule {
options = {
role_name = lib.mkOption {
type = lib.types.str;
description = "The name of the role. Can be used instead of hardcoding the role, to create a dependency in OpenTofu";
};
role_id = lib.mkOption {
type = lib.types.str;
description = "ID of the role";
};
};
description = "vault_approle_auth_backend_role output";
}
);
};
approle_auth_backend_role_secret_id = lib.mkOption {
type = lib.types.attrsOf (
lib.khscodes.mkSubmodule {
options = {
wrapping_token = lib.mkOption {
type = lib.types.str;
description = "The generated wrapping token";
};
};
description = "vault_approle_auth_backend_role_secret_id";
}
);
};
mount = lib.mkOption {
type = lib.types.attrsOf (
lib.khscodes.mkSubmodule {
options = {
path = lib.mkOption {
type = lib.types.str;
description = "The path of the mount, this is here mainly to set up dependencies";
};
};
description = "vault_mount output";
}
);
};
};
};
config = {
khscodes.vault.output.approle_auth_backend_role = lib.mapAttrs (
name: value:
let
sanitizedName = lib.khscodes.sanitize-terraform-name name;
in
{
role_name = "\${ vault_approle_auth_backend_role.${sanitizedName}.role_name }";
role_id = "\${ vault_approle_auth_backend_role.${sanitizedName}.role_id }";
}
) cfg.approle_auth_backend_role;
khscodes.vault.output.approle_auth_backend_role_secret_id = lib.mapAttrs (
name: value:
let
sanitizedName = lib.khscodes.sanitize-terraform-name name;
in
{
wrapping_token = "\${ vault_approle_auth_backend_role_secret_id.${sanitizedName}.wrapping_token }";
}
) cfg.approle_auth_backend_role_secret_id;
khscodes.vault.output.mount = lib.mapAttrs (
name: value:
let
sanitizedName = lib.khscodes.sanitize-terraform-name name;
in
{
path = "\${ vault_mount.${sanitizedName}.path }";
}
) cfg.mount;
};
}

View file

@ -0,0 +1,117 @@
{ lib, config, ... }:
let
cfg = config.khscodes.vault;
in
{
options.khscodes.vault = {
pki_secret_backend_root_cert = lib.mkOption {
type = lib.types.attrsOf (
lib.khscodes.mkSubmodule {
options = {
backend = lib.mkOption {
type = lib.types.str;
description = "Path of the backend";
default = "pki";
};
type = lib.mkOption {
type = lib.types.enum [
"exported"
"internal"
"kms"
];
description = "Type of intermediate to create. Must be either \"exported\", \"internal\" or \"kms\"";
};
common_name = lib.mkOption {
type = lib.types.str;
description = "CN of intermediate to create";
};
ttl = lib.mkOption {
type = lib.types.str;
description = "TTL for the root certificate, in seconds";
default = "315360000";
};
key_type = lib.mkOption {
type = lib.types.enum [
"rsa"
"ed25519"
"ec"
];
description = "Specifies the desired key type; must be rsa, ed25519 or ec.";
default = "ed25519";
};
issuer_name = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = "Name's the issuer when signing new certificates";
};
};
description = "vault_pki_secret_backend_root_cert";
}
);
description = "Generates a new self-signed CA certificate and private keys for the PKI Secret Backend.";
default = { };
};
pki_secret_backend_role = lib.mkOption {
type = lib.types.attrsOf (
lib.khscodes.mkSubmodule {
options = {
backend = lib.mkOption {
type = lib.types.str;
description = "Path of the backend";
default = "pki";
};
name = lib.mkOption {
type = lib.types.str;
description = "The name to identify this role within the backend. Must be unique within the backend.";
};
allowed_domains = lib.mkOption {
type = lib.types.listOf lib.types.str;
description = "List of allowed domains for certificates";
};
enforce_hostnames = lib.mkOption {
type = lib.types.nullOr lib.types.bool;
default = null;
description = "Flag to allow only valid host names";
};
allow_bare_domains = lib.mkOption {
type = lib.types.nullOr lib.types.bool;
default = null;
description = "Flag to allow certificates matching the actual domain";
};
server_flag = lib.mkOption {
type = lib.types.nullOr lib.types.bool;
default = null;
description = "Flag to specify certificates for server use";
};
client_flag = lib.mkOption {
type = lib.types.nullOr lib.types.bool;
default = null;
description = "Flag to specify certificates for client use";
};
key_type = lib.mkOption {
type = lib.types.enum [
"rsa"
"ed25519"
"ec"
];
description = "Specifies the desired key type; must be rsa, ed25519 or ec.";
default = "ed25519";
};
};
description = "vault_pki_secret_backend_role";
}
);
default = { };
};
};
config = lib.mkIf cfg.enable {
resource.vault_pki_secret_backend_root_cert = lib.mapAttrs' (name: value: {
name = lib.khscodes.sanitize-terraform-name name;
value = value;
}) cfg.pki_secret_backend_root_cert;
resource.vault_pki_secret_backend_role = lib.mapAttrs' (name: value: {
name = lib.khscodes.sanitize-terraform-name name;
value = value;
}) cfg.pki_secret_backend_role;
};
}

View file

@ -0,0 +1,128 @@
{ lib, config, ... }:
let
cfg = config.khscodes.vault;
in
{
options.khscodes.vault = {
ssh_secret_backend_role = lib.mkOption {
type = lib.types.attrsOf (
lib.khscodes.mkSubmodule {
options = {
name = lib.mkOption {
type = lib.types.str;
description = "Specifies the name of the role to create.";
};
backend = lib.mkOption {
type = lib.types.str;
description = "The path where the SSH secret backend is mounted.";
};
key_type = lib.mkOption {
type = lib.types.enum [
"otp"
"dynamic"
"ca"
];
description = "Specifies the type of credentials generated by this role.";
};
allow_bare_domains = lib.mkOption {
type = lib.types.nullOr (lib.types.bool);
description = "Specifies if host certificates that are requested are allowed to use the base domains listed in allowed_domains.";
default = false;
};
allow_host_certificates = lib.mkOption {
type = lib.types.nullOr (lib.types.bool);
description = "Specifies if certificates are allowed to be signed for use as a 'host'.";
default = false;
};
allow_subdomains = lib.mkOption {
type = lib.types.nullOr (lib.types.bool);
description = "Specifies if host certificates that are requested are allowed to be subdomains of those listed in allowed_domains.";
default = false;
};
allow_user_certificates = lib.mkOption {
type = lib.types.nullOr (lib.types.bool);
description = "Specifies if certificates are allowed to be signed for use as a 'user'.";
default = false;
};
allowed_critical_options = lib.mkOption {
type = lib.types.listOf lib.types.str;
description = "Specifies a list of critical options that certificates can have when signed.";
default = [ ];
};
allowed_users = lib.mkOption {
type = lib.types.listOf lib.types.str;
description = "Specifies a list of usernames that are to be allowed, only if certain usernames are to be allowed.";
default = [ ];
};
default_user = lib.mkOption {
type = lib.types.nullOr lib.types.str;
description = "Specifies the default username for which a credential will be generated.";
default = null;
};
allowed_domains = lib.mkOption {
type = lib.types.listOf lib.types.str;
description = "The list of domains for which a client can request a host certificate";
default = [ ];
};
allowed_user_key_config = lib.mkOption {
type = lib.types.listOf (
lib.khscodes.mkSubmodule {
options = {
type = lib.mkOption {
type = lib.types.enum [
"rsa"
"ecdsa"
"ec"
"dsa"
"ed25519"
"ssh-rsa"
"ssh-dss"
"ssh-ed25519"
"ecdsa-sha2-nistp256"
"ecdsa-sha2-nistp384"
"ecdsa-sha2-nistp521"
];
description = "The SSH public key type.";
};
lengths = lib.mkOption {
type = lib.types.listOf lib.types.int;
description = "A list of allowed key lengths as integers. For key types that do not support setting the length a value of [0] should be used.";
};
};
description = "allowed_user_key_config";
}
);
description = "Set of configuration blocks to define allowed user key configuration, like key type and their lengths.";
};
};
description = "vault_ssh_secret_backend_role";
}
);
description = "Defines an ssh secret backend";
default = { };
};
};
config = lib.mkIf cfg.enable {
resource.vault_ssh_secret_backend_role = lib.mapAttrs' (name: value: {
name = lib.khscodes.sanitize-terraform-name name;
value = {
inherit (value)
name
backend
key_type
allow_bare_domains
allow_host_certificates
allow_subdomains
allow_user_certificates
default_user
allowed_user_key_config
;
allowed_critical_options = lib.strings.concatStringsSep "," (
lib.lists.unique value.allowed_critical_options
);
allowed_domains = lib.strings.concatStringsSep "," (lib.lists.unique value.allowed_domains);
allowed_users = lib.strings.concatStringsSep "," (lib.lists.unique value.allowed_users);
};
}) cfg.ssh_secret_backend_role;
};
}

View file

@ -21,4 +21,11 @@
UNIFI_PASSWORD = "Terraform password";
UNIFI_API = "Terraform URL";
};
+ "auth.kaareskovgaard.net" = {
+ "AUTHENTIK_TOKEN" = "Admin API Token";
+ "TF_VAR_authentik_username" = "login.username";
+ };
+ "vault.kaareskovgaard.net" = {
+ "VAULT_TOKEN" = "Initial root token";
+ };
}

View file

@ -1,9 +1,17 @@
- { pkgs, ... }:
+ { pkgs, inputs, ... }:
pkgs.writeShellApplication {
name = "create-instance";
- runtimeInputs = [ pkgs.khscodes.pre-provisioning ];
+ runtimeInputs = [
+ pkgs.khscodes.provision-instance
+ pkgs.khscodes.nixos-install
+ pkgs.jq
+ ];
text = ''
- instance="''${1:-}"
- pre-provisioning "$instance" apply
+ hostname="$1"
+ # Build the configuration to ensure it doesn't fail when trying to install it on the host
+ nix build --no-link '${inputs.self}#nixosConfigurations."'"$hostname"'".config.system.build.toplevel'
+ output="$(provision-instance "$hostname")"
+ ipv4_addr="$(echo "$output" | jq --raw-output '.ipv4_address.value')"
+ nixos-install "$hostname" "$ipv4_addr" "no"
'';
}

View file

@ -17,6 +17,11 @@ pkgs.writeShellApplication {
cat "''${config}" > "$dir/config.tf.json" cat "''${config}" > "$dir/config.tf.json"
tofu -chdir="$dir" init > /dev/null tofu -chdir="$dir" init > /dev/null
tofu -chdir="$dir" "$cmd" if [[ "$cmd" == "apply" ]]; then
tofu -chdir="$dir" "$cmd" >&2
tofu -chdir="$dir" output -json
else
tofu -chdir="$dir" "$cmd"
fi
''; '';
} }

View file

@ -11,10 +11,13 @@ pkgs.writeShellApplication {
# TODO: Use secret source and required secrets to set up the correct env variables
text = ''
hostname="$1"
- # Build the configuration to ensure it doesn't fail when trying to install it on the host
- nix build --no-link '${inputs.self}#nixosConfigurations."'"$hostname"'".config.system.build.toplevel'
# Allow overriding the host to connect to, this is useful when testing and the DNS entries are stale with older IPs.
host="''${2:-$1}"
+ verify="''${3:-yes}"
+ if [[ "$verify" == "yes" ]]; then
+ # Build the configuration to ensure it doesn't fail when trying to install it on the host
+ nix build --no-link '${inputs.self}#nixosConfigurations."'"$hostname"'".config.system.build.toplevel'
+ fi
baseAttr='${inputs.self}#nixosConfigurations."'"$hostname"'".config.khscodes.infrastructure.provisioning'
config="$(nix build --no-link --print-out-paths "''${baseAttr}.preConfig")"
username="$(nix eval --raw "''${baseAttr}.preImageUsername")"

View file

@ -4,4 +4,5 @@ pkgs.opentofu.withPlugins (p: [
pkgs.khscodes.terraform-provider-cloudflare
pkgs.khscodes.terraform-provider-hcloud
pkgs.khscodes.terraform-provider-openstack
+ pkgs.khscodes.terraform-provider-vault
])

View file

@ -0,0 +1,33 @@
{
inputs,
pkgs,
}:
pkgs.writeShellApplication {
name = "pre-provisioning";
runtimeInputs = [
pkgs.nix
pkgs.khscodes.bw-opentofu
pkgs.khscodes.instance-opentofu
pkgs.khscodes.openbao-helper
pkgs.jq
];
# TODO: Use secret source and required secrets to set up the correct env variables
text = ''
hostname="$1"
cmd="''${2:-apply}"
baseAttr='${inputs.self}#nixosConfigurations."'"$hostname"'".config.khscodes.infrastructure.provisioning'
config="$(nix build --no-link --print-out-paths "''${baseAttr}.postConfig")"
secretsSource="$(nix eval --raw "''${baseAttr}.post.secretsSource")"
endpoints="$(nix eval --show-trace --json "''${baseAttr}.postEndpoints")"
if [[ "$config" == "null" ]]; then
echo "No preprovisioning needed"
exit 0
fi
if [[ "$secretsSource" == "vault" ]]; then
readarray -t endpoints_args < <(echo "$endpoints" | jq -cr 'map(["-e", .])[][]')
openbao-helper wrap-program "''${endpoints_args[@]}" -- instance-opentofu "$hostname" "$config" "$cmd"
exit 0
fi
bw-opentofu "$hostname" "$config" "$cmd"
'';
}

View file

@ -15,10 +15,10 @@ pkgs.writeShellApplication {
text = ''
hostname="$1"
cmd="''${2:-apply}"
- baseAttr='${inputs.self}#nixosConfigurations."'"$hostname"'".config.khscodes.infrastructue.provisioning'
+ baseAttr='${inputs.self}#nixosConfigurations."'"$hostname"'".config.khscodes.infrastructure.provisioning'
config="$(nix build --no-link --print-out-paths "''${baseAttr}.preConfig")"
secretsSource="$(nix eval --raw "''${baseAttr}.pre.secretsSource")"
- endpoints="$(nix eval --json "''${baseAttr}.pre.endpoints")"
+ endpoints="$(nix eval --show-trace --json "''${baseAttr}.preEndpoints")"
if [[ "$config" == "null" ]]; then
echo "No preprovisioning needed"
exit 0

View file

@ -0,0 +1,9 @@
{ pkgs, ... }:
pkgs.writeShellApplication {
name = "provision-instance";
runtimeInputs = [ pkgs.khscodes.pre-provisioning ];
text = ''
instance="''${1:-}"
pre-provisioning "$instance" apply
'';
}

View file

@ -0,0 +1,24 @@
{ pkgs, inputs, ... }:
pkgs.writeShellApplication {
name = "start-vm";
runtimeInputs = [
pkgs.spice-gtk
pkgs.uutils-findutils
];
text = ''
host="''${1:-}"
clean="''${2:-no}"
if [[ "$clean" == "clean" ]]; then
find . -type f -name '*.qcow2' -delete
fi
run_vm="$(nix build --no-link --print-out-paths '${inputs.self}#nixosConfigurations."'"$host"'".config.system.build.vm' --show-trace)"
# shellcheck disable=SC2211
# shellcheck disable=SC2086
$run_vm/bin/* &
pid=$!
trap 'kill $pid' EXIT
sleep 2
spicy --title "$host" --uri=spice+unix:///tmp/spice.sock
'';
}

View file

@ -0,0 +1,10 @@
{ pkgs }:
pkgs.terraform-providers.mkProvider {
hash = "sha256-Vqnmw69fktBQhSkj/W0legJ4sHOQP9Moqqi6Z5qYFy4=";
homepage = "https://registry.terraform.io/providers/hashicorp/vault";
owner = "hashicorp";
repo = "terraform-provider-vault";
rev = "v5.0.0";
spdx = "MPL-2.0";
vendorHash = "sha256-6gWw4ypQZWPX7VC9cZxHiU/KhTYEdMTZ276B9neGAiI=";
}

View file

@ -5,6 +5,6 @@ pkgs.writeShellApplication {
text = ''
instance="''${1:-}"
connect_host="''${2:-$1}"
- nixos-rebuild switch --flake "${inputs.self}#$instance" --target-host "$connect_host" --build-host "localhost"
+ nixos-rebuild switch --flake "${inputs.self}#$instance" --target-host "$connect_host" --build-host "$connect_host" --use-remote-sudo
'';
}

View file

@ -1,7 +1,7 @@
{ ... }:
{
+ imports = [ ./khs-server.nix ];
config.khscodes = {
hetzner.enable = true;
- services.openssh.enable = true;
};
}

View file

@ -0,0 +1,18 @@
{ pkgs, config, ... }:
{
imports = [ ./nix-base.nix ];
snowfallorg.users.khs.admin = true;
users.users.khs = {
# TODO: Figure out how to provision password changes to servers from VAULT
initialPassword = "changeme";
openssh.authorizedKeys.keys = [
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCqY0FHnWFKfLG2yfgr4qka5sR9CK+EMAhzlHUkaQyWHTKD+G0/vC/fNPyL1VV3Dxc/ajxGuPzVE+mBMoyxazL3EtuCDOVvHJ5CR+MUSEckg/DDwcGHqy6rC8BvVVpTAVL04ByQdwFnpE1qNSBaQLkxaFVdtriGKkgMkc7+UNeYX/bv7yn+APqfP1a3xr6wdkSSdO8x4N2jsSygOIMx10hLyCV4Ueu7Kp8Ww4rGY8j5o7lKJhbgfItBfSOuQHdppHVF/GKYRhdnK6Y2fZVYbhq4KipUtclbZ6O/VYd8/sOO98+LMm7cOX+K35PQjUpYgcoNy5+Sw3CNS/NHn4JvOtTaUEYP7fK6c9LhMULOO3T7Cm6TMdiFjUKHkyG+s2Mu/LXJJoilw571zwuh6chkeitW8+Ht7k0aPV96kNEvTdoXwLhBifVEaChlAsLAzSUjUq+YYCiXVk0VIXCZQWKj8LoVNTmaqDksWwbcT64fw/FpVC0N18WHbKcFUEIW/O4spJMa30CQwf9FeqpoWoaF1oRClCSDPvX0AauCu0JcmRinz1/JmlXljnXWbSfm20/V+WyvktlI0wTD0cdpNuSasT9vS77YfJ8nutcWWZKSkCj4R4uHeCNpDTX5YXzapy7FxpM9ANCXLIvoGX7Yafba2Po+er7SSsUIY1AsnBBr8ZoDVw=="
];
};
environment = {
systemPackages = [ pkgs.openbao ];
variables = {
BAO_ADDR = "https://${config.khscodes.infrastructure.openbao.domain}";
};
};
}

View file

@ -0,0 +1,39 @@
{
lib,
pkgs,
inputs,
...
}:
{
imports = [ ./khs-base.nix ];
khscodes.virtualisation.qemu-guest.enableWhenVmTarget = true;
khscodes.machine.type = "desktop";
services.desktopManager.cosmic.enable = true;
services.displayManager.cosmic-greeter.enable = true;
stylix = {
enable = true;
autoEnable = false;
image = "${inputs.self}/assets/khs-desktop-wallpaper.jpg";
base16Scheme = lib.mkDefault "${pkgs.base16-schemes}/share/themes/solarized-dark.yaml";
targets.console.enable = true;
fonts = {
monospace = {
package = pkgs.nerd-fonts.inconsolata;
name = "Inconsolata Nerd Font";
};
sizes = {
terminal = 14;
};
};
};
console = {
enable = true;
font = "${pkgs.powerline-fonts}/share/consolefonts/ter-powerline-v24b.psf.gz";
packages = [
pkgs.terminus_font
pkgs.powerline-fonts
];
};
}

View file

@ -1,7 +1,7 @@
{ ... }:
{
+ imports = [ ./khs-server.nix ];
config.khscodes = {
openstack.enable = true;
- services.openssh.enable = true;
};
}

View file

@ -0,0 +1,21 @@
{ lib, pkgs, ... }:
{
imports = [ ./nix-base.nix ];
config = {
khscodes = {
services.openssh.enable = true;
machine.type = "server";
os.auto-update.enable = true;
infrastructure = {
vault-server-approle.enable = lib.mkDefault true;
vault-loki-sender.enable = lib.mkDefault true;
};
};
stylix = {
enable = true;
autoEnable = false;
base16Scheme = lib.mkDefault "${pkgs.base16-schemes}/share/themes/solarized-dark.yaml";
targets.console.enable = true;
};
};
}

View file

@ -3,13 +3,12 @@
...
}:
{
- imports = [ "${inputs.self}/nix/profiles/hetzner-server.nix" ];
+ imports = [ "${inputs.self}/nix/profiles/nixos/hetzner-server.nix" ];
khscodes.infrastructure.hetzner-instance = {
enable = true;
mapRdns = true;
server_type = "cax11";
- secretsSource = "bitwarden";
};
- khscodes.fqdn = "khs.codes";
+ khscodes.networking.fqdn = "khs.codes";
system.stateVersion = "25.05";
}

View file

@ -0,0 +1,7 @@
# After creating the instance

Open https://vault.kaareskovgaard.net and initialize OpenBAO. Remember to set up some sort of auto unsealing afterwards; currently this is implemented with a cronjob on TrueNAS. Doing it this way allows various certificates to continue getting issued even while OpenBAO is sealed (due to auto updates).

After this, run the post provisioning script to initialize the various OpenBAO parts needed, including what `security.kaareskovgaard.net` needs in order to authenticate itself with OpenBAO.

Then `nix run '.#bitwarden-to-vault'` can transfer the needed Bitwarden secrets to vault.
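
A rough sketch of that flow as shell commands (the `create-instance` and `post-provisioning` entry points are assumptions based on the scripts in this repository; the actual names and arguments may differ):

```sh
# Provision the instance and install NixOS onto it (assumed entry point).
nix run '.#create-instance' -- security.kaareskovgaard.net

# Initialize and unseal OpenBAO in the UI at https://vault.kaareskovgaard.net,
# then run the post provisioning step (assumed entry point) to create the
# approle auth backend, mounts and policies.
nix run '.#post-provisioning' -- security.kaareskovgaard.net apply

# Finally, copy the needed Bitwarden secrets into OpenBAO.
nix run '.#bitwarden-to-vault'
```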

View file

@ -0,0 +1,60 @@
{ config, ... }:
let
secretsFile = "/var/lib/authentik/authentik-env";
domain = "auth-test.kaareskovgaard.net";
in
{
config = {
khscodes.nix.nix-community.enable = true;
services.authentik = {
enable = true;
environmentFile = secretsFile;
settings = {
email = {
host = "smtp.soverin.net";
port = 587;
username = "kaare@kaareskovgaard.net";
use_tls = true;
use_ssl = false;
from = "kaare@kaareskovgaard.net";
};
disable_startup_analytics = true;
avatars = "initials";
};
};
khscodes.services.nginx.virtualHosts.${domain} = {
locations."/" = {
proxyPass = "https://localhost:9443";
recommendedProxySettings = true;
};
};
services.postgresqlBackup = {
enable = true;
databases = [ "authentik" ];
};
systemd.services = {
authentik-migrate = {
unitConfig = {
ConditionPathExists = secretsFile;
};
};
authentik-worker = {
unitConfig = {
ConditionPathExists = secretsFile;
};
serviceConfig = {
LoadCredential = [
"${domain}.pem:${config.security.acme.certs.${domain}.directory}/fullchain.pem"
"${domain}.key:${config.security.acme.certs.${domain}.directory}/key.pem"
];
};
};
authentik = {
unitConfig = {
ConditionPathExists = secretsFile;
};
};
};
};
}

View file

@ -0,0 +1,30 @@
{
inputs,
...
}:
{
imports = [
"${inputs.self}/nix/profiles/nixos/hetzner-server.nix"
./authentik.nix
./openbao.nix
./post/openbao
];
khscodes.services.nginx.enable = true;
khscodes.infrastructure.hetzner-instance = {
enable = true;
server_type = "cax11";
};
# Cannot use vault for secrets source, as this is the server containing vault.
khscodes.infrastructure.provisioning.pre.secretsSource = "bitwarden";
khscodes.infrastructure.provisioning.post.secretsSource = "bitwarden";
khscodes.infrastructure.vault-server-approle.stage = "post";
khscodes.networking.fqdn = "security.kaareskovgaard.net";
users.users.khs = {
initialPassword = "changeme";
openssh.authorizedKeys.keys = [
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCqY0FHnWFKfLG2yfgr4qka5sR9CK+EMAhzlHUkaQyWHTKD+G0/vC/fNPyL1VV3Dxc/ajxGuPzVE+mBMoyxazL3EtuCDOVvHJ5CR+MUSEckg/DDwcGHqy6rC8BvVVpTAVL04ByQdwFnpE1qNSBaQLkxaFVdtriGKkgMkc7+UNeYX/bv7yn+APqfP1a3xr6wdkSSdO8x4N2jsSygOIMx10hLyCV4Ueu7Kp8Ww4rGY8j5o7lKJhbgfItBfSOuQHdppHVF/GKYRhdnK6Y2fZVYbhq4KipUtclbZ6O/VYd8/sOO98+LMm7cOX+K35PQjUpYgcoNy5+Sw3CNS/NHn4JvOtTaUEYP7fK6c9LhMULOO3T7Cm6TMdiFjUKHkyG+s2Mu/LXJJoilw571zwuh6chkeitW8+Ht7k0aPV96kNEvTdoXwLhBifVEaChlAsLAzSUjUq+YYCiXVk0VIXCZQWKj8LoVNTmaqDksWwbcT64fw/FpVC0N18WHbKcFUEIW/O4spJMa30CQwf9FeqpoWoaF1oRClCSDPvX0AauCu0JcmRinz1/JmlXljnXWbSfm20/V+WyvktlI0wTD0cdpNuSasT9vS77YfJ8nutcWWZKSkCj4R4uHeCNpDTX5YXzapy7FxpM9ANCXLIvoGX7Yafba2Po+er7SSsUIY1AsnBBr8ZoDVw=="
];
};
khscodes.infrastructure.openbao.domain = "vault-test.kaareskovgaard.net";
system.stateVersion = "25.05";
}

View file

@ -0,0 +1,51 @@
{ pkgs, config, ... }:
let
domain = config.khscodes.infrastructure.openbao.domain;
in
{
config = {
services.openbao = {
enable = true;
package = pkgs.openbao;
settings = {
ui = true;
listener.tcp = {
type = "tcp";
tls_cert_file = "${config.security.acme.certs.${domain}.directory}/fullchain.pem";
tls_key_file = "${config.security.acme.certs.${domain}.directory}/key.pem";
};
api_addr = "https://${domain}";
storage.postgresql.connection_url = "postgres://openbao?host=/run/postgresql";
};
};
security.acme.certs.${domain}.reloadServices = [ "openbao.service" ];
systemd.services.openbao.after = [ "postgresql.service" ];
# Allow openbao to read the certificate file
users.groups.nginx.members = [ "openbao" ];
services.postgresql = {
enable = true;
ensureDatabases = [ "openbao" ];
ensureUsers = [
{
name = "openbao";
ensureDBOwnership = true;
}
];
};
services.postgresqlBackup = {
enable = true;
databases = [ "openbao" ];
};
khscodes.services.nginx.virtualHosts.${domain} = {
locations."/" = {
proxyPass = "https://${config.services.openbao.settings.listener.tcp.address}/";
recommendedProxySettings = true;
};
};
};
}

View file

@ -0,0 +1,10 @@
{
khscodes.infrastructure.vault-server-approle.path = "\${ vault_auth_backend.approle.path }";
khscodes.infrastructure.provisioning.post.modules = [
{
resource.vault_auth_backend.approle = {
type = "approle";
};
}
];
}

View file

@ -0,0 +1,29 @@
{
imports = [
./approle.nix
./ssh-host.nix
./loki-mtls.nix
./prometheus-mtls.nix
];
khscodes.infrastructure.vault-server-approle.path = "\${ vault_auth_backend.approle.path }";
khscodes.infrastructure.provisioning.post.modules = [
(
{ config, ... }:
{
khscodes.vault.mount.opentofu = {
path = "opentofu";
type = "kv";
options = {
version = "2";
};
description = "Secrets used during provisioning";
};
resource.vault_kv_secret_backend_v2.opentofu = {
mount = config.khscodes.vault.output.mount.opentofu.path;
max_versions = 5;
cas_required = false;
};
}
)
];
}

View file

@ -0,0 +1,26 @@
# This should go into the setup of the vault server itself, as the vault server also needs stuff that depends on this.
{
khscodes.infrastructure.vault-loki-sender = {
terranixBackendName = "\${ vault_mount.loki-mtls.path }";
};
khscodes.infrastructure.provisioning.post.modules = [
(
{ config, ... }:
{
khscodes.vault.enable = true;
khscodes.vault.mount.loki-mtls = {
type = "pki";
path = "loki-mtls";
max_lease_ttl_seconds = 10 * 365 * 24 * 60 * 60;
default_lease_ttl_seconds = 60 * 60;
};
khscodes.vault.pki_secret_backend_root_cert.loki-mtls = {
backend = config.khscodes.vault.output.mount.loki-mtls.path;
type = "internal";
common_name = "loki.kaareskovgaard.net";
issuer_name = "loki-mtls-root-ca";
};
}
)
];
}

Some files were not shown because too many files have changed in this diff.