My Homelab Setup

Getting started

Dependencies

Install dependencies (Arch):

pacman -Sy opentofu kubectl helm helmfile python jq

Proxmox

We first need to create a Proxmox user for Terraform to act on behalf of, along with an API token for that user.

# Create the user
pveum user add terraform@pve

# Create a role for the user above
pveum role add Terraform -privs "Datastore.Allocate Datastore.AllocateSpace Datastore.AllocateTemplate Datastore.Audit Pool.Allocate Sys.Audit Sys.Console Sys.Modify SDN.Use VM.Allocate VM.Audit VM.Clone VM.Config.CDROM VM.Config.Cloudinit VM.Config.CPU VM.Config.Disk VM.Config.HWType VM.Config.Memory VM.Config.Network VM.Config.Options VM.Migrate VM.Monitor VM.PowerMgmt User.Modify Pool.Audit"

# Assign the terraform user to the above role
pveum aclmod / -user terraform@pve -role Terraform

# Create the token and save it for later
pveum user token add terraform@pve provider --privsep=0

Provisioning with OpenTofu/Terraform

Create a file proxmox/tf/credentials.auto.tfvars with the following content, replacing the placeholders as needed:

proxmox_api_endpoint = "https://<domain or ip>"
proxmox_api_token    = "terraform@pve!provider=<token from last step>"

Customize the other variables in proxmox/tf/vars.auto.tfvars and double check the configuration.
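As a rough illustration only, the values to review tend to look something like the following; the variable names here are hypothetical, so use the ones actually defined in proxmox/tf/vars.auto.tfvars:

# Hypothetical values for illustration; check proxmox/tf/vars.auto.tfvars
# for the real variable names and defaults
proxmox_node    = "pve"
vm_template     = "debian-12-cloudinit"
node_count      = 3
node_memory_mb  = 4096
ssh_public_key  = "~/.ssh/id_ed25519.pub"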

When ready, run tofu apply (the OpenTofu CLI binary is named tofu). The command might fail on the first run when provisioning from scratch, but re-running it a second time usually succeeds.
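For reference, a typical provisioning run from the proxmox/tf directory looks roughly like this (a sketch; adjust the path if your layout differs):

cd proxmox/tf
# Download providers and modules on the first run
tofu init
# Review the planned changes before applying
tofu plan
# Apply; re-run if the first attempt fails when provisioning from scratch
tofu apply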

Creating a Docker swarm

The Docker swarm acts as a launchpad for the rest of the infrastructure. It bootstraps Portainer, Traefik, and Gitea deployments so that the remaining configuration can be done through Portainer and Git.

# Add SSH keys to known_hosts
ansible-inventory -i inventory/dolo --list |\
  jq -r '._meta.hostvars | keys[]' |\
  grep 'stingray' |\
  while read -r line; do
    ssh-keygen -R "$line"
    ssh-keyscan -H "$line" >> ~/.ssh/known_hosts
  done

# Initialize swarm
ansible-playbook -i inventory/stingray swarm.yml
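To confirm the swarm formed, you can list its nodes from any manager node (a quick sanity check, not part of the playbook):

# Run on a swarm manager node
docker node ls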

Traefik will be serving the following hostnames:

  • git.mnke.org
  • git.stingray.mnke.org
  • portainer.stingray.mnke.org

Set DNS records or edit your hosts file to point those domains to a swarm node.
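For example, a hosts-file entry pointing the domains at a swarm node could look like this (192.0.2.10 is a placeholder for one of your swarm node addresses):

# /etc/hosts
192.0.2.10  git.mnke.org git.stingray.mnke.org portainer.stingray.mnke.org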

Creating a k3s cluster

Set up Ansible:

# Tested on Python 3.13.1
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
ansible-galaxy collection install -r proxmox/ansible/collections/requirements.yml

Set up the k3s cluster:

# Necessary because the hosts.yml file contains a relative path to the terraform
# project directory
cd proxmox/ansible
# Remove/scan keys
ansible-inventory -i inventory/dolo --list |\
  jq -r '._meta.hostvars | keys[]' |\
  while read -r line; do
    ssh-keygen -R "$line"
    ssh-keyscan -H "$line" >> ~/.ssh/known_hosts
  done
ansible-playbook lvm.yml site.yml -i inventory/dolo
# You should be left with a kubeconfig. Move it to ~/.kube/config. If you
# already have a ~/.kube/config file, make sure to back it up first.
mv kubeconfig ~/.kube/config
# Verify that you can connect to the cluster
kubectl get nodes

# Back to root repo directory
cd -
# Deploy an example deployment and service
kubectl apply -f proxmox/k8s/examples/001-example.yml
# This should succeed, and MetalLB should allocate an external IP for the
# service. Check the EXTERNAL-IP column with the following command:
kubectl get svc
# Now try checking that the deployment works:
curl http://[allocated-ip]
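For context, the example manifest pairs a Deployment with a Service of type LoadBalancer, which is what prompts MetalLB to allocate the external IP. A minimal sketch along those lines (hypothetical names; see proxmox/k8s/examples/001-example.yml for the real definition):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer  # MetalLB assigns the external IP for this service
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80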

Install Helm charts

kubectl create secret generic regcred \
  --from-file=.dockerconfigjson=$HOME/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson
# Run from the repo root
helmfile sync -f proxmox/k8s/helmfile.d
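If you haven't used Helmfile before, each file under helmfile.d declares chart repositories and releases roughly like this (a generic sketch, not the actual contents of this repo):

repositories:
  - name: ingress-nginx
    url: https://kubernetes.github.io/ingress-nginx

releases:
  - name: ingress-nginx
    namespace: ingress-nginx
    chart: ingress-nginx/ingress-nginx
    # values: [values/ingress-nginx.yaml]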

Credits
