- ansible: Provisioning of the wordspeak webserver and home firewall
- packer: Creation of OpenBSD VirtualBox images
- vagrant: Vagrantfiles for each type of machine
Creation of VirtualBox images is done via packer.
This setup relies on packer being able to accept incoming connections, so
review the firewall on the host if it's not working. It also requires that the
`variables["mirror"]` setting in the packer json file corresponds to the HTTP
server in `http/install.conf`.
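As a rough sketch of that correspondence (the `mirror` value is taken from the DHCP option below; the `install.conf` answer line is an assumption about the autoinstall answer-file syntax, not copied from the repo):

```
# openbsd.json (fragment)
"variables": {
    "mirror": "openbsd-mirror.soyuz.local:8080"
}

# http/install.conf (fragment, illustrative)
HTTP Server = openbsd-mirror.soyuz.local
```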
As this setup uses OpenBSD autoinstall, it's necessary to set DHCP options:

```
option tftp-server-name "openbsd-mirror.soyuz.local:8080";
option bootfile-name "auto_install";
```

(The `tftp-server-name` value corresponds, again, to the `variables["mirror"]`
setting in the packer json file.) The DHCP options (for VMware Fusion) are set in
`/Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf`; restart the dhcpd
process to apply changes. The command line is probably:

```sh
"/Applications/VMware Fusion.app/Contents/Library/vmnet-dhcpd" -s 6 \
    -cf "/Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf" \
    -lf /var/db/vmware/vmnet-dhcpd-vmnet8.leases \
    -pf /var/run/vmnet-dhcpd-vmnet8.pid vmnet8
```
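For context, those options sit inside the subnet block for vmnet8 in that dhcpd.conf. A minimal sketch in ISC dhcpd syntax (the subnet, range and router values are assumptions for a typical vmnet8 network, not copied from a real config):

```
subnet 192.168.20.0 netmask 255.255.255.0 {
    range 192.168.20.128 192.168.20.254;
    option routers 192.168.20.2;
    # autoinstall options from above
    option tftp-server-name "openbsd-mirror.soyuz.local:8080";
    option bootfile-name "auto_install";
}
```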
- Set up or review the file (the varfile) containing the private packer variables (`ssh_private_key_file`, `ssh_public_key_str`, `root_bcrypt_hash`)
- `cd ~/Code/setup-scripts/packer`
- `packer build -var-file=<path-to-varfile> openbsd.json`
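A sketch of what the varfile might contain (the variable names are the real ones listed above; every value here is a placeholder):

```json
{
    "ssh_private_key_file": "~/.ssh/id_ed25519_packer",
    "ssh_public_key_str": "ssh-ed25519 AAAA... user@host",
    "root_bcrypt_hash": "$2b$10$..."
}
```

On OpenBSD, a suitable bcrypt hash can be generated with `encrypt(1)`, e.g. `encrypt -b 10` (assuming its `-b` rounds flag; check the man page on your release).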
- The ovf and vmdk files will be found in the `output_directory` specified in the openbsd.json file. Take note of the path to the ovf file that's been created, as it is necessary below.
- `vagrant box add --name openbsd64 build/openbsd64-amd64-vmware.box`
- `cd` into a directory under vagrant that defines the type of machine that you want, and look at the `config.vm.box` directive. Vagrant won't pull in the machine that you've just built if one has already been imported: if `vagrant box list` shows a box with the same name as the `config.vm.box` directive in the `Vagrantfile`, then run `vagrant box delete <name-of-box>` (see the sequence sketched below)
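Putting that check together (the box name `openbsd64` and box path are taken from the steps above):

```sh
vagrant box list                  # is a box named openbsd64 already present?
vagrant box delete openbsd64     # if so, remove the stale one first
vagrant box add --name openbsd64 build/openbsd64-amd64-vmware.box
```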
- Run `vagrant up`
- If you haven't done a DHCP mapping, find the new IP address on the DHCP server (look for a recent DHCPACK log line in `/var/log/daemon` if it's OpenBSD; see the grep below)
- Log in as root, using the private key associated with the `ssh_private_key_file` that was used in the packer setup phase
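For the DHCP step, a quick way to spot the new lease on an OpenBSD server (the `tail` count is arbitrary):

```sh
grep DHCPACK /var/log/daemon | tail -n 5
```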
OpenBSD requires hand-installation on Vultr, even though they offer pre-built images, because the partitioning scheme of those images only has a single partition.

- Boot from an ISO that has installation packages, e.g. `install70.iso`
- Do an auto-install, using an auto-install conf. Note that noVNC (used by Vultr) has a paste option in the client's pop-out, so you don't need to type in the autoinstall URL: https://raw.githubusercontent.com/edwinsteele/setup-scripts/master/autoinstall/gemini-install.conf
- Detach the ISO (which triggers a reboot on Vultr; a manual reboot may be necessary)
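The installer interaction looks roughly like this (prompt wording recalled from the OpenBSD installer, not captured from a Vultr session):

```
(I)nstall, (U)pgrade, (A)utoinstall or (S)hell? a
Response file location? https://raw.githubusercontent.com/edwinsteele/setup-scripts/master/autoinstall/gemini-install.conf
```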
Once the base OS has been set up, we do further setup using ansible. This
assumes that your default ssh public key is installed on the server under
the account that you'll be using for provisioning (root), or that you provide
a different key to ansible with `--private-key=PRIVATE_KEY_FILE`.

- `workon ansible` (the virtualenv should already exist from previous work)
- `cd ~/Code/setup-scripts/ansible`
- Replace the host in `hosts` with the IP address of the newly provisioned host, placing it in the group section that corresponds to the `--limit` argument used in the `ansible-playbook` commands for the appropriate type of VM install, e.g. `ansible-playbook -u root -i hosts site.yml --limit=192.168.20.254` (a sketch of the inventory entry follows)
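A sketch of the corresponding `hosts` entry (the group name `webservers` is borrowed from the limit examples below; use whichever group matches the VM type):

```ini
[webservers]
192.168.20.254
```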
Note that it's not possible to test ansible connectivity on OpenBSD hosts until they have a python interpreter, which is the first step in the common playbook.
In the `ansible` directory at the same level as this `README.md` file, run:

```sh
ansible-playbook -u root -i hosts --limit <limit-criteria> site.yml
```
Where the limit criteria is something like:

- `192.168.56.101` (an IP address)
- `webservers` (a single group name)
- `'webservers:&192.168.56.101'` (the intersection of a group and an IP address)
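For instance, to target only that host within the webservers group:

```sh
ansible-playbook -u root -i hosts --limit 'webservers:&192.168.56.101' site.yml
```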
- On the newly provisioned VM as root (in an ssh session with agent forwarding enabled):

```sh
openrsync -av www.wordspeak.org:/etc/ssl/wordspeak.org/ /etc/ssl/wordspeak.org/
openrsync -av www.wordspeak.org:/etc/ssl/private/wordspeak.org/ /etc/ssl/private/wordspeak.org/
rcctl restart nginx
```

- On the newly provisioned VM as esteele (in an ssh session with agent forwarding enabled):

```sh
for d in images.wordspeak.org language-explorer.wordspeak.org staging.wordspeak.org www.wordspeak.org; do
    openrsync -av www.wordspeak.org:/home/esteele/Sites/$d/ /home/esteele/Sites/$d/
done
cd ~/Code/dotfiles && ./make.sh
```

- Flip the DNS to point to the new host
- `doas acme-client -v wordspeak.org && rcctl restart nginx`
- Update the DNS record for staging.wordspeak.org (to simplify final setup, knowing that nobody is looking at staging)
- `rsync -av --rsync-path=/usr/bin/openrsync /usr/local/var/www/lex-mirror/ staging.wordspeak.org:/var/www/htdocs/language-explorer.wordspeak.org/`
- In the images.wordspeak.org checkout, run `./images_tool.py sync`
- Run GitHub Actions