Migrated to a new server!
For the past 20 years I have been running various Linux servers to host my websites and mail. The setup evolved over time, and the last server was installed about 5 years ago. I decided a new server was in order, took the opportunity to redesign a few things, and here we are.
These are mostly personal notes on how the new server is configured and set up. You can follow my steps or make changes, but if you do anything that's better or noteworthy, please let me know!
This will be a multi-part article so that I can deep-dive into some areas (mainly so that I can remember what I did and reproduce it if needed):
- Install and configure the host machine - we are here
- Install and configure a database and webserver
- Install and configure a mailserver
- Install and configure vaultwarden
- Tie everything back to part 1 for backups, misc, etc. - and here!
Basic server install
I have been a loyal CentOS user for years, but due to everything going on with CentOS and Red Hat I have migrated to Rocky Linux.
- Just for sizing: my server is 2 vCPU and 4GB of memory with an 80GB SSD
- Installed a minimal Rocky Linux 8 OS
- Add yourself as a user and set a secure password
- Some basic settings:
```shell
# sed -i 's/enforcing/permissive/g' /etc/selinux/config                      # set SELinux to permissive
# sed -i 's/PermitRootLogin yes/PermitRootLogin no/g' /etc/ssh/sshd_config   # turn off remote ssh for root
# systemctl disable auditd; systemctl disable sssd                           # disable auditd & sssd since I won't use them
# yum install -y epel-release                                                # enable epel
# yum install -y vim-enhanced git rsync wget lftp bash-completion bind-utils fail2ban fail2ban-firewalld dnf-automatic postfix sqlite
# yum update -y                                                              # update
```
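The package list above includes dnf-automatic, which still needs to be told to actually apply updates, and its timer needs to be enabled. A minimal sketch of the idea - the settings below are an assumed example, not my exact file:

```ini
# /etc/dnf/automatic.conf (relevant settings only; assumed example)
[commands]
apply_updates = yes

[emitters]
emit_via = email

[email]
email_to = root
```

Then enable the timer with `systemctl enable --now dnf-automatic.timer`.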
- Install docker
```shell
# curl -fsSL https://get.docker.com -o get-docker.sh
# vi get-docker.sh             # edit and add on line 631: lsb_dist=centos
# sh ./get-docker.sh           # install docker
# usermod -aG docker $USER     # add myself to the docker group
# systemctl enable docker      # enable the docker service
```
- Create an internal docker network - I want an internal network for my containers to talk to each other over
```shell
# docker network create internal
```
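In the later parts, containers join this network via a `--network internal` flag on their docker run lines. As a sketch, an ExecStart in one of the systemd service files would look like this (the container and image names here are placeholders):

```
ExecStart=/usr/bin/docker run \
  --name myapp \
  --network internal \
  myimage:latest
```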
Configure fail2ban for sshd
- Create /etc/fail2ban/jail.local
```ini
[DEFAULT]
# times accept unit suffixes: 1h = 1 hour, 1d = 24 hours; bare numbers are seconds
findtime = 1h
bantime = 4h
maxretry = 3
ignoreip = 127.0.0.1/8 ::1 {Your Home IP if you want to add it}
```
- Ensure that /etc/fail2ban/jail.d/sshd.local and sshd-ddos.local are enabled
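The EPEL fail2ban package ships with all jails disabled, so if those files are missing you can create them yourself. A minimal sketch (sshd.local shown; sshd-ddos.local is analogous - note that newer fail2ban releases fold the ddos checks into the sshd jail's `mode` setting):

```ini
# /etc/fail2ban/jail.d/sshd.local
[sshd]
enabled = true
```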
- enable and start fail2ban
```shell
# systemctl enable fail2ban
# systemctl start fail2ban
```
Our basic server is complete, so let's restart it. We will revisit it once we are done with:
Install and configure a database and webserver
Install and configure a mailserver
Install and configure vaultwarden
```shell
# shutdown -r now
```
Post app install tidy ups
Configure postfix to send mail
We installed postfix during our basic setup. We need to configure it so that it does not start smtpd on port 25, since that port is already used by our mailserver container, and set it up as relayhost-only so that it just forwards everything to our MTA.
- To stop postfix from starting smtpd, edit /etc/postfix/master.cf and comment out the smtpd line (line 11):
```
#smtp      inet  n       -       n       -       -       smtpd
```
- To configure the relayhost, edit /etc/postfix/main.cf and set the following - please fill in the items in { }:

```
mydestination = $myhostname, localhost.$mydomain, localhost, {FQDN}, {HOSTNAME}
relayhost = {mail.domain.name}
```
- Enable and start postfix
```shell
# systemctl enable postfix
# systemctl start postfix
```
Configure watchtower
Watchtower is a docker container that updates other containers whenever a new image is published. I want watchtower to keep my containers up to date automatically, and again I am setting this up as a systemd service.
- Create the service file /etc/systemd/system/docker-watchtower.service
```
# cat /etc/systemd/system/docker-watchtower.service
[Unit]
Description=watchtower docker container
Requires=docker.service
After=docker.service

[Service]
Restart=always
RestartSec=90
ExecStartPre=-/usr/bin/docker kill watchtower
ExecStartPre=-/usr/bin/docker rm watchtower
ExecStart=/usr/bin/docker run \
  --name watchtower \
  -e TZ=America/Chicago \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  v2tec/watchtower --cleanup --schedule "0 0 8 * * *"
ExecStop=/usr/bin/docker stop watchtower

[Install]
WantedBy=multi-user.target
```
- enable and start the service
```shell
# systemctl daemon-reload
# systemctl enable docker-watchtower
# systemctl start docker-watchtower
```
Configure backups
I want my backups stored in a remote location. For this I am going to use the rclone container to mount a remote filesystem locally so that my backup scripts can write to it. I will be using the dropbox integration, but you can select from a wide range of remote filesystems.
Configure rclone
- Create the directory structure - I want to use /opt/rclone/config as my config directory and /opt/rclone/data as my rclone dropbox mount
```shell
# mkdir -p /opt/rclone/config
# mkdir -p /opt/rclone/data
```
- Now to configure rclone
```
docker pull rclone/rclone:latest
docker run --rm -it \
  --net host \
  -v /opt/rclone/config:/config/rclone \
  -v /opt/rclone/data:/data:shared \
  -u $(id -u):$(id -g) \
  rclone/rclone config
2021/12/18 17:52:51 NOTICE: Config file "/config/rclone/rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> mybackup
Option Storage.
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value.
 1 / 1Fichier
   \ "fichier"
 2 / Alias for an existing remote
   \ "alias"
 3 / Amazon Drive
   \ "amazon cloud drive"
...{truncated}
Storage> 11
Option client_id.
OAuth Client Id.
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_id>
Option client_secret.
OAuth Client Secret.
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_secret>
Edit advanced config?
y) Yes
n) No (default)
y/n>
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes (default)
n) No
y/n>
2021/12/18 17:53:48 NOTICE: If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=u2IlZ3PmSZbqQHjl0ix8_Q
2021/12/18 17:53:48 NOTICE: Log in and authorize rclone for access
2021/12/18 17:53:48 NOTICE: Waiting for code...
```
- Now rclone wants you to visit the listed URL so that you can log in to dropbox and let it create the config. I don't have a local browser, so I created an ssh tunnel from a remote host with a GUI:
```shell
ssh -L 53682:localhost:53682 -l {USERNAME} {Your Servers Host}
```
- Now I am able to open a browser to the URL and configure it. Once this is done the docker command will exit and you will find rclone.conf in /opt/rclone/config

```
# cat rclone.conf
[mybackup]
type = dropbox
token = {xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx}
```
- Configure rclone as a service by creating /etc/systemd/system/docker-rclone.service
```
# cat /etc/systemd/system/docker-rclone.service
[Unit]
Description=rclone docker container
Requires=docker.service
After=docker.service

[Service]
Restart=always
RestartSec=90
ExecStartPre=-/usr/bin/docker kill rclone
ExecStartPre=-/usr/bin/docker rm rclone
ExecStartPre=-/usr/bin/umount /opt/rclone/data
ExecStart=/usr/bin/docker run \
  --name rclone \
  -v /etc/localtime:/etc/localtime:ro \
  -v /opt/temp:/temp \
  -v /opt/rclone/config:/config/rclone \
  -v /opt/rclone/data:/data:shared \
  -v /etc/passwd:/etc/passwd:ro \
  -v /etc/shadow:/etc/shadow:ro \
  -v /etc/group:/etc/group:ro \
  --device /dev/fuse \
  --cap-add SYS_ADMIN \
  rclone/rclone:latest mount mybackup:mybackup /data \
  --config=/config/rclone/rclone.conf \
  --allow-other --allow-non-empty \
  --vfs-read-chunk-size=64M --vfs-read-chunk-size-limit=2048M \
  --buffer-size=64M --poll-interval=1m --dir-cache-time=12h \
  --timeout=10m --drive-chunk-size=64M --umask=002
ExecStop=/usr/bin/docker stop rclone
ExecStop=-/usr/bin/umount /opt/rclone/data

[Install]
WantedBy=multi-user.target
```
- Enable and start the service
```shell
# systemctl daemon-reload
# systemctl enable docker-rclone
# systemctl start docker-rclone
```
- Now create files in /opt/rclone/data and check that they show up in your dropbox folder
Configure backup script and crontab
All of my scripts reside in /opt/bin, and if you followed the other parts of this whole story, all of my directories live under /opt. I created a backup script for my own needs; it is pasted below.
```shell
# cat backup-gooksu.sh
#!/bin/sh
# script to create backups

# set vars
DAYS=15
MYSQL_ROOT_PASSWORD=XXXXXXXX
WORKDIR=/opt/backup
BACKUPLOCATION=/opt/rclone/data
DATETIME=$(date +"%Y.%m.%d.%H.%M.%S")

## clean out old backup if it exists
if [ -d ${WORKDIR} ]; then
  echo "[BACKUP] Cleaning old backup directory"
  rm -rf ${WORKDIR}
fi
mkdir -p ${WORKDIR}

## backup root's crontab
echo "[BACKUP] Backing up root's crontab"
mkdir -p ${WORKDIR}/crontab
crontab -l -u root > ${WORKDIR}/crontab/root

## backup service files - all of my important service files start with docker-* and live in /etc/systemd/system
echo "[BACKUP] Backing up systemd files"
mkdir -p ${WORKDIR}/systemd
rsync -raqtog /etc/systemd/system/docker-* ${WORKDIR}/systemd/

## backup local files
echo "[BACKUP] Backing up local files"
# fail2ban configs
mkdir -p ${WORKDIR}/local
rsync -raqtog /etc/fail2ban/ ${WORKDIR}/local/fail2ban/

## backup /opt - exclude /opt/backup and /opt/mariadb
echo "[BACKUP] Backing up /opt"
mkdir -p ${WORKDIR}/opt
rsync -raqtog --exclude "backup" --exclude "mariadb" --exclude "mail-state/" --exclude "rclone/data" /opt/ ${WORKDIR}/opt/

## backup mariadb - skip the system schemas that mysqldump cannot dump cleanly
echo "[BACKUP] Backup of mariadb"
mkdir -p ${WORKDIR}/mariadb
for database in $(docker exec mariadb mysql -s -r -u root --password=${MYSQL_ROOT_PASSWORD} -e 'show databases' | grep -Ev '^(information_schema|performance_schema|sys)$')
do
  docker exec mariadb sh -c "mysqldump -u root --password=${MYSQL_ROOT_PASSWORD} --single-transaction --routines --triggers $database > /temp/$database.sql"
done
mv /opt/temp/*.sql ${WORKDIR}/mariadb

## backup vaultwarden database
echo "[BACKUP] Creating backup of vaultwarden database"
mkdir -p ${WORKDIR}/vwdb
/usr/bin/sqlite3 /opt/vaultwarden/db.sqlite3 ".backup '${WORKDIR}/vwdb/db-${DATETIME}.sql3'"
/usr/bin/sqlite3 ${WORKDIR}/vwdb/db-${DATETIME}.sql3 "VACUUM"

## create tarball
tar -zcf ${BACKUPLOCATION}/backup-${DATETIME}.tgz ${WORKDIR} > /dev/null 2>&1

## cleanup old files
echo "[BACKUP] Deleting old files"
find ${BACKUPLOCATION} -mtime +${DAYS} -type f -exec rm {} \; -print

## cleanup of backup dir
rm -rf ${WORKDIR}
```
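The retention step near the end relies on `find -mtime +${DAYS}` deleting anything older than 15 days from the backup location. If you want to convince yourself of that behavior before pointing it at real backups, here is a throwaway sandbox run of the same command (scratch paths only, nothing from the real setup):

```shell
#!/bin/sh
# sandbox test of the retention logic from backup-gooksu.sh
DAYS=15
BACKUPLOCATION=$(mktemp -d)

# fake one stale backup (mtime pushed 20 days back) and one fresh one
touch -d "20 days ago" ${BACKUPLOCATION}/backup-old.tgz
touch ${BACKUPLOCATION}/backup-new.tgz

# identical to the script's cleanup: delete files older than DAYS, print what was removed
find ${BACKUPLOCATION} -mtime +${DAYS} -type f -exec rm {} \; -print

ls ${BACKUPLOCATION}   # only backup-new.tgz remains
rm -rf ${BACKUPLOCATION}
```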
- Add a crontab entry to make a backup every night:
```
# daily backup
10 1 * * * /opt/bin/backup-gooksu.sh | mailx -s "Daily backup report" {my@email.address}
```
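For completeness, restoring is just an extract of the tarball: tar strips the leading / at create time, so everything lands under whatever directory you extract into. A self-contained round trip of the tarball step on throwaway data (all paths here are placeholders, not the real backup):

```shell
#!/bin/sh
# round trip of the tarball step from backup-gooksu.sh, using throwaway data
WORKDIR=$(mktemp -d)/backup
RESTOREDIR=$(mktemp -d)
mkdir -p ${WORKDIR}/crontab
echo "10 1 * * * /opt/bin/backup-gooksu.sh" > ${WORKDIR}/crontab/root

# create the tarball the same way the script does
tar -zcf /tmp/backup-test.tgz ${WORKDIR} > /dev/null 2>&1

# restore into a scratch dir; the original absolute path reappears under RESTOREDIR
tar -zxf /tmp/backup-test.tgz -C ${RESTOREDIR}
find ${RESTOREDIR} -name root

# from there, copy the pieces you need back into place by hand
rm -rf /tmp/backup-test.tgz ${RESTOREDIR} $(dirname ${WORKDIR})
```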