Upgrading can be tricky if services are still running. But most servers have services running, right? I tried it, ran into some problems, drank coffee, pulled my hair (not really), drank some more coffee, and succeeded. But then I thought: instead of going to init 1, or whatever it's called with systemd these days (it's systemctl isolate runlevel1.target), and risking my precious ssh connection, why not use a rescue system, with no services running?
Would that be easier? Anyway, a longer write-up of the upgrade, with some errors you might encounter, is in the works.
When it comes to upgrading your Debian system, you have a few options:
- Risk it and go all in (after making and verifying your backups, of course...)
- Boot the system, stop all unnecessary services and upgrade
- Boot a rescue CD, mount the partitions, chroot and upgrade.
- Ignore the upgrade, and be happy with your current system (until security support runs out, yikes!)
On this server, upgrading took about 30 minutes. Mind, no significant errors appeared. If you do encounter some system upgrade booboo, the upgrade time will increase.
We'll use systemrescuecd to accomplish the upgrade and minimize the problems, in large part because (almost) no services are running.
The path for a system upgrade without systemrescuecd is largely the same; just skip the first two steps.
1. Start systemrescuecd
In VMware I mounted the systemrescuecd ISO, booted the machine and went from there. Systemrescuecd will detect the partitions of the device.
At this point, you might as well check the filesystems to minimize upgrade problems, or to detect whether the hard disk might be faulty:
e2fsck /dev/vg0/root
...
e2fsck /dev/sda1
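If the rescue environment doesn't show the logical volumes under /dev/vg0 automatically, you may need to activate the volume group by hand first. A minimal sketch, assuming the volume group is called vg0 as on this server:

vgscan                  # scan all disks for LVM volume groups
vgchange -ay vg0        # activate the logical volumes so /dev/vg0/* appears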
2. Mount and chroot
In this case, there were LVM partitions for root, home, var, tmp and swap. This requires some additional work. Make the mountpoints:
mkdir -p /mnt/new/boot /mnt/new/var /mnt/new/tmp /mnt/new/home
Mount the volumes:
cd /mnt
mount /dev/vg0/root new
cd new
mount /dev/sda1 boot
mount /dev/vg0/var var
mount /dev/vg0/tmp tmp
mount /dev/vg0/home home
Bind dev, proc and sys:
mount --bind /dev /mnt/new/dev
mount --bind /proc /mnt/new/proc
mount --bind /sys /mnt/new/sys
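Some package postinst scripts want a pseudo-terminal inside the chroot. Binding /dev/pts as well is cheap insurance; an extra step I'd add, not something this particular upgrade strictly required:

mount --bind /dev/pts /mnt/new/dev/pts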
Set a root passwd in the rescue environment:
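A minimal sketch, assuming a recent systemd-based systemrescuecd (the service name for the ssh daemon may differ on older releases):

passwd root             # set a password so you can log in over ssh
systemctl start sshd    # make sure the ssh daemon is running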
Log into the rescue environment from another (Linux) system. If you didn't start the graphical rescue environment, you can skip this step and work from the console directly.
It's usually better not to perform the upgrade from a graphical environment, to keep the number of running services down, remember.
If you like pain, use PuTTY to log in to the rescue environment. (It's actually quite good, but we do like Linux, right?)
Chroot into the mounted partitions:
chroot /mnt/new /bin/bash
3. Upgrade Jessie
First we make sure our Jessie system is up to date:
apt-get update
apt-get upgrade
apt-get clean
apt-get autoremove
Change the apt sources.list to point to the new release:
sed -i s/jessie/stretch/g /etc/apt/sources.list
sed -i s/ftp.belnet.be/deb.debian.org/g /etc/apt/sources.list
An example sources.list:
deb https://deb.debian.org/debian stretch main non-free contrib
deb-src https://deb.debian.org/debian stretch main non-free contrib
deb https://security.debian.org/debian-security stretch/updates main contrib non-free
deb-src https://security.debian.org/debian-security stretch/updates main contrib non-free
deb https://deb.debian.org/debian stretch-updates main contrib non-free
deb-src https://deb.debian.org/debian stretch-updates main contrib non-free
deb https://deb.debian.org/debian stretch-backports main
deb-src https://deb.debian.org/debian stretch-backports main
If you aren't upgrading from a rescue CD, an upgrade with (almost) no running services is still possible by going to runlevel 1. Mind that the network and ssh service will also be stopped, so only do this if you have access to a console, or to a rescue console in the case of a hosted VPS/dedicated server.
If the system goes to runlevel 1, you should then be able to access it from the rescue console. If you have access to a console, you won't need to mount the partitions, as they will still be mounted. When using a rescue system (most of the time this requires a system reboot), you might have to mount the partitions manually.
If the console you get is kind of quirky (my hosting company provides a VNC console with a very weird keyboard layout), you might get away with just starting networking and ssh again, and then logging in over ssh to perform the upgrade.
3.1. Adjust runlevel 1 to start ssh
Another way is to adjust the runlevels to start ssh in runlevel 1. Edit /etc/init.d/ssh to start ssh when going to init 1 by adding 1 to the "Default-Start" LSB info:
vi /etc/init.d/ssh
...
#! /bin/sh
### BEGIN INIT INFO
# Provides:          sshd
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     1 2 3 4 5
# Default-Stop:
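For the changed header to take effect, the init system has to re-read it. A sketch, assuming jessie's default systemd with the sysv generator (on pure sysvinit you would regenerate the rc links with update-rc.d instead):

systemctl daemon-reload    # reruns the generators, which re-read the LSB headers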
Now going to runlevel 1 allows ssh access to update the system:
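For example, using the systemd alias for runlevel 1 mentioned at the top of this article (yourserver is a placeholder):

systemctl isolate runlevel1.target
# reconnect from your workstation
ssh root@yourserver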
ssh still works! Now upgrade as usual.
4. Upgrade to Stretch
Next, the real fun starts: do a minimal upgrade to stretch:
apt-get update
apt-get upgrade
I got a debconf question about issue.net and went for the default. The result: no errors! Meh. No problems to solve.
Compared to the several challenges I got when upgrading a live system, this is fun. On a live system, you might want to:
- Go to runlevel 1 but prevent the network from going down so you don't lose ssh connectivity
- Stop as many services as you can, such as nginx, postfix, supervisor, postgres, ... (see the sketch after this list)
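A minimal sketch of stopping nonessential services before the upgrade; the service names are just examples, adjust to what runs on your system:

for svc in nginx postfix supervisor postgresql; do
    systemctl stop "$svc"    # stop each service for the duration of the upgrade
done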
Because you stop services, it's a good idea to communicate to customers (internal and external) and plan ahead. The final step is a dist-upgrade:
apt-get dist-upgrade
Debconf questions I got this time:
- ssh > keep your currently-installed version
- nginx > keep your currently-installed version
Again, this resulted in, drumroll: no errors. Say what now?!
Further, we remove old packages:
apt-get autoremove
apt-get clean
After upgrading, change any scripts you have that use the old NIC names (eth0, ...), as the naming scheme is now different on new systems. Upgraded systems should be OK, but I would advise going with the new naming scheme. Do this before you reboot the system.
To finish, exit the chroot environment and unmount the mounted partitions. Before exiting, note that the upgrade probably started some services; you will need to stop them if you want to exit the chroot properly.
Those services were started in the chroot, but ironically systemctl refuses to stop them because you are running in a chroot. Use invoke-rc.d <service> stop to cleanly stop the services:
invoke-rc.d <service> stop
exit
Now unmount whatever is mounted:
umount dev proc sys
umount var tmp home boot
cd ..
umount new
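If you also bound /dev/pts earlier, unmount it first (umount dev/pts). Alternatively, a recent util-linux can take the whole tree down recursively; a one-liner sketch:

umount -R /mnt/new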
Before rebooting the system, check the next topic for certain software specific issues.
5. Package specific notes
For some packages, debconf asks whether you want to install the new package config file or keep your own. If you decided to keep your own config file, it might still be interesting to see the default config file. There are a couple of ways to do this. An example conf file is often kept in the package directory /usr/share/<package>.
First find the package that contains the config file:
dpkg -S <file>
Before restoring the default config, you should move your config file to another directory (dpkg only reinstalls conffiles that are actually missing). Run the following command to restore the conf file:
sudo apt-get -o Dpkg::Options::="--force-confmiss" install --reinstall <package-name>
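A worked example, assuming the conffile in question is nginx's main config (package name and path are illustrative, substitute your own):

mv /etc/nginx/nginx.conf /root/nginx.conf.mine    # keep your own version safe
dpkg -S /etc/nginx/nginx.conf                     # e.g. prints: nginx-common: /etc/nginx/nginx.conf
sudo apt-get -o Dpkg::Options::="--force-confmiss" install --reinstall nginx-common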
5.1. Interface names
As mentioned in this article, the naming of the interfaces changed for new systems. Older, upgraded systems will still use the old naming convention. To prevent problems during future upgrades, it might be wise to bite the bullet and switch to the new naming scheme now.
Check the new interface name your devices would get. Do this for all the interfaces you want to change:
udevadm test /sys/class/net/eth0 2>/dev/null | grep ID_NET_NAME
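The output looks something like this; the names below are hypothetical examples, yours depend on the hardware:

ID_NET_NAME_MAC=enx005056a1b2c3
ID_NET_NAME_PATH=enp2s0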
Edit following files:
- your firewall script, if any
- saved firewall rules (for instance /etc/iptables.up.rules)
- interface file (vi /etc/network/interfaces, see the sketch after this list)
- find relevant files to change: grep -r eth0 /etc
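A minimal sketch of an updated /etc/network/interfaces, assuming the new name turned out to be enp2s0 and a static address (both hypothetical):

auto enp2s0
iface enp2s0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1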
Next, move away the file in /etc/systemd that overrides the default naming policy, and update the initramfs:
mv /etc/systemd/network/99-default.link /root
update-initramfs -u
After reboot, and if you use iptables-save, you might want to save the iptables rules again:
iptables-save > /etc/iptables.up.rules
5.2. pdns
The pdns server failed to start. The documentation in /usr/share/doc/pdns-server/README.Debian provides more info:
The configuration for PowerDNS is separated in different files. In /etc/powerdns/pdns.conf are the base server settings, the configuration for specific backends could go into any other file (ending in .conf) in /etc/powerdns/pdns.d/. launch= settings can be chained by using the launch+= syntax.
Fatal error: Refusing to launch multiple backends with the same name 'bind', verify all 'launch' statements in your configuration
/etc/powerdns/pdns.d/ is now used, with a separate file for the bind backend:
cd /etc/powerdns/pdns.d
grep ^[^#] bind.conf
...
launch+=bind
bind-config=/etc/powerdns/named.conf
bind-supermaster-config=/var/lib/powerdns/supermaster.conf
bind-supermaster-destdir=/var/lib/powerdns/zones.slave.d
To solve the issue, remove "bind" from the launch parameter in pdns.conf:
vi /etc/powerdns/pdns.conf
...
launch=
You might also get an error if the permissions of pdns.conf aren't correct.
5.3. networking.service
Even with a working network, systemd reported an error in networking.service:
systemctl list-units --state=failed
networking.service loaded failed failed Raise network interfaces
Using journalctl to check for the error:
journalctl -xe
systemd: networking.service: Main process exited, code=exited, status=1/FAILURE
systemd: Failed to start Raise network interfaces.
--
-- Unit networking.service has failed.
--
-- The result is failed.
systemd: networking.service: Unit entered failed state.
systemd: networking.service: Failed with result 'exit-code'.
systemd: Reached target Network.
Not much useful info. Let's try syslog:
grep ifup /var/log/syslog
ifup: run-parts: /etc/network/if-pre-up.d/iptables exited with return code 1
Apparently an error occurred with one of the iptables rules that were loaded in if-pre-up.d. I reran my firewall script to get the correct behaviour, then saved the rules and rechecked that they loaded OK:
iptables-save > /etc/iptables.up.rules
iptables-restore < /etc/iptables.up.rules
No errors. Let's check the service:
systemctl status networking
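After rerunning the firewall script, restart the unit and check it again. Note that networking.service is a oneshot unit, so "active (exited)" is the healthy state here (my reading of the unit type, not literal output from this upgrade):

systemctl restart networking
systemctl status networking    # expect: Active: active (exited)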
5.4. ssh
The openssh-server package documentation explains that protocol 1 support is gone:
zcat /usr/share/doc/openssh-server/README.Debian.gz

SSH protocol 1 server support removed
-------------------------------------

sshd(8) no longer supports the old SSH protocol 1, so all the
configuration options related to it are now deprecated and should be
removed from /etc/ssh/sshd_config. These are:

  KeyRegenerationInterval
  RSAAuthentication
  RhostsRSAAuthentication
  ServerKeyBits

The Protocol option is also no longer needed, although it is silently
ignored rather than deprecated.
Relevant syslog entries:
sshd: /etc/ssh/sshd_config line 19: Deprecated option KeyRegenerationInterval
sshd: /etc/ssh/sshd_config line 20: Deprecated option ServerKeyBits
sshd: /etc/ssh/sshd_config line 31: Deprecated option RSAAuthentication
sshd: /etc/ssh/sshd_config line 38: Deprecated option RhostsRSAAuthentication
To correct this:
vi /etc/ssh/sshd_config
# Comment the lines containing those 4 options
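If you'd rather not edit by hand, a sed one-liner can comment out all four options; a sketch, and verify with sshd -t before restarting:

sed -i -E 's/^(KeyRegenerationInterval|ServerKeyBits|RSAAuthentication|RhostsRSAAuthentication)/#\1/' /etc/ssh/sshd_config
sshd -t    # exits silently when the config parses cleanly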
Also check that /etc/hosts.allow and /etc/hosts.deny are configured so that ssh is not blocked. Other useful options:
UsePrivilegeSeparation on
PermitRootLogin without-password (even better: no)
When running ssh from an upgraded system, you might have some bad key types defined:
/etc/ssh/ssh_config line 60: Bad key types 'firstname.lastname@example.org,email@example.com,firstname.lastname@example.org,ssh-ed25519,ssh-rsa'.
To check the available key types:
ssh -Q key
...
ssh-ed25519
email@example.com
ssh-rsa
firstname.lastname@example.org
...
Edit /etc/ssh/ssh_config to reflect these changes:
...
HostKeyAlgorithms email@example.com,firstname.lastname@example.org,ssh-ed25519,ssh-rsa
...
5.5. gpg
gpg 2 uses a keybox (.kbx) file to store public keys, and doesn't use secring.gpg anymore for private keys. Fortunately there is a migration script:
/usr/bin/migrate-pubring-from-classic-gpg --default

Usage: /usr/bin/migrate-pubring-from-classic-gpg [GPGHOMEDIR|--default]
Migrate public keyring in GPGHOMEDIR from "classic" to "modern" GnuPG using gpg version 2.1.
--default migrates the GnuPG home directory at "/root/.gnupg"
Because of the changes in gpg, this might have an impact on duply/duplicity. If you have duply sign the archives, add the following GPG option:
vi /root/.duply/<profile>/conf
...
GPG_OPTS='--pinentry-mode=loopback'
6. After the reboot
After rebooting, check the services. List the failed services:
systemctl list-units --state=failed
List the active services:
systemctl list-units --state=active
Check if systemd init is used. Some ways to do this:
dpkg -S /sbin/init
systemd-sysv: /sbin/init

cat /proc/1/status
Name:   systemd
Umask:  0000
State:  S (sleeping)

pidof /sbin/init
27982 27981 1

ps aux | grep /sbin/init
root  1  0.0  0.3  57164  6812 ?  Ss  jul25  0:03  /sbin/init

ls -la /sbin/init
lrwxrwxrwx 1 root root 20 jul 5 22:31 /sbin/init -> /lib/systemd/systemd
Check kernel and Debian version:
uname -a
Linux host 4.9.0-3-amd64 #1 SMP Debian 4.9.30-2+deb9u2 (<date>) x86_64 GNU/Linux

lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 9.1 (stretch)
Release:        9.1
Codename:       stretch
That's it. As IT systems get more and more complex, you have to admire and love the way these Debian upgrades can go. Yeah, stable is miles behind current, but it just works. It's one of the main reasons I chose to work with Debian many moons/ages ago. It was the time of RPM hell. If you remember that, you're, euh, older and wiser?