CentOS 8.x Securing a Linux machine
This article is written assuming CentOS 8.x OS. However, the general principles of hardening apply to any other system; only the commands need to be changed based on the flavor / distribution of Linux being hardened.
This article is a work in progress
Securing a Linux system is a must for any system that is publicly accessible or is in a potentially hostile environment. General tips for securing a Linux machine have two parts:
- Initial security hardening of the system
- Regular tasks to monitor and ensure system is safe
Initial security hardening of system
It makes sense to keep the system fully updated with the latest patches and security updates. This is easier for enterprise distributions such as CentOS, which are purpose-built for servers and hence typically ship older versions of software compared to desktop editions (eg Fedora), which might come with the latest versions. Using a slightly older version of software that has been tested for more than a year is safer for servers.
To update the system fully use:
dnf update -y
It is important for the system to have a host-based firewall set up. To set up firewalld refer:
- Basic setup
- CentOS 7.x Basic firewalld configuration
- Advanced details about firewalld
- CentOS 8.x firewalld
Network firewall managed by someone else OR Microsegmentation
If it is possible for the system to be protected by a network firewall (such as servers in a DMZ), a firewall provided by a cloud provider (eg Security Groups for EC2 VMs in AWS), or an NSX firewall in virtualized environments, then such a second firewall managed by a different team can add considerably to security. In this case, even if the administrator of the system makes a wrong modification to the default OS firewall, the second firewall can ensure that the system is still protected. Any coordinated change in both the host firewall and the network firewall should be governed by change management, approvals, ticketing systems, etc. to make the whole process more transparent.
Setup logwatch and outgoing alert emails
Any Linux system generates many useful logs that go to various log files. It is not possible for human administrators to go through the many log files of many systems looking for issues and anomalies. Hence, it makes sense to configure logwatch to go through the various log files and send one email per day summarizing the events / logs seen on that system in the past 24 hours.
Since logwatch sends email, enable outgoing email from the system using CentOS 8.x postfix send email through relay or smarthost with smtp authentication, if required
To install logwatch use:
dnf -y install logwatch
After default setup consider:
- Increasing Logwatch detail
- Increasing detail of logwatch output
- Configure From and To address for logwatch
- CentOS 7.x Zimbra command line for sending logwatch email
- Disable too much logs from kernel when log detail is high
- Disable too much logs from kernel when log Detail is high
- Create custom logwatch service or scripts
- Creating new logwatch service or scripts
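The common tweaks above can be sketched as overrides in the main logwatch configuration. The addresses and values below are placeholder assumptions, not from the original article:

```
# /etc/logwatch/conf/logwatch.conf -- sketch; addresses and values are placeholders
MailTo = admin@example.com
MailFrom = logwatch@server.example.com
Detail = High
Range = yesterday
```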
Disable IPv6 connectivity if not required
If you are confident that IPv6 is not being used and not required, then it can be disabled to avoid IPv6-related attacks / entry points using:
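A minimal sketch of disabling IPv6 via sysctl; the file name below is an assumption, and connectivity requirements should be verified first:

```shell
# Sketch: disable IPv6 on all interfaces via sysctl (file name is an assumption)
cat > /etc/sysctl.d/70-ipv6.conf <<'EOF'
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
EOF
sysctl -p /etc/sysctl.d/70-ipv6.conf   # apply without reboot
```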
Configure longer history retention with date/time
By default only the last 1000 commands are stored in history, without any timestamps. It makes sense to have the system store a longer history along with timestamps.
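As a sketch, the following environment settings (eg in a new file such as /etc/profile.d/history.sh — the file name and sizes are assumptions) increase retention and add timestamps:

```shell
# Sketch for /etc/profile.d/history.sh -- file name and sizes are assumptions
export HISTSIZE=10000          # commands kept in shell memory
export HISTFILESIZE=10000      # commands kept in ~/.bash_history
export HISTTIMEFORMAT="%F %T " # prefix entries with date and time in `history` output
```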
If the server has a public FQDN, it makes sense to use recognized SSL certificates from a provider. Refer Installing lets-encrypt SSL certificate
Use of recognized SSL certificates in place of self-signed certificates makes it easy for users to know that they are connecting to the right machine
If there is a public site, test the SSL certificate setup using: https://www.ssllabs.com/ssltest/
Containers or virtualization for isolation (lxc, kvm)
If the machine hosts multiple functionalities such as DNS, web server, database, etc. then it makes sense to separate them among various containers or virtual machines. This ensures that compromise of one of these systems does not affect the others.
To learn about containers refer CentOS 8.x lxc
For virtual machines refer CentOS 8.x KVM
If the base machine is only used for virtualization (and not as a desktop with GUI) and all functionality is served by containers / VMs, then it makes sense to use a specialized distribution such as Proxmox virtual environment on the base machine.
Install file integrity monitor
It makes sense to keep a check on critical system configuration files, libraries and binaries using a file integrity monitor. For this one can use: CentOS 8.x Basic AIDE setup and usage
Note that for libraries and binaries installed using package managers (rpm / yum / dnf) we can use:
rpm -V --all
to check each and every file installed via the package manager against the checksums contained in the original rpm package. For understanding the output refer Advantages of using package managers
If you are running custom programs on the server then it is important to have those files versioned. This will allow checking whether there are any unintended modifications to the sources by any adversary. This assumes there are remote version control systems on which modification to code branches is protected with appropriate authentication mechanisms.
LDAP instead of local logins when there are many systems (Data-center)
It makes sense to have all the login information come from a central system such as LDAP or AD. Operating system and any applications being used can be integrated with such directory systems for centralized authentication. This will ensure that all account information is maintained centrally. If a user leaves we only need to disable / delete their information from the central server. This will also ensure that UID/GID of users are consistent across all servers.
Strong password policy including aging
Ideally one should use LDAP for account information and avoid local logins. However, if local logins are not avoidable then we should configure strong password policy and optional password aging for more security.
Personally I am not in favor of password aging and forcing users to change passwords frequently. I have seen users adopt increasingly systematic passwords (serial 123 or date-based) the more often they are asked to change them.
For various password policies refer:
- CentOS 8.x Configure password complexity
- CentOS 8.x Configure password aging
- Remote bad logins get locked via CentOS 7.x fail2ban as explained later. For local failed-login related locking refer https://www.tecmint.com/lock-user-accounts-after-failed-login-attempts-in-linux/
Central logging / remote logging for incident analysis purposes
If a system is compromised, the attacker can delete logs and traces to make it difficult to understand the entry point. Hence it can be useful to have a copy of logs forwarded to another system in real time. This way the attacker can at most send more fake logs later on, but cannot modify / alter the real logs sent by the system before it was compromised.
For remote logging refer Rsyslog configuration
Default Linux security follows the Discretionary Access Control (DAC) model. There are options for Mandatory Access Control (MAC) using systems such as SELinux. These provide a greater degree of security, ensuring that an attacker who gains access to the system via a compromised service (eg a faulty web site) cannot get root access easily.
However, it takes considerable expertise to use these systems properly. Normally if a program / process / command fails to work due to SELinux, we do not see the corresponding reason in the same process log files / command output. We have to remember that it might have been caused by SELinux and manually check SELinux logs / alerts. Thus, it can make debugging difficult, and hence on many systems people disable SELinux immediately after OS installation.
Avoid Using FTP, Telnet, And Rlogin / Rsh Services
Although obvious, still mentioning for the sake of completeness that we should avoid legacy services such as ftp, telnet, rlogin and rsh. Instead use:
ssh, scp, sftp
etc. services for remote login and file sharing
Mount filesystems with noexec option
It is possible to mount filesystems in /etc/fstab with the noexec option. This ensures that if any file is uploaded to those filesystems, it cannot be executed. Note that this applies only to binaries that are executed directly; scripts such as python, perl, bash or php, which are run via their interpreters, would still get executed.
We should definitely consider doing this on folders which are writable from other machines due to sharing (eg NFS, Samba, Owncloud, etc.)
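For example, a shared data partition could be mounted with noexec; the device, mountpoint and filesystem type below are placeholders:

```
# /etc/fstab -- sketch entry; device, mountpoint and fs type are placeholders
/dev/sdb1  /srv/share  xfs  defaults,noexec,nosuid,nodev  0 0
```

nosuid and nodev are commonly added alongside noexec for shared writable areas.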
Setup anti-virus scan via clamav
Setup clamav to scan the entire filesystem (/) as explained at CentOS 8.x clamav. After that create a file in /etc/cron.daily to run clamscan daily (or choose another period, eg weekly, based on requirements)
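A minimal sketch of such a daily cron job; the script name, log path and excluded directories are assumptions:

```shell
#!/bin/sh
# Sketch for /etc/cron.daily/clamscan-daily -- log path and exclusions are assumptions
# -r: recursive, -i: report only infected files
/usr/bin/clamscan -r -i --exclude-dir='^/(sys|proc|dev)' / >> /var/log/clamscan.log 2>&1
```

Remember to make the file executable (chmod +x) so cron picks it up.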
Securing SSH service
Change SSH port from default 22
For this use following steps:
- Edit /etc/ssh/sshd_config and use
- Port 22 #Don't remove this yet
- Port 5000
- Replace 5000 with desired port no.
- Restart sshd
- systemctl restart sshd
- Try to connect to SSH over new port (eg 5000 in above example)
- If connection is not working check firewall. By default firewall rules allow only connection to port 22. Enable connections to port 5000 from your IP. Refer CentOS 8.x firewalld
- Once connection is working edit /etc/ssh/sshd_config and comment port 22:
- #Port 22
- Port 5000
- Restart sshd for new settings to take effect
- systemctl restart sshd
- Validate that you are able to connect on different port and that connection to port 22 is not working.
Protect SSH port access via firewall
Based on machines which need SSH access to server, protect access to SSH port via firewall.
If the SSH port is changed from 22 to some other port, by default the firewall will not have an exception for that port. In such cases, instead of adding an exception for the port, allow access to all / desired ports only from specific IPs / subnets of admin stations. Example:
#Allow all ports
firewall-cmd --add-rich-rule 'rule family="ipv4" source address="188.8.131.52/32" accept' --permanent
#Allow access to port 5000
firewall-cmd --add-rich-rule 'rule family="ipv4" source address="184.108.40.206/32" port port="5000" protocol="tcp" accept' --permanent
#Reload and validate
firewall-cmd --reload
firewall-cmd --zone=public --list-all
Refer: CentOS 8.x firewalld
Allow SSH only for required users
You can limit users who have SSH access to the system using options in /etc/ssh/sshd_config such as:
AllowUsers root
This will allow only the root user to login to the system. If you have another user with sudo access and want to disable root access, then list only that user in AllowUsers.
First test the setting by connecting in another terminal without closing the current root ssh connection. If you are again able to get remote root access directly or via another user, only then close the current connection
- Restricting SSH access to given users
Restrict users who need file transfer to their home folder using SFTP chroot
If users need the ability to transfer files via scp/sftp (but not rsync), then they can be restricted to their home folders using sftp chroot. Refer Chrooting sftp users to home directory with openSSH
There is also the option of changing the user's shell to rssh. This shell will only allow scp and sftp for the given users.
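A rough sketch of the sshd_config Match block used for an sftp chroot; the group name sftponly is an assumption:

```
# /etc/ssh/sshd_config -- sketch; group name 'sftponly' is an assumption
Subsystem sftp internal-sftp
Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

Note that the ChrootDirectory must be owned by root and not writable by the user, or sshd will refuse the connection.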
Use key based authentication for SSH. At least for root user disable password based SSH
Ideally we should disable password based SSH for all users using:
- Edit /etc/ssh/sshd_config and set
- PasswordAuthentication no
- Restart sshd service
- systemctl restart sshd
However, if the above is not practical then at least disable password based SSH for the root user using the following in /etc/ssh/sshd_config:
PermitRootLogin prohibit-password
and reload the sshd service
To understand key based authentication Refer Configuring authorized_keys file for public key based access
Secure SSH keys with password
If key based SSH is used, it makes sense to secure SSH keys with a password. This way, if someone has access to your system temporarily and copies your ssh private keys, they cannot use them without knowing the password. Most Linux systems allow automatic unlocking of the ssh identity when doing a GUI login, by saving the ssh key password in the keyring. Thus, this does not lead to any inconvenience.
Refer Passphrase for ssh-keys
Setup fail2ban
Any system which is exposed to the public Internet starts getting attacked immediately. If we leave the SSH port open then we can see thousands of bad login attempts per day on any system. To ensure that such attackers get only a limited number of chances (bad password attempts) to attack the system, we can set up fail2ban. Fail2ban will ban an IP for some duration (default 900 seconds) if it makes more than a fixed number of bad login attempts. The number of attempts allowed for the root user is typically fewer than the number allowed for other users. Limiting to only a few (9-10) attempts every 900 seconds (15 minutes) is more than enough to ensure that the system cannot be exploited using dictionary / bruteforce attacks.
To setup fail2ban refer:
Note that fail2ban supports many other applications such as dovecot and postfix, apart from sshd. Hence we should try to secure as many applications via fail2ban as possible.
Earlier versions of the OS used Denyhosts, which is now deprecated; we should use fail2ban instead.
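The ban parameters discussed above map to a jail.local along these lines; this is a sketch to be adjusted per setup:

```
# /etc/fail2ban/jail.local -- sketch; values follow the defaults discussed above
[DEFAULT]
bantime  = 900
findtime = 900
maxretry = 9

[sshd]
enabled = true
# port  = 5000   # uncomment if the SSH port was changed from 22
```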
Secure other services eg (Web, proxy, DNS, MySQL etc.) setup on that server.
- OWASP for web applications
It is important to focus on physical security of the system along with software security hardening. Physical security is important because it can prevent accidental / intentional shutdown / reboot / network disconnection. The following points explain either a potential security issue if someone gets physical access to the system, or an option to secure the system from physical access:
- BIOS boot passwords
- It is possible to configure a password in the BIOS to be asked during boot. This can be circumvented by doing a BIOS reset using jumpers, but that requires opening the system and hardware know-how. It would also take longer compared to the adversary booting the system without a password.
- Grub single mode root access
- It is possible on most systems to edit grub and boot into single-user mode. In single-user mode, unrestricted root access is available to copy files / reset the root password / change security settings, etc.
- Boot from live CD/DVD/USB
- A person can boot the system from a live CD/DVD/USB and mount its file-systems. Then settings can be changed, files can be copied, passwords can be reset, etc. without leaving any logs on the original OS installed on the physical hard-drive.
- Take hard-disk and put in another system
- This has the same risks as booting with a live CD/DVD/USB. Once the hard-disk is inserted into another system and booted using another hard-drive, the partitions and files of the current system can be accessed on the other system without any problem.
- BIOS hard-disk encryption
- It should be possible to set a password on the hard-disk in the BIOS. This makes the hard-disk unusable unless the same password is supplied during boot. Even if the hard-disk is taken to another system, the data remains secured unless the attacker knows the configured hard-disk password.
- File and folder encryption
- If encrypting / protecting the entire disk is not possible / practical, then there are encrypted filesystems that can be used to encrypt data while the system is running. If someone steals the hard-disk or boots the system using a live CD/DVD, they cannot get the data unless they know the encryption keys / passwords to decrypt the files. Refer Ecryptfs or EncFS
One time initial configuration TODO
- Configure audit daemon
- Configure two factor authentication for applications including SSH (Google authenticator)
- Remove unwanted packages
- Many people suggest removing X11 or graphical packages, if they are not required
- Removing cc,gcc, etc. should make it difficult for attacker to compile programs
- Monitor User Activities (psacct, acct)?
- Record system CPU, Memory, Disk usage statistics?
- Zabbix monitoring?
- CIS Linux benchmark??
- Disable access to USB or CD if not required
Regular tasks to monitor and ensure system is safe
Do Vulnerability Assessment and Penetration Testing (VA-PT)
Based on the type of server and services running do regular VA-PT. For this Refer:
Check if system has more updates
Update system regularly using:
dnf -y update --skip-broken
Check package installation history
While doing this also check package installation history using:
dnf history
Go through logwatch reports
For every system configured to use logwatch as explained at #Setup logwatch and outgoing alert emails, go through daily logwatch reports and look for:
- Hacking attempts on sites, with HTTP response codes (40x - denied / client errors, 50x - server errors, etc.)
- Look for outgoing email statistics
- Look for free space on partitions
List of open ports
Check the list of open ports using:
ss -tulpn
Wherever possible avoid plaintext versions of services (eg SMTP / HTTP) and prefer their TLS counterparts (eg SMTPS / HTTPS).
Make sure there is nothing suspicious listening on any public port. This should be checked along with the firewall configuration.
Refer CentOS 8.x firewalld for detailed information on firewalld
Validate sudo access
Run following to see effective sudo configuration:
grep '^[^#]' /etc/sudoers
and ensure that nothing suspicious is present.
Validate list of users
Validate the list of effective users on this system using:
getent passwd
Here ensure that system users (Typically UID<500 or UID<1000) do not have valid shell.
getent passwd | grep bash
- Note there can be other valid shells besides bash.
- By default postgresql seems to have /bin/bash shell
Here you can also validate users who are part of the root group or have root UID (0) using:
getent passwd | grep :0:
Typical output is
root:x:0:0:root:/root:/bin/bash
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
operator:x:11:0:operator:/root:/sbin/nologin
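The checks above can be combined into a single sketch that flags system accounts (UID below 1000) which still have an interactive shell, plus all UID-0 accounts; the shell exclusion list is an assumption:

```shell
# Sketch: list low-UID (system) accounts whose shell is not nologin/false/etc.,
# and list all UID-0 accounts; getent is used so NSS/LDAP users are included too.
getent passwd | awk -F: '$3 < 1000 && $7 !~ /(nologin|false|sync|halt|shutdown)/ {print "shell:", $1, $7}'
getent passwd | awk -F: '$3 == 0 {print "uid0:", $1}'
```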
Check authorized keys for the root user using:
cat /root/.ssh/authorized_keys
and make sure all of them are legitimate.
Validate cron files and crontab entries
Check root user crontab
As root user use:
crontab -l
Check other user cron
Look at cron files of other users at:
/var/spool/cron/
If any files are present we can 'cat' them and see the entries of that particular user's cron
Check system cron entries
Look at the following files / folders to validate system cron settings:
/etc/crontab
/etc/cron.d/
/etc/cron.hourly/
/etc/cron.daily/
/etc/cron.weekly/
/etc/cron.monthly/
Validate fail2ban is working
Validate fail2ban is working and look at blocked IPs using:
fail2ban-client status
fail2ban-client status sshd
If there are other jails apart from sshd, we can look at their status as well.
Look at aide reports and validate packages
Once AIDE is set up, validate that it is running regularly and look at the AIDE reports. Also validate all installed packages using:
rpm -V --all
and investigate anything suspicious.
Look at CPU, RAM, SWAP and disk usage history
Look at system history via CentOS 8.x System Activity Reporter (sar). If required, review this for the past few days.
- CPU usage %idle should be high (At least 30-40%)
- RAM %commit should be at most 60-70%. Also %memused should increase over time to close to 80-90%
- Swap %swpused should be low (0-10%), or at least %swpcad should be low (0-10%)
- Disk usage should be uniform over time. You can expect to see spikes of disk usage around major disk I/O activity eg backup schedules.
Regular tasks TODO
- Validate atd entries or disable at daemon
- Validate kernel modules (Honeypots, keyloggers, etc.)
- Validate backups are happening properly (Including application / DB backups)
- If possible restore backup (Note steps)
- Disable unwanted services
- Disable SUID and SGID Permission
- Maintain world-writable files and directories list. Perhaps world-writable directories should have the sticky bit set
- Look for files modified using chattr (lsattr)
- Look at relevant application logs (/var/log/httpd, /var/log/maillog, etc.) and OS logs (/var/log/messages)
- Look at audit logs
- Try to crack existing passwords
- Look for rootkits using chkrootkit and rkhunter
Reformat compromised system
Once a Linux system is compromised, it is almost impossible to clean it up with high confidence. It is always better to take a data backup, validate that the backup contains only data (not executables / scripts, etc.), set up a fresh system and restore.