I was listening to the Late Night Linux podcast when a question was put to the listeners: how do you keep your Linux systems secure? As someone who works in information security, what stuck out to me was that most people took a very passive approach to security; they either trusted the distribution to provide packages with secure defaults, or they isolated the system on the assumption that it is not secure. There were some great answers, like using the Center for Internet Security (CIS) Benchmarks, but benchmarks alone may provide a false sense of security in that they may not mitigate the risks that matter most for the specific system and its use case.
To help others better understand how to think about security I wanted to outline how I secure my systems. To further contextualise the topic I will talk through my instance of Nextcloud. My Nextcloud server has been in operation for ~4 years and has undergone multiple operating system upgrades and application updates, all without issue or incident.
I try to implement my systems in the same manner that they would be implemented in the enterprise. I am a big believer that the best experience is lived experience. For my specific use case I do not want to use private overlay technologies such as WireGuard or ZeroTier. By exposing Nextcloud to the Internet I am forcing myself to take it seriously: no half-implemented solutions, no shortcuts.
Before we get into the details I want to caveat this post by saying security is a mixture of objective and subjective considerations. The landscape changes by the minute and you need to make the best judgements that you can with the information you have, and account for the fact that you may need to make changes when new considerations come to hand.
With that out of the way, let's get into it!
High Level Considerations
When building systems I work to the following high level questions;
- Is the software designed for the use cases for which I will be using it, i.e. is it fit for purpose?
- Is the software intended to operate in the environment I will be using it in?
- Is there applicable security guidance?
- Am I able to implement a backup and recovery plan?
Selecting Software that is Fit for Purpose
When I was considering the options for a self-hosted Google Workspace-like service, there were only a handful of products with a significant user base: Nextcloud, ownCloud, and Seafile.
Nextcloud was my first choice, and the following criteria gave me a sufficient level of comfort that the software is fit for purpose;
- Nextcloud has been an active project since 2016 and is, by design, intended to operate on the Internet
- The project is targeting the commercial market and should therefore be delivering to a standard of security and quality expected of their target market
- The project is maintained and has a regular release cadence
- The project is transparent about vulnerabilities and actively addresses them and
- My use cases fall within the core product offering, giving me assurance that my system configuration will not be a unique snowflake and will be tested by Nextcloud and the wider community.
Having settled on Nextcloud, I now want to consider the deployment model. Nextcloud comes in various forms including an all-in-one Docker image, an all-in-one virtual machine, a Snap package, various community-maintained options such as NextcloudPi and, finally, the Nextcloud application source in a compressed archive.
Reading over the options provided today, the recommended installation is the Nextcloud All-in-One. Overall this looks like a great approach, but my preference is to understand my systems to the greatest extent possible. I don't like turnkey solutions; if I am going to rely on a system 24/7 I want to be familiar with how it is put together and configured. I have read too many forum posts from people who relied on a turnkey solution only to lose critical data by blindly following advice.
Part of what makes something fit for purpose is your alignment with its intended use case, so I want to ensure my use case and deployment model align with those of the software developers. The Nextcloud documentation is well written and contains all the information required to build from source. It is written from the perspective of installing from source, and I would argue that an enterprise would likely build from source due to the flexibility it provides.
Installing from source will allow me to be deliberate about which components of Nextcloud I deploy, as well as enabling me to configure and verify each component, and I will still benefit from Nextcloud's in-house testing. Summing up the options, the only benefit I may forgo by installing from source is containerisation, which could prove useful in the event that the Nextcloud application is compromised. However, I can manage this risk by selecting an operating system with SELinux and applying the principle of least privilege when configuring folder permissions and account privileges.
Reading over the Nextcloud documentation, there is a section that provides an example installation, which is what I will base my installation on. Reading over the recommended components I am satisfied that they are well established in the industry and are being used for the purpose for which they have been designed.
The following diagram breaks down the components of the system.
Applying Security Guidance in Consideration of the System's Threat Profile
The threats you are subject to will depend on your individual circumstances. For 99.99% of us I would argue that the most likely threats are malware, bots and general drive-by cybercrime as a result of a publicly disclosed vulnerability being exploited en masse.
I am taking the position that I am not expecting to be individually targeted and therefore just need to not be a soft target. From a controls perspective that means;
- Positioning myself to be able to apply operating system and application updates as soon as possible
- Minimising the attack surface and only exposing the necessary services to the Internet
- Using complex passwords and implementing MFA for all user accounts
- Implementing brute force deterrence
Installation and Applying Security Guidance
When it comes to installation you should always follow the vendor documentation. If you have trouble understanding it and need some help, refer to blogs or YouTube, but as your knowledge builds you should be cross-checking against the vendor documentation to ensure that what is being advised aligns with it and that the content author has not overlooked the security guidance in an effort to reduce complexity. There have been numerous occasions where I have built a system using online tutorials to gain a sufficient level of understanding and a working solution, then rebuilt the system using the vendor's documentation and applicable security guidance to arrive at the final product.
When applying security guidance I take a two step approach;
- I apply a reasonable level of hardening across the board regardless of exposure
- then, I focus on the exposed services with the aim to apply the highest degree of security whilst balancing usability and maintenance overhead
So where do we find security guidance? Back to my principles: if the software is fit for purpose it is reasonable to expect that the product has an associated vendor-developed hardening guide.
As expected, Nextcloud publish hardening and security guidance. The guide states that Nextcloud aims to ship with secure defaults that do not need to be modified by administrators; however, in some cases additional security hardening can be applied in scenarios where the administrator has complete control over the Nextcloud instance.
Taking into consideration that the Nextcloud application is the primary point of exposure, my preference is to aim for the stars with the expectation that there may be controls that simply don't meet the usability and maintenance overhead targets. That said, knowing that the Nextcloud team have addressed security out of the box is encouraging and gives me a level of assurance that we're starting from a reasonable baseline.
In order to get a feel for how expansive the security guidance is, I like to break down the table of contents and note where the suggested controls fit in the solution stack. Once complete, we can review the controls to make an assessment as to whether security has been considered holistically, with a focus on the areas of exposure.
The Nextcloud example installation, when read in conjunction with the hardening and security guidance, does a good job of applying a baseline level of security to all the system components. The following diagram builds upon the component breakdown, layering on the control recommendations taken from the Nextcloud hardening and security guidance.
To ensure that there are no additional controls that would be of benefit, I like to review each component's security guidance.
MariaDB
MariaDB has associated security documentation. Reviewing the documentation, I am in a good position out of the box. When using Red Hat Enterprise Linux, MariaDB has the appropriate SELinux policies defined and operates as a non-privileged user. Taking into consideration that the data I care about most is the files, which are not stored within the database, and that communications between the Nextcloud application and the database occur over the localhost network, I will not be applying encryption in transit or at rest. The only outstanding item is running the mysql_secure_installation script, which is already recommended in the Nextcloud documentation. The script applies basic good practices like removing default databases, setting the root password, disallowing remote root login and more. It is quite verbose and seeks your approval at each step.
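Because mysql_secure_installation does not itself change the network binding, I also pin MariaDB to loopback in the server configuration. This is a minimal sketch; the drop-in path is RHEL's default and an assumption for other layouts:

```ini
# /etc/my.cnf.d/mariadb-server.cnf (RHEL default drop-in path, assumed)
[mysqld]
# Listen on loopback only; Nextcloud connects over localhost
bind-address = 127.0.0.1
```

With this in place, a remote scan of port 3306 shows nothing listening, which can be confirmed locally with ss -tlnp.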
Redis
Redis provides a good example of a product that has clear constraints around the threat environment in which it is intended to operate. The Redis security documentation states that Redis is designed to be accessed by trusted clients inside trusted environments; this means that it is usually not a good idea to expose the Redis instance directly to the Internet or, in general, to an environment where untrusted clients can directly access the Redis TCP port or UNIX socket.
Taking into consideration that the system will only run Nextcloud and that communications between Nextcloud and Redis occur locally, I am comfortable using Redis. To balance security and usability I will go with a unix domain socket rather than binding Redis to a network address. Whilst this is not the most secure option, it reduces the maintenance overhead, and there is a level of authorisation in that the apache user must be a member of the redis group to access the socket. Redis does offer a more secure deployment using granular access control lists, but this is an area that requires domain expertise, is not covered in the Nextcloud documentation, is not tested by Nextcloud and may therefore impact reliability, which hits up against my maintenance overhead targets.
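A minimal sketch of the socket-only configuration; the file location and socket path below assume RHEL's package layout and should be checked against your distribution:

```ini
# /etc/redis/redis.conf — disable TCP entirely, serve a unix socket instead
port 0
unixsocket /run/redis/redis.sock
# Group read/write so members of the redis group (i.e. apache) can connect
unixsocketperm 770
```

On the Nextcloud side, config.php points the 'redis' host at /run/redis/redis.sock with 'port' => 0, and the apache user is added to the redis group with usermod -aG redis apache.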
Apache
Apache is designed to operate as an Internet-exposed service and to be secure out of the box. The Red Hat Linux Security Hardening Guide (link) states that the Apache HTTP Server is one of the most stable and secure services that ships with Red Hat Enterprise Linux. In general terms I am confident that Apache as a service is secure; however, there are a number of security measures for web applications that are recommended in the Nextcloud documentation and Apache guidance, specifically;
- Implementing Transport Layer Security (TLS) to secure communications over the Internet
- Implementing HTTP Strict-Transport-Security to inform browsers that Nextcloud should only be accessed using HTTPS
- Enforcing only TLS 1.2 and newer
- Using only secure ciphers
- Using Online Certificate Status Protocol (OCSP) stapling for a real-time check of the validity of the certificate
- Applying the X-XSS-Protection response header to act upon detected reflected cross-site scripting (XSS) attacks and
- Applying the X-Content-Type-Options response header to indicate to browsers that the MIME types advertised in the Content-Type headers should be followed and not guessed
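Pulled together, those measures land in the Apache TLS virtual host. The following is a sketch only; the domain, certificate paths and cipher string are assumptions and should be taken from the Nextcloud documentation and a current cipher recommendation rather than copied verbatim:

```apache
# The OCSP stapling cache must be defined outside any virtual host
SSLStaplingCache shmcb:/run/httpd/stapling-cache(128000)

<VirtualHost *:443>
    ServerName cloud.example.net
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/cloud.example.net.crt
    SSLCertificateKeyFile /etc/pki/tls/private/cloud.example.net.key

    # TLS 1.2 and newer only, with a modern cipher selection
    SSLProtocol     -all +TLSv1.2 +TLSv1.3
    SSLCipherSuite  HIGH:!aNULL:!MD5:!3DES
    SSLHonorCipherOrder on
    SSLUseStapling  on

    # Browser-side protections recommended by the Nextcloud documentation
    Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains"
    Header always set X-XSS-Protection "1; mode=block"
    Header always set X-Content-Type-Options "nosniff"
</VirtualHost>
```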
With all that said and done we end up with the following security overlay.
Implementing a Security Monitoring Plan
Reflecting back on our control principles, we have now implemented a good security baseline and minimised our attack surface. We now need to implement a level of security monitoring, ensuring we have a mechanism to be notified when operating system and Nextcloud application updates need to be applied.
Security Baseline Monitoring
When it comes to general security baseline monitoring I am most concerned with ensuring that all critical services are running. I have decided not to monitor every configuration file (this could be done using a combination of git and Ansible to alert on changes) and will instead only monitor for service failure. My reasoning is that exploitation of the Nextcloud application, my most likely threat, will be contained by SELinux and the permissions of the apache user. Furthermore, I am the only administrator of the server and it is reasonable to assume that the configuration I have defined will remain.
I have used a combination of systemd service units and timers with Uptime-Kuma to monitor for failure of services including dnf-automatic, ntp, firewalld, auditd, php-fpm, httpd and ssh. If there is interest I can write a dedicated blog covering my monitoring setup.
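As a sketch of how the failure notifications hang together, a templated one-shot service can be attached to any unit via OnFailure=; the push URL below is a placeholder for a hypothetical Uptime-Kuma push monitor:

```ini
# /etc/systemd/system/notify-kuma@.service
[Unit]
Description=Push a failure alert for %i to Uptime-Kuma

[Service]
Type=oneshot
# Placeholder URL: substitute your own Uptime-Kuma push monitor token
ExecStart=/usr/bin/curl -fsS "https://kuma.example.net/api/push/TOKEN?status=down&msg=%i+failed"
```

Each watched unit then gets a drop-in (e.g. systemctl edit httpd) containing OnFailure=notify-kuma@%n.service.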
Operating System and Application Updates
This will take some testing and experience, but I am comfortable that this specific server is robust enough that I can apply operating system updates automatically and be notified when a restart is required, ensuring updates that need a reboot are applied. To enable this I have used a combination of dnf-automatic to install updates and the needs-restarting utility, together with Uptime-Kuma, to notify me when a reboot is required.
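A sketch of the post-update check, wired to a hypothetical Uptime-Kuma push URL; needs-restarting ships in dnf-utils and exits non-zero when a reboot is pending:

```shell
#!/bin/sh
# Report reboot status to an Uptime-Kuma push monitor (URL is a placeholder).
if command -v needs-restarting >/dev/null 2>&1; then
    if needs-restarting -r >/dev/null 2>&1; then
        status="up";   msg="no-reboot-required"
    else
        status="down"; msg="reboot-required"
    fi
else
    # Not a RHEL-family host (or dnf-utils missing): nothing to report on.
    status="up"; msg="needs-restarting-not-installed"
fi

# Only push when a monitor URL has been configured in the environment.
if [ -n "${KUMA_URL:-}" ]; then
    curl -fsS "${KUMA_URL}?status=${status}&msg=${msg}" >/dev/null
fi
echo "${status}: ${msg}"
```

I run this from a systemd timer shortly after dnf-automatic completes.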
In regard to Nextcloud application updates, reading over the forums I can see that some users have had issues with updates completing successfully, and there are cases where updates require manual intervention such as updating the database schema. With that in mind I am reluctant to install updates automatically, but I do want to be notified as soon as an update is required. To enable this I have used Nextcloud's occ update:check utility in combination with Uptime-Kuma to notify me when updates are required, at which point I can log into the server and apply them.
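The update check is a small script driven by a systemd timer. This is a sketch: the install path, the push URL and the matched "Everything up to date" string are assumptions worth verifying against your own install:

```shell
#!/bin/sh
# Alert via an Uptime-Kuma push monitor when a Nextcloud update is pending.
NC_DIR="${NC_DIR:-/var/www/nextcloud}"

if [ -f "$NC_DIR/occ" ]; then
    # occ must run as the web server user, never as root.
    out=$(sudo -u apache php "$NC_DIR/occ" update:check 2>&1) || true
else
    out="occ-not-found"
fi

case "$out" in
    *"Everything up to date"*) status="up";   msg="up-to-date" ;;
    *)                         status="down"; msg="update-pending-or-check-failed" ;;
esac

# Only push when a monitor URL has been configured in the environment.
if [ -n "${KUMA_URL:-}" ]; then
    curl -fsS "${KUMA_URL}?status=${status}&msg=${msg}" >/dev/null
fi
echo "${status}: ${msg}"
```

Treating a failed check the same as a pending update is deliberate: either way I want to log in and look.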
Post Installation Verification
At this point we have a sufficiently robust system, but it never hurts to get a third-party perspective on how you are tracking. We are fortunate in that there are a number of tools to help us evaluate the security posture of our system, both internally from Nextcloud and outwardly from the perspective of the Internet. These include;
Qualys SSL Labs
Qualys SSL Labs is a free online service that performs a deep analysis of the configuration of any TLS web server on the public Internet. The screenshot below provides an overall score for my configuration; there is much more detail in the full report. I recommend scanning your infrastructure regularly to stay abreast of vulnerabilities and changes in best practice in this space. The following shows the results of a Qualys scan at the time of writing.
Nextcloud Security Scanner
Nextcloud Security Scan will scan your Nextcloud instance and advise on any issues that are identified. The scan is strictly based on publicly available information and is very useful for understanding what is observable by the various bots and services that scan the public IP space. The following shows the results of a Nextcloud Security Scan at the time of writing.
Nextcloud Administrative Console
The Nextcloud Administrative Console provides an internal scanner that is focused on identifying issues with the Nextcloud application. The scanner will alert you to errors or misconfigurations. This is my first stop post-install to ensure there are no outstanding issues, and it is an area that should be checked at least after every update.
Backup and Recovery
My backup and recovery plan is fairly simple, and the mechanisms to back up Nextcloud all exist within the component technologies. Backing up the data requires that you back up the Nextcloud data directory; Apache can be backed up by copying the associated .conf file, and the database can be backed up using the mysqldump utility. Once the backup is complete, the files are aggregated and shipped to a central server using restic. From there the backup target is snapshotted using ZFS and replicated to two other destinations at opposite ends of the country.
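A sketch of that backup job; every path, the database name and the restic repository are illustrative assumptions, and each step is guarded so the script degrades gracefully on a host missing a component:

```shell
#!/bin/sh
# Stage the three artefacts, then ship them with restic (all paths assumed).
STAGE="${STAGE:-/tmp/nextcloud-backup}"
mkdir -p "$STAGE"

# 1. Database: --single-transaction gives a consistent InnoDB snapshot.
if command -v mysqldump >/dev/null 2>&1; then
    mysqldump --single-transaction nextcloud > "$STAGE/nextcloud-db.sql"
fi

# 2. Apache vhost configuration.
if [ -f /etc/httpd/conf.d/nextcloud.conf ]; then
    cp /etc/httpd/conf.d/nextcloud.conf "$STAGE/"
fi

# 3. Ship the staged files plus the data directory to the central server.
if command -v restic >/dev/null 2>&1; then
    restic -r sftp:backup@backup-host:/srv/restic backup "$STAGE" /var/www/nextcloud/data
fi

echo "staged in $STAGE"
```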
In addition to backing up the contents of the virtual machine, I also back up the virtual machine itself using ZFS snapshots. These snapshots are synced to a secondary local target for quick recovery. My systems are generally quick to reinstall and have largely been automated, so the loss of the virtual machine backup, while not desirable, is not an issue.
Over the years I have tested my recovery plan, covering both single-file retrieval and full recovery. I have migrated from RHEL 7 to 8 and finally 9 using my application-level backups as the source. Overall I have been satisfied with this approach and it meets my recovery time objectives.
Conclusion
I hope this post has been useful. If you have any questions please touch base.