Conducting A Vulnerability Assessment: A Step-By-Step Guide For Linux Workloads In The Cloud
Being proactive about protecting your systems, networks, applications and critical data is a cornerstone of a robust, successful security program. Having a vulnerability assessment plan is a way of doing just that—proactively identifying weaknesses within your systems, so you can shore them up before attackers find and take advantage of them.
However, conducting a vulnerability assessment on workloads within a cloud environment is different from doing so in a traditional environment. This article examines those differences and outlines a vulnerability assessment process to help you create stronger security hygiene for your Linux-based systems running in the cloud.
What is a vulnerability assessment?
Defined broadly, a vulnerability assessment is the process of identifying, analyzing and prioritizing vulnerabilities that exist in the software or system components present in your infrastructure. A vulnerability may be any type of weakness, or even a misconfiguration, in software that allows for exploitation or misuse by a malicious actor. The output of a vulnerability assessment is a set of findings that tells your teams which vulnerabilities should be the focus of remediation efforts to reduce the associated risk. In short, a vulnerability assessment is a proactive part of your security program that makes it harder for an attacker to compromise your systems.
Approaches To Vulnerability Assessments
Vulnerability assessment approaches generally fall into two categories: network scanning and agent-based. Either a tool scans systems remotely over the network (network scanning) or a piece of software installed locally on the host collects the necessary data (agent-based). There are advantages and disadvantages to each approach; however, in cloud environments, an agent-based approach is usually the best route.
- Cloud environments are by nature more ephemeral than traditional on-premises server environments, which means the environment changes much more rapidly. Having software running on an instance in the cloud allows for continuous data collection while the instance is online, versus occasional scanning (and the potential for missing an instance if it’s been brought down). As a bonus, building agent software into the underlying image used to deploy cloud instances provides an easy way to ensure you have the visibility you need.
- The agent approach doesn’t require opening any type of inbound network connection to the cloud instance. In cloud environments, it may not always be easy or desirable to open the appropriate ports to allow communication from a network scanner. Software that runs locally on the instance has no such requirement; in fact, with this approach it’s typical to see only a single outbound connection required, which makes network management much easier.
- Having an agent running on the cloud instance doesn’t require a service account with the ability to remotely authenticate to the instance. To get the same level of visibility as a local agent, network scanners require a service account to authenticate and access the local system. Because server workloads in the cloud so often run production software critical to the business, this opens another possible attack vector: if inbound communication AND a privileged account are required to access these instances remotely, that combination is a valuable target for attackers looking to compromise production cloud workloads. Having software running locally on the instance removes this possibility.
How To Conduct A Vulnerability Assessment
For security analysts and decision makers, here are the steps for conducting a vulnerability assessment of the Linux workloads in your cloud infrastructure. For the purposes of this agent-based example we’ll use the open source tool osquery, and we’ll focus on potentially vulnerable software installed on our Linux instances. That said, this general process can be followed using any type of local agent that collects the necessary data.
1. Identify the scope of the assessment.
First, you need to know which systems should be considered in scope for your assessment, as well as any specific objectives of the assessment. Think about questions like:
- Is the full set of Linux workloads in scope or only a subset?
- Are there any workloads explicitly out of scope?
- Do you have access and/or control of the workloads in scope?
- Are there specific compliance standards you need to adhere to that require a vulnerability assessment? If so, are there specific requirements to consider?
Defining the scope of the assessment in cloud infrastructure can also be aided by tools that connect directly to your IaaS provider (e.g., AWS EC2) and provide visibility into the instances currently running.
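As a minimal sketch of that scoping step, the filter below assumes instance records have already been fetched from your IaaS provider’s API (e.g., the EC2 DescribeInstances call); the field names and the `scope: excluded` tag are hypothetical conventions, not part of any provider’s API:

```python
# Sketch: determine assessment scope from already-fetched instance metadata.
# The "platform", "state", and "tags"/"scope" fields are illustrative.

def in_scope(instance):
    """An instance is in scope if it runs Linux, is currently running,
    and is not explicitly tagged out of scope."""
    return (
        instance.get("platform") == "linux"
        and instance.get("state") == "running"
        and instance.get("tags", {}).get("scope") != "excluded"
    )

instances = [
    {"id": "i-01", "platform": "linux", "state": "running", "tags": {"env": "prod"}},
    {"id": "i-02", "platform": "windows", "state": "running", "tags": {}},
    {"id": "i-03", "platform": "linux", "state": "running", "tags": {"scope": "excluded"}},
]

scope = [i["id"] for i in instances if in_scope(i)]
print(scope)  # ['i-01']
```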
2. Deploy osquery.
Once you identify the scope, deploy osquery to the cloud instances in scope. This can be done with common tooling such as Ansible, Chef, or Puppet. If you’re able to implement a process from the beginning, it’s usually best to build osquery into the CI/CD process. That way, any new system considered in scope will already have the necessary software to perform this assessment in the future.
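As one illustration, an Ansible deployment might look like the following sketch; the host group name `linux_workloads` is a hypothetical example, and package names and repository setup vary by distribution (consult osquery’s official install documentation):

```yaml
# Hypothetical playbook: install and start osquery on in-scope hosts.
- hosts: linux_workloads
  become: true
  tasks:
    - name: Install osquery package
      package:
        name: osquery
        state: present
    - name: Ensure osqueryd is running and enabled at boot
      service:
        name: osqueryd
        state: started
        enabled: true
```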
3. Collect the required data (via osquery) and correlate it with known vulnerabilities.
This is the meat of the vulnerability assessment, which contains several sub-steps:
- First, osquery is used to identify which software packages are actually installed on the instances in scope. This should cover both standard package managers and any third-party software installed outside of a package manager.
- Once you know what software is installed, compare those packages against known vulnerabilities. This step is best performed using a platform that automatically correlates published vulnerabilities (e.g., CVEs) against the locally installed software packages, rather than correlating manually. Automated correlation will save you time and ensure the resulting output is as accurate as possible.
- Prioritize your findings from the assessment. Once the vulnerabilities present are determined, you need to make sure the right ones are prioritized for next steps. One common way to think about prioritization is the Common Vulnerability Scoring System (CVSS), an industry-standard score associated with every vulnerability. It can be used in conjunction with other information known to the business (e.g., the role of the server, the data stored on it, the application group it belongs to) to adjust priority and planned next steps accordingly. This step is often a combination of automated and manual processes.
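The sub-steps above can be sketched end to end. On a Debian-based host, the installed packages would come from an osquery query such as `SELECT name, version FROM deb_packages;` (`rpm_packages` on RPM-based systems); the vulnerability feed, placeholder CVE IDs, and exact-version matching below are simplified, hypothetical stand-ins for a real correlation platform, which must handle version ranges:

```python
# Sketch: correlate installed packages against a known-vulnerability feed,
# then prioritize by CVSS score. All data below is illustrative.

installed = [
    {"name": "openssl", "version": "1.1.1"},
    {"name": "bash", "version": "5.1"},
]

# Hypothetical feed: (package, affected version) -> placeholder CVE + CVSS.
vuln_feed = {
    ("openssl", "1.1.1"): {"cve": "CVE-0000-0001", "cvss": 9.8},
    ("curl", "7.0"): {"cve": "CVE-0000-0002", "cvss": 5.3},
}

# Correlate: a finding exists where an installed package matches the feed.
findings = []
for pkg in installed:
    vuln = vuln_feed.get((pkg["name"], pkg["version"]))
    if vuln:
        findings.append({**pkg, **vuln})

# Prioritize: highest CVSS score first.
findings.sort(key=lambda f: f["cvss"], reverse=True)
for f in findings:
    print(f"{f['cve']}  {f['name']} {f['version']}  CVSS {f['cvss']}")
```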
4. Report on and communicate the findings.
For any of this information to be useful to your organization, you need a way to report on findings and distribute them to the parties in charge of securing those systems within your company. The ability to filter and report on the systems and/or vulnerabilities that are most critical is key. The amount of data a vulnerability assessment produces can be overwhelming, so it is vital to apply some type of logic to the raw results; that way, when the remediation work is handed off, it can be acted on and completed in a reasonable amount of time.
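One simple form of that logic is filtering to high-severity findings and grouping them by the team that owns each host. In this sketch, the host-to-owner mapping, severity threshold, and placeholder CVE IDs are all hypothetical examples:

```python
# Sketch: reduce raw findings to an actionable, per-team report.
raw_findings = [
    {"host": "web-01", "cve": "CVE-0000-0001", "cvss": 9.8},
    {"host": "web-01", "cve": "CVE-0000-0002", "cvss": 3.1},
    {"host": "db-01", "cve": "CVE-0000-0003", "cvss": 7.5},
]

owners = {"web-01": "web-team", "db-01": "db-team"}  # who remediates which host
THRESHOLD = 7.0  # hand off only high-severity findings first

report = {}
for f in raw_findings:
    if f["cvss"] >= THRESHOLD:
        report.setdefault(owners[f["host"]], []).append(f["cve"])

print(report)  # {'web-team': ['CVE-0000-0001'], 'db-team': ['CVE-0000-0003']}
```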
5. Remediate vulnerabilities.
This is where your team actually fixes the issues identified in the assessment. This step is critical: it does no good to assess systems for vulnerabilities and report on the findings if no action is taken. Remediation of vulnerabilities in cloud environments also tends to differ from remediation on a traditional on-premises network. Often, software is not patched in place as a traditional approach would call for; instead, the entire instance (or container) is rebuilt and re-deployed. This is why integration with the CI/CD process is so important: if a cloud instance is suddenly terminated and a new one stood up, how do you ensure the new instance is monitored for vulnerabilities going forward?
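One way to answer that question is to bake the agent into the base image itself, so every rebuilt instance or container is monitored from first boot. The fragment below is only a hypothetical illustration: osquery is distributed from its own package repository (repository setup is omitted here), and the config file path follows osquery’s documented default:

```dockerfile
# Hypothetical base-image layer with osquery baked in.
# Assumes the osquery package repository has already been configured.
FROM ubuntu:22.04
RUN apt-get update \
    && apt-get install -y osquery \
    && rm -rf /var/lib/apt/lists/*
COPY osquery.conf /etc/osquery/osquery.conf
```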
6. Validate remediation was effective.
Once the identified issues have been fixed, validate that the findings from the vulnerability assessment have been resolved by repeating steps 3-5 as necessary. If step 3 shows the previous remediation was effective, there is no need to continue to steps 4 and 5. But if some or all of the findings persist in the environment, follow steps 4 and 5 again with special focus on communication with the remediation teams. If remediation was not performed effectively the first time, the teams should work together to ensure the appropriate steps are followed to resolve the vulnerabilities in question.
Using Uptycs For Vulnerability Assessments
Above, we outlined how to conduct a vulnerability assessment targeting Linux cloud workloads, using osquery as an example. If you’re looking to take this process to the next level, with improved intelligence and reduced manual effort, we invite you to take a look at the Uptycs platform.
The Uptycs platform is designed to provide endpoint visibility and security analytics out of the box, including everything we’ve described in this vulnerability assessment process. It helps organizations by doing two things essential to vulnerability assessments:
- We collect the necessary telemetry and inventory the software packages installed on Linux systems, so we can identify every installed package (down to the version in use).
- We maintain intelligence on which vulnerabilities exist for that software, bring it into our platform, correlate it with the software packages found on our customers' hosts, and use the result to report on findings.
We help automate the process of collecting data from the host and correlating it with known vulnerabilities, so those vulnerabilities can be remediated and your systems protected. You can see it in action here or read more about it in this Linux security @scale case study.