
Mmap kernel vulnerability is relisted – and what that means for vulnerability management

We’ve covered brand new Linux kernel vulnerabilities in a few of our past articles, but in this article we’ll take a look at a vulnerability that’s been re-listed accidentally. Both reports – the erroneous relisting, and the original listing – point to a vulnerability in Linux kernel memory mapping where a race condition can develop when a memory expansion function is used.

We’ll cover the vulnerability as it stands. But we’ll also look at a key issue revealed by the double listing: if security experts can so easily lose sight of an existing vulnerability to the extent that a vulnerability is relisted as “new” and “just discovered” – what does it say about the state of vulnerability management?

And what does it mean for Linux users around the globe, vulnerable to countless offensive strategies – but dependent on the security experts for assistance?

 

Content

  1. Understanding the mmap kernel vulnerability

  2. Memory expansion

  3. A quick overview of race conditions and use-after-free

  4. What is the impact of this particular vulnerability?

  5. Wait, we’ve been here before…

  6. There is a larger issue at stake

  7. Effective vulnerability management is extremely difficult

  8. Managing vulnerabilities effectively

  9. Patching is critical

  10. Automated, live patching is key

  11. Conclusion



Understanding the mmap kernel vulnerability

 

Applications almost always need to hold something in computer memory while the application runs – they wouldn’t be very useful otherwise. In turn, the operating system kernel needs to assign memory space to an application. The function used to request this memory allocation is called mmap, a memory mapping function.

There is, of course, a finite amount of memory in any computer. In assigning memory, the kernel must carefully manage demands – and re-assign unused memory to another application if needed.

In this specific vulnerability, there is an edge case in which two different applications can request access to the same memory. The application that gets there last will fail in its request. The vulnerability arises because this second application can nonetheless still read from the now-invalid memory location.

This would in turn trigger a kernel crash which could mean that the information in that memory location is disclosed. The information could include anything from something completely innocuous to an encryption key – and that is where the risk lies.

 

Memory expansion

 

It’s worth noting that the cause for this vulnerability lies in the way memory expansion is handled. The kernel manages the allocated memory by maintaining a list of memory pages. At times, an application may require more memory – or indeed, surrender some of this memory.

For example, if the user of an application opens a large file, the application may need to expand its memory allocation. This expansion can be “up” or “down” from the existing assigned memory. Ordinarily this is harmless, but expansion must be handled carefully: it can interact with other threads of the same application, or with other applications entirely.

Unfortunately, as the researchers that discovered this vulnerability found, in some cases affected versions of the Linux kernel do not handle memory expansion correctly. Due to this vulnerability, a race condition can emerge between certain expansion functions – expand_downwards and expand_upwards – and page-table free operations.

 

A quick overview of race conditions and use-after-free

 

It’s worth quickly reviewing the two common security issues that surround this vulnerability. First, many security issues revolve around race conditions. We’ve outlined above how the race condition works in this vulnerability – two applications requesting access to the same memory space. The application that arrives “late” can erroneously get access to memory allocated to another application.

That is just one example of a race condition. In general, race conditions occur where two or more threads – from the same, or different applications – try to access the same data. This could be shared data, or it may be something like an allocated memory space. Attackers can exploit the errors created by race conditions (including kernel crashes) for a range of attacks – from denial-of-service to siphoning off data.

A use-after-free (UAF) scenario is one where an application tries to access a piece of memory after it has been freed. For example, a pointer in an application still points to a data set in dynamic memory that is no longer in use – and hence freed – when in fact the pointer should have been updated.

Again, UAF vulnerabilities give attackers room to exploit programming errors to trigger a security breach – crashing a system, or stealing data.

 

What is the impact of this particular vulnerability?

 

Currently, there are no known exploits for this vulnerability out in the wild, but as always there is a risk that an exploit will emerge. That risk is low to medium, given that local account access is needed to exploit the vulnerability. Nonetheless, with local account access, an attacker can craft a specially coded program that triggers a use-after-free and crashes the kernel.

As a side-effect of that crash, the attacker can design their program to steal information – for example, by grabbing the error message generated by the crash, which contains the content of the affected memory.  Attackers can also reference the “core dump” created whenever the kernel crashes. A properly executed attack can exfiltrate this information to another machine.

This vulnerability affects kernel versions prior to 5.7.11. Most distributions have released fixes for the vulnerability – if you have applied the associated patches your workloads will be protected against this vulnerability. And, of course, KernelCare is providing live patching for this vulnerability across the distributions it supports.

 

Wait, we’ve been here before…

 

The CVE for the vulnerability in question, CVE-2020-20200, recently emerged as reserved (in other words, a vulnerability has possibly been identified – but not confirmed). As it turns out, it was a duplicate report. The exact same vulnerability was in fact reported in November last year as CVE-2020-29369.

The researchers that reserved CVE-2020-20200 clearly didn’t know that the vulnerability had already been reported. As a result, CVE-2020-20200 was simply folded into CVE-2020-29369. It’s not the first time that a security vulnerability has been reported in duplicate, and this latest example brings the issue of double reporting – and what it implies – back into the spotlight.

Even people who are closely involved with Linux kernel security accepted the second disclosure. Yes, it’s positive that different individuals and teams are checking for these vulnerabilities, but it is worrying that existing vulnerabilities can so easily be forgotten in the process.

 

There is a larger issue at stake

 

One can argue that the double reporting suggests a lack of awareness of vulnerabilities in the wider security community. This particular vulnerability affects fundamental kernel features, but the fact that it was reported twice suggests that its existence is not as widely known as it should have been.

It’s easy to blame a lack of awareness on the security experts, but the fact is that these individuals and teams are simply getting lost in a flood of vulnerabilities. The growing wave of security problems simply washes away any real opportunity for a single individual or even a team to consistently be aware of critical, reported vulnerabilities.

In other words, we’re saying that even the experts can’t consistently cope with the flood of vulnerabilities that are being discovered.

 

Effective vulnerability management is extremely difficult

 

If security experts struggle with vulnerability management, it goes without saying that staff in charge of everyday IT operations will struggle even more. A typical sysadmin is already swamped with day-to-day duties – they will be lucky to remember the last five or ten vulnerabilities that required patching. But dozens or hundreds? Unlikely.

The real-life implication is that vulnerabilities will be poorly managed. Many vulnerabilities will simply fly under the radar. Others will be patched – but in all likelihood, long after the patch was released.

This “imperfect” treatment of vulnerabilities is what leaves an opportunity for malevolent actors. Worse, attackers commonly use automated tools to search for unpatched vulnerabilities. In other words, piecemeal patching is not that much different from no patching.

 

Managing vulnerabilities effectively

 

Vulnerability management benefits from a range of tools. Close management of credentials and permissions would be one key tool, for example – limiting the damage an attacker can do with stolen credentials.

Similarly, security monitoring can help spot an attack in progress and give you the opportunity to limit the damage. And, of course, firewalls and other security tools can stop automatic attacks before these take root.

 

Patching is critical

 

Yet highly consistent and rapid patching is by far the most effective way to manage vulnerabilities. In many cases a patch is released for a vulnerability long before an exploit emerges. Even where exploits are out in the wild before a patch is available, the window between exploit and patch is relatively narrow.

In contrast, with ineffective patching, the window between an exploit emerging in the wild and the patch finally being applied could be years – or indefinite.

We know that patching is difficult. Patching is disruptive, for example – often requiring a server restart to complete. Patching during maintenance windows helps, but critical patches must be applied even outside of a maintenance window.

 

Automated, live patching is key

 

Patching is time-intensive if done manually. Automated patching is a better way forward – ensuring that patches are applied consistently and without the usual drain on IT resources.

The most effective way to manage patching is, of course, automated patching combined with rebootless patches. In other words, patches that are applied live without the need for restarts. Live, rebootless patching implies that servers are kept up to date – without disrupting the underlying services.

That service is what KernelCare live patching is all about – automatically, seamlessly patching your servers without requiring disruptive restarts.

Conclusion

 

The flood of vulnerabilities is extremely difficult to keep track of. Even the experts most closely involved with vulnerability management get it wrong sometimes. There’s little question then that sysadmins and enterprise security experts will make similar errors, unless they make use of cutting-edge, automated tools.

KernelCare’s live, rebootless and automated patching applies patches for the above mmap vulnerability and many other types of vulnerabilities as soon as the patch is released. Tools like KernelCare are therefore critical in the ongoing fight against vulnerabilities – and the associated exploits.
