Kernel patching is a never-ending job. Why? Because Linux dominates the server world, and it is enormously complex. The master branch of the Linux kernel git repository contains more than 20 million lines of human-written code. That complexity makes vulnerabilities inevitable: hundreds are disclosed every year, some of them very serious.
How are these vulnerabilities combated? Via constant kernel patching. Linux vendors are constantly releasing patch updates for the kernel. Some patches modify a single line of code, while others add missing checks or change data structures or functions.
Here’s the thing: right now, for 99% of organizations, kernel patching can only happen one way: by rebooting servers.
But SysAdmins are (understandably) very reluctant to reboot until they absolutely have to. Rebooting can be a slow and laborious process. To minimize the impact on peak-time services, it is usually done late at night, often on weekends. While the servers are rebooting, the websites they host go down; error messages replace pretty, functioning websites. And even after a reboot, it can take a while for performance to stabilize. Sometimes, environments never properly come back up at all.
Because of all this, SysAdmins delay rebooting, and thus they delay kernel patching. They tend to wait until patch releases have piled up to the point where they can’t be ignored anymore. They bundle fixes together, meaning that the earliest ones can be available for a long time before they are actually applied. Sometimes the gap between patch release and patch application can stretch to weeks or even months.
But failing to perform all kernel patching as early as possible is asking for trouble.
- For one, it’s dangerous. In an open source environment, as soon as a kernel vulnerability is disclosed, it is public knowledge for everyone: Linux operators know about it, but so do bad actors – hackers and other digital attackers. This is doubly true because it is common practice in the cybersecurity community to combine the announcement of a vulnerability with the release of a detailed case study. These case studies are highly useful to those working in security, but they are just as useful to hackers.
- Secondly, delaying kernel patching puts organizations at risk of noncompliance. Most companies’ errors and omissions (E&O) insurance policies, and the clauses in their SLA contracts, require adherence to industry best practices, which include applying security patches within a set window, usually no more than one month. The pressures of rebooting mean that organizations frequently take (much) longer than a month, and are therefore in breach of their insurance policies.
Rebooting is a headache, which is why people put it off for as long as they can. But: if you don’t have to reboot, you don’t have to delay your kernel patching. The solution? Rebootless, live kernel patching.
With KernelCare, when a new patch is available for the active kernel, the agent downloads and applies it right away. With this system, kernel updates are applied as quickly as possible, protecting you from bad actors and keeping you compliant. This happens without a moment of kernel downtime or any disruption to its operation.
At KernelCare, more than 300,000 servers under our protection have gone four years without needing a reboot for kernel patching. KernelCare is simple, takes five minutes to install, and then ticks over in the background without anyone having to even think about it.
To get the full lowdown on why rebooting your servers is making you insecure and noncompliant – and why it’s only a matter of time until you discover this the hard way – read our full whitepaper here.