Running a fork bomb

Since it was Halloween and all, I shared with some of my colleagues an article about scary Linux commands that one should never run. One of them was a fork bomb, which looks like this:

:(){:|:&};:

In Bash, a function is defined like so:

function_name () {
  commands
}

So the fork bomb starts off by defining a function called :. For example:

:(){ echo hi; };:
hi

Since the function defined in the fork bomb calls itself inside its own body, it is recursive. The pipe is what causes the forking: each call of : ends up making two more calls, because : is piped into :. The trailing & runs each pipeline in the background, so the shell never waits for any of them to finish and just keeps spawning more. The typical effect is that the machine runs out of process slots and other resources and grinds to a halt, hence the name fork bomb.
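
Written out with an ordinary function name instead of :, the same logic is easier to see (this is equivalent to the one-liner, so don't run it either):

bomb () {
  bomb | bomb &    # each call starts two more copies and immediately backgrounds them
}
bomb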

I thought that if there is a user limit on the number of processes that can be run, it should stop the fork bomb from crashing the system. My colleague didn't think so. Being one who likes to test things out, I decided to run the fork bomb on my Raspberry Pi 4.

Currently, the process limit on my Pi is quite high:

ulimit -u
62651
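
As an aside, ulimit -u reports the soft limit for the current shell; the soft and hard limits can be checked separately to see how much headroom there is:

ulimit -Su    # soft limit on the number of user processes
ulimit -Hu    # hard limit (the ceiling the soft limit can be raised to)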

Before I ran it, I asked Claude and ChatGPT the following:

If my user limit on processes (ulimit -u) is 62651, will this prevent a fork bomb :(){:|:&};: from crashing my Linux system?

Claude replies:

While I can explain process limits and system protection, I should first note that fork bombs can be very dangerous to run, potentially requiring a system restart even with protections in place. Rather than testing this directly, let me explain how process limits help:

The process limit (ulimit -u) of 62651 provides some protection by capping the total number of processes your user can create. However, this alone may not be sufficient protection because:

  1. Fork bombs can still consume significant system resources very quickly
  2. The system may become unresponsive before hitting the limit
  3. Other critical processes may be impacted

Instead, I'd recommend these safer approaches to protect your system:

  1. Set a lower process limit for regular users
  2. Configure system resource controls (like cgroups)
  3. Use monitoring tools to detect and prevent resource abuse

Would you like me to explain more about any of these protection mechanisms? What are you trying to achieve with your system configuration?

ChatGPT replies:

A user process limit of 62651 is quite high and is unlikely to fully protect your system from a fork bomb. While ulimit -u limits the number of processes a single user can spawn, a fork bomb creates processes at an exponential rate, and 62,651 processes can still overwhelm system resources like CPU, memory, and other limits quickly, potentially causing the system to slow down, freeze, or crash.

Why a High ulimit -u Won't Fully Protect You

  • Fork Bombs Scale Rapidly: Since each function call spawns two new processes, even a large limit like 62,651 is quickly reached.
  • System Resources: Other limits like available memory and CPU time per process also become critical under heavy process load, but a fork bomb can consume them regardless of the process count.

Recommendations

To reduce the risk of a fork bomb:

  1. Lower the user process limit: Set ulimit -u to a more conservative number, like 1000 or even lower, depending on typical user needs.
  2. Limit memory usage per process: Setting a limit on memory usage per process with ulimit -v or ulimit -m can help limit resource consumption.
  3. Use cgroups: Linux cgroups allow you to define much more granular resource limits, like memory and CPU usage, per user or process group, and can be very effective in containing runaway processes.

While ulimit -u is helpful, combining it with memory and CPU limits provides stronger protection against fork bombs.

With that advice, I changed my limit to 1000 and ran the fork bomb!

ulimit -u 1000
:(){:|:&};:
:: fork failed: resource temporarily unavailable
:: fork failed: resource temporarily unavailable
:: fork failed: resource temporarily unavailable
:: fork failed: resource temporarily unavailable
:: fork failed: resource temporarily unavailable
:: fork failed: resource temporarily unavailable
:: fork failed: resource temporarily unavailable
:: fork failed: resource temporarily unavailable
:: fork failed: resource temporarily unavailable
:: fork failed: resource temporarily unavailable
:: fork failed: resource temporarily unavailable
:: fork failed: resource temporarily unavailable
:: fork failed: resource temporarily unavailable
:: fork failed: resource temporarily unavailable
:: fork failed: resource temporarily unavailable
:: fork failed: resource temporarily unavailable
:: fork failed: resource temporarily unavailable
:: fork failed: resource temporarily unavailable
:: fork failed: resource temporarily unavailable
:: fork failed: resource temporarily unavailable
:: fork failed: resource temporarily unavailable
...

My screen kept spewing out :: fork failed: resource temporarily unavailable, but all hope was not lost!

Since I had limited the number of processes, the system was still responsive! I opened another SSH session, restarted my Pi, and everything was back to normal.

sudo shutdown -r now

I had expected that I would need to hard-reset my Pi, but in the end, limiting the number of processes did prevent a complete freeze of the system, and I could still access and restart it.
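
One caveat: ulimit -u 1000 only applies to that shell session and its children. To make a cap like this stick across logins, one option (on systems that apply pam_limits at login, as Debian-based distros such as Raspberry Pi OS typically do) is an entry in /etc/security/limits.conf; the user name pi below is just a placeholder for whichever account you want to restrict:

# /etc/security/limits.conf
pi    soft    nproc    1000
pi    hard    nproc    1000

On a systemd-based system, another option along the lines of the cgroups suggestion above is to run an untrusted shell inside a transient scope with a task cap, for example something like systemd-run --user --scope -p TasksMax=500 bash.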



