File Descriptor Limits (ulimit -n)

Linux controls how many network connections, sockets, and files a process can keep open using file descriptor limits. On many systems the default per-process limit is low (often 1024), which can lead to "Too many open files" errors during traffic spikes. Raising the nofile limits helps your web server and PHP stack handle concurrency without failing on an artificial cap.
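To see the failure mode without touching a production box, you can lower the limit in a throwaway subshell and watch the error appear (a sketch; the exact error text may vary slightly between shells):

reproduce-emfile.sh
```shell
# Lower the soft limit to 16 in a child shell, then try to open 20 extra
# descriptors; the attempt fails with EMFILE ("Too many open files").
bash -c 'ulimit -n 16
         for i in $(seq 1 20); do
           exec {fd}</dev/null || break   # {fd} asks bash to pick a free descriptor
         done' 2>&1
```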

Quick Summary

For WooCommerce or API-heavy sites, increase nofile limits so your web server and related processes can handle higher concurrency. On systemd-based distros, you often need both global limits and a per-service override.

Core Kernel Commands

Check the Current Limit

Check how many file descriptors the current shell session is allowed to open:

check-active-ulimit.sh
ulimit -n

Diagnostic Output:

ulimit-output.txt
1024

(If a production web server reports 1024, it is at risk of dropping connections under load.)
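Note that there are two values in play: the soft limit (what processes actually run with) and the hard limit (the ceiling the soft limit can be raised to). You can raise the soft limit for the current shell without root, which is useful for a quick test before making anything permanent:

check-soft-hard-limits.sh
```shell
ulimit -Sn   # soft limit, what processes actually get
ulimit -Hn   # hard limit, the ceiling the soft limit may be raised to

# Raise the soft limit for this shell session only
ulimit -n 4096
ulimit -n    # reports 4096 if the hard limit allowed the raise
```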

Check the Kernel-Wide Ceiling

Check the system-wide maximum number of file descriptors the kernel will allocate across all processes:

check-kernel-max.sh
cat /proc/sys/fs/file-max

Diagnostic Output:

file-max-output.txt
9223372036854775807

(This value varies by host. The bottleneck is usually the per-process limit, not the global maximum.)
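To see how close the whole system actually is to that ceiling, /proc/sys/fs/file-nr reports current allocation alongside the maximum:

check-file-nr.sh
```shell
# Three columns: allocated descriptors, allocated-but-unused, system-wide maximum
cat /proc/sys/fs/file-nr
```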

Configure nofile Limits

Configure both the soft limit (the value processes actually run with) and the hard limit (the ceiling the soft limit can be raised to; only root can increase it).

Edit the global system security configurations:

edit-security-limits.sh
sudo nano /etc/security/limits.conf

Append the following lines:

limits.conf
* soft nofile 65535
* hard nofile 65535
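These entries are applied by pam_limits at login, so they affect new sessions only. After logging out and back in (or opening a fresh SSH session), verify:

verify-new-session.sh
```shell
ulimit -Sn   # should now report 65535 if the limits.conf entries took effect
ulimit -Hn   # should match the hard nofile value
```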

Systemd Application (LiteSpeed/Nginx)

Adjusting the global limits does not automatically apply them to running systemd services such as the web server, because systemd starts services outside the PAM login path. You must override the limit per service.

Example for OpenLiteSpeed (lsws):

override-litespeed-systemd.sh
sudo systemctl edit lsws

Add the following override:

systemd-override.conf
[Service]
LimitNOFILE=65535
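The same override can also be created non-interactively, which is handy in provisioning scripts. This is a sketch using the standard systemd drop-in location; `lsws` is the assumed service name:

scripted-override.sh
```shell
# Create the drop-in file that `systemctl edit lsws` would otherwise manage
sudo mkdir -p /etc/systemd/system/lsws.service.d
sudo tee /etc/systemd/system/lsws.service.d/override.conf >/dev/null <<'EOF'
[Service]
LimitNOFILE=65535
EOF
```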

Reload systemd's unit configuration and restart the service:

restart-services.sh
sudo systemctl daemon-reload
sudo systemctl restart lsws
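After the restart, confirm systemd actually applied the directive (no downtime needed; `lsws` is the assumed unit name):

verify-unit-limit.sh
```shell
# Shows the effective NOFILE limit systemd set for the unit
systemctl show lsws -p LimitNOFILE
```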

Architectural Load Analysis

  • CPU overhead: Higher concurrency can increase CPU and network load, but it prevents hard failures caused by running out of descriptors.
  • RAM footprint: Each open descriptor consumes a small amount of kernel memory (roughly a few hundred bytes to a kilobyte). The limit itself costs nothing until descriptors are actually used, so a 65,535 ceiling amounts to a few tens of megabytes even if fully utilized.
  • Storage independence: Disk type (HDD vs NVMe) has no effect on the descriptor limit itself, although faster storage completes short-lived file reads sooner, so descriptors are recycled more quickly.
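A back-of-envelope check of the RAM claim, assuming a worst case of about 1 KiB of kernel memory per open descriptor (the limit is free until descriptors are actually opened):

estimate-fd-memory.sh
```shell
# 65535 descriptors x ~1 KiB each, expressed in MiB
echo $(( 65535 * 1024 / 1024 / 1024 ))   # prints 63: worst case with every slot open
```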

Common Mistakes & Troubleshooting

  • Modifying limits.conf only: systemd starts services outside the PAM user-session limits, so the web server ignores the increase and keeps throwing "Too many open files". Fix: add a per-service LimitNOFILE override with systemctl edit.
  • Forgetting the PAM module: after logging out and back in, ulimit -n still reports 1024 despite the file edits. Fix: ensure "session required pam_limits.so" is present in /etc/pam.d/common-session.
  • Oversized blanket allocations: raising fs.file-max into the millions on a 512 MB RAM machine invites memory pressure for no benefit. Fix: scale limits to real traffic; 65535 is a safe baseline for a typical VPS.
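To confirm the PAM hook is in place, grep for it directly (the path below is for Debian/Ubuntu; RHEL-family distros use /etc/pam.d/system-auth instead):

check-pam-limits.sh
```shell
# Prints the pam_limits line if the module is enabled for login sessions
grep pam_limits /etc/pam.d/common-session
```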

Quick Reference

Descriptor Diagnostic Runbook

Verify that the new limits apply to the running daemon without restarting the server.

verify-running-process.sh
# Limits applied to the running OpenLiteSpeed daemon
# (pgrep -o picks the oldest matching PID in case several processes match)
cat /proc/$(pgrep -o lshttpd)/limits | grep "open files"

# System-wide descriptor usage: allocated, allocated-but-unused, maximum
cat /proc/sys/fs/file-nr

# Rough global count via lsof (overcounts: includes mmaps and per-thread duplicates)
lsof | wc -l
