File descriptor limit exceeded

File descriptor limit exceeded is a runtime error that occurs when an operating system process reaches the maximum number of file descriptors it is allowed to hold open. File descriptors represent open files, network sockets, pipes, and devices. The error is most common on Linux and other Unix-like systems, and it frequently surfaces in Java applications, Spring Boot services, Docker containers, databases, and high-traffic servers.

When does this error occur?

  • A server application opens many network connections without closing them.
  • A program continuously opens files or logs in a loop.
  • A Java or Spring Boot service leaks sockets or streams.
  • A Docker container inherits low file descriptor limits.
  • A database or proxy handles more concurrent clients than allowed.
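The leak pattern behind most of these cases is the same: descriptors are acquired in a loop and never released. The following self-contained sketch (it opens a temporary file repeatedly and measures descriptor growth via the HotSpot-specific `UnixOperatingSystemMXBean`, so it assumes an OpenJDK/HotSpot JVM on a Unix-like OS) makes the leak visible:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.lang.management.ManagementFactory;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import com.sun.management.UnixOperatingSystemMXBean;

public class LeakDemo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("leak", ".txt");
        UnixOperatingSystemMXBean os = (UnixOperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
        long before = os.getOpenFileDescriptorCount();

        List<FileInputStream> leaked = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            // Each stream holds one descriptor; none is ever closed.
            leaked.add(new FileInputStream(tmp.toFile()));
        }

        long after = os.getOpenFileDescriptorCount();
        System.out.println("descriptors grew by " + (after - before));
        // In a real service this keeps growing until the kernel returns
        // EMFILE ("Too many open files").

        for (FileInputStream fis : leaked) fis.close(); // cleanup for the demo
        Files.delete(tmp);
    }
}
```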

Root cause of File descriptor limit exceeded

The operating system enforces a per-process and a system-wide limit on open file descriptors to protect system stability. When a process exceeds its limit, the kernel refuses to allocate new descriptors (typically returning EMFILE for the per-process limit or ENFILE for the system-wide one), and the application reports File descriptor limit exceeded. This usually happens due to resource leaks, insufficient limits, or unexpected load.

How to fix the error (step-by-step)

Linux / macOS

Check the current file descriptor limit for the shell or process:

ulimit -n

Temporarily raise the soft limit for the current session (a non-root user cannot raise it above the hard limit):

ulimit -n 65535

For permanent changes, update system limits:

/etc/security/limits.conf

Add entries such as:

* soft nofile 65535
* hard nofile 65535
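Note that limits.conf only applies to PAM login sessions. A service started by systemd ignores it and takes its limit from the LimitNOFILE directive instead; for a hypothetical unit named myapp.service, a drop-in override would look like:

```ini
# /etc/systemd/system/myapp.service.d/limits.conf
[Service]
LimitNOFILE=65535
```

Run systemctl daemon-reload and restart the unit for the new limit to take effect.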

Java / Spring Boot

Ensure all streams, files, and sockets are properly closed. In Java, use try-with-resources:

try (FileInputStream fis = new FileInputStream("data.txt")) {
    // use the stream
}
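The same idiom scales to several resources: each resource declared in the try header is closed automatically, in reverse order, even if the body throws. A minimal self-contained sketch using a temporary file:

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TryWithResourcesDemo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");

        // The writer's descriptor is released as soon as this block exits.
        try (BufferedWriter out = Files.newBufferedWriter(tmp)) {
            out.write("hello");
        }

        // Same for the reader -- no descriptor outlives its block.
        String line;
        try (BufferedReader in = Files.newBufferedReader(tmp)) {
            line = in.readLine();
        }
        System.out.println(line); // prints "hello"

        Files.delete(tmp);
    }
}
```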

Monitor open file descriptors for the JVM process:

lsof -p <pid>
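To count rather than list descriptors, pipe the lsof output through `wc -l`. Descriptor usage can also be watched from inside the JVM itself via the HotSpot-specific `UnixOperatingSystemMXBean` (this assumes an OpenJDK/HotSpot JVM on a Unix-like OS; the bean is not available elsewhere):

```java
import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdStats {
    public static void main(String[] args) {
        // Only Unix-like platforms expose descriptor counts via this bean.
        if (ManagementFactory.getOperatingSystemMXBean()
                instanceof UnixOperatingSystemMXBean os) {
            long open = os.getOpenFileDescriptorCount();
            long max = os.getMaxFileDescriptorCount();
            System.out.println("open fds: " + open + " / limit: " + max);
        }
    }
}
```

Logging these two numbers periodically makes a slow leak obvious long before the limit is hit.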

Docker / containers

Containers may inherit low limits from the host. Start the container with higher limits:

docker run --ulimit nofile=65535:65535 image-name
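In Docker Compose the same setting lives under the service's ulimits key (the service and image names here are placeholders):

```yaml
services:
  app:
    image: image-name
    ulimits:
      nofile:
        soft: 65535
        hard: 65535
```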

Database / network services

Eliminate connection leaks and configure connection pools correctly. In particular, cap the pool's maximum connections below the descriptor limit available to the process, leaving headroom for log files, sockets, and other descriptors the service needs.
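For a Spring Boot service using HikariCP (the default connection pool), the cap goes in application.properties; the values below are illustrative, not recommendations:

```properties
# Cap concurrent database connections (each one consumes a socket descriptor)
spring.datasource.hikari.maximum-pool-size=20
# Fail fast instead of queuing forever when the pool is exhausted (ms)
spring.datasource.hikari.connection-timeout=30000
```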

Verify the fix

After applying the fix, restart the affected application or service. Monitor open file descriptors with tools such as lsof -p <pid> (or by listing /proc/<pid>/fd on Linux) and confirm that the count stabilizes under load. The application should then handle new connections and files without triggering File descriptor limit exceeded.

Common mistakes to avoid

  • Increasing limits without fixing resource leaks.
  • Setting extremely high limits without capacity planning.
  • Ignoring container-level limits.
  • Not monitoring file descriptor usage in production.

Quick tip

Always close files, sockets, and streams as soon as they are no longer needed to prevent descriptor leaks.

FAQ

Q: Is File descriptor limit exceeded a hardware issue?

A: No, it is an operating system resource limit, not a hardware failure.

Q: Can increasing ulimit alone permanently solve this?

A: Increasing limits helps, but the root cause must also be fixed to avoid leaks.

Conclusion

File descriptor limit exceeded indicates exhausted OS-level resources; correcting leaks and setting proper limits ensures stable, long-running systems. Explore related root errors on ErrorFixHub for deeper system reliability insights.
