Hey @Tomas Touceda, thanks for the reply. It turns out we'd missed a systemd setting to raise the open file limit for the process itself. We also found the root cause: we were running on T-type instances in AWS, which are burstable. A large group of systems tried to register all at once, which pushed the instances into burst mode until they ran out of burst credits. Once that happened they barely processed anything, which led to DB connections stacking up and eventually running out of open files. We're all set now though. 🙂
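For anyone else who lands here: the systemd side of the fix is a per-service file-descriptor limit. A minimal sketch of the kind of drop-in override we used (the service name `myservice` and the limit value are placeholders, not our actual config):

```ini
# /etc/systemd/system/myservice.service.d/override.conf
# (created with: systemctl edit myservice)
[Service]
# Raise the per-process open file limit; the systemd default is often 1024,
# which DB connection pile-ups can exhaust quickly.
LimitNOFILE=65536
```

Then `systemctl daemon-reload` and restart the service for it to take effect.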