Hi Fleet team, we're running into a problem: we have 3 nodes in our Fleet cluster behind the LB, and we found that one of them has far more HTTP connections than the others. After debugging, I saw more than 14K HTTP connections in the CLOSE_WAIT state. Please advise.
On the other hosts, the count is 0.
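For context, CLOSE_WAIT means the remote peer (here, likely the load balancer) has closed its side of the connection, but the local application has not yet closed its socket, so a buildup usually points at whoever holds the socket open. A quick way to confirm the per-state counts on each node is to tally the state column of `ss -tan` (or `netstat -ant`). A minimal sketch, using hard-coded sample lines in place of live `ss` output so it is self-contained:

```shell
#!/bin/sh
# Tally TCP connections by state. On the affected node you would pipe in
# `ss -tan | tail -n +2` instead; these sample lines (hypothetical
# addresses) stand in for that output here.
sample='CLOSE-WAIT 0 0 10.0.0.5:8080 10.0.0.9:51234
ESTAB      0 0 10.0.0.5:8080 10.0.0.7:44210
CLOSE-WAIT 0 0 10.0.0.5:8080 10.0.0.9:51236'

# Column 1 is the TCP state; count occurrences of each.
printf '%s\n' "$sample" |
  awk '{count[$1]++} END {for (s in count) print count[s], s}'
```

Comparing these counts across the three nodes should show whether the imbalance is only in CLOSE_WAIT or in total connections as well.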
Is there any update on this thread?
09/27/2022, 6:34 PM
What are you using as a load balancer? Nginx?
09/27/2022, 6:49 PM
nginx, I think.
09/27/2022, 7:09 PM
Can you share some of your nginx config?
09/27/2022, 7:31 PM
What kind of config?
09/27/2022, 9:51 PM
The nginx config file that defines the routes and forwarding rules.
09/27/2022, 10:36 PM
I will check it out and get back to you. The LB is not controlled by us, so I need to check with our LB team.
09/27/2022, 10:37 PM
Yeah, to be honest, this doesn't sound like a Fleet problem.
Happy to take a look at any configuration though
09/28/2022, 2:28 AM
@Benjamin Edwards it looks like we have a portal for the LB config, which is the only source of truth. Which config do you want me to check?
09/30/2022, 12:54 AM
It looks like @Benjamin Edwards was looking for information on how Fleet's routes and forwarding are set up in your load balancer.
Since this is occurring outside of Fleet, we may not have a direct answer for you, but we might be able to point you in the right direction if we know what the setup looks like there.
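To make that concrete, the relevant piece would be whatever in the LB portal corresponds to an nginx `upstream` block plus the `proxy_pass` rules routing to the three Fleet nodes. A hypothetical sketch of what that typically looks like (hostnames and ports are made up; your LB team's portal is the source of truth) so you know what to ask them for:

```nginx
upstream fleet {
    # Hypothetical node addresses -- replace with your actual Fleet hosts.
    server fleet-1.internal:8080;
    server fleet-2.internal:8080;
    server fleet-3.internal:8080;
    # Reuse upstream connections; without keepalive, nginx opens and
    # closes a new TCP connection to the backend for every request.
    keepalive 32;
}

server {
    listen 443 ssl;
    location / {
        proxy_pass http://fleet;
        # Required for upstream keepalive to take effect.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```

In particular, the balancing method (round-robin vs. `ip_hash`/sticky sessions) and any keepalive/timeout settings would help explain both the uneven distribution and the CLOSE_WAIT buildup on one node.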