# fleet
a
Hello everyone! Please tell me what could be the reason for such errors? I can see there is a problem with websockets, but I'm not quite sure how to fix it. We use nginx as a load balancer in front of Fleet, and as far as I know websocket support is enabled there. I actually started digging into this because I noticed that interactive (live) queries in the Fleet GUI don't complete correctly. For example, more than 3k hosts are online, but results for the simplest query against `osquery_info` come back from fewer than a thousand of them. `distributed_interval` is set to 30 seconds and `logger_tls_period` to 10 seconds. I don't see any other errors in the browser console or in nginx during the request. An interesting point: query results come back for about 10 seconds after the request is sent, then Fleet hangs a little and no longer returns new results, even if I leave it waiting for 10 minutes or more.
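(For reference, the intervals above correspond to osquery agent flags; a rough sketch of the relevant flags, with a placeholder hostname and the other required TLS/enrollment flags omitted:)

```
# Sketch only: placeholder hostname, other required flags (certs, enroll secret, config plugin) omitted
--tls_hostname=fleet.example.com:8080
--distributed_plugin=tls
--distributed_interval=30
--logger_plugin=tls
--logger_tls_period=10
```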
One more thing: I don't understand why the wss URL still mentions Kolide if the Kolide company no longer supports Fleet. Also, after updating Fleet to the current version following the instructions at https://github.com/fleetdm/fleet/blob/master/docs/infrastructure/updating-fleet.md, the Kolide logos are still displayed in our interface.
z
Are you able to successfully run this query via `fleetctl`? We sometimes see the browser UI having trouble rendering large result sets (though this doesn't sound like a particularly large result set). Looking at your error message, though, it does seem likely to be a websocket issue -- possibly a misconfiguration of nginx. Are you able to look in the network tab of the dev tools and see whether there is a successful websocket connection?
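(For example, something along these lines; the label name is just a placeholder and the exact flags can vary between fleetctl versions:)

```
fleetctl query --labels "All Hosts" --query "SELECT * FROM osquery_info;"
```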
Regarding the Kolide branding, we will be removing the rest of that with the 3.5.0 release. As for `kolide` in the URL paths, that will wait until 4.0, as it's a breaking change for anyone using the API.
a
@zwass thank you for your answer! Yes, you're right, we have no websocket connection. When we remove nginx, the connection is established without any problems. Could you please help with the right nginx options so we can fix this problem? I can show you our current config in a DM.
z
Can you share it here, redacting any confidential info? I am not super familiar with nginx, but I know lots of folks here use it successfully with Fleet. I will still try to help.
a
@zwass ok, I agree. Our nginx config is based on this article: https://defensivedepth.com/2020/04/02/kolide-fleet-breaking-out-the-osquery-api-web-ui/ Our goal was to separate the port the agents use for sending results to the server from the web administration interface (which we later put behind 2FA). A sketch of the split is below.
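(Roughly the idea, sketched with placeholder ports, certificate paths, and upstream address rather than the real config: one server block for the agent-facing endpoints and a separate one for the web UI, both proxying to the same Fleet backend.)

```
# Sketch only: placeholder ports, cert paths, and upstream address
server {
    listen 8080 ssl;                           # agent-facing (osquery TLS endpoints)
    server_name fleet.example.com;
    ssl_certificate     /etc/nginx/certs/fleet.crt;
    ssl_certificate_key /etc/nginx/certs/fleet.key;

    location /api/v1/osquery/ {
        proxy_pass https://127.0.0.1:8412;     # Fleet backend
    }
}

server {
    listen 443 ssl;                            # admin-facing web UI (behind 2FA)
    server_name fleet.example.com;
    ssl_certificate     /etc/nginx/certs/fleet.crt;
    ssl_certificate_key /etc/nginx/certs/fleet.key;

    location / {
        # same Fleet backend, different listener
        # (the websocket Upgrade/Connection headers discussed below also end up going here)
        proxy_pass https://127.0.0.1:8412;
    }
}
```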
z
@defensivedepth don't wanna bother you, but I see your config is being used here and perhaps you know what the issue might be?
@Artem I took a look at https://stackoverflow.com/a/22750356/491710 and I think you are missing the lines
```
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
```
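(Note that `$connection_upgrade` is not a built-in nginx variable; it is usually defined with a map in the `http` context, and the proxied connection needs HTTP/1.1. Roughly, whether or not the linked answer spells this out:)

```
# In the http {} context: derive the Connection header value from the client's Upgrade header
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# In the proxying location {} block, next to the two headers above
proxy_http_version 1.1;
```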
d
I don't see the grpc passes?
```
location ~ ^/kolide.agent.Api/(RequestEnrollment|RequestConfig|RequestQueries|PublishLogs|PublishResults|CheckHealth)$ {
    grpc_pass  grpcs://{{ MAINIP }}:8080;
    grpc_set_header Host $host;
    grpc_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_buffering off;
}
```
z
Maybe @Artem is using plain osquery (not Launcher)?
I am looking closer and I see
```
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
```
within the `location /api/v1/osquery/` block, but I think the websocket proxy config needs to go into the other `location /` block.
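(In other words, something roughly like this, assuming the same overall structure and with a placeholder upstream; a sketch, not the exact config:)

```
location / {
    proxy_pass https://127.0.0.1:8412;                  # placeholder Fleet backend
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;    # or "Upgrade", as in the existing config
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```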
d
ya possibly
a
@zwass yes, you're right again, right now we use native osquery with Fleet. But we are testing Launcher too.
z
Sounds like you will probably need to use the recommendations from both of us to resolve the issue. Good luck 🙂
👍 1
a
Adding these options to the second location block helped; I had missed that point. Thank you so much @zwass @defensivedepth! You are cool! Now we get query results from all hosts that are online. Plus, the results now come in more smoothly, without jumps of 100-200 hosts per second.
🍻 1