# fleet
j
Hi fleet, we're using fleetctl to run a live query against our Fleet endpoint. The query should return about 10k rows, but it stops printing results after a few hundred lines. The same query works fine when executed via the browser. Any thoughts on what might cause this behavior?
b
Might be timing out prematurely. IIRC there is a default 25 second timeout. https://fleetdm.com/docs/using-fleet/rest-api#run-live-query You can adjust this timeout.
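If you're running the query through fleetctl rather than the REST API directly, recent versions expose a timeout flag you can raise. A hedged sketch; the exact flag name and duration format are assumptions, so check `fleetctl query --help` for your version (hostnames and query are placeholders):

```shell
# Sketch: raise the live-query timeout when running via fleetctl.
# --timeout flag name/format assumed; verify with `fleetctl query --help`.
fleetctl query \
  --hosts host1.example.com,host2.example.com \
  --query "SELECT * FROM osquery_info;" \
  --timeout 90s
```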
j
Thanks. I'll bump this and see if that fixes the problem.
👍 1
z
Just wanted to follow up on some research we did internally on this. It might be related to redis's buffer size (and in smaller part to the latency between the host making the live query and fleet). Can you try applying this redis config and see if the issue persists:
```
client-output-buffer-limit pubsub 0 0 60
```
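If editing redis.conf and restarting isn't convenient, the same limit can be applied at runtime with CONFIG SET (Redis lets you change a single client class this way). Note the change is not persisted across restarts unless you also run CONFIG REWRITE:

```shell
# Apply the pubsub output-buffer limit at runtime (hard 0, soft 0, 60s window).
# Not persisted across restarts unless followed by: redis-cli CONFIG REWRITE
redis-cli CONFIG SET client-output-buffer-limit "pubsub 0 0 60"
```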
@Jason Cetina
j
@Zachary Winnerman Thanks for your work so far on this. We're on a shared instance, so I'll need to get the take of the team that owns the cluster before we deploy a change like this, since I believe it affects all pubsub consumers on the cluster. A couple of questions: 1/ What's the expectation with setting the limits to zero (particularly the hard limit)? Mine is that the hard limit will go to zero and nothing will work over pubsub. 2/ Has this been tested, and if so, what impact have you seen in testing?
z
1) Per the documentation, 0 disables the buffer limit; it's treated as a special value rather than literally setting the limit to 0. This is also consistent with our internal testing. 2) This has been tested internally and worked as expected: with the default config we only get a partial result; with the limit set to 0 we get the full result (eventually). The main caveat is that this means (almost) the entire query result needs to be buffered in the pubsub output buffer on Redis, which can take considerable memory depending on how many hosts are being queried and how big each host's result is. Back-of-the-napkin math suggests that ~1M hosts each returning a ~1KB result would take up ~1GB of memory on Redis.
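The back-of-the-napkin estimate above is just hosts multiplied by per-host result size; the numbers below are the thread's assumptions, not measurements:

```shell
# Rough memory estimate for buffering live-query results in Redis pubsub:
# total = hosts * result bytes per host.
hosts=1000000               # ~1M hosts (assumption from the thread)
result_bytes=1024           # ~1 KB result per host (assumption)
total=$((hosts * result_bytes))
echo "$total bytes"                    # prints "1024000000 bytes"
echo "$((total / 1024 / 1024)) MiB"    # prints "976 MiB", i.e. roughly 1 GB
```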
j
Thanks for this. I finally found the client-output-buffer-limit directive documentation; I didn't realize the hard limit could be disabled (seems like a weird thing to allow). As for the buffer-size concerns: won't each response be removed from the pubsub channel as we read it over the websocket, thus reducing memory pressure?
@Zachary Winnerman closing the loop here - set redis pubsub limit to 0 0 and that solved our problem!
z
good to hear!
🙂