# general
I saw in a github issue this might be caused by the network_interfaces inserts.. is that something i can disable in fleet?
Not yet, but we have some major performance improvements that will be released very soon.
Good to hear improvements are on the way! Is there a suggested temporary fix or workaround for the current situation? I'm thinking about creating a Python script to kill hanging SQL queries for now as a temporary solution.
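A rough sketch of the kind of watchdog script mentioned above (the 300-second threshold, the SELECT-only filter, and the function names are illustrative, not anything Fleet ships). The statement-building logic is kept as a pure function; the actual connection would need a MySQL driver such as PyMySQL.

```python
# Hypothetical sketch of a "kill hanging queries" watchdog.
# The threshold and the SELECT-only filter are assumptions for illustration.

def kill_statements(processlist, max_seconds=300):
    """Build KILL statements for queries running longer than max_seconds.

    processlist: iterable of dicts shaped like rows from
    `SELECT id, time, info FROM information_schema.processlist`.
    """
    stmts = []
    for row in processlist:
        info = row.get("info") or ""
        if row.get("time", 0) > max_seconds and info.lstrip().lower().startswith("select"):
            stmts.append(f"KILL {row['id']}")
    return stmts


if __name__ == "__main__":
    # To actually run it, connect and execute, e.g. (untested sketch):
    #   import pymysql
    #   conn = pymysql.connect(host="...", user="...", password="...")
    #   with conn.cursor(pymysql.cursors.DictCursor) as cur:
    #       cur.execute("SELECT id, time, info FROM information_schema.processlist")
    #       for stmt in kill_statements(cur.fetchall()):
    #           cur.execute(stmt)
    rows = [
        {"id": 1, "time": 900, "info": "SELECT DISTINCT ... FROM distributed_query_campaigns"},
        {"id": 2, "time": 5, "info": "SELECT 1"},
    ]
    print(kill_statements(rows))  # → ['KILL 1']
```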
Increasing the distributed_interval will reduce the load on the network_interfaces queries.
šŸ‘ 1
Depending on your use case this could be a day or more.
Hmm, looks like I still have hanging queries even after this change, mainly the select distinct from distributed_query_campaigns one. It's causing updates to hang.
Should I maybe just delete rows in the distributed_query_campaigns table? We only use it for testing; it has about 95 entries now.
I don't think that's likely to help, but it could be worth a try. What is the distributed_interval you are configuring for your hosts?
I have gradually increased it, from 600, now to 1800 (on the Fleet options side).
I can maybe also disable it using
disable_distributed: true
.. Will labels still work then? As that is also a distributed query, I think?
Labels do use distributed queries, so you can't do that and still use labels. But if you would be okay with disabling live queries, maybe just set a distributed_interval of 24 hours so you get daily label updates?
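For reference, a 24-hour interval is 86400 seconds. In Fleet's agent/osquery options that would look roughly like this (the exact YAML nesting depends on your Fleet version, so treat this as a sketch and check the docs for your release):

```yaml
# Illustrative only; verify the nesting against your Fleet version's config format.
config:
  options:
    distributed_interval: 86400  # 24 hours, in seconds
```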
All of this will be moot with the new perf improvements, but we have some corporate process to work through before that can be made public.
✅ 1
Ok, I will try that. Wouldn't the problem reappear after 24 hours, when they check for distributed queries again?
I see it as a throughput issue... If hosts are checking in for an expensive query every 24 hours instead of every 30 minutes, you are running ~50x fewer expensive queries. You're less likely to overload the MySQL server that way.
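The throughput arithmetic above works out like this (a quick calculation to make the "~50x" concrete):

```python
# Factor reduction in per-host check-ins when moving from a
# 30-minute to a 24-hour distributed_interval.
old_interval = 30 * 60        # 1800 s
new_interval = 24 * 60 * 60   # 86400 s
ratio = new_interval / old_interval
print(ratio)  # → 48.0, i.e. roughly 50x fewer expensive queries
```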
Thanks. In the meantime I have also enabled slow query logging, which will at least show the full query instead of the prepared statement. Will see how it goes.
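Enabling the slow query log at runtime looks like this in MySQL (the 1-second threshold and the log file path are examples, not values from this conversation):

```sql
-- Runtime settings; can also be set persistently in my.cnf.
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;  -- log queries slower than 1 second
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
```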
I think it's because our label_query_executions table is quite large: 1,900,000 entries already, and we only have about 100 labels..
Likely. All of these issues are interconnected and addressed with the new changes.
🤗 1
🎉 2