So I've run into an issue where certain queries are too large for my pipeline, and I don't think I can fix that part. Is there a way to split up the results from large queries, or some other strategy for handling this?
11/15/2022, 11:07 AM
Hello Adam, do you mean that at the moment of query execution they consume too much CPU/memory, or something else?
That being said, in general I wouldn't say there's much to do other than changing the query.
Results are generated on the fly, then returned and managed all together.
The only part that handles results and can be controlled is the shipping of results via the TLS logger, where you can decide how many lines are sent per interval.
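If this is osquery's TLS logger (an assumption based on the mention above), the knobs for shipping are flags like the following. The endpoint path is illustrative; the flag names are real osquery flags:

```
# Sketch of an osquery flagfile, assuming the TLS logger plugin
--logger_plugin=tls
--logger_tls_endpoint=/logger       # example endpoint path
--logger_tls_period=10              # seconds between log shipments
--logger_tls_max_lines=1024         # max result lines sent per period
--logger_tls_max_linesize=1048576   # max bytes per individual log line
```

Note that `logger_tls_max_lines` controls how many buffered lines go out per period, not the size of any single result event, so an oversized event can still be truncated downstream.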
11/18/2022, 4:39 PM
No, it's that the events are too large and are getting truncated by our logging pipeline.
There are too many results from installed packages and other things. I assume we can optimize to an extent and shrink it, either by doing a diff instead of grabbing all the results or by running the query more frequently, but I was wondering whether there's a general failsafe for when our queries are bigger than expected.
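On the diff idea: assuming this is osquery, scheduled queries are differential by default (only added/removed rows are logged per run), while setting `"snapshot": true` ships the full result set every time. A minimal sketch of a scheduled query, with an illustrative name, table, and interval:

```json
{
  "schedule": {
    "installed_packages": {
      "query": "SELECT name, version FROM deb_packages;",
      "interval": 3600
    }
  }
}
```

Because `"snapshot": true` is omitted, only row changes since the previous run are logged, which keeps individual events small after the initial run. The first execution still emits the whole result set, so it does not fully protect against one-off oversized events.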