# general
Crossposting to #general from #core: hey team, I was wondering if I could get some help tweaking or investigating this further: on certain hosts running osquery, I frequently see
Linesize exceeds TLS logger maximum:
warnings at 5MB, 10MB, and 13MB. I believe this indicates that a query result from osquery is larger than the
logger_tls_max_linesize
value and is being dropped rather than sent to the TLS endpoint. At the moment that value is set to the default of 1MB. I configured osqueryd to run with the following flags:
--config_tls_max_attempts=6
--database_path=/state/osquery.db
--decorations_top_level=true
--disable_events=true
--disable_extensions=false
--disable_watchdog=false
--docker_socket=/run/docker.sock
--enroll_secret_path=/etc/osquery/enroll_secret.txt
--enroll_tls_endpoint=<endpoint>
--host_identifier=hostname
--logger_plugin=tls
--logger_tls_endpoint=<endpoint>
--logger_tls_max_linesize=1048576
--logger_tls_period=60
--read_max=209715200
--table_delay=200
--tls_hostname=<endpoint>
--tls_session_reuse=true
--tls_session_timeout=3600
--utc=true
--watchdog_memory_limit=900
I was curious whether anyone knows of settings I can tweak to avoid dropping these results, or of a way to investigate which query pack is causing such a large result.
I can certainly explore setting
logger_tls_max_linesize
to something like 15MB, but that seems large and doesn't help me identify exactly which query or query pack is the "problem"
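For reference, raising the ceiling to the 15MB mentioned above would look like this (a sketch of a flagfile fragment; note that the TLS endpoint and any intermediate proxies would also need to accept request bodies that large):

```
# osquery flagfile fragment (sketch): raise the TLS logger line ceiling to 15MB
--logger_tls_max_linesize=15728640
```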
Stefano:
Unfortunately there isn’t anything that says which query that line was part of; that information isn’t even available at that point. I think the quickest way is to see the line itself, by enabling local filesystem logging via
--logger_plugin=filesystem,tls
and checking the size of the results that way. Otherwise, a much longer approach would be to determine which queries/tables likely have either many columns or variable-sized columns that can grow that much, since this doesn’t happen on all hosts.
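A quick way to act on that suggestion once filesystem logging is on (a sketch: the helper name is ours, and it assumes the filesystem logger's default result log at /var/log/osquery/osqueryd.results.log, one JSON object per line with a top-level "name" field for the scheduled query):

```shell
# find_big_results: hypothetical helper that scans an osquery filesystem
# results log for lines over a byte threshold and tallies which scheduled
# query ("name" field in each JSON line) produced them.
# Usage: find_big_results <results_log> [limit_bytes]
find_big_results() {
  log="$1"
  limit="${2:-1048576}"  # default matches --logger_tls_max_linesize=1048576
  # Keep only oversized lines, pull out the query name, count occurrences.
  awk -v limit="$limit" 'length($0) > limit' "$log" \
    | grep -o '"name":"[^"]*"' \
    | sort | uniq -c | sort -rn
}

# e.g.: find_big_results /var/log/osquery/osqueryd.results.log
```

The grep-based extraction avoids a jq dependency but is a rough cut; it only works because "name" is a simple top-level string in each result line.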
Thanks Stefano, I'll investigate further by flipping the logger plugin to filesystem as well.
Just an update -- I was able to examine the logs on the filesystem and quickly identify which query pack was problematic. General warning about collecting a list of processes with unusual ports on high-traffic servers 😅