# fleet
m
Hi all! I'm noticing Windows hosts are taking a long time (and occasionally failing) to pull vitals, have queries run on them, etc. macOS and Linux are working without issue. Is there anything I need to configure to improve Windows query performance? For reference, this is the command I used to build the package: `./fleetctl package --type=msi --fleet-url=<URL> --enroll-secret=<SECRET> --FLEET_SERVER_TLS=false --logger_plugin=filesystem,aws_firehose`
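A quick way to double-check what that installer actually baked in is to read the generated flagfile on one of the Windows hosts. The path below assumes the default Orbit install location; adjust it if the agent was installed elsewhere.

```powershell
# Show the osquery flags that the fleetd/Orbit package wrote during install
# (default Orbit path is an assumption; adjust if your install differs)
Get-Content "C:\Program Files\Orbit\osquery.flags"
```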
k
Are you seeing any errors in the osquery logs for those Windows hosts?
Since you're sometimes getting results, it sounds unlikely to be a certificate issue, but something may be killing the `osqueryd` service. Do you have an EDR client or other security software on those hosts that could be interfering with it?
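If it helps, one way to check whether something is stopping or restarting the service is to scan the Windows Service Control Manager events for anything mentioning osquery or Orbit. The exact service name varies with the fleetd/Orbit version, so the match below is deliberately loose; treat it as a sketch rather than a definitive check.

```powershell
# Recent Service Control Manager events that mention osquery or Orbit
Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'Service Control Manager' } -MaxEvents 300 |
  Where-Object { $_.Message -match 'osquery|Orbit' } |
  Select-Object TimeCreated, Id, Message
```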
m
Hi @Kathy Satterlee - Working on getting those logs, hoping that'll show me the culprit. I'll look into the EDR client possibly impacting that service... thank you!
k
No problem!
m
@Kathy Satterlee So in revisiting this - it looks like there is a recurring WMI error on the Windows client side:

2023-03-31T19:10:52Z INF initial extensions update action failed error="check symlink failed: read existing symlink: readlink C:\\Program Files\\Orbit\\bin\\orbit\\orbit.exe: The file or directory is not a reparse point."
2023-03-31T19:10:52Z INF token rotation is enabled
2023-03-31T19:10:52Z INF start osqueryd cmd="C:\\Program Files\\Orbit\\bin\\osqueryd\\windows\\stable\\osqueryd.exe --pidfile=C:\\Program Files\\Orbit\\osquery.pid --database_path=C:\\Program Files\\Orbit\\osquery.db --extensions_socket=\\\\.\\pipe\\orbit-osquery-extension --logger_path=C:\\Program Files\\Orbit\\osquery_log --enroll_secret_env ENROLL_SECRET --host_identifier=uuid --tls_hostname=<HOSTNAME> --enroll_tls_endpoint=/api/v1/osquery/enroll --config_plugin=tls --config_tls_endpoint=/api/v1/osquery/config --config_refresh=60 --disable_distributed=false --distributed_plugin=tls --distributed_tls_max_attempts=10 --distributed_tls_read_endpoint=/api/v1/osquery/distributed/read --distributed_tls_write_endpoint=/api/v1/osquery/distributed/write --logger_plugin=tls,filesystem --logger_tls_endpoint=/api/v1/osquery/log --disable_carver=false --carver_disable_function=false --carver_start_endpoint=/api/v1/osquery/carve/begin --carver_continue_endpoint=/api/v1/osquery/carve/block --carver_block_size=8000000 --tls_server_certs C:\\Program Files\\Orbit\\certs.pem --force --flagfile C:\\Program Files\\Orbit\\osquery.flags"
W0331 19:10:57.357725 7064 bitlocker_info.cpp:52] Error retreiving information from WMI.
I0331 19:11:00.079705 7064 interfaces.cpp:102] Failed to retrieve network statistics for interface 7
I0331 19:11:00.160095 7064 interfaces.cpp:102] Failed to retrieve network statistics for interface 1
I0331 19:11:00.190980 7064 interfaces.cpp:130] Failed to retrieve physical state for interface 1
I0331 19:11:00.207547 7064 interfaces.cpp:157] Failed to retrieve DHCP and DNS information for interface 1
W0331 19:11:00.587801 7064 chocolatey_packages.cpp:65] Did not find chocolatey path environment variable
2023-03-31T19:11:22Z INF calling flags update
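For anyone digging through similar output: the `--logger_path` in that osqueryd command line is where the filesystem logger writes its status and result files, so the rest of the on-disk logs should be in that directory (path taken from the flag above).

```powershell
# List osquery's on-disk log files, newest first (path from --logger_path in the log above)
Get-ChildItem "C:\Program Files\Orbit\osquery_log" | Sort-Object LastWriteTime -Descending
```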
k
Would you mind sending over the full log file? I can take a quick hunt through to see what I can see!
m
Sure thing, thank you!
k
I did see one instance in the logs of the service being stopped, but just the one. Are all of your Windows hosts on the same OS version?
m
@Kathy Satterlee - Sorry, I was out of the office last week. We have two hosts on Windows 10 and one on Windows 11, all with the same issue. Since the Windows systems occasionally check in but I haven't been able to run a successful query on them from the Fleet UI, I'm wondering if I need to increase the timeout settings for the Windows hosts. Maybe it just takes longer to pull the necessary information from Windows? I'm spitballing here, but it kind of makes sense. Are there timeout settings I could adjust to test this?
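As far as I know there isn't a single per-platform query timeout flag, but two osquery settings often come up when live queries are slow or die partway: `distributed_interval` (how often the host checks in for pending live queries, in seconds) and the watchdog limits (`watchdog_utilization_limit`, `watchdog_memory_limit`), which kill the worker process if a query uses too much CPU or memory. Purely as an illustrative sketch, they would look like this in the flagfile referenced in the log above; the values are made up, and with Fleet these options are normally managed centrally through agent options rather than by editing the file by hand.

```
--distributed_interval=10
--watchdog_utilization_limit=50
--watchdog_memory_limit=350
```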