# fleet
s
Hi, I have some packs on Fleet, and when dumping the logs with the options --tls_dump=true and --verbose I can see the pack queries being executed on the host. But when I send the live query SELECT * FROM osquery_schedule, the result is "No results found". Do packs pushed from Fleet not get listed in osquery_schedule, and if that is the case, how do I check the blacklisted queries?
z
Packs pushed from Fleet do get listed in osquery_schedule. Are you certain your live query is hitting the same host that is executing the scheduled queries? Can you see the live query request and response in those tls_dump logs?
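For reference, a quick way to verify from a live query is to pull the scheduling metadata directly out of osquery_schedule; a minimal sketch (in osquery 4.x the column is named blacklisted, later releases rename it to denylisted):

```sql
-- Show everything the daemon currently has scheduled, including pack queries.
-- Pack queries typically appear with the pack name embedded in the query name,
-- e.g. pack/normal/osquery when pack_delimiter is "/".
SELECT name, query, interval, executions, last_executed, blacklisted
FROM osquery_schedule;
```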
s
Yes. I could see the live query hitting the same host. Below is the config:
--enroll_secret_path=/etc/osquery/cert/secret
--tls_server_certs=/etc/osquery/cert
--tls_hostname=<server>:8080
--database_path=/tmp/osquery.db
--pidfile=/tmp/osquery.pid
--host_identifier=uuid
--force=true
--config_plugin=tls
--config_tls_endpoint=/api/v1/osquery/config
--config_tls_refresh=30
--disable_distributed=false
--distributed_interval=3
--distributed_plugin=tls
--config_tls_max_attempts=3
--distributed_tls_max_attempts=3
--distributed_tls_read_endpoint=/api/v1/osquery/distributed/read
--distributed_tls_write_endpoint=/api/v1/osquery/distributed/write
--hash_delay=20
--pack_refresh_interval=60
--schedule_splay_percent=20
--table_delay=20
--enroll_tls_endpoint=/api/v1/osquery/enroll
--disable_watchdog=false
--watchdog_delay=120
--watchdog_level=1
--watchdog_memory_limit=100
--watchdog_utilization_limit=5
--logger_plugin=filesystem
--logger_path=/tmp
z
This sounds like possibly a bug in osquery? If you can provide logs that show the same osquery process (1) receiving a config with a nonempty schedule and (2) returning an empty result set for select * from osquery_schedule, then you could file an issue for osquery and it could be investigated.
s
---------------------
I0304 185242.263559 13256 tls.cpp:254] TLS/HTTPS POST request to URI: https://ec2-54-75-78-244.eu-west-1.compute.amazonaws.com:8080/api/v1/osquery/distributed/read
I0304 185253.134891 13256 tls.cpp:254] TLS/HTTPS POST request to URI: https://ec2-54-75-78-244.eu-west-1.compute.amazonaws.com:8080/api/v1/osquery/distributed/read
I0304 185253.744874 13256 distributed.cpp:121] Executing distributed query: kolide_distributed_query_4: SELECT * FROM osquery_schedule
I0304 185253.745616 13256 tls.cpp:254] TLS/HTTPS POST request to URI: https://ec2-54-75-78-244.eu-west-1.compute.amazonaws.com:8080/api/v1/osquery/distributed/write
I0304 185300.308753 13242 config.cpp:1205] Refreshing configuration state
I0304 185300.309159 13242 tls.cpp:254] TLS/HTTPS POST request to URI: https://ec2-54-75-78-244.eu-west-1.compute.amazonaws.com:8080/api/v1/osquery/config
I0304 185304.472676 13256 tls.cpp:254] TLS/HTTPS POST request to URI: https://ec2-54-75-78-244.eu-west-1.compute.amazonaws.com:8080/api/v1/osquery/distributed/read
I0304 185315.224409 13256 tls.cpp:254] TLS/HTTPS POST request to URI: https://ec2-54-75-78-244.eu-west-1.compute.amazonaws.com:8080/api/v1/osquery/distributed/read
I0304 185325.871349 13256 tls.cpp:254] TLS/HTTPS POST request to URI: https://ec2-54-75-78-244.eu-west-1.compute.amazonaws.com:8080/api/v1/osquery/distributed/read
I0304 185330.915606 13242 config.cpp:1205] Refreshing configuration state
I0304 185330.916021 13242 tls.cpp:254] TLS/HTTPS POST request to URI: https://ec2-54-75-78-244.eu-west-1.compute.amazonaws.com:8080/api/v1/osquery/config
I0304 185336.478948 13256 tls.cpp:254] TLS/HTTPS POST request to URI: https://ec2-54-75-78-244.eu-west-1.compute.amazonaws.com:8080/api/v1/osquery/distributed/read
I0304 185347.100065 13256 tls.cpp:254] TLS/HTTPS POST request to URI: https://ec2-54-75-78-244.eu-west-1.compute.amazonaws.com:8080/api/v1/osquery/distributed/read
I0304 185357.731017 13256 tls.cpp:254] TLS/HTTPS POST request to URI: https://ec2-54-75-78-244.eu-west-1.compute.amazonaws.com:8080/api/v1/osquery/distributed/read
"distributed_tls_max_attempts": 3, "pack_delimiter": "/" }, "packs": { "normal": { "queries": { "osquery": { "query": "SELECT * FROM osquery_info", "interval": 10, "platform": "", "version": "", "removed": true, "shard": 1 }, "processes": { "query": "SELECT * FROM processes", "interval": 10, "platform": "", "version": "", "removed": true, "shard": 2 } } } } }
{}
{"node_key":"B4YMnz3q4v5/Q7zSb1Na9i0IUBInAW0F"}
{ "queries": {} }
{"node_key":"B4YMnz3q4v5/Q7zSb1Na9i0IUBInAW0F"}
{ "queries": {} }
{"node_key":"B4YMnz3q4v5/Q7zSb1Na9i0IUBInAW0F"}
{ "decorators": { "load": [ "SELECT uuid AS host_uuid FROM system_info;", "SELECT hostname AS hostname FROM system_info;" ] }, "options": { "disable_distributed": false, "distributed_interval": 10, "distributed_plugin": "tls", "distributed_tls_max_attempts": 3, "pack_delimiter": "/" }, "packs": { "normal": { "queries": { "osquery": { "query": "SELECT * FROM osquery_info", "interval": 10, "platform": "", "version": "", "removed": true, "shard": 1 }, "processes": { "query": "SELECT * FROM processes", "interval": 10, "platform": "", "version": "", "removed": true, "shard": 2 } } } } }
{"node_key":"B4YMnz3q4v5/Q7zSb1Na9i0IUBInAW0F"}
{ "queries": {} }
{"node_key":"B4YMnz3q4v5/Q7zSb1Na9i0IUBInAW0F"}
{ "queries": {} }
{"node_key":"B4YMnz3q4v5/Q7zSb1Na9i0IUBInAW0F"}
{ "queries": { "kolide_distributed_query_4": "SELECT * FROM osquery_schedule" } }
{"queries":{"kolide_distributed_query_4":[]},"statuses":{"kolide_distributed_query_4":0},"messages":{"kolide_distributed_query_4":""},"node_key":"B4YMnz3q4v5/Q7zSb1Na9i0IUBInAW0F"}
{}
{"node_key":"B4YMnz3q4v5/Q7zSb1Na9i0IUBInAW0F"}
{ "decorators": { "load": [ "SELECT uuid AS host_uuid FROM system_info;", "SELECT hostname AS hostname FROM system_info;" ] }, "options": { "disable_distributed": false, "distributed_interval": 10, "distributed_plugin": "tls", "distributed_tls_max_attempts": 3, "pack_delimiter": "/" }, "packs": { "normal": { "queries": { "osquery": { "query": "SELECT * FROM osquery_info", "interval": 10, "platform": "", "version": "", "removed": true, "shard": 1 }, "processes": { "query": "SELECT * FROM processes", "interval": 10, "platform": "", "version": "", "removed": true, "shard": 2 } } } } }
{"node_key":"B4YMnz3q4v5/Q7zSb1Na9i0IUBInAW0F"}
{ "queries": {} }
{"node_key":"B4YMnz3q4v5/Q7zSb1Na9i0IUBInAW0F"}
{ "queries": {} }
{"node_key":"B4YMnz3q4v5/Q7zSb1Na9i0IUBInAW0F"}
{ "queries": {} }
{"node_key":"B4YMnz3q4v5/Q7zSb1Na9i0IUBInAW0F"}
{ "decorators": { "load": [ "SELECT uuid AS host_uuid FROM system_info;", "SELECT hostname AS hostname FROM system_info;" ] }, "options": { "disable_distributed": false, "distributed_interval": 10, "distributed_plugin": "tls", "distributed_tls_max_attempts": 3, "pack_delimiter": "/" }, "packs": { "normal": { "queries": { "osquery": { "query": "SELECT * FROM osquery_info", "interval": 10, "platform": "", "version": "", "removed": true, "shard": 1 }, "processes": { "query": "SELECT * FROM processes", "interval": 10, "platform": "", "version": "", "removed": true, "shard": 2 } } } } }
{"node_key":"B4YMnz3q4v5/Q7zSb1Na9i0IUBInAW0F"}
{ "queries": {} }
{"node_key":"B4YMnz3q4v5/Q7zSb1Na9i0IUBInAW0F"}
{ "queries": {} }
{"node_key":"B4Y
I0304 185401.665854 13242 config.cpp:1205] Refreshing configuration state
I0304 185401.666270 13242 tls.cpp:254] TLS/HTTPS POST request to URI: https://ec2-54-75-78-244.eu-west-1.compute.amazonaws.com:8080/api/v1/osquery/config
ubuntu@ip-172-31-39-251:~$ ls -lt /tmp/osquery_result
-rw-r--r-- 1 root root 0 Mar 4 18:31 /tmp/osquery_result
I am not getting query pack results on Fleet either.
I am using osquery version 4.5.1
z
Ah, you are setting shard to 1 and 2, meaning only 1% or 2% of the hosts that receive the query will actually schedule it.
So it's most likely that the host just isn't scheduling the query.
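For context, shard in a pack is a percentage (1 to 100) of enrolled hosts that should schedule that query, so shard: 1 targets roughly 1% of hosts. A minimal sketch of the same pack targeting every host; setting shard to 100 (or omitting it, as with the 0 mentioned below) schedules the query everywhere, and the pack/query names simply mirror the config shown above:

```json
{
  "packs": {
    "normal": {
      "queries": {
        "osquery": {
          "query": "SELECT * FROM osquery_info",
          "interval": 10,
          "shard": 100
        }
      }
    }
  }
}
```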
s
Yes, I had set shard to 1. Changing this to 0 works. Thanks @zwass, appreciate your help.
There are two issues here: 1. setting shard (resolved). 2. If I pass the config options to not send logs over TLS (--logger_plugin=filesystem --logger_path=/tmp), the daemon stops sending results for scheduled queries to Fleet. Is there any option to send only the result logs to Fleet and not the status logs?
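Not a definitive answer to the status-log question, but one option worth noting: --logger_plugin accepts a comma-separated list, so results can keep flowing to Fleet over TLS while a local copy is also written to disk. A sketch (the endpoint shown is Fleet's usual log endpoint; adjust for your setup):

```
--logger_plugin=tls,filesystem
--logger_tls_endpoint=/api/v1/osquery/log
--logger_path=/tmp
```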
If the logger is configured for TLS and Fleet is offline for any reason, the osquery daemon keeps trying to send the logs/results even though the TLS connection is not there: Error sending status to logger: Request error: Failed to connect to ec2-54-75-78-244.eu-west-1.compute.amazonaws.com:8080: Connection refused. After a few hours, osqueryd starts crashing. To prevent the issue I chose to send the logs to the local filesystem, but then it stops sending results as well.
Would setting --buffered_log_max help prevent such crashes, if they are due to the buffer growing when logs/results cannot be sent over TLS?
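For reference, --buffered_log_max caps how many result/status logs the buffered TLS logger keeps locally while the server is unreachable; a sketch with an arbitrary cap (the value here is just an example):

```
--logger_plugin=tls
--logger_tls_endpoint=/api/v1/osquery/log
--buffered_log_max=500000
```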
z
Osquery should not crash... If it does that's a bug.
s
Seems I have been hitting the bug https://github.com/osquery/osquery/issues/6887, which was fixed 15 days ago. I have cherry-picked the changes.
👍 1