Ryan [02/08/2022, 6:20 PM]
level=error ts=2022-02-08T18:18:27.469599841Z op=QueriesForHost err="load active queries: EOF"
Tomas Touceda [02/08/2022, 6:24 PM]
Tomas Touceda [02/08/2022, 6:24 PM]
Ryan [02/08/2022, 6:24 PM]
Ryan [02/08/2022, 6:24 PM]
Ryan [02/08/2022, 6:24 PM]
Tomas Touceda [02/08/2022, 6:24 PM]
Ryan [02/08/2022, 6:24 PM]
Ryan [02/08/2022, 6:25 PM]
Ryan [02/08/2022, 6:58 PM]
Ryan [02/08/2022, 6:58 PM]
method=POST uri=/api/v1/osquery/distributed/write took=1.86989925s ip_addr=<ip>:45002 x_for_ip_addr= ingestion-err="campaign waiting for listener (please retry)" err="timestamp: 2022-02-08T18:57:28Z: error in query ingestion"
Ryan [02/08/2022, 6:59 PM]
Tomas Touceda [02/08/2022, 6:59 PM]
Ryan [02/09/2022, 10:31 AM]
Ryan [02/09/2022, 10:39 AM]
tlsconnect.go
Ryan [02/09/2022, 10:51 AM]
2022/02/09 10:50:12 pool created successfully
2022/02/09 10:50:13 command result: NOAUTH Authentication required. ; NOAUTH Authentication required.
Looks good! The AUTH value is set in the Fleet.yml, though it doesn't seem to be possible to test that with the tlsconnect.go tool.
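For context on the NOAUTH replies above: a Redis server configured with requirepass answers most commands with "-NOAUTH Authentication required." until the client sends AUTH, so a connectivity checker like tlsconnect.go that dials but never authenticates will see exactly this. The sketch below (not the actual tlsconnect.go source, and the password is a placeholder) shows the RESP wire encoding such a tool would write over the TLS connection:

```go
package main

import (
	"fmt"
	"strings"
)

// respEncode builds a Redis RESP array for a command: the wire format a
// TLS connectivity tool would write after dialing, e.g. AUTH then PING.
func respEncode(args ...string) string {
	var b strings.Builder
	fmt.Fprintf(&b, "*%d\r\n", len(args))
	for _, a := range args {
		fmt.Fprintf(&b, "$%d\r\n%s\r\n", len(a), a)
	}
	return b.String()
}

func main() {
	// With requirepass set, the server replies "-NOAUTH Authentication
	// required." to PING until an AUTH command with the right password
	// succeeds. "s3cret" here is a hypothetical placeholder.
	fmt.Printf("%q\n", respEncode("AUTH", "s3cret"))
	fmt.Printf("%q\n", respEncode("PING"))
}
```

Testing AUTH separately like this would distinguish a TLS/certificate problem from a password mismatch, which the plain connect test above cannot do.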
Ryan [02/09/2022, 11:00 AM]
ingestion-err="writing results: PUBLISH failed to channel results_49: EOF" err="timestamp: 2022-02-09T10:57:52Z: error in query ingestion"
Ryan [02/09/2022, 11:01 AM]
Ryan [02/09/2022, 11:01 AM]
Tomas Touceda [02/09/2022, 11:04 AM]
Ryan [02/09/2022, 11:04 AM]
Ryan [02/09/2022, 11:06 AM]
Benjamin Edwards [02/09/2022, 1:33 PM]
Ryan [02/09/2022, 4:29 PM]
Ryan [02/09/2022, 4:30 PM]
Ryan [02/09/2022, 4:30 PM]
Ryan [02/09/2022, 4:31 PM]
Benjamin Edwards [02/09/2022, 4:37 PM]
Ryan [02/10/2022, 11:42 AM]
Ryan [02/10/2022, 11:42 AM]
Ryan [02/10/2022, 11:43 AM]
Ryan [02/10/2022, 11:54 AM]
Ryan [02/10/2022, 11:54 AM]
Ryan [02/10/2022, 11:54 AM]
Ryan [02/10/2022, 6:16 PM]
address?
But we do set the tls_server_name config option to the IP to fix that error, so it should be fine? It's a Redis 6 instance too, is that significant?
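For reference, the settings being discussed live under the redis section of the Fleet server configuration. This fragment is a hedged sketch (all values are placeholders, and option names should be double-checked against the Fleet configuration docs for the deployed version): when the address is an IP that doesn't appear in the server certificate, tls_server_name supplies the name to use for verification instead.

```yaml
redis:
  address: 10.0.0.5:6379      # placeholder IP:port of the Redis 6 instance
  password: examplepassword   # must match the server's requirepass (the AUTH value)
  use_tls: true
  tls_server_name: 10.0.0.5   # name used for cert verification when address doesn't match the cert
```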
Ryan [02/10/2022, 6:17 PM]
Ryan [02/10/2022, 6:17 PM]
Tomas Touceda [02/10/2022, 6:20 PM]
Tomas Touceda [02/10/2022, 6:20 PM]
Ryan [02/10/2022, 6:23 PM]
Ryan [02/10/2022, 6:24 PM]
Ryan [02/10/2022, 6:24 PM]
Ryan [02/10/2022, 6:24 PM]
Ryan [02/11/2022, 4:38 PM]
fleet[9704]: level=error ts=2022-02-11T16:29:51.534788035Z component=http method=POST uri=/api/v1/osquery/distributed/write took=196.493333ms ip_addr=<ip>:51318 x_for_ip_addr= ingestion-err="campaign waiting for listener (please retry)" err="timestamp: 2022-02-11T16:29:51Z: error in query ingestion"
However, the distributed query executes just fine and I get results back from all hosts, so I'm not really sure what's going on, but I'm going to leave it in this configuration for a while and see how it goes.
Ryan [02/11/2022, 4:38 PM]