# fleet
r
I’m noticing filesystem logging stopped working for all my macOS systems after the 4.4.0 upgrade. The packs appear to be running, based on the web GUI and on running fleet in debug mode, but my result.log doesn’t contain anything except the one Linux system I have.
t
so you're saying that when you run fleet in debug mode, it all works, but when you disable debug logging it stops working?
r
debug shows the queries running but does NOT log them; sorry, I should have been more specific
t
could you share some logs for a fleet with --logging_debug?
r
anything specific to be more helpful?
t
I want to see if there are any errors when posting to the endpoints that log results and status, and anything else that might stand out
r
{"component":"http","err":"failed to save host software: context canceled","ip_addr":"127.0.0.1:59788","level":"debug","method":"POST","took":"15.546673753s","ts":"2021-10-13T13:42:25.122075164Z","uri":"/api/v1/osquery/distributed/write","x_for_ip_addr":"redacted"}
{"component":"http","ip_addr":"127.0.0.1:32976","level":"debug","method":"POST","took":"1.091589ms","ts":"2021-10-13T13:44:22.47424425Z","uri":"/api/v1/osquery/distributed/read","x_for_ip_addr":"redacted"}
{"component":"http","ip_addr":"127.0.0.1:32978","level":"debug","method":"POST","took":"6.566669ms","ts":"2021-10-13T13:44:22.529058068Z","uri":"/api/v1/osquery/distributed/write","x_for_ip_addr":"redacted"}
{"component":"http","err":"stream error: stream ID 1; INTERNAL_ERROR","level":"info","path":"/api/v1/osquery/distributed/write","ts":"2021-10-13T00:53:05.328093401Z"}
{"component":"http","err":"stream error: stream ID 1; INTERNAL_ERROR","level":"info","path":"/api/v1/osquery/distributed/write","ts":"2021-10-13T01:07:28.027777727Z"}
{"component":"http","err":"stream error: stream ID 1; INTERNAL_ERROR","level":"info","path":"/api/v1/osquery/distributed/write","ts":"2021-10-13T01:08:12.535135764Z"}
{"component":"http","err":"stream error: stream ID 1; INTERNAL_ERROR","level":"info","path":"/api/v1/osquery/distributed/write","ts":"2021-10-13T01:10:05.498371226Z"}
{"component":"http","err":"stream error: stream ID 1; INTERNAL_ERROR","level":"info","path":"/api/v1/osquery/distributed/write","ts":"2021-10-13T01:13:45.067581835Z"}
2021/10/13 02:23:20 http: TLS handshake error from 127.0.0.1:56164: EOF
{"component":"http","err":"stream error: stream ID 1; INTERNAL_ERROR","level":"info","path":"/api/v1/osquery/distributed/write","ts":"2021-10-13T02:53:16.070657637Z"}
{"component":"http","err":"decoding JSON: stream error: stream ID 1; INTERNAL_ERROR","level":"info","path":"/api/v1/osquery/log","ts":"2021-10-13T03:02:58.206417002Z"}
2021/10/13 03:38:09 http: TLS handshake error from 127.0.0.1:46326: EOF
{"component":"http","err":"stream error: stream ID 1; INTERNAL_ERROR","level":"info","path":"/api/v1/osquery/distributed/write","ts":"2021-10-13T05:39:04.186353129Z"}
{"component":"http","err":"stream error: stream ID 1; INTERNAL_ERROR","level":"info","path":"/api/v1/osquery/distributed/write","ts":"2021-10-13T07:03:10.142720041Z"}
2021/10/13 07:08:17 http: TLS handshake error from 127.0.0.1:52604: EOF
2021/10/13 07:38:05 http: TLS handshake error from 127.0.0.1:41608: EOF
{"component":"http","err":"stream error: stream ID 1; INTERNAL_ERROR","level":"info","path":"/api/v1/osquery/distributed/write","ts":"2021-10-13T08:22:08.675437535Z"}
{"component":"http","err":"stream error: stream ID 1; INTERNAL_ERROR","level":"info","path":"/api/v1/osquery/distributed/write","ts":"2021-10-13T08:55:46.736319219Z"}
2021/10/13 09:37:58 http: TLS handshake error from 127.0.0.1:52530: EOF
{"component":"service","err":"detail_query_os_version expected single result got 0","method":"IngestFunc","ts":"2021-10-13T09:45:46.489181632Z"}
2021/10/13 10:28:16 http: TLS handshake error from 127.0.0.1:53638: EOF
{"component":"service","err":"detail_query_network_interface expected 1 or more results","method":"IngestFunc","ts":"2021-10-13T10:57:31.823445805Z"}
{"component":"http","err":"stream error: stream ID 1; INTERNAL_ERROR","level":"info","path":"/api/v1/osquery/distributed/write","ts":"2021-10-13T11:02:55.028527116Z"}
2021/10/13 13:13:15 http: TLS handshake error from 127.0.0.1:59894: EOF
I added logging to fleet via the startup script and got the above
t
those errors shouldn't prevent results and status from being logged as expected, as those go through a different endpoint
do you see any log lines for /api/v1/fleet/status/result_store?
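for example, something like this against the server log (the log path is just an assumption; adjust to wherever fleet's output goes):

# look for result-store status lines in the Fleet server log
# (/var/log/fleet/fleet.log is an assumed path, not a fleet default)
grep '/api/v1/fleet/status/result_store' /var/log/fleet/fleet.log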
r
nothing
t
are you running fleet with --logging_debug?
r
yes /usr/local/bin/fleet serve --config /etc/fleetdm/fleetdm.yml --logging_debug
t
ok, let's give it a bit of time and see what happens when hosts start submitting results. in the meantime, could you tell me the output of fleetctl get config --yaml?
r
---
apiVersion: v1
kind: config
spec:
  agent_options:
    config:
      decorators:
        load:
        - SELECT uuid AS host_uuid FROM system_info;
        - SELECT hostname AS hostname FROM system_info;
      file_paths:
        binaries:
        - /usr/bin/%%
        - /usr/sbin/%%
        - /bin/%%
        - /sbin/%%
        - /usr/local/bin/%%
        - /usr/local/sbin/%%
        - /opt/bin/%%
        - /opt/sbin/%%
        configuration:
        - /etc/%%
        efi:
        - /System/Library/CoreServices/boot.efi
      options:
        disable_distributed: false
        disable_tables: windows_events
        distributed_interval: 10
        distributed_plugin: tls
        distributed_tls_max_attempts: 3
        logger_plugin: tls
        logger_snapshot_event_type: true
        logger_tls_endpoint: /api/v1/osquery/log
        logger_tls_period: 10
        pack_delimiter: /
    overrides: {}
  host_expiry_settings:
    host_expiry_enabled: true
    host_expiry_window: 30
  host_settings:
    enable_host_users: true
    enable_software_inventory: true
  org_info:
    org_logo_url: ""
    org_name: XXXXXX
  server_settings:
    enable_analytics: false
    live_query_disabled: false
    server_url: https://XXXXX
  smtp_settings:
    authentication_method: "0"
    authentication_type: "0"
    configured: false
    domain: ""
    enable_smtp: false
    enable_ssl_tls: true
    enable_start_tls: true
    password: ""
    port: 587
    sender_address: ""
    server: ""
    user_name: ""
    verify_ssl_certs: false
  sso_settings:
    enable_sso: true
    enable_sso_idp_login: true
    entity_id: fleet
    idp_image_url: ""
    idp_name: XXXXX
    issuer_uri: XXXXX
    metadata: |-
      removed
    metadata_url: ""
  vulnerability_settings:
    databases_path: /etc/fleetdm/vuln
  webhook_settings:
    host_status_webhook:
      days_count: 0
      destination_url: ""
      enable_host_status_webhook: false
      host_percentage: 0
    interval: 24h0m0s
t
do you have any logs for /api/v1/osquery/log?
r
not that I can see
t
something's off. the hosts should be pushing logs every 10 seconds based on your config (logger_tls_period: 10). do you have any hosts online?
r
160 currently
t
could you run osqueryd with --tls_dump on a host that you are seeing results for, and one you're not, to see how those are behaving?
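for reference, a rough sketch of what that might look like on a Linux host (service name and flagfile path are assumptions; use the launchd equivalent on macOS):

# stop the managed service, then run osqueryd in the foreground;
# --tls_dump prints the raw TLS request/response bodies so you can
# compare what each host actually sends to /api/v1/osquery/log
sudo systemctl stop osqueryd
sudo osqueryd --flagfile=/etc/osquery/osquery.flags --tls_dump --verbose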
r
🙌 looks like something with the update increased the body size on the POST request to /api/v1/osquery/log, so I’m getting a 413 error in Nginx.
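the usual fix there is raising nginx's client_max_body_size on the proxy in front of fleet (nginx defaults to 1m and returns 413 when a request body exceeds it); a minimal sketch, with hypothetical names and an assumed upstream:

server {
    listen 443 ssl;
    server_name fleet.example.com;              # hypothetical host name
    ssl_certificate     /etc/nginx/fleet.pem;   # assumed cert paths
    ssl_certificate_key /etc/nginx/fleet.key;
    client_max_body_size 20m;                   # default is 1m; 20m is an assumption, tune to your hosts
    location / {
        proxy_pass https://127.0.0.1:8080;      # assumed Fleet listen address
    }
}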
t
all right!
r
good now, thanks!