#general

Prateek Kumar Nischal

10/06/2020, 4:34 PM
Hey guys, I have been implementing FIM in my org using the process_file_events table, and it works well on regular systems. But lately, on certain systems with high I/O, CPU usage goes kind of crazy, shooting up to 150% parsing all those read, open, and write syscalls, and sometimes exceeding the memory limit. Most of those events are noise. Because of this, the watchdog keeps going crazy on osquery and respawning the daemon. Is there something obvious I am missing in the config? The audit params are almost default, with just
events_max=50000
which I don’t think is very tightly related to the audit volume. I don’t have any immediate optimisations to keep the CPU under control, and I don’t want to move the watchdog to level -1 (i.e. unbounded). Is there something I can do about this? I am fine for now with dropping a few events (I know auditd does that) if there are too many.
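[Editor's note: for readers following along, a sketch of what a flagfile tuned for this situation might look like. The flags are real osquery options, but the specific values here are illustrative assumptions, not tested recommendations.]

```
# Hypothetical osqueryd flagfile sketch for audit-based FIM under load.
# Values below are examples only; tune per host.
--disable_audit=false
--audit_allow_fim_events=true
# Keep the inotify-based file_events publisher off so only one FIM
# mechanism consumes events (relevant to the discussion below):
--enable_file_events=false
# Cap the buffered event count and expire old rows sooner:
--events_max=50000
--events_expiry=3600
# Raise the watchdog ceilings instead of disabling it entirely
# (watchdog_level=-1):
--watchdog_utilization_limit=130
--watchdog_memory_limit=350
```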

alessandrogario

10/06/2020, 4:56 PM
The first issue may be that the file_events table is also starting up. There is a draft PR here: https://github.com/osquery/osquery/pull/6663 Audit is rather heavy, sadly, and it generates quite a lot of data that we need to parse when audit-based FIM is used. One thing that can be done is to use cgroups to limit the maximum CPU usage (I was told that's how osquery was deployed at Facebook)
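[Editor's note: on a systemd host, the cgroup CPU cap suggested above can be applied without touching cgroupfs directly, via a unit drop-in. The path, unit name, and limits below are illustrative assumptions.]

```
# Hypothetical drop-in, e.g.
# /etc/systemd/system/osqueryd.service.d/limits.conf
[Service]
# Throttle osqueryd to roughly half a core via the cgroup CPU controller:
CPUQuota=50%
# Optional hard memory ceiling enforced by the cgroup memory controller:
MemoryMax=512M
```

After adding the drop-in, reload and restart with `systemctl daemon-reload && systemctl restart osqueryd`. Note that a CPU quota throttles the process when it exceeds its allowance within a scheduling period; it does not kill or respawn it.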

Prateek Kumar Nischal

10/06/2020, 5:33 PM
My cgroups knowledge is a bit weak.. it would just throttle any extra utilization until the quota refreshes, rather than respawn the process, am I right? And yes, disabling inotify will definitely help a bit with the memory usage.. but I am afraid it would still be overshadowed by the huge compute cost of audit