Lichao Li
01/22/2025, 11:40 PM

Rebecca Cowart
01/23/2025, 3:58 PM

Rebecca Cowart
01/23/2025, 4:09 PM

Lichao Li
01/23/2025, 4:51 PM
> Have these queries had a chance to run yet?
I checked the UI again, and all the queries with ‘Frequency’ set have performance impact values; otherwise I need to trigger a live query for the value to appear. Looks good. Thanks for the doc link. It looks like Fleet’s performance impact is determined by Duration (the time taken for a query to execute), which explains why a query Fleet reported as ‘Minimal’ was reported as ‘highest impact’ by osquery’s profile.py. Will you consider adding CPU/fd/memory usage to the performance impact calculation?
Kathy Satterlee
01/23/2025, 5:50 PM

Lichao Li
01/24/2025, 12:10 AM
> If that data were available from osquery
AFAIK those values are not available in the osquery DB. You need to download the script and then run it against an osquery.conf (with the queries).
Lichao Li
03/07/2025, 8:31 PM
`(system_time + user_time) * 100.0 / wall_time_ms` provides the average CPU utilization (%) of each query.
Reference: how `top` calculates `%CPU`: https://man7.org/linux/man-pages/man1/top.1.html
> %CPU -- CPU Usage
> The task's share of the elapsed CPU time since the last screen update, expressed as a percentage of total CPU time.
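For example, a rough Python sketch along these lines (my own, assuming `osqueryi` is on PATH and that your osquery build exposes `user_time`, `system_time`, and `wall_time_ms` in `osquery_schedule`) would print that percentage for each scheduled query:
```python
import json
import subprocess

SQL = "SELECT name, user_time, system_time, wall_time_ms FROM osquery_schedule;"

def scheduled_query_cpu_pct() -> None:
    # Run the query through the local osqueryi shell and parse its JSON output.
    result = subprocess.run(
        ["osqueryi", "--json", SQL],
        capture_output=True, text=True, check=True,
    )
    for row in json.loads(result.stdout):
        cpu_ms = int(row["user_time"]) + int(row["system_time"])
        wall_ms = int(row["wall_time_ms"])
        # (system_time + user_time) * 100.0 / wall_time_ms, guarded against
        # the zero-wall-time case mentioned in the caveats below.
        pct = cpu_ms * 100.0 / wall_ms if wall_ms else 0.0
        print(f"{row['name']}: {pct:.1f}% CPU")

if __name__ == "__main__":
    scheduled_query_cpu_pct()
```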
Some caveats from a sample live query of `osquery_schedule` I captured:
• Sometimes `wall_time_ms` is zero, so it’s good to check for a nonzero value before dividing.
• Sometimes `stime` and `utime` are 10 ms each while wall time is 1 ms, so the CPU utilization shows as 2000%, even though those queries are fast and performant:
	◦ When wall time (elapsed time) is smaller than `stime`/`utime`, and since we know `osqueryd` runs on a single core, we can probably ignore such cases (e.g. change the result to 0); see the sketch after this list.
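Putting both caveats together, a helper along these lines (just a sketch; the name and the choice to clamp to 0 are my own suggestions) is what I had in mind:
```python
def cpu_utilization_pct(system_time_ms: int, user_time_ms: int, wall_time_ms: int) -> float:
    """CPU utilization (%) of a query, with both caveats above applied."""
    cpu_ms = system_time_ms + user_time_ms
    # Caveat 1: wall_time_ms is sometimes zero; never divide by it.
    if wall_time_ms <= 0:
        return 0.0
    # Caveat 2: if CPU time exceeds wall time, the >100% figure is an
    # accounting artifact on fast queries (osqueryd runs on a single core),
    # so treat those cases as 0 rather than reporting e.g. 2000%.
    if cpu_ms > wall_time_ms:
        return 0.0
    return cpu_ms * 100.0 / wall_time_ms
```
Capping at 100% instead of zeroing out would be another option if you’d rather keep some signal for those fast queries.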
Let me know if I missed anything 🙏

Unthread
03/07/2025, 9:04 PM