# core
I don't think those use cases can be implemented in osquery, as they would encourage heavy workloads behind queries, and those will cause osquery to be terminated by the watcher: either because the query exceeds the watcher's CPU/memory limit thresholds, or because the table implementation doesn't return soon enough. (EDIT: I guess the latter only applies to extensions; in core we have no heartbeats, so a slow table would freeze osquery instead.)
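For anyone following along, the thresholds in question come from the `--watchdog_memory_limit` and `--watchdog_utilization_limit` flags. A minimal way to check what a given host is actually running with, assuming a stock build where the `osquery_flags` table is available:

```
-- Show the watchdog limits currently in effect on this host.
SELECT name, value
FROM osquery_flags
WHERE name LIKE 'watchdog%';
```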
Hmm, I hashed the full file system (Mac) on my system as a test (and on a Windows VM), and osquery neither froze nor was killed. But I understand those tests are probably not representative of the majority of systems.
```
osquery> select count(*) from file_list join hash on file_list.path = hash.path where path_start='/' and path_limit=-1;
+----------+
| count(*) |
+----------+
| 2354337  |
+----------+
Run Time: real 1888.383 user 1026.227232 sys 437.948222
osquery> select count(*) from file_list left join hash on file_list.path = hash.path where path_start='/' and path_limit=-1 and path_filter='.*\.php';
+----------+
| count(*) |
+----------+
| 165      |
+----------+
Run Time: real 175.089 user 75.868881 sys 61.977795
```
I think all of these use cases are really beneficial, and it would be great if they could be implemented in osquery. That said, some of the use cases above are really only applicable to a specific set of systems, depending on the scenario.

Hashing the full file system is a pretty common step when doing incident response (IR), or at least MD5ing every single file regardless of size (with the possible exception of obscenely large files). But a full-file-system hash should really only be run on systems already identified with malicious or suspicious activity. For example, an AV alert for REvil (ransomware) on a Windows server/DC would probably trigger an organization's incident plan. All plans are different, but one of the steps could be to investigate that server, and pulling a full file listing (and hashing/MD5ing the files) would be a minimum requirement, in addition to collecting a lot of other data to determine how the malware got on it. Pulling a full file listing on all systems shouldn't really be done (IMO).

The other examples mentioned above are valuable as well, for hunting and other scenarios. For webshells, an organization can hunt (likely only on servers) with YARA plus a file listing, or carve all web-facing files (jsp, php, aspx, etc.) and then run a large set of YARA rules offline, or any other webshell scanner tool. Another scenario: a security organization or law enforcement may tell an organization that they have webshell(s) on the public website their customers use to log in and buy things. The file listing could be combined with YARA, or with carving, to collect the web files for review (see the sketch below).

For the concerns you mentioned, do you have any ideas on ways they could be addressed?
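A minimal sketch of what the webshell-hunt side could look like with today's tables, assuming a build with YARA support and carving enabled; the rule file `/opt/ir/webshells.yar` and web root `/var/www` are placeholders:

```
-- YARA-scan everything under the web root for webshell signatures.
SELECT path, matches, count
FROM yara
WHERE path IN (SELECT path FROM file WHERE path LIKE '/var/www/%%')
  AND sigfile = '/opt/ir/webshells.yar'
  AND count > 0;

-- Or carve the web files off the host for offline scanning
-- (requires the carver to be enabled, i.e. --disable_carver=false).
SELECT * FROM carves
WHERE path LIKE '/var/www/%%' AND carve = 1;
```

Both of these still walk the whole web root, so they would run into the same watcher limits being discussed; scoping the `LIKE` pattern per-server is one way to keep each query small.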