# macos
g
I’m trying to figure out how to detect Touch ID changes on macOS. One of the branches of the threat model I’d like to prune is changes to Touch ID. The thought is to send a user through a different flow that requires a secondary step if there is a change to Touch ID on a user’s Mac. Does anyone know how to check this using osquery? We are managing it with Fleet. I see the `bioutil` command, but that isn’t great for change detection. Maybe there’s a way to query changes via `asl` logging? Asking here because possibly someone has already figured this out.
s
At Kolide we never found a better way than execing `bioutil`
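For reference, a minimal sketch of the exec-and-diff approach. The `bioutil -c` invocation (print the enrolled fingerprint count) is an assumption — flags and output format vary across macOS versions, so verify on a real machine before relying on it:

```shell
# Sketch: detect Touch ID enrollment changes by diffing snapshots of
# bioutil output between runs. The bioutil flags are an assumption;
# verify the output on your macOS version.
STATE_FILE="${STATE_FILE:-/var/tmp/touchid_enrollment.hash}"

snapshot() {
  # Hash the enrollment state so we only store and compare a digest.
  bioutil -c 2>/dev/null | shasum -a 256 | awk '{print $1}'
}

check_change() {
  # Compare a new snapshot hash against the stored one; prints
  # "changed" or "unchanged" and updates the stored state.
  new="$1"
  old=""
  [ -f "$STATE_FILE" ] && old=$(cat "$STATE_FILE")
  printf '%s' "$new" > "$STATE_FILE"
  if [ -n "$old" ] && [ "$old" != "$new" ]; then
    echo "changed"
  else
    echo "unchanged"
  fi
}
```

You’d run `check_change "$(snapshot)"` on a schedule and route the user into the secondary flow whenever it prints `changed`.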
g
Dang. Thanks for letting me know. I see the `bioutil` PRs in the Kolide repo and was hoping there was more secret sauce. If I figure it out, I’ll share my findings. @seph do you happen to know if there is something concrete for Windows Hello fingerprint/face enrollment detection?
s
On macOS, there’s probably a private framework. But inside launcher, we can exec, and it’s easy.
For Windows, some of this stuff might be lurking in the registry around:
• `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\WinBio\AccountInfo\%\EnrolledFactors`
• `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\NgcPin\Credentials\%\EncryptedPassword`
• `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\FaceLogonEnrolledUsers\%`
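Those keys should be reachable from osquery’s `registry` table, where `%` acts as a wildcard segment in `key`. A sketch of the first one (whether the `EnrolledFactors` data actually changes on re-enrollment is an assumption to verify on a real Windows machine):

```shell
# Sketch: an osquery query over the Windows registry table for Windows
# Hello enrollment state. The key path comes from the thread; whether
# its data changes on (re)enrollment is an assumption.
WINBIO_QUERY="SELECT key, name, data, mtime
FROM registry
WHERE key LIKE 'HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\WinBio\AccountInfo\%\EnrolledFactors';"

# On a Windows host you'd run it with: osqueryi "$WINBIO_QUERY"
echo "$WINBIO_QUERY"
```

Watching `mtime` on those keys may be enough for change detection even if the `data` blobs are opaque.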
g
I found logs using `log stream` filters. Specifically, adding a Touch ID fingerprint on macOS can be caught with this filter:
`log stream --predicate 'subsystem == "com.apple.preference.passwordpref" && eventMessage CONTAINS[c] "BiometricKitUIEnrollResult 4"'`
Do you know how I could get this via osquery? This doc helped me query the log: https://eclecticlight.co/2016/10/17/log-a-primer-on-predicates/
Removal of individual fingerprints:
`log stream --predicate 'subsystem == "com.apple.preference.passwordpref" && eventMessage CONTAINS[c] "removeFingerprintWithUUID"'`
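Those two predicates can also be merged into a single filter, which is handy if you want one log query to catch both enroll and remove events. The predicate strings themselves are verbatim from the two commands above; only the `||` grouping is new:

```shell
# Combine the enroll + remove predicates into one unified-log filter.
# The two eventMessage matches are taken verbatim from the thread.
PREDICATE='subsystem == "com.apple.preference.passwordpref" && (eventMessage CONTAINS[c] "BiometricKitUIEnrollResult 4" || eventMessage CONTAINS[c] "removeFingerprintWithUUID")'

# On a Mac you'd watch live with:
#   log stream --predicate "$PREDICATE"
echo "$PREDICATE"
```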
@Brad Girardeau I saw you had some issues with the `unified_log` table. Do you have, or know of, example code querying it that I could use? (re: your comment)
b
Yeah, in theory you'd want something like this query:
```sql
select * from unified_log where
  timestamp > -1 and
  max_rows = 1000 and
  predicate = 'subsystem == ...';
```
The challenges with that simple query are:
• Across your schedule, you can only have one query with `timestamp > -1`, otherwise they will steal each other's log entries, because the last-read log entry is a global value for osquery
  ◦ You have to be particularly careful that the SQL planner doesn't split this up to run two queries behind the scenes too -- generally only use AND conditions and confirm with `osqueryi --planner`
• When the query first runs, it will scan the entire unified log, which takes several minutes and uses gigabytes of memory. This causes the osquery watchdog to kill the query (and anecdotally this maybe triggers a bug in Apple's framework; it doesn't seem to like being interrupted in the middle of querying)
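The planner check from the second point can be scripted: pipe the query into `osqueryi --planner` and read the plan output. This assumes `osqueryi` is installed and accepts the query on stdin:

```shell
# Sketch: confirm a unified_log query compiles to a single table scan.
# --planner dumps SQLite query-plan tracing so you can check the WHERE
# constraints aren't split into two scans behind the scenes.
SQL="select * from unified_log where timestamp > -1 and max_rows = 1000 and predicate = 'subsystem == \"com.apple.preference.passwordpref\"';"

if command -v osqueryi >/dev/null 2>&1; then
  echo "$SQL" | osqueryi --planner
else
  echo "osqueryi not installed; query to check was:"
  echo "$SQL"
fi
```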
What we're doing is something like this instead:
```sql
with now_ts as (
  select unix_time as t from time
),
filtered_log as (
  select * from unified_log where
    timestamp > -1 and
    timestamp > (select t - 720 from now_ts) and
    max_rows = 10000 and
    predicate = concat(
      '([event predicate1]) || ',
      '([event predicate2])'
    )
),
event_type1 as (
  select
    'event_type1' as event_type, timestamp, message from filtered_log where ...
),
event_type2 as (
  select
    'event_type2' as event_type, timestamp, message from filtered_log where ...
)
select * from event_type1 union
select * from event_type2
```
This does a single query against the unified log to grab every event type that passes any of the initial filtering predicates; the later CTEs then map each event of interest into a common set of columns (`json_object` comes in handy there). It only goes back in the log at most 12 minutes, to keep runtime and resource usage under control, so we're just accepting some event loss outside that lookback window. I'd like to dig in more and get that 12 minutes up to at least a few hours. We already had to bump watchdog settings, especially for memory (~3.5GB), to avoid this getting denylisted on most machines, and it still gets denylisted on ~5% of our fleet on any given day. Some apps just seem to spew a huge amount of noise into this log. You could try being even more permissive in watchdog settings, at the risk of more problems from a bad query elsewhere
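Putting the thread together: a sketch that drops the two Touch ID predicates from earlier into this CTE pattern. Untested against a live fleet — the event-type names and the inner `WHERE` message matches are assumptions:

```shell
# Sketch: the CTE pattern above filled in with the two Touch ID
# predicates found earlier in the thread. The inner LIKE filters and
# event_type labels are assumptions; verify against real log output.
SQL=$(cat <<'EOF'
with now_ts as (
  select unix_time as t from time
),
filtered_log as (
  select * from unified_log where
    timestamp > -1 and
    timestamp > (select t - 720 from now_ts) and
    max_rows = 10000 and
    predicate = concat(
      '(subsystem == "com.apple.preference.passwordpref" && ',
      'eventMessage CONTAINS[c] "BiometricKitUIEnrollResult 4") || ',
      '(subsystem == "com.apple.preference.passwordpref" && ',
      'eventMessage CONTAINS[c] "removeFingerprintWithUUID")'
    )
),
enrolls as (
  select 'touchid_enroll' as event_type, timestamp, message
  from filtered_log where message like '%BiometricKitUIEnrollResult 4%'
),
removals as (
  select 'touchid_remove' as event_type, timestamp, message
  from filtered_log where message like '%removeFingerprintWithUUID%'
)
select * from enrolls union
select * from removals
EOF
)
echo "$SQL"
```

Scheduled in Fleet as a differential query, any new row would be the trigger for the secondary verification flow.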
g
This is fantastically helpful, thank you! The gist of the memory management sounds like shorter timeframes are better. I should have time in the next two days to put this to the test and see how it performs.
b
Yeah definitely curious how it goes for you or any insights you find! I ran out of time to spend on it, but would like to understand and optimize more eventually