Liveness Detection and Injection Attack Defense in FedRAMP Onboarding

In 2022, I spent three weeks reconstructing a documentation package after a DCSA audit flagged that our identity records had been stored outside our FedRAMP boundary. That experience eventually led me to build Verifyfed. But there was a second problem I discovered during that audit review that doesn't get talked about nearly enough: the selfies we'd collected for identity verification were completely unprotected against replay attacks. Anyone with a photo from LinkedIn and basic screen-recording software could have passed our biometric check. Every time.

This article is about why liveness detection matters for FedRAMP-scope onboarding, how injection attacks actually work, and why the right response to an edge case is a human adjudicator queue rather than an automatic rejection.

The Three Attack Vectors We See in Practice

When we talk to contractors about biometric verification risks, most assume the threat is in-person physical impersonation. That's the least common scenario we encounter. The three vectors that actually show up in FedRAMP onboarding contexts are:

  • Video replay attacks. A pre-recorded or looped video of the legitimate employee is fed into the camera input during the selfie capture step. Older liveness checks that only detect blink or head-tilt can be defeated by a video clip with the right motion cues already embedded.
  • Digital injection via virtual camera drivers. Specialized software creates a fake camera device at the OS level. The identity verification system receives what it believes is a live camera stream, but the actual input is a synthetic feed generated from a static photo or deepfake rendering. The camera system never knows the difference unless you're checking driver fingerprints.
  • Emulator and frame-metadata spoofing. Submission happens through an Android or iOS emulator rather than a real device. Frame metadata exposed by a legitimate device capture session looks different from metadata produced by an emulator. Inconsistencies in frame timing, device identifiers, and sensor noise signatures are detectable if you're looking for them. Most verification vendors aren't.

Deepfakes are a fourth vector that gets more press coverage, but in our experience the injection-via-virtual-camera approach is far more accessible to a motivated bad actor than building a convincing real-time deepfake. The barrier is low. That's the part that matters.

How Active Liveness Challenges Work

Passive liveness detection analyzes a selfie frame for signs of spoofing: texture artifacts, depth inconsistencies, reflections. It works against low-quality attacks. Against a competently produced video replay or a virtual camera feed, passive detection fails at meaningful rates.

Active liveness detection presents the user with a randomized motion challenge before or during frame capture. Nod left. Look up. Blink twice. The randomization matters. A fixed challenge sequence can be anticipated and scripted into an injection attack. A randomized sequence with sufficient entropy means an attacker cannot pre-record or pre-generate the correct response.

In Verifyfed's implementation, the challenge sequence is generated server-side at challenge time and is never transmitted to the client in advance. The captured frames are validated server-side against the expected motion sequence, not client-side. This matters because client-side validation can be bypassed by intercepting and modifying the API response before the verification logic runs.

Our face-comparison model, run after a successful liveness challenge, operates at a false match rate below 0.01%, comfortably exceeding the 1-in-1,000 (0.1%) threshold NIST SP 800-63B sets for biometric comparison. To put that concretely: across 10,000 impostor verification attempts, fewer than 1 would be incorrectly accepted as a match for a different person's document photo. For a contractor onboarding 300 cleared hires per year, the expected number of false matches in any single contract period is effectively zero.
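The arithmetic behind that claim is a simple back-of-the-envelope calculation. Treating every one of the 300 annual attempts as a worst-case impostor attempt:

```python
# Expected false matches given a false match rate (FMR).
# Illustrative arithmetic only, not vendor-reported measurement data.
FMR = 0.0001             # below 0.01% = fewer than 1 in 10,000 impostor attempts
attempts_per_year = 300  # cleared hires onboarded annually

expected_false_matches = FMR * attempts_per_year
print(round(expected_false_matches, 4))  # 0.03 expected false matches per year
```

Even under that pessimistic assumption, the expected count stays two orders of magnitude below a single false match per year.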

Injection Attack Signals: What We Actually Check

Active liveness challenges are necessary but not sufficient on their own. An attacker with a real-time deepfake pipeline can respond to a motion challenge. The defense in depth is the injection signal layer.

Three signals we check on every submission:

Virtual Camera Driver Fingerprinting

Legitimate device camera drivers expose a specific metadata profile to the operating system. Virtual camera software, including OBS virtual camera, ManyCam, and similar tools, produces identifiable signatures in how it registers with the OS. We flag submissions where the camera device fingerprint matches known virtual camera driver patterns and route those to the adjudicator queue rather than auto-rejecting. A machine with OBS's virtual camera registered is not necessarily performing active injection. Context matters.
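The routing logic reduces to a classification step. A hedged sketch, with illustrative device-name patterns (real fingerprinting inspects driver metadata, not just the reported label):

```python
# Known virtual-camera driver name patterns. Illustrative list only;
# a production check would match on driver registration metadata.
KNOWN_VIRTUAL_CAMERA_PATTERNS = (
    "obs virtual camera",
    "manycam",
    "snap camera",
    "droidcam",
)

def classify_camera(device_name: str) -> str:
    """Return a routing decision for the verification workflow."""
    name = device_name.lower()
    if any(pattern in name for pattern in KNOWN_VIRTUAL_CAMERA_PATTERNS):
        # Route to human review: an installed virtual camera is context,
        # not proof of active injection.
        return "adjudicator_queue"
    return "auto_pass"
```

The key design choice is the return value on a match: a queue destination, never a hard rejection.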

Frame Metadata Inconsistency

A live capture session from a physical device produces frame metadata with specific characteristics: consistent sensor noise patterns, timestamps that advance in lockstep with the device clock, and device-specific identifier consistency across frames. Emulator sessions and injected video streams produce detectable deviations. Our system runs a frame-metadata consistency check across the captured liveness sequence, and inconsistencies above threshold are flagged for adjudicator review.
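The timestamp portion of that check can be illustrated with a simple jitter test. This is a sketch under assumed thresholds, not Verifyfed's production logic, which combines timing with sensor-noise and identifier signals:

```python
from statistics import median

def timestamp_inconsistency_flag(frame_timestamps_ms: list[float],
                                 jitter_tolerance: float = 0.25) -> bool:
    """Flag a capture whose inter-frame intervals deviate from the median
    interval by more than jitter_tolerance (as a fraction). Looped or
    injected streams often show discontinuities at the loop seam or
    perfectly uniform synthetic timing. Threshold is illustrative."""
    deltas = [b - a for a, b in zip(frame_timestamps_ms, frame_timestamps_ms[1:])]
    if len(deltas) < 2:
        return True  # too few frames to validate: flag for review
    m = median(deltas)
    if m <= 0:
        return True  # non-advancing clock is itself an inconsistency
    return any(abs(d - m) / m > jitter_tolerance for d in deltas)
```

A physical camera at roughly 30 fps yields deltas clustered near 33 ms; a spliced loop shows a one-off jump at the seam that this check catches.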

Emulator Artifact Detection

Android and iOS emulators used for automated testing leave artifacts in session behavior that differ from physical device behavior. Screen resolution patterns, touch event timing, device identifier formats, and battery state reporting all differ in detectable ways from a physical device in hand. In our tracking, emulator submissions account for roughly 2 to 4% of flagged exceptions in onboarding workflows for contractors running automated HR testing environments. Most of those are legitimate: the contractor's IT team is running integration tests. Human adjudicator review resolves them correctly without rejecting legitimate employees.
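One common way to combine artifacts like these is a weighted score against a flag threshold. The signal names and weights below are assumptions for the sketch; production detection draws on many more signals than shown:

```python
# Illustrative emulator-artifact signals and weights (assumed values).
EMULATOR_SIGNALS = {
    "generic_device_id": 2,     # e.g. generic/emulator build fingerprints
    "no_battery_telemetry": 1,  # emulators often report a fixed charge state
    "uniform_touch_timing": 2,  # scripted input lacks human timing variance
    "unlisted_resolution": 1,   # resolution absent from known panel lists
}
FLAG_THRESHOLD = 3

def emulator_flag(observed_signals: set[str]) -> bool:
    """True means route the submission to the adjudicator queue."""
    score = sum(EMULATOR_SIGNALS.get(s, 0) for s in observed_signals)
    return score >= FLAG_THRESHOLD
```

A single weak signal stays below threshold, which matters for exactly the legitimate HR-testing case described above; it takes a combination of artifacts to flag.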

Why FedRAMP Requires Human Adjudicator Review

Here's the thing. Automatic rejection of any submission that triggers an injection-attack signal sounds like the safe choice. It isn't.

FedRAMP identity verification requirements, grounded in the NIST SP 800-63A identity proofing guidance and the supporting SP 800-63B authentication requirements, require that identity verification decisions affecting individuals have a documented adjudication pathway that includes human review for exceptions. This is not about being lenient with attackers. It's about due process for the 2 to 4% of legitimate submissions that trigger a false positive on an injection signal check.

Consider the practical case. A cleared employee at a remote facility submits their liveness selfie on a personal laptop. The household has OBS installed for personal streaming and OBS virtual camera is registered as a system device even though it was never used in the submission session. Virtual camera driver fingerprint: flagged. But the actual submission came through the device's physical built-in camera.

Auto-reject means that employee's onboarding stalls. The contractor's coordinator has to restart the flow. The employee has to submit again, potentially from a different device. Timelines slip. At $15,000 to $90,000 in contract performance risk per delayed cleared-hire start, even a 0.3-day average slip per false positive adds up fast across a 200-hire onboarding cohort. Real numbers. Real cost.

Human adjudicator review solves this correctly. Our adjudicator queue presents the flagged submission alongside the full signal report: which check triggered, what the frame metadata showed, what the driver fingerprint indicated. An adjudicator with context can distinguish an OBS residue flag from an active injection attempt in under 4 minutes on average. The decision is logged, timestamped, and attached to the verification record. The audit trail shows exactly which human made the call and on what evidence.
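The shape of that audit record matters as much as the decision itself. A sketch of what such a record might look like, with hypothetical field names (the point is that decision, rationale, adjudicator identity, and triggering signals are stored together and timestamped):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AdjudicationRecord:
    """Illustrative adjudication record shaped for an audit trail.
    Field names are assumptions, not Verifyfed's actual schema."""
    submission_id: str
    triggered_signals: list[str]  # e.g. ["virtual_camera_fingerprint"]
    decision: str                 # "approve" | "reject" | "escalate"
    rationale: str                # the human's documented reasoning
    adjudicator_id: str           # which human made the call
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
```

Storing the rationale as a first-class field, rather than a free-floating note, is what lets an assessor trace any individual decision back to its evidence years later.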

For DCSA auditors and CMMC assessors, that chain of documented adjudication decisions is not a vulnerability. It's a compliance asset. An adjudicated record with documented rationale is more defensible than a binary pass/fail system that auto-rejected legitimate submissions and produced no documentation of the reasoning behind rejections.

What This Means for Contractors Choosing a Verification Vendor

When we review verification vendors with contractor security officers, we ask four questions about their liveness and injection attack handling:

  1. Does your liveness challenge sequence randomize server-side, or is it transmitted to the client before capture?
  2. What injection signals do you check beyond passive liveness analysis?
  3. Do flagged exceptions get auto-rejected or routed to human review?
  4. Is the adjudication decision and rationale stored in the verification record within your FedRAMP-authorized boundary?

In our experience, most commercial identity verification vendors built for financial services KYC answer question 1 adequately and questions 2 through 4 poorly. KYC use cases tolerate auto-rejection. A failed identity check in KYC means the user retries through a different channel or contacts support. FedRAMP onboarding does not have that flexibility. A failed check on a cleared hire has contract performance consequences within 24 to 48 hours.

Honestly, the gap we identified when building Verifyfed was not that liveness detection technology was unavailable. It was that no vendor had designed the exception-handling workflow for a federal contractor compliance environment where every adjudication decision needs an auditable record stored inside a FedRAMP boundary.

The technology question and the compliance documentation question are not separable. They have to be built as one system.

The Documentation Requirement That Doesn't Get Enough Attention

Under FedRAMP Moderate controls, specifically IA-3 (Device Identification and Authentication) and IA-5 (Authenticator Management), the identity verification record for a cleared hire needs to include not just the outcome but the method. Assessors want to see that liveness detection was performed, what challenge type was used, what signals were checked, and how exceptions were resolved.

That level of documentation is not optional. An identity verification record that shows "biometric check: passed" with no underlying methodology documentation is a CMMC Level 2 assessment finding waiting to happen. We've seen it. The documentation gap is as much a risk as the technical attack vector itself.

Building both layers correctly from day one is what FedRAMP-scope identity verification actually requires. Not just a liveness check. A liveness check plus injection detection plus adjudicator workflow plus documented records inside the boundary.

That's the standard we built Verifyfed to meet. It's also the standard contractors should be holding every identity verification vendor to before signing a contract.