Facial Recognition in Your Company: Accuracy, Bias, and Legal Risks
Corporate Guide for 2026: How to Measure Facial Recognition Accuracy, Manage Bias, and Address Legal Risks Under the GDPR and the EU AI Act — Pilots and Controls.
March 12, 2026
In the minds of many executives, facial recognition is seen as a “convenience” technology: faster access, fewer lost cards, and less fraud. In a corporate environment, however, it is much more than that. It handles biometric data, automates decisions, and, if configured incorrectly, can simultaneously pose risks to information security, compliance, and reputation.
This article will help you form a realistic picture for 2026:
what "accuracy" actually means in facial recognition,
how and where bias appears,
which legal risks under the GDPR and the EU AI Act to take into account,
and what minimum checks you should have in place before you even get started.
1) Facial recognition at the company: clearing up the first misunderstanding
In business discussions, solutions that involve very different levels of risk are often lumped together.
Verification or identification?
Verification (1:1): “Are you who you say you are?” Example: When entering a facility, the employee provides an identifier (badge, PIN, app), and the facial recognition system verifies only that.
Identification (1:N): “Who are you?” The system compares a facial image against an entire database to find a match.
The difference between the two approaches is not merely technical. The 1:N approach typically entails greater privacy and ethical risks and is subject to stricter scrutiny under many regulatory interpretations.
Access control, attendance tracking, customer experience, loss prevention
The most common corporate use cases:
Physical access control (office, production hall, warehouse, server room)
Logical access control (e.g., kiosk, shared workstation, privileged access "step-up")
Attendance tracking (time recording)
Customer identification (e.g., for premium services)
Security use case (e.g., blacklist, repeat fraud)
When it comes to "convenience" purposes, the question is particularly pressing: is it necessary and proportionate to use biometrics, or can the same result be achieved with less risk (card + PIN + MFA, smartphone access, QR codes, etc.)?
2) Accuracy: Why isn’t “99%” enough?
The accuracy of facial recognition isn't just a single number. It depends on the environment, the threshold setting, the camera, the lighting, the process, and the cost of an error.
The two types of critical errors
False Acceptance: the system grants access to someone who should not be allowed in. This is a security risk.
False Reject: the system denies access to someone who should be allowed in. This constitutes an operational risk (queues, downtime, and tensions related to HR and labor law).
There is always a trade-off between the two. The stricter you set the acceptance threshold, the fewer false acceptances you get, but the more false rejections you create.
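This trade-off can be made concrete with a minimal sketch. It assumes you have two sets of 1:1 similarity scores from a pilot: genuine comparisons (the person is who they claim) and impostor comparisons (someone else's face against the claimed template). All scores and thresholds below are illustrative, not from any real system.

```python
# Sketch: how threshold tuning trades FAR against FRR.
# genuine_scores  - 1:1 comparisons where the person IS who they claim
# impostor_scores - comparisons against other people's templates
# All numbers are illustrative.

def far_frr(genuine_scores, impostor_scores, threshold):
    """Return (FAR, FRR) for a given accept threshold."""
    # FAR: fraction of impostor attempts scoring at or above the threshold
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    # FRR: fraction of genuine attempts scoring below the threshold
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

genuine = [0.91, 0.88, 0.95, 0.72, 0.84, 0.90]
impostor = [0.41, 0.55, 0.62, 0.30, 0.77, 0.48]

for t in (0.5, 0.7, 0.8):
    far, frr = far_frr(genuine, impostor, t)
    print(f"threshold={t:.1f}  FAR={far:.2f}  FRR={frr:.2f}")
```

Running the sweep makes the pattern visible: raising the threshold pushes FAR down and FRR up. In a real pilot you would run exactly this sweep on shadow-mode traffic before committing to a threshold.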
“Lab” vs. “reality”
Vendor demos are often created in ideal environments. Typical factors that can complicate matters in a real-world business setting include:
backlighting, poor lighting, motion blur
protective gear (goggles, mask, helmet), hat
different camera angles (top-mounted camera), distorted optics
multiple locations and a diverse fleet of equipment
“tailgating” (someone sneaking in behind another person), keeping the gate open
Accuracy metrics that you should ask for as a manager
| Metric | What does it measure? | Business consequence if it goes wrong | Typical control |
| --- | --- | --- | --- |
| FAR (False Acceptance Rate) | Rate of false acceptances | Unauthorized entry, incidents | Threshold tuning, multi-factor, zone-based access |
| FRR (False Reject Rate) | Rate of false rejections | Queues, dissatisfaction, friction | Alternative entry path, process redesign |
| FTE (Failure to Enroll) | Rate of failed registrations | Exclusion, administrative burden | High-quality enrollment process, multiple attempts, support |
| Liveness / anti-spoof | Is this a live person, not a photo or video? | Spoofing attacks, fraud | Liveness tests, IR camera, challenge-response |
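FTE is the metric most often overlooked in pilots. A minimal sketch of how to derive it from enrollment logs, assuming each record is a (user_id, outcome) pair; the field names and data are illustrative:

```python
# Sketch: measuring Failure to Enroll (FTE) from pilot enrollment logs.
# Each record is (user_id, attempt_outcome); names and data are illustrative.
from collections import defaultdict

enrollments = [
    ("u1", "ok"), ("u2", "fail"), ("u2", "ok"),      # u2 succeeded on retry
    ("u3", "fail"), ("u3", "fail"), ("u3", "fail"),  # u3 never enrolled
    ("u4", "ok"),
]

outcomes = defaultdict(list)
for user, result in enrollments:
    outcomes[user].append(result)

# A user counts as FTE only if no attempt succeeded at all.
fte_users = [u for u, results in outcomes.items() if "ok" not in results]
fte_rate = len(fte_users) / len(outcomes)
print(f"FTE rate: {fte_rate:.0%}  (users: {fte_users})")
```

The design choice worth noting: counting per user rather than per attempt. A user who needed three tries is an enrollment-quality signal, but only a user with no successful attempt at all is excluded, and exclusion is what carries the legal and HR risk.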
Even if an access control system makes mistakes “only occasionally,” this can still be unacceptable—for example, in server rooms or high-risk facilities. In such cases, it is common practice to treat facial recognition as a convenience feature only, while requiring an additional factor in high-risk zones.
Independent performance test: a good compass
When choosing a vendor, it’s worth asking about independent benchmarks. The best-known program is NIST’s Face Recognition Technology Evaluation (FRTE, formerly known as FRVT), which compares the performance of many market solutions using a standardized methodology. It doesn’t replace your own pilot, but it helps filter out overly optimistic marketing claims.
3) Bias: not just an “ethical” issue, but an operational risk
Bias becomes a business problem when the system does not perform equally well across different groups (such as gender, age, skin tone, facial features, cultural attire, and protective gear).
How does bias arise in practice?
Data bias: The training data does not represent your user base.
Environmental bias: Due to poorer lighting conditions at a particular facility, there are more rejections during certain shifts.
Process bias: enrollment is not conducted with the same level of quality for everyone.
The corporate consequences of bias are typically tangible:
more manual exception handling for certain groups
workplace conflicts, risk of discrimination claims
lower adoption rates, users "gaming" the process
What should you ask the supplier, and what should you measure during the pilot phase?
The bare minimum you should take seriously:
Measurement plan: how to measure FAR/FRR in real-world traffic (shadow mode or controlled period).
Segmented report: performance broken down by at least location, camera type, shift, and entry point. (Demographic breakdown is a sensitive issue that requires legal and ethical consideration, but without some form of bias control, the risk remains “invisible.”)
Exception handling: what happens in the event of a false rejection, how long it takes, and who approves it.
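The segmented report in the list above can be sketched with plain access logs. This assumes each genuine (enrolled) user's gate event is recorded as (location, shift, outcome); the segments, field names, and data are illustrative:

```python
# Sketch: segmented bias monitoring from access-control logs.
# Each event: (location, shift, outcome) for a genuine, enrolled user.
# Segments and data are illustrative.
from collections import Counter

events = [
    ("warehouse", "night", "reject"), ("warehouse", "night", "accept"),
    ("warehouse", "day", "accept"), ("warehouse", "day", "accept"),
    ("office", "day", "accept"), ("office", "day", "reject"),
    ("office", "day", "accept"), ("warehouse", "night", "reject"),
]

totals, rejects = Counter(), Counter()
for location, shift, outcome in events:
    key = (location, shift)
    totals[key] += 1
    if outcome == "reject":
        rejects[key] += 1

# Per-segment false-reject rate; a large gap between segments is a bias signal.
for key in sorted(totals):
    frr = rejects[key] / totals[key]
    print(f"{key[0]:>9} / {key[1]:<5}  FRR={frr:.0%}  (n={totals[key]})")
```

Even this toy data shows the point: if the night shift at one site rejects far more often than the day shift elsewhere, the cause is usually environmental (lighting, camera angle) and will show up here long before anyone files a complaint.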

4) Legal risks in 2026: Think in line with the logic of the GDPR and the EU AI Act
GDPR: biometric data, special categories of data
A facial image, and in particular a "template" (biometric template) derived from a face, typically qualifies as biometric data if you use it for unique identification. Under the GDPR, biometric data is considered a special category of data, the processing of which is prohibited by default, unless a narrow exception applies.
Two important starting points:
GDPR text and terms: Regulation (EU) 2016/679
The interpretive framework for biometric systems is often clarified by guidelines issued by supervisory authorities and the EDPB. (In Hungary, the NAIH is the relevant authority.)
In the workplace, “consent” is rarely a good basis
A common misconception: “We ask for consent, and that’s it.” In the hierarchical structure of the workplace, consent is often not truly voluntary, making it highly vulnerable to legal challenge.
This does not mean that facial recognition can never be lawful. It means that:
The choice of legal basis and the exception under Article 9 of the GDPR is critical,
and an assessment of necessity, proportionality, and alternatives is unavoidable.
A DPIA (Data Protection Impact Assessment) is typically required
Biometric identification, especially when used on a large scale, on a regular basis, or for access control, is typically considered high-risk data processing. In such cases, a Data Protection Impact Assessment (DPIA) is often the minimum requirement.
A DPIA is not merely an administrative burden; rather, it provides a structured framework for addressing issues such as:
detailed data flow (what images, templates, logs, and where they are stored)
access, data retention, deletion
risks to data subjects (false rejection, feeling of being watched, profiling)
mitigation measures (alternative access path, human review, audit)
EU AI Act: you should probably think in terms of the high-risk category
Based on the logic of the EU AI Act (Regulation (EU) 2024/1689 on Artificial Intelligence), biometric systems can be classified as high-risk AI systems in many situations, which may entail additional obligations (risk management, documentation, data quality, logging, human oversight, transparency). Official legal text and status: EU AI Act on EUR-Lex.
The practical takeaway for managers: don’t just rely on a supplier’s claim that they are “GDPR-ready.” Ask for:
compliance documentation (what is covered by the warranty and what is not)
auditable logging and controls
clear division of roles (data controller, data processor, subcontractors)
5) Security risks: it’s not just about data protection
Facial recognition can be challenged from two angles:
Access attacks: photo, video, deepfake, mask, “presentation attack.” The key issues here are liveness detection and the configuration of the physical environment (camera position, lighting, gate logic).
Data breaches: If biometric templates, images, or identifying metadata are leaked, the consequences are often more serious than those of a password breach, because biometric data cannot be “reset” like a password.
Minimum requirements in a corporate environment:
encryption during storage and transmission
strict access control (RBAC, MFA for admin interfaces)
logging, alerts, regular reviews
Vulnerability management across the entire chain (camera firmware, edge box, server, client)
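One concrete control from the list above is keeping raw identifiers out of access logs. A minimal sketch, using a keyed HMAC so the log stays auditable but re-linking a pseudonym to an employee requires the key. The key handling and field names are illustrative; in production the key would live in a secrets manager or HSM, never in source code.

```python
# Sketch: pseudonymizing subject identifiers in audit logs.
# Raw employee IDs never appear in the log; an HMAC with a secret key
# yields a stable pseudonym that only the key holder can re-link.
import hashlib
import hmac

LOG_PSEUDONYM_KEY = b"replace-with-managed-secret"  # illustrative only

def pseudonym(employee_id: str) -> str:
    """Stable, key-dependent pseudonym for an employee ID."""
    mac = hmac.new(LOG_PSEUDONYM_KEY, employee_id.encode(), hashlib.sha256)
    return mac.hexdigest()[:16]  # shortened for log readability

def log_event(employee_id: str, gate: str, outcome: str) -> str:
    """Build an audit-log line with the raw ID replaced by its pseudonym."""
    return f"gate={gate} subject={pseudonym(employee_id)} outcome={outcome}"

print(log_event("emp-00123", "server-room", "accept"))
```

Because the pseudonym is deterministic for a given key, you can still count rejections per subject or trace an incident, but a leaked log file on its own does not expose who walked through which gate.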
6) Decision-making framework: When does it make sense to implement facial recognition?
A good decision doesn’t require “cutting-edge” technology; rather, it requires that the risks and benefits be in proportion to one another.
Quick Decision Chart (Guide)
| Use case | Business benefit | Typical risk | A common good direction |
| --- | --- | --- | --- |
| Access to a high-security zone (e.g., server room, hazardous facility) | High | FAR, spoofing, legal compliance | Multi-factor authentication, facial recognition as an additional factor, strict audit |
| Warehouse and logistics roles with many temporary employees | Medium | Enrollment load, queues due to FRR | Preferably card/app + MFA; facial recognition only at well-controlled locations |
| Time tracking | Medium | Labor-law and GDPR risks; consent issues | Alternative solutions preferred; biometrics only where strictly necessary |
| Customer identification for "expedited" processing | Medium | Consent, transparency, reputation | Opt-in, clear information, easy alternative |
The logistics sector is particularly interesting because it involves both high physical security requirements and, in many cases, high employee turnover. If you’re considering 3PL or transportation and warehousing operations, it’s worth looking at the entire process, not just the gate. (An example of an operation where access control, warehousing, fulfillment, and delivery are closely intertwined: in the case of 3PL logistics services, site access and process security directly impact the SLA and risks.)
7) Minimum requirements for implementation: What makes the solution enterprise-grade (and defensible)?
If the decision is to proceed, the goal of the “minimum package” is: measurable accuracy, controlled risk of bias, and documented compliance.
The most important deliverables you should require
Use case and risk classification: verification or identification, which zones, what incident scenarios.
Data flow and data inventory: what types of data are generated where (images, templates, logs), where they are stored, and for how long.
DPIA and justification of necessity and proportionality: comparison of alternatives, handling of data subjects’ rights.
Pilot measurement plan: FAR/FRR target values, measurement method, rollback plan.
Bias risk management: monitoring performance by site, analyzing exception handling data.
Security baseline: access controls, logging, backups, incident management, vendor SLAs.
Plan the process, not just the model
Most "face recognition failures" do not occur because the model is flawed, but because:
There is no clearly defined fallback (what does the gatekeeper do, what is the rule in the event of an FRR)
There is no maintenance discipline (camera, lighting conditions, firmware)
There is no continuous monitoring (drift, environmental changes, seasonal effects)
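The continuous-monitoring point above can be operationalized very simply. A minimal sketch of a weekly drift alert, where the baseline and margin come from the pilot's accepted FRR target; all numbers are illustrative:

```python
# Sketch: a minimal drift alert on weekly false-reject rates.
# Baseline and margin are illustrative; in practice they come from the
# FRR target accepted at go-live.

BASELINE_FRR = 0.02   # FRR accepted at go-live
ALERT_MARGIN = 2.0    # alert if weekly FRR exceeds 2x baseline

weekly_frr = {"W01": 0.018, "W02": 0.021, "W03": 0.025, "W04": 0.047}

alerts = [week for week, frr in weekly_frr.items()
          if frr > BASELINE_FRR * ALERT_MARGIN]
for week in alerts:
    print(f"{week}: FRR {weekly_frr[week]:.1%} above alert threshold "
          f"({BASELINE_FRR * ALERT_MARGIN:.1%}) - check cameras/lighting")
```

A creeping FRR is usually the first visible symptom of drift (a moved camera, seasonal light, new protective gear), so a rule this simple catches most maintenance failures before they turn into queues at the gate.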
8) How can Syneo help (without overpromising)?
The most costly mistake in facial recognition typically occurs when a company purchases a technology, only to discover later that its accuracy falls short of expectations or that the legal and organizational requirements are not met.
The value of Syneo-type IT and AI consulting support in this area typically manifests itself in the following ways:
clarifying use cases and requirements (verification vs. identification, controls)
developing a pilot measurement plan and KPIs (FAR/FRR, exception handling)
in defining architecture and minimum security requirements (logging, access controls, operations)
supplier evaluation (documentation, responsibilities, SLA, data flow)
If your goal is to reach a sound decision (go or no-go) within 4 to 8 weeks, the best starting point is usually a structured assessment and a controlled pilot, not a large-scale procurement.
Summary
Implementing facial recognition in a company is not “just an access control project.” Accuracy must be measured, bias must be controlled, and risks must be managed in a documented manner in accordance with the principles of the GDPR and the EU AI Act. Those who take this seriously not only reduce legal risk but also build a more stable system that generates fewer exceptions and is better accepted.

