
Quick Start

Get from zero to your first biometric verification. This guide walks you through the full flow: getting access, understanding your project, generating API tokens, configuring scoring, and making your first API call.

Prerequisites

  • A Biometry account. If you don’t have one, request access via the Biometry Console (see Step 1).
  • A backend server (Node.js, Python, Go, etc.) that will proxy API calls to Biometry.

Step 1: Get access

Biometry uses a waitlist-based onboarding process:

  1. Register at the Biometry Console.
  2. Your registration enters a review queue. A Biometry administrator will review and approve your account.
  3. Once approved, a project is automatically created for you. This project is your workspace — it holds your API tokens, scoring configurations, sessions, and transaction history.

After approval, log in to the Console to access your project dashboard.


Step 2: Generate an API token

API tokens authenticate your backend’s requests to the Biometry API. Each token is scoped to your project and determines which biometric services (Face Liveness Detection, Voice Recognition, etc.) your requests are authorised to use.

  1. Go to the API Management page in the Console.
  2. Select the biometric services you want this token to have access to.
  3. Click Generate Token.
  4. Copy and securely store the token — it will not be shown again.

Step 3: Set up your backend

Store the API token as an environment variable on your backend. Never commit it to version control.

# .env
BIOMETRY_API_TOKEN=your_api_token_here

All API requests go to the base URL:

https://api.biometrysolutions.com/api-gateway

Every request must include the token in the Authorization header:

Authorization: Bearer YOUR_API_TOKEN

For a detailed reference on request format, response structure, and error codes, see API Calls.
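As a minimal sketch, the base URL and headers above can be centralised in one backend helper. The helper name and shape below are illustrative, not part of the Biometry SDK:

```typescript
// Sketch: building the common headers for Biometry API calls on your backend.
// The token is read from the environment, never from client code.
const BASE_URL = 'https://api.biometrysolutions.com/api-gateway';

function biometryHeaders(sessionId?: string): Record<string, string> {
  const token = process.env.BIOMETRY_API_TOKEN;
  if (!token) throw new Error('BIOMETRY_API_TOKEN is not set');
  const headers: Record<string, string> = {
    Authorization: `Bearer ${token}`,
  };
  // Later steps also pass the session ID (see Step 4).
  if (sessionId) headers['X-Session-ID'] = sessionId;
  return headers;
}
```

Failing fast when the token is missing keeps a misconfigured deployment from sending unauthenticated requests.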


Step 4: Start a session

Sessions group related transactions (consent, enrollment, verification) together so you can track them as a unit in the Console dashboard.

curl --location 'https://api.biometrysolutions.com/api-gateway/sessions/start' \
--request POST \
--header 'Authorization: Bearer YOUR_API_TOKEN'

Response:

{
  "data": "sess_fe402c29-b543-4413-8624-af1a2a0b2f2c",
  "message": "session started successfully"
}

Save the session ID (data field) — you’ll pass it as the X-Session-ID header in the following requests.
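In a Node.js backend (18+, with global `fetch`), the call above might look like the sketch below. The response shape is taken from the sample payload; the function names are illustrative:

```typescript
// Sketch: starting a session from your backend (assumes Node 18+ global fetch).
const GATEWAY = 'https://api.biometrysolutions.com/api-gateway';

interface SessionResponse {
  data: string;    // the session ID, e.g. "sess_..."
  message: string;
}

// Pull the session ID out of the response body shown above.
function parseSessionId(body: SessionResponse): string {
  if (!body.data.startsWith('sess_')) {
    throw new Error(`unexpected session ID: ${body.data}`);
  }
  return body.data;
}

async function startSession(token: string): Promise<string> {
  const res = await fetch(`${GATEWAY}/sessions/start`, {
    method: 'POST',
    headers: { Authorization: `Bearer ${token}` },
  });
  return parseSessionId((await res.json()) as SessionResponse);
}
```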


Step 5: Collect consent

Before performing biometric operations, you must collect consent from your end-user. There are two types:

| Consent type          | Required for                               | Endpoint                         |
| --------------------- | ------------------------------------------ | -------------------------------- |
| Authorization consent | Liveness checks, face & voice recognition  | POST /api-consent/consent        |
| Storage consent       | Face enrollment, voice enrollment          | POST /api-consent/strg-consent   |

For a standard liveness + identity verification flow, you need authorization consent:

curl --location 'https://api.biometrysolutions.com/api-consent/consent' \
--header 'Content-Type: application/json' \
--header 'X-Session-ID: sess_fe402c29-b543-4413-8624-af1a2a0b2f2c' \
--header 'Authorization: Bearer YOUR_API_TOKEN' \
--data '{
  "is_consent_given": true,
  "user_fullname": "USER_IDENTIFIER"
}'

Read more about consent types and history in the Consent guide.
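On a TypeScript backend, the consent request above can be assembled like this. The helper is a hedged sketch; the URL, header names, and body fields mirror the curl example:

```typescript
// Sketch: assembling the authorization-consent request from your backend.
// Header names and body fields follow the curl example above.
function consentRequest(sessionId: string, token: string, userFullname: string) {
  return {
    url: 'https://api.biometrysolutions.com/api-consent/consent',
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-Session-ID': sessionId,
        Authorization: `Bearer ${token}`,
      },
      body: JSON.stringify({ is_consent_given: true, user_fullname: userFullname }),
    },
  };
}
```

You would pass `options` straight to `fetch(url, options)` once the user has actually given consent in your UI.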


Step 6: Process a video

This is the core verification step. Your end-user records a short video where they show their face and say a set of numbers out loud. You send that video to Biometry for analysis.

curl --location 'https://api.biometrysolutions.com/api-gateway/process-video' \
--header 'Authorization: Bearer YOUR_API_TOKEN' \
--header 'X-User-Fullname: USER_IDENTIFIER' \
--header 'X-Session-ID: sess_fe402c29-b543-4413-8624-af1a2a0b2f2c' \
--form 'video=@"/path/to/video.mp4"' \
--form 'phrase="one two three four five six"'

Response:

{
  "data": {
    "Active Speaker Detection": {
      "code": 0,
      "description": "Successful check",
      "result": 97.79,
      "score": 98.77
    },
    "Face Liveness Detection": {
      "code": 0,
      "description": "Successful check",
      "result": true,
      "score": 76.65
    },
    "Visual Speech Recognition": {
      "code": 0,
      "description": "Successful check",
      "result": "ONE TWO THREE FOUR FIVE SIX",
      "score": 88.64
    }
  },
  "scoring_result": "pass",
  "message": "video processed successfully"
}

The services returned depend on which services your API token is authorised for. Each service returns its own score and result, while scoring_result gives you the overall pass, fail, or refer verdict based on your active scoring system.

See the full Process Video reference for all parameters, response fields, and error codes.
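A backend typically acts on `scoring_result` and logs the per-service scores. The types below are inferred from the sample payload above, not an official schema:

```typescript
// Sketch: acting on the process-video response shown above.
// The shape is inferred from the sample payload.
interface ServiceResult {
  code: number;
  description: string;
  result: number | boolean | string;
  score: number;
}

interface ProcessVideoResponse {
  data: Record<string, ServiceResult>;
  scoring_result: 'pass' | 'fail' | 'refer';
  message: string;
}

// Produce one log line per service, plus the overall verdict.
function summarize(resp: ProcessVideoResponse): string[] {
  const lines = Object.entries(resp.data).map(
    ([service, r]) => `${service}: score ${r.score} (${r.description})`,
  );
  lines.push(`verdict: ${resp.scoring_result}`);
  return lines;
}
```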


Step 7: Understand scoring systems

Scoring systems control how individual service scores combine into the final scoring_result verdict. Every API token has a scoring system assigned to it — by default, the Default scoring system is used.

How it works

A scoring system is made up of scoring blocks — one per biometric service. Each block defines:

| Property       | Description                                                                                   |
| -------------- | --------------------------------------------------------------------------------------------- |
| Weight         | How much this service contributes to the overall score (0 to 1).                               |
| Fail threshold | If the service score falls at or below this value, the block is considered failed.             |
| Critical       | If a critical block fails, the entire result is immediately fail, regardless of other services. |
| Active         | Whether this block is evaluated. Inactive blocks are skipped.                                  |

The system computes a weighted average of all active block scores, then compares it against the scoring range:

  • Score ≥ pass value → "pass"
  • Score ≤ fail value → "fail"
  • Score between the two → "refer" (needs manual review)
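The rules above can be sketched as a small function. This is an illustrative model of the described behaviour, not Biometry's actual implementation:

```typescript
// Illustrative sketch of the scoring rules described above — inactive blocks
// are skipped, a failed critical block forces "fail", and the weighted
// average is compared against the pass/fail range.
interface ScoringBlock {
  weight: number;        // 0 to 1
  failThreshold: number; // score at or below this => block failed
  critical: boolean;
  active: boolean;
  score: number;         // the service's score for this transaction
}

function scoringResult(
  blocks: ScoringBlock[],
  passValue: number,
  failValue: number,
): 'pass' | 'fail' | 'refer' {
  const active = blocks.filter((b) => b.active);

  // A failed critical block fails the whole result immediately.
  if (active.some((b) => b.critical && b.score <= b.failThreshold)) return 'fail';

  // Weighted average of active block scores.
  const totalWeight = active.reduce((s, b) => s + b.weight, 0);
  const avg = active.reduce((s, b) => s + b.weight * b.score, 0) / totalWeight;

  if (avg >= passValue) return 'pass';
  if (avg <= failValue) return 'fail';
  return 'refer';
}
```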

Default scoring system

If you don’t create a custom scoring system, the default one is applied. It has Active Speaker Detection and Face Liveness Detection as critical blocks, with a pass value of 80 and a fail value of 50.

Creating custom scoring systems

You can create and manage scoring systems from the Console under Configurations → Scoring System. When you create or update a scoring system, you assign it to one or more API tokens — those tokens will then use your custom scoring rules instead of the default.

This is especially useful when you need different verification strictness for different use cases (e.g., a stricter scoring system for financial transactions vs. a more lenient one for basic identity checks).


Step 8: End the session

Once you’ve completed all the operations for this user, end the session:

curl --location 'https://api.biometrysolutions.com/api-gateway/sessions/end/sess_fe402c29-b543-4413-8624-af1a2a0b2f2c' \
--request POST \
--header 'Authorization: Bearer YOUR_API_TOKEN'

You can now view the session and all its transactions in the Console dashboard.


Step 9: View results on the dashboard

Every transaction you make through the API is recorded and visible in the Console dashboard. You can:

  • Review individual transaction results and service scores.
  • Inspect session history and linked transactions.
  • Flag transactions as fraudulent or possibly fraudulent for investigation. See Fraud for details.
  • Retrieve raw samples (video frames, source files) for any transaction via the Get Samples endpoint.

Try it with the demo app

To see the full end-to-end flow in action before building your own integration, check out the Web SDK example application included in the SDK repository. It demonstrates a working React integration with camera capture, session management, and video processing — all wired through a backend proxy.

Use it as a reference, but keep in mind that for production you should implement your own flow:

Your app (captures media) → Your backend (proxies with API token) → Biometry API

See Architecture recommendations below for the full pattern.


Architecture recommendations

How you integrate Biometry into your system matters for both security and reliability.

Your backend is the gateway

Your application architecture should follow this pattern:

End-user device        Your backend             Biometry API
┌──────────────┐      ┌──────────────────┐      ┌───────────────┐
│  Browser /   │      │                  │      │               │
│  Mobile app  │─────>│   Your server    │─────>│   Biometry    │
│              │<─────│   (API token     │<─────│  API Gateway  │
│    (SDK)     │      │   stored here)   │      │               │
└──────────────┘      └──────────────────┘      └───────────────┘

Why this matters:

  • API token security: Your token never leaves your backend. Embedding it in frontend code exposes it to anyone who inspects the page source.
  • Request validation: Your backend can validate and sanitise user input before forwarding it to Biometry, preventing abuse.
  • Rate control: You can enforce your own rate limits and business logic (e.g., only allowing authenticated users to submit biometric checks).
  • Audit trail: Your backend logs provide a complete record of what was sent to Biometry and when.
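As one example of the validation point above, your backend can reject bad uploads before they ever reach Biometry. The limits, MIME types, and phrase rule below are illustrative placeholders, not Biometry requirements:

```typescript
// Sketch: validating a client upload on your backend before forwarding it to
// Biometry. Limits and rules here are illustrative, not Biometry's.
const MAX_VIDEO_BYTES = 25 * 1024 * 1024; // example limit
const ALLOWED_TYPES = ['video/mp4', 'video/webm'];

// Returns null when the upload is OK to forward, or an error message.
function validateUpload(upload: {
  sizeBytes: number;
  mimeType: string;
  phrase: string;
}): string | null {
  if (upload.sizeBytes === 0 || upload.sizeBytes > MAX_VIDEO_BYTES) {
    return 'video size out of range';
  }
  if (!ALLOWED_TYPES.includes(upload.mimeType)) {
    return 'unsupported video type';
  }
  // Expect a spoken-number phrase like "one two three four five six".
  if (!/^[a-z ]+$/i.test(upload.phrase.trim())) {
    return 'invalid phrase';
  }
  return null;
}
```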

Using the SDKs

Biometry provides client-side SDKs to handle media capture (camera, microphone) on the end-user’s device. The captured files should then be sent to your backend, which forwards them to the Biometry API.

| Platform | Packages                                  | Purpose                                                                  |
| -------- | ----------------------------------------- | ------------------------------------------------------------------------ |
| Web      | biometry-sdk, biometry-react-components   | SDK client + pre-built React UI components (FaceRecorder, DocScan, etc.) |
| Flutter  | biometry                                  | SDK client for Flutter/Dart applications                                 |

On the frontend, use the SDK components to capture media:

import { FaceRecorder } from 'biometry-react-components';

function Verification() {
  const handleConfirm = async (video: File, audio: File, phrase: string) => {
    const formData = new FormData();
    formData.append('video', video);
    formData.append('phrase', phrase);

    // Send to YOUR backend, not directly to Biometry
    const response = await fetch('/api/verify', {
      method: 'POST',
      body: formData,
    });
  };

  return <FaceRecorder onConfirmRecording={handleConfirm} />;
}

For the full SDK API reference, see the Web SDK and Flutter SDK documentation.

Additional recommendations

  • Use sessions to group related transactions. This makes debugging and auditing significantly easier in the Console dashboard.
  • Implement retries with backoff for 500 errors. Biometry services are highly available, but transient failures can occur.
  • Use webhooks for asynchronous processing. Some operations (like Anti-Video Forgery) take longer; webhooks notify your server when results are ready instead of requiring polling.
  • Pass device information and geolocation headers when available. These enrich your fraud detection data. See Device Information and Geo Location.
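For the retry recommendation above, a simple exponential-backoff wrapper is sketched below. The retry counts, delays, and function names are illustrative; it assumes Node 18+ global `fetch`:

```typescript
// Sketch: retry-with-backoff for transient 5xx errors. Delays and retry
// counts are illustrative; tune them for your own latency budget.
function backoffDelays(retries: number, baseMs: number): number[] {
  // e.g. backoffDelays(3, 500) => [500, 1000, 2000]
  return Array.from({ length: retries }, (_, i) => baseMs * 2 ** i);
}

async function retryFetch(
  url: string,
  init: RequestInit,
  retries = 3,
  baseMs = 500,
): Promise<Response> {
  let lastError: unknown;
  for (const delay of [0, ...backoffDelays(retries, baseMs)]) {
    if (delay > 0) await new Promise((r) => setTimeout(r, delay));
    try {
      const res = await fetch(url, init);
      if (res.status < 500) return res; // only retry server errors
      lastError = new Error(`server error ${res.status}`);
    } catch (err) {
      lastError = err; // network error — retry
    }
  }
  throw lastError;
}
```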

What’s next?

Now that you have a working integration, explore the capabilities in depth:

| Goal                                            | Where to go        |
| ----------------------------------------------- | ------------------ |
| Enroll a user’s face for future recognition     | Face Enrollment    |
| Enroll a user’s voice for future authentication | Voice Enrollment   |
| Match a document photo to a live face           | Face Match         |
| Verify document authenticity                    | DocAuth            |
| Detect video forgery                            | AVF Infer          |
| Set up event-driven processing                  | Webhooks           |
| Review all available services                   | Services overview  |
| Integrate the Web SDK                           | Web SDK            |
| Integrate the Flutter SDK                       | Flutter SDK        |