Functionality

In this page we walk through recops' core design and functionality. recops is primarily a command-line interface (CLI) toolkit and is well documented; use --help to get a description of each function and its available options.

recops --help
recops <function> --help

This document explains recops internals, and we suggest reading it in full to get a better understanding. Please keep in mind that this is a BETA version: recops is experimental and not recommended for production workflows.


DATASET

The dataset is the central class to which all other classes (faces, images, identities) are linked. Keep in mind that you can create multiple datasets, but any single operation runs on one dataset.

When we create a dataset we specify the face detector and face recognition model that will be used for all operations on that dataset. recops does not develop any models of its own; it is a wrapper around the open-source deepface library, from which it integrates nine face recognition models and six face detectors.

Face detection is a type of object detection that detects a single object class (a face). Multiple cutting-edge facial detectors are currently wrapped in recops; you can choose between opencv, ssd, mtcnn, dlib, retinaface, and mediapipe.

How do you choose which face detector to use? Briefly, you should balance speed, accuracy, and efficiency. If your priority is high face detection accuracy, consider using retinaface or mtcnn: they outperform the others in the detection and alignment stages, but they are much slower.

On the other hand, if high speed is more important for your project, use dlib, opencv, or ssd. (Note that dlib does not detect faces smaller than 80x80 pixels, so if you are working with small images make sure to upscale them first; this will increase processing time.)

Facial recognition is the task of making a positive identification of a face in a photo or video frame against a pre-existing database of faces. It begins with detection, distinguishing human faces from other objects in the image, and then identifies the detected faces.

The following face recognition models can be used with the recops library. Most of them are based on Convolutional Neural Networks (CNNs) and provide best-in-class results: VGG-Face, FaceNet (Google), OpenFace, DeepFace (Facebook), DeepID, ArcFace, Dlib, and SFace. The table below lists their scores on the Labeled Faces in the Wild (LFW) dataset.

All rows share: Task = Face Verification, Dataset = Labeled Faces in the Wild, Metric = Accuracy.

Model          Accuracy  Rank  Paper (Year)
ArcFace        99.83%    #1    ArcFace: Additive Angular Margin Loss for Deep Face Recognition (2018)
Facenet512     99.65%    #2    -
Facenet        99.63%    #3    FaceNet: A Unified Embedding for Face Recognition and Clustering (2015)
SFace          99.60%    #4    SFace: Sigmoid-Constrained Hypersphere Loss for Robust Face Recognition (2022)
Dlib           99.38%    #5    Dlib-ml: A Machine Learning Toolkit (2009)
VGG-Face       98.78%    #6    Deep Face Recognition (2015)
DeepFace       98.37%    #7    DeepFace: Closing the Gap to Human-Level Performance in Face Verification (2014)
Human beings   97.53%    #8    -
DeepID         97.05%    #9    Deep Learning Face Representation from Predicting 10,000 Classes (2014)
OpenFace       92.92%    #10   OpenFace: A general-purpose face recognition library with mobile applications (2016)

Additionally, the dataset holds the distance metric: the method used to compute the distance between two face vectors. It can be cosine, euclidean, or euclidean_l2; the default configuration uses cosine.
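For intuition, the three metrics can be sketched with the standard library alone (this is illustrative code, not recops' actual implementation):

```python
import math

def cosine_distance(a, b):
    # 1 minus the cosine similarity of the two vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1 - dot / (norm_a * norm_b)

def euclidean_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def euclidean_l2_distance(a, b):
    # euclidean distance after L2-normalising both vectors
    def l2_normalize(v):
        norm = math.sqrt(sum(x * x for x in v))
        return [x / norm for x in v]
    return euclidean_distance(l2_normalize(a), l2_normalize(b))

a, b = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
print(euclidean_distance([0.0, 0.0], [3.0, 4.0]))  # 5.0
print(cosine_distance(a, b) < 1e-9)                # True (parallel vectors)
```

Note that cosine and euclidean_l2 ignore vector magnitude (parallel vectors have distance ~0), while plain euclidean does not; this is why the metrics require different thresholds.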

NOTE: Performing face alignment can dramatically improve face recognition performance. Face alignment is an early stage of the modern face recognition pipeline; Google reported that alignment alone increases the accuracy of its FaceNet model from 98.87% to 99.63%, almost a 1% improvement.
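For intuition, alignment typically rotates the face so that the eyes lie on a horizontal line. A toy sketch of computing the rotation angle from two hypothetical eye coordinates (in practice the detector supplies these, and image y-axes point down, which can flip the sign):

```python
import math

def alignment_angle(left_eye, right_eye):
    """Angle in degrees by which to rotate so the eye line becomes horizontal."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

print(alignment_angle((30, 50), (70, 50)))  # 0.0 (eyes already level)
```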

The following command-line interface (CLI) functions are relevant to datasets.

NOTE: Before running any command, set your DATABASE_URI and STORAGE_URI environment variables. Please read “Installation” carefully if you haven't already.

To create or update a dataset use the following command. Keep in mind that when updating a dataset the --detector and --basemodel options cannot be changed.

recops dataset-create --detector $DETECTOR_OPTION --basemodel $BASEMODEL_OPTION $DATASET_NAME

Detector options:        ["opencv", "ssd", "mtcnn", "dlib", "retinaface", "mediapipe"]
Basemodel options:       ["VGG-Face", "Facenet", "Facenet512", "OpenFace", "DeepFace", "DeepID", "ArcFace", "Dlib", "SFace"]
Distance metric options: ["cosine", "euclidean", "euclidean_l2"]

To list all available datasets and additional information use:

recops dataset-list

Dataset output information:

<Dataset[ 
    id: {self.id}
    name: {self.name}
    detector: {self.detector_backend}
    basemodel: {self.basemodel_backend}
    identities: {self.identity_count}
    faces: {self.face_count} 
    threshold: {self.threshold}
    default_threshold: {self.default_threshold}
    distance_metric: {self.distance_metric}
]>

To list the available faces, images, and identities within a dataset use:

# List available faces in dataset
recops dataset-list-faces -d $DATASET_ID

# List available images in dataset
recops dataset-list-images -d $DATASET_ID

# List available identities in dataset
recops dataset-list-identities -d $DATASET_ID

To export a dataset's content into a ZIP archive use:

recops dataset-export -d $DATASET_ID -o $PATH_TO_SAVE_ARCHIVE

To delete a specified dataset and all faces, images and identities linked to it use:

recops dataset-delete $DATASET_ID

FACE

The word “face” generally refers to the front-most region of the uppermost part of the human body. In the recops context, a face is a class that contains information about a particular face; it is always derived from an image and linked to it. (Please read the “IMPORT DATA” section for how to import faces.)

To delete a face use:

recops face-delete $FACE_ID

IMAGE

An image stands for an image file and is always linked to that file. Keep in mind that any import or recognition operation creates images, and the underlying files are copied to STORAGE_URI. After the import you can delete the files used during the import, as they are now stored under STORAGE_URI.

The image class holds the checksum of the related file, and before any import recops checks whether the image has already been imported by comparing checksums. (Please read the “IMPORT DATA” section below for how to import images.)
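The checksum idea can be sketched with the standard library's hashlib (the hash algorithm and function names here are assumptions for illustration, not recops' actual code):

```python
import hashlib

def file_checksum(data: bytes) -> str:
    # hash of the raw file bytes; the algorithm recops actually uses may differ
    return hashlib.sha256(data).hexdigest()

imported = set()  # stand-in for the checksums already stored in the database

def import_image(data: bytes) -> bool:
    """Return True if the image was new and got imported, False if skipped."""
    digest = file_checksum(data)
    if digest in imported:
        return False  # duplicate: already imported, skip it
    imported.add(digest)
    return True

print(import_image(b"fake-image-bytes"))  # True  (first time)
print(import_image(b"fake-image-bytes"))  # False (duplicate skipped)
```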

To delete an image use:

recops image-delete $IMAGE_ID

IDENTITY

An identity does not have to be a real identity; it is rather a group of faces. Depending on the investigation or case study, the group may refer to a person's name, job, team, or anything else. It can be linked to a single face or to multiple faces.

Create an identity:

recops identity-create -d $DATASET_ID $IDENTITY_NAME

Link face to an identity:

recops identity-link $IDENTITY_ID $FACE_ID

To delete an identity use:

recops identity-delete $IDENTITY_ID

IMPORT DATA

The process of importing data is the most important and sometimes the most confusing. recops has three different functions to help with that:

# Import face(s) from local folder
recops dataset-import-faces --help

# Import face(s) linked to "identity (ies)" from local folder
recops dataset-import-identities --help

# Import image(s) to dataset from a local folder
recops dataset-import-images --help

recops handles five common stages in the background: detect, align, normalize, represent, and verify. Before going into the details of each operation, we first explain the face extraction process, which is common to all three imports.

Any import handles at least one image file and follows this process: faces are first detected in the image; then, for each detected face, the face is aligned and normalized, and its embedding vector is computed (the represent stage) and stored.

NOTE: By default recops aligns faces, so if you already have pre-processed images and would like to skip alignment, use the --no-align flag.

There will always be some edge cases that fail. Keep in mind that when a face is detected but facial feature extraction fails, we do not store the face or process it any further.

All import functions can be time consuming, depending on how much data is processed and which basemodel and detector are used. While importing, it is safe to stop the process with Ctrl-C at any time; when you rerun the function, the import will resume from where it left off. If this is not the desired behaviour, use the --force flag to recompute everything.


IMPORT IMAGE(S)

Import images to a dataset from a local folder. This is a generic import: all images in the folder are processed and faces are extracted from them. This process will not link the faces to any identities; you should do that manually. The import only handles .png, .jpeg, and .jpg files.

Example folder structure:

├── local folder
│   ├── image-001.jpeg
│   ├── whatever-name.jpeg
│   ├── unrelated.pdf       <= will be skipped
│   ├── unrelated.mp4       <= will be skipped
│   ├── another-image.png
│   ├── whatever-name.gif   <= will be skipped

Basic use of this command (use dataset-import-images --help to list all of the available command options):

recops dataset-import-images -d $DATASET_ID $PATH_TO_IMAGES

Command options:


IMPORT FACE(S)

Import faces from a local folder. Consider using this function when you already have a set of images containing cropped and aligned faces (not a prerequisite). Each image should contain a single face; if no face or more than one face is detected, the process prints an error and does not import the image (to see errors, set --log-level to INFO).

If you want to export all images that produced errors, use --output-errors and point it to a folder where the error images will be copied.

If all faces belong to the same identity, you can use --identity-id and all faces will be linked to that identity (use identity-create to create the identity first).

Example folder structure:

├── local folder
│   ├── face-001.jpeg
│   ├── face-002.jpeg
│   ├── whatever-name.jpeg
│   ├── blah.png

Basic use of this command (use dataset-import-faces --help to list all of the available command options):

recops dataset-import-faces -d $DATASET_ID $PATH_TO_FACES

Command options:


IMPORT “IDENTITY (IES)”

Import faces linked to identities from a local folder. Consider using this function when you already have a set of images containing cropped and aligned faces (not a prerequisite), grouped in folders named after each face's identity.

Each image should contain a single face; if no face or more than one face is detected, the process prints an error and does not import the image (to see errors, set --log-level to INFO).

If you want to export the identities/faces that produced errors, use --output-errors to specify a folder where the error images will be copied.

Folder structure should be in the following format:

├── local folder
│   ├── identity-name-001
│   │   ├── face-001.jpeg
│   │   ├── face-002.jpeg
│   ├── identity-name-002
│   │   ├── face-001.jpeg
│   │   ├── face-002.jpeg
│   │   ├── face-003.jpeg
│   ├── identity-name-003
│   │   ├── face-001.jpeg
│   │   ├── face-002.jpeg
│   ├── identity-name-004
│   │   ├── face-001.jpeg
│   │   ├── face-002.jpeg
│   │   ├── face-003.jpeg
│   │   ├── face-004.jpeg

Basic use of this command (use dataset-import-identities --help to list all of the available command options):

recops dataset-import-identities -d $DATASET_ID $PATH_TO_IDENTITIES_AND_FACES

Command options:


BACKUP

A full backup archives the entire database and all stored files, so that the database can be fully recovered when the backup is restored. A full backup represents the database at the time the backup finished.

To backup full database and files (by default outputs a .zip file) use:

recops backup -o $PATH_TO_SAVE_ARCHIVE

To restore use the following:

unzip backup.zip -d /tmp/data
export STORAGE_URI=file:///tmp/data
export DATABASE_URI=sqlite:////tmp/data/recops.db
recops dataset-list

Command options:


OTHER FUNCTIONALITY

MATCH FACE(S)

Compare faces linked to an identity with faces without an identity for a given dataset. After importing your data, you might end up with unknown faces in your dataset and want to find out whether they match any identified faces. In such cases this function can help. It will not change anything in the database; it only creates a report that can be reviewed manually.

It loops through the faces without an identity and compares them with those that have one. By “comparing faces” we mean computing the distance between their corresponding face vectors. All matched faces with a distance equal to or below the specified threshold (use --threshold to set it) are then collected into a report.
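The matching loop can be sketched roughly as follows (pure Python with toy data; the names and the euclidean metric are illustrative assumptions, since in practice the dataset's own distance metric is used):

```python
def match_faces(unknown, known, threshold):
    """unknown: {face_id: vector}; known: {face_id: (identity_name, vector)}.
    Returns (unknown_id, known_id, identity, distance) rows sorted by distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    rows = []
    for uid, uvec in unknown.items():
        for kid, (identity, kvec) in known.items():
            d = dist(uvec, kvec)
            if d <= threshold:  # keep only matches at or below the threshold
                rows.append((uid, kid, identity, d))
    return sorted(rows, key=lambda r: r[3])

unknown = {"face-1": [0.0, 0.0]}
known = {"face-2": ("alice", [0.1, 0.0]), "face-3": ("bob", [5.0, 5.0])}
report = match_faces(unknown, known, threshold=0.5)
print(report)  # only the "alice" face is close enough to match
```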

The report shows a table with the matched faces, any corresponding identities, and their distances. After reviewing the report you can link the faces to the identities manually, or use the dataset-link-matched-faces function to do it for you in an automated fashion.

Use --output to specify the file path to save results to, and --output-format to select the output format, which can be either .csv or .html. The .csv file can be fed to the dataset-link-matched-faces function, while the .html file can be opened in any browser to check the faces manually; for the faces to be visualized properly, you need to run the webui function.

Basic use of this command (use dataset-match-faces --help to list all of the available command options):

recops dataset-match-faces -d $DATASET_ID -o $PATH_TO_SAVE_OUTPUT

Command options:


This is a helper function that links faces to identities from a given .csv file compatible with the export of the dataset-match-faces function. The .csv file should have at least the following two columns: <face_id>, <identity_id>.
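Reading such a .csv can be sketched with the standard library (the header names here are assumptions for illustration):

```python
import csv
import io

# a CSV in the assumed <face_id>,<identity_id> layout (header names hypothetical)
data = "face_id,identity_id\nf-001,id-abc\nf-002,id-abc\n"

links = [(row["face_id"], row["identity_id"])
         for row in csv.DictReader(io.StringIO(data))]
print(links)  # [('f-001', 'id-abc'), ('f-002', 'id-abc')]
```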

Basic use of this command (use dataset-link-matched-faces --help to list all of the available command options):

recops dataset-link-matched-faces $PATH_TO_CSV

CLUSTER FACES

Face clustering plays an essential role in exploiting massive amounts of unlabeled face data. It is a fundamental face analysis task with wide applications in real-world scenarios, such as preparing or cleaning datasets for face recognition.

Face recognition and face clustering are different but highly related concepts. When performing face recognition we apply supervised learning: we have both example images of the faces we want to recognize and the names that correspond to each face (i.e., the “class labels”).

In face clustering we instead perform unsupervised learning: we have only the faces themselves, with no names or labels. From there we need to identify and count the number of unique people in the dataset.

NOTE: This function can be time consuming depending on how much data is processed. For truly massive datasets, consider using the Chinese Whispers algorithm, as it is linear in time.
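For intuition, a toy version of the Chinese Whispers idea on a handful of fake embeddings (illustrative only, not recops' implementation):

```python
import random

def chinese_whispers(vectors, threshold, iterations=20, seed=0):
    """Toy Chinese Whispers: nodes are faces, edges connect pairs whose
    euclidean distance is <= threshold; each node repeatedly adopts the
    most frequent label among its neighbours."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    n = len(vectors)
    neighbours = {
        i: [j for j in range(n)
            if i != j and dist(vectors[i], vectors[j]) <= threshold]
        for i in range(n)
    }
    labels = list(range(n))  # every face starts in its own cluster
    rng = random.Random(seed)
    for _ in range(iterations):
        order = list(range(n))
        rng.shuffle(order)  # visit nodes in random order each pass
        for i in order:
            if not neighbours[i]:
                continue
            counts = {}
            for j in neighbours[i]:
                counts[labels[j]] = counts.get(labels[j], 0) + 1
            labels[i] = max(counts, key=counts.get)  # adopt the majority label
    return labels

# two tight groups of toy "embeddings" should end up in two clusters
vecs = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
labels = chinese_whispers(vecs, threshold=1.0)
print(labels[0] == labels[1], labels[2] == labels[3], labels[0] != labels[2])
```

Since edges are only built between faces within the threshold, labels can never leak between disconnected groups, which is what makes the final labels usable as cluster IDs.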

Use the following command to run face clustering for given dataset (use dataset-cluster-faces --help to list all of the available command options):

recops dataset-cluster-faces -d $DATASET_ID -o $PATH_TO_SAVE_OUTPUT

For debug logging and further analysis:

recops --log-level DEBUG dataset-cluster-faces -d $DATASET_ID -o $PATH_TO_SAVE_OUTPUT

Command options:


COMPUTE EXTENDED FIELDS

Additionally, recops comes with a facial attribute analysis module. You can use this function to analyze and extract rich metadata for each face in a dataset, including predictions of age, gender, facial expression (angry, fear, neutral, sad, disgust, happy, and surprise), and race (asian, white, middle eastern, indian, latino, and black).

Currently, the age model achieves a mean absolute error (MAE) of ±4.65; the gender model achieves 97.44% accuracy, 96.29% precision, and 95.05% recall. The CNN model for recognizing facial expressions achieves 57% accuracy on its test set.

The entire source code for apparent age prediction and gender prediction is available as notebooks on GitHub, as is the code for race and ethnicity prediction and facial expression recognition.

IMPORTANT NOTE: PLEASE KEEP IN MIND THAT THIS FUNCTION USES WEAK AND QUESTIONABLE MODELS THAT CATEGORIZE FACE(S) IN A COMPLETELY DISCRIMINATIVE WAY.

You can use the following command to execute the facial attribute analysis and test it out yourself (use dataset-compute-extended-fields --help to list all of the available command options):

recops dataset-compute-extended-fields -d $DATASET_ID

Command options:


RECOGNIZE

Run face recognition on a given image and get an output report. Once we have created a dataset with faces linked (or not) to identities, we can run this function against a provided image to find out whether the faces inside the image match any of the faces or identities in our dataset.

The function will run the following process:

First we detect the faces in the given image, using the detector model specified in the dataset. Then, for each face, we use the basemodel (as specified in the dataset) to extract its face embedding, or vector, similar to what is described in the “IMPORT DATA” section above. The face vector is used as a unique identifier for the face.

Once we have the face vector, we iterate through all faces stored in our dataset and compute the distance between the two vectors (for each detected/stored face pair). If the distance between the two face vectors is equal to or below the specified --threshold, we consider it a match. The distance is computed using the distance metric specified in the dataset.

Then we sort all matched faces (if any), giving priority to those with the smaller distance, and keep at most 10 matched faces (by default; use the --maximum flag to specify more or fewer) that are closest to the detected face.
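The sort-and-trim step amounts to something like this (toy data, and 3 instead of the default 10 for brevity):

```python
# hypothetical (identity, distance) matches for one detected face
matches = [("id-a", 0.42), ("id-b", 0.17), ("id-c", 0.35), ("id-d", 0.29)]

MAXIMUM = 3  # recops keeps 10 by default; --maximum overrides it
best = sorted(matches, key=lambda m: m[1])[:MAXIMUM]
print(best)  # [('id-b', 0.17), ('id-d', 0.29), ('id-c', 0.35)]
```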

Last but not least, we create a report of our matches. The default report outputs an .html file. The file is self-contained and does not use any external links, so it can be reviewed offline in any browser. It achieves this by including the raw images encoded in base64 format and by using modern SVG (Scalable Vector Graphics) functionality; it does not use any JavaScript. If the default report is not what you want, you can use your own template to customize how it looks by specifying --template. Developing a template requires some development skills (nothing too special); have a look at the existing template and follow it as a paradigm.
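The self-contained-report trick can be illustrated with the standard library's base64 module (toy bytes, not a real image):

```python
import base64

image_bytes = b"\x89PNG-fake-bytes"  # placeholder bytes, not a real PNG
encoded = base64.b64encode(image_bytes).decode("ascii")

# an <img> tag whose source is the image data itself, so no external file is needed
html = f'<img src="data:image/png;base64,{encoded}"/>'
print(html.startswith('<img src="data:image/png;base64,'))  # True
```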

Use the following command to run face recognition on a given image (use recognize --help to list all of the available command options):

recops recognize -d $DATASET_ID $PATH_TO_IMAGE -o $PATH_TO_SAVE_OUTPUT

For debug logging and further analysis:

recops --log-level DEBUG recognize --force -d $DATASET_ID $PATH_TO_IMAGE -o $PATH_TO_SAVE_OUTPUT

Command options:


VERIFY

This function verifies whether two faces belong to the same person or to different persons. You simply pass an image pair as input, and that's all! The procedure will not write anything to the database; it just detects the faces in the given input images, compares the two face images, and provides a similarity score based on the distance between the two face vectors.
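Conceptually, the verification decision reduces to a distance-versus-threshold check, sketched here with toy vectors (illustrative, not recops' code):

```python
def verify(vec1, vec2, threshold):
    """Return (verified, distance) for a pair of face vectors."""
    distance = sum((a - b) ** 2 for a, b in zip(vec1, vec2)) ** 0.5
    return distance <= threshold, distance

# two nearly identical toy vectors are verified as the same person
same, score = verify([0.10, 0.20], [0.10, 0.25], threshold=0.4)
print(same)  # True
```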

The dataset is only used to obtain the --detector and --basemodel; there is no interaction with its linked identities or faces whatsoever.

You can use the following command to verify face pairs as same person or different persons (use verify --help to list all of the available command options):

recops verify $PATH_TO_IMAGE $PATH_TO_IMAGE -d $DATASET_ID

Command options:


WEBUI

recops also serves a user-friendly web interface that allows you to easily manage basic operations. It is a standard web application served over HTTP, offering direct access to structured data, easy browsing, and several other features built to provide a rich user experience.

To access the web interface, run the following command, then launch a web browser and navigate to http://127.0.0.1:5000. Stop the process with Ctrl-C at any time.

recops webui

Below are brief descriptions of some of the basic recops operations served by the web interface. You are expected to call these operations as HTTP POST methods. Service endpoints will be:

recops webui has been built with simplicity and composability in mind. Please keep in mind that this is a BETA version and the webui is in its testing phase. There are some known issues, a few missing features, and there are guaranteed to be bugs.

Our goal in this first release was to lay the foundation for a structured, extensible and integrated architecture. We’ll continue to add features for the foreseeable future and improve the performance of the user interface.