Face Detection with face-api.js

A facial recognition system is a technology capable of matching a human face from a digital image or a video frame against a database of faces. Researchers are currently developing multiple methods by which facial recognition systems work. The most advanced face recognition method, which is also employed to authenticate users through ID verification services, works by pinpointing and measuring facial features from a given image.

While initially a form of computer application, facial recognition systems have seen wider use in recent times on smartphones and in other forms of technology, such as robotics. Because computerized facial recognition involves the measurement of a human's physiological characteristics, facial recognition systems are categorized as biometrics. Although the accuracy of facial recognition systems as a biometric technology is lower than that of iris recognition and fingerprint recognition, it is widely adopted. Facial recognition systems have been deployed in advanced human-computer interaction, video surveillance, and automatic indexing of images.

(Adapted from Wikipedia.)

Many libraries implement these algorithms; in this article I will use the library face-api.js.

ABOUT FACE-API.JS

face-api.js is a JavaScript API for detecting and recognizing faces in the browser as well as in Node.js, built on top of TensorFlow.js.

Some main features:

Face Recognition

Face Landmark Detection

Face Expression Recognition

Age Estimation & Gender Recognition

In this article I will present the face recognition feature.

Recognizing a face consists of two steps: face detection and face recognition.

In short, the goal is to identify a person from a picture of their face. We do this by collecting one or more reference pictures, each tagged with the name of the person we want to recognize. We then compare the input image against this reference data and find the most similar reference image. If the two images are similar enough, we output the person's name; otherwise, the result is considered unknown.
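
The comparison above can be sketched with plain JavaScript. face-api.js encodes each face as a numeric descriptor (a 128-value vector), and two faces are considered the same person when the Euclidean distance between their descriptors falls below a threshold. The 4-value vectors below are toy data for illustration only:

```javascript
// Euclidean distance between two descriptor vectors of equal length.
function euclideanDistance(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    const diff = a[i] - b[i];
    sum += diff * diff;
  }
  return Math.sqrt(sum);
}

// Toy 4-value descriptors (real face-api.js descriptors have 128 values):
const reference = [0.1, 0.2, 0.3, 0.4];
const probe = [0.12, 0.19, 0.31, 0.38];
const threshold = 0.6; // a commonly used distance threshold

// Distances below the threshold count as "same person".
const match = euclideanDistance(reference, probe) < threshold;
console.log(match); // true for these toy vectors
```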

Let's start with the example below.

First, create a directory structure for the project:

mkdir face
touch index.html
touch script.js
mkdir data      // Reference images for each person to recognize
mkdir images    // Images used for testing
mkdir models    // Pre-trained model weights
face-api.min.js // Downloaded from the face-api.js GitHub repository

Next, download the pre-trained model weights that face-api.js provides into the models directory.

The model weights are available at: https://github.com/justadudewhohacks/face-api.js/tree/master/weights

The HTML file simply contains the image, plus a canvas element on which to draw the box around the detected face.

<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <script src="./face-api.min.js"></script>
    <script src="./script.js"></script>
    <title>Face Recognition</title>
    <style>
        canvas {
            position: absolute;
            top: 0;
            left: 0;
        }
    </style>
</head>

<body>
    <img id="myImg" src="images/huu.jpg" />
    <canvas id="myCanvas"></canvas>
</body>

</html>

Load the models into the project. Here we use three main models:

  • ssdMobilenetv1: the pre-trained model used to detect faces.
  • faceLandmark68Net: the pre-trained model used to locate landmark points on the face.
  • faceRecognitionNet: the pre-trained model used to compute face descriptors for recognition.

In script.js, load these three models first. Note that loadFromUri fetches the weights over HTTP, so the project must be served by a local web server (opening index.html directly via file:// will not work).

(async () => {
  // Load model
  await faceapi.nets.ssdMobilenetv1.loadFromUri("/models");
  await faceapi.nets.faceRecognitionNet.loadFromUri("/models");
  await faceapi.nets.faceLandmark68Net.loadFromUri("/models");
})();

Use the detectSingleFace() function to detect faces in images.

Once the models have finished loading, continue with the face detection code:

(async () => {
  // Load model
  await faceapi.nets.ssdMobilenetv1.loadFromUri("/models");
  await faceapi.nets.faceRecognitionNet.loadFromUri("/models");
  await faceapi.nets.faceLandmark68Net.loadFromUri("/models");

  // Detect Face
  const input = document.getElementById("myImg");
  const result = await faceapi
    .detectSingleFace(input, new faceapi.SsdMobilenetv1Options())
    .withFaceLandmarks()
    .withFaceDescriptor();
  const displaySize = { width: input.width, height: input.height };
  // resize the overlay canvas to the input dimensions
  const canvas = document.getElementById("myCanvas");
  faceapi.matchDimensions(canvas, displaySize);
  if (result) {
    const resizedDetections = faceapi.resizeResults(result, displaySize);
    console.log(resizedDetections);
  }
})();

The detectSingleFace() function returns the detection result for a single face, including the coordinates of its bounding box (plus the landmarks and descriptor we requested).
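
Conceptually, the resizeResults() call above just rescales the detected coordinates from the dimensions of the processed image to the dimensions of the element we draw on. A minimal sketch of that idea (the resizeBox helper below is illustrative, not the library's API):

```javascript
// Scale a bounding box from one coordinate space ("from") to another ("to").
function resizeBox(box, from, to) {
  const xRatio = to.width / from.width;
  const yRatio = to.height / from.height;
  return {
    x: box.x * xRatio,
    y: box.y * yRatio,
    width: box.width * xRatio,
    height: box.height * yRatio,
  };
}

// A box detected on a 1000x500 image, rescaled to a 500x250 display:
const box = { x: 100, y: 50, width: 200, height: 200 };
const resized = resizeBox(
  box,
  { width: 1000, height: 500 },
  { width: 500, height: 250 }
);
console.log(resized); // { x: 50, y: 25, width: 100, height: 100 }
```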

Use the FaceMatcher() and findBestMatch() functions to recognize faces.

After detecting a face, we will write a detectFace() function that builds labeled face descriptors from the reference images in the data directory.

async function detectFace() {
  const label = "Huu";
  const numberImage = 5;
  const descriptions = [];
  for (let i = 1; i <= numberImage; i++) {
    // Load the i-th reference image for this person
    const img = await faceapi.fetchImage(`/data/Huu/${i}.jpg`);
    const detection = await faceapi
      .detectSingleFace(img)
      .withFaceLandmarks()
      .withFaceDescriptor();
    // Skip reference images in which no face was detected
    if (detection) {
      descriptions.push(detection.descriptor);
    }
  }
  return new faceapi.LabeledFaceDescriptors(label, descriptions);
}
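
With the labeled descriptors in hand, finding the best match conceptually means comparing the probe descriptor against every labeled reference descriptor, keeping the smallest distance, and falling back to "unknown" when even the closest reference exceeds the distance threshold. face-api.js's FaceMatcher encapsulates this logic; the helper and toy 2-value descriptors below are a sketch of the idea, not the library's implementation:

```javascript
// Return the label of the closest reference descriptor, or "unknown"
// if the smallest distance still exceeds the threshold.
function findBestMatch(probe, labeledDescriptors, threshold) {
  let best = { label: "unknown", distance: Infinity };
  for (const { label, descriptors } of labeledDescriptors) {
    for (const d of descriptors) {
      const dist = Math.hypot(...d.map((v, i) => v - probe[i]));
      if (dist < best.distance) best = { label, distance: dist };
    }
  }
  return best.distance <= threshold
    ? best
    : { label: "unknown", distance: best.distance };
}

// Toy reference data: one person with two reference descriptors.
const refs = [{ label: "Huu", descriptors: [[0.1, 0.2], [0.11, 0.21]] }];

console.log(findBestMatch([0.1, 0.2], refs, 0.6).label); // "Huu"
console.log(findBestMatch([0.9, 0.9], refs, 0.6).label); // "unknown"
```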

Then we use detectFace() together with FaceMatcher to recognize the face and draw a labeled box around it.

(async () => {
  // Load model
  await faceapi.nets.ssdMobilenetv1.loadFromUri("/models");
  await faceapi.nets.faceRecognitionNet.loadFromUri("/models");
  await faceapi.nets.faceLandmark68Net.loadFromUri("/models");

  // Detect Face
  const input = document.getElementById("myImg");
  const result = await faceapi
    .detectSingleFace(input, new faceapi.SsdMobilenetv1Options())
    .withFaceLandmarks()
    .withFaceDescriptor();
  const displaySize = { width: input.width, height: input.height };
  // resize the overlay canvas to the input dimensions
  const canvas = document.getElementById("myCanvas");
  faceapi.matchDimensions(canvas, displaySize);
  if (result) {
    const resizedDetections = faceapi.resizeResults(result, displaySize);
    console.log(resizedDetections);

    // Recognize Face
    const labeledFaceDescriptors = await detectFace();
    const faceMatcher = new faceapi.FaceMatcher(labeledFaceDescriptors, 0.7);
    const bestMatch = faceMatcher.findBestMatch(result.descriptor);
    const box = resizedDetections.detection.box;
    const drawBox = new faceapi.draw.DrawBox(box, { label: bestMatch.label });
    drawBox.draw(canvas);
  }
})();

With that, we have completed a simple face recognition system in JavaScript.

The full source code is available on GitLab.

CONCLUSION

This article has introduced face-api.js and its basic usage.

This work can be extended with features not covered above, such as age estimation, gender recognition, and facial expression recognition.

Note that this recognition system relies entirely on pre-trained models, without any additional training on our own data, so its accuracy is limited.

REFERENCES

https://en.wikipedia.org/wiki/Facial_recognition_system

https://github.com/justadudewhohacks/face-api.js#getting-started

https://itnext.io/face-api-js-javascript-api-for-face-recognition-in-the-browser-with-tensorflow-js-bcc2a6c4cf07

Image source: https://www.pexels.com/license/