Serverless Face Recognition Application

July 22, 2022

Have you ever wondered how hard it is to build a face recognition application nowadays? I asked myself the same question, and the answer might surprise you. Spoiler alert: it's easy!

Let’s go straight to the point and see what we are going to build:

  • Progressive Web Application (PWA)
  • Serverless backend using AWS Lambda, Amazon S3, and Amazon Rekognition

What is a PWA

A Progressive Web Application is a web application built with standard HTML, CSS, and JavaScript that can be installed on your desktop or mobile device using your browser as the delivery method. It is just a regular web page that does not require any special bundling or packaging, nor does it need to be distributed through Google Play, the App Store, or similar channels. To put it simply, you open your web browser, visit the web page, and it gives you the option to install it on your device. Depending on your platform, the prompt can look different, but it does the same thing.
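Under the hood, installability boils down to serving the app over HTTPS with a web app manifest and a service worker. A minimal manifest looks something like this (a sketch with illustrative names; the `ng add @angular/pwa` command used later in this post generates one for you):

  
{
  "name": "agnitio",
  "short_name": "agnitio",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#ffffff",
  "icons": [
    { "src": "assets/icons/icon-192x192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "assets/icons/icon-512x512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
  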

PWAs are supported on all platforms, with an important note about Apple. Because of their monopoly policy that tries to kill the web, you can only use Safari on macOS and iOS to add a PWA to the home screen. Hopefully, this will end the way Microsoft was tamed at the beginning of the 21st century, but that is a different story.

You don’t need any kind of framework to build a PWA. You need to follow the rules and specifications to make an application installable. However, to speed things up, you can use anything you are comfortable with. I'm going to use Ionic Framework with Angular. To start things rolling, visit this excellent blog post that I also used to jump-start my project. 

What is Amazon Rekognition

Lambda and S3 are services known to every developer, so I will not spend time describing them. The more exciting part of our application is the Rekognition service, which is part of the AWS Machine Learning offering. It is a visual analysis service that can, with its pre-trained models, analyze your video or image input.

Rekognition can recognize text in an image, identify celebrities in photos, compare faces, and analyze faces, giving you information about age, emotion, and more.

Our application will use the DetectFaces API, which can detect up to 100 faces in an image. For each face, we will receive attributes in the response (a trimmed sample appears after the list), such as:

  • Position of the face in the photo
  • Age range
  • Whether the person wears eyeglasses
  • Whether they are wearing sunglasses
  • Gender
  • Whether they are smiling
  • Even details such as whether their mouth is open, or whether they have a beard or mustache
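For illustration, here is roughly what one entry of that response looks like. The field names are the real DetectFaces ones; the values are made up:

  
{
  "FaceDetails": [
    {
      "BoundingBox": { "Width": 0.31, "Height": 0.42, "Left": 0.35, "Top": 0.21 },
      "AgeRange": { "Low": 29, "High": 37 },
      "Eyeglasses": { "Value": false, "Confidence": 99.2 },
      "Sunglasses": { "Value": false, "Confidence": 99.8 },
      "Gender": { "Value": "Female", "Confidence": 99.5 },
      "Smile": { "Value": true, "Confidence": 97.1 },
      "MouthOpen": { "Value": true, "Confidence": 88.4 },
      "Beard": { "Value": false, "Confidence": 99.0 },
      "Mustache": { "Value": false, "Confidence": 99.3 },
      "Emotions": [ { "Type": "HAPPY", "Confidence": 95.6 } ]
    }
  ]
}
  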

The App

The workflow is as follows:

  • Take the photo from the gallery or your device camera
  • Get the pre-signed URL with the random file name to upload the file to S3
  • Upload the file
  • Call the Amazon Rekognition API with the generated file name to analyze the image
  • Present the response

To create the PWA, we will need to do the following:

  • Install tools
  • Generate project
  • Add Camera plugin 
  • Call the backend APIs
  • Write the code to display the results
  • Deploy the application


We will call our application “agnitio”, a Latin word for recognition.

Install Ionic tools: 

  
> npm i -g @ionic/cli
> ionic start agnitio blank --type=angular --capacitor
  

Install Angular and add PWA tools: 

  
> npm i -g @angular/cli
> ng add @angular/pwa
  

Add Camera plugin:

  
> npm i @capacitor/camera
> npm i @ionic/pwa-elements
  
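`@ionic/pwa-elements` provides the web UI (such as the in-browser camera modal) that the Capacitor Camera plugin falls back to when running as a PWA. It has to be registered once; the usual place is `src/main.ts`:

  
import { defineCustomElements } from '@ionic/pwa-elements/loader';

// Register the PWA custom elements once the window object is available
defineCustomElements(window);
  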

I will not go into too many details about Ionic and Angular, but if you are interested in learning more, check out the Ionic blog post mentioned earlier, then browse their documentation. You can find the relevant part of the code in the `src/app/home/home.page.ts` and `src/app/home/home.page.html` files.

Now, here is where the magic happens:

  
async takePicture() {
    // Capture a photo with the device camera (or the browser camera modal)
    const photo = await Camera.getPhoto({
      quality: 90,
      allowEditing: true,
      resultType: CameraResultType.Base64,
      source: CameraSource.Camera
    });

    // Show a preview and convert the base64 payload into a Blob for the upload
    this.image = 'data:image/png;base64,' + photo.base64String;
    const blob = this.base64ToArrayBuffer(photo.base64String);

    // Ask the backend for a pre-signed S3 URL matching the Blob's content type
    const { signedUrl, fileName } = await this.getSignedUrl(blob.type);
    try {
      await this.showLoading('Recognizing...');
      // Upload the image to S3, then have the backend run Rekognition on it
      await this.uploadFile(signedUrl, blob);
      const result: any = await this.recognize(fileName);
      this.FaceDetails = result.FaceDetails;
      if (this.FaceDetails.length === 0) {
        await this.presentAlert('No faces found');
      }
    } catch (error) {
      console.log(error);
    } finally {
      await this.hideLoading();
    }
  }
  
  
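The `base64ToArrayBuffer` helper used above is not shown in the snippet; judging by the `blob.type` access, it actually returns a `Blob`. A minimal sketch, assuming the same PNG MIME type that is hardcoded in the data URL:

  
private base64ToArrayBuffer(base64: string): Blob {
  // Decode the base64 payload into raw bytes
  const binary = atob(base64);
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  // Wrap the bytes in a Blob so the upload has a content type (assumed image/png)
  return new Blob([bytes], { type: 'image/png' });
}
  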

        
          

The template in `src/app/home/home.page.html` then renders one card per detected face. The markup here is reconstructed in outline; the exact version is in the repository:

<ion-card *ngFor="let face of FaceDetails; let i = index">
  <ion-card-header>
    <ion-card-title>Face {{ i + 1 }}</ion-card-title>
  </ion-card-header>
  <ion-card-content>
    <p>Gender: {{ face?.Gender?.Value }}</p>
    <p>Age: {{ face?.AgeRange?.Low }} - {{ face?.AgeRange?.High }}</p>
    <p>Smile: {{ face?.Smile?.Value }}</p>
    <p>Beard: {{ face?.Beard?.Value }}</p>
    <p>Mustache: {{ face?.Mustache?.Value }}</p>
    <p>Mouth open: {{ face?.MouthOpen?.Value }}</p>
    <p>Eyeglasses: {{ face?.Eyeglasses?.Value }}</p>
    <p>Wearing sunglasses: {{ face?.Sunglasses?.Value }}</p>
    <p>Emotion: {{ face?.Emotions?.length > 0 ? face?.Emotions[0].Type : 'Neutral' }}</p>
  </ion-card-content>
</ion-card>

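`getSignedUrl`, `uploadFile`, and `recognize` are thin HTTP wrappers around the backend described next. A minimal sketch using `fetch`; the API base URL and the `signed-url`/`recognize` paths are placeholders, not necessarily the actual endpoints:

  
// Hypothetical API base URL - replace with your deployed API Gateway stage URL
private apiUrl = 'https://example.execute-api.us-east-1.amazonaws.com/dev';

// Ask the backend for a pre-signed S3 upload URL for the given content type
private async getSignedUrl(contentType: string): Promise<{ signedUrl: string; fileName: string }> {
  const res = await fetch(`${this.apiUrl}/signed-url?ContentType=${encodeURIComponent(contentType)}`);
  return res.json();
}

// PUT the image straight to S3 using the pre-signed URL
private async uploadFile(signedUrl: string, blob: Blob): Promise<void> {
  await fetch(signedUrl, {
    method: 'PUT',
    headers: { 'Content-Type': blob.type },
    body: blob
  });
}

// Ask the backend to run Rekognition DetectFaces on the uploaded file
private async recognize(fileName: string): Promise<any> {
  const res = await fetch(`${this.apiUrl}/recognize`, {
    method: 'POST',
    body: JSON.stringify({ fileName })
  });
  return res.json();
}
  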
What is left is to set up the backend project. We will use the Serverless Framework.

We need two Lambda functions: the first generates the pre-signed URL, and the second calls the Rekognition API. You can find the full source at the end of this article; check the `backend` folder in the root directory of the project. Please note that we are using the AWS SDK for JavaScript v3.

  
> npm i -g serverless
> sls create -t aws-nodejs -p backend
> npm i --save-dev serverless-iam-roles-per-function
  

We will update the `serverless.yml` file to use the installed plugin, then add the definitions for our two endpoints (sketched below the environment snippet). I am using Parameter Store to define the bucket where the images for analysis will be stored.

  
environment:
  UPLOAD_BUCKET_NAME: ${ssm:/agnitio/upload-bucket-name}
  
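The endpoint definitions could look roughly like this. The function names, handler paths, and routes are assumptions; the exact file is in the repository:

  
plugins:
  - serverless-iam-roles-per-function

functions:
  signedUrl:
    handler: signedUrl.handler        # handler file names are assumptions
    events:
      - http:
          path: signed-url
          method: get
          cors: true
    iamRoleStatements:                # scoped per function by the plugin
      - Effect: Allow
        Action: s3:PutObject
        Resource: arn:aws:s3:::${ssm:/agnitio/upload-bucket-name}/*
  recognize:
    handler: recognize.handler
    events:
      - http:
          path: recognize
          method: post
          cors: true
    iamRoleStatements:
      - Effect: Allow
        Action: rekognition:DetectFaces
        Resource: "*"
      - Effect: Allow
        Action: s3:GetObject
        Resource: arn:aws:s3:::${ssm:/agnitio/upload-bucket-name}/*
  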

Generate pre-signed URL Lambda:

  
const { errorResponse, buildResponse } = require("./utils");
const { randomUUID } = require("crypto");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");

const handler = async (event) => {
  try {
    const { ContentType } = event.queryStringParameters;
    // Random file name so concurrent uploads never collide
    const fileName = randomUUID();
    const client = new S3Client();
    const command = new PutObjectCommand({
      Bucket: process.env.UPLOAD_BUCKET_NAME,
      Key: fileName,
      ContentType: decodeURIComponent(ContentType),
    });
    // The URL can be used for a PUT upload for up to one hour
    const signedUrl = await getSignedUrl(client, command, { expiresIn: 3600 });
    return buildResponse(200, { signedUrl, fileName });
  } catch (error) {
    return errorResponse(error);
  }
};

module.exports = {
  handler
};
  
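Once deployed, you can smoke-test this endpoint from the command line (the endpoint URL is a placeholder):

  
> curl "https://example.execute-api.us-east-1.amazonaws.com/dev/signed-url?ContentType=image%2Fpng"
> curl -X PUT -H "Content-Type: image/png" --data-binary @face.png "<signedUrl from the previous response>"
  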

Call the Rekognition API Lambda:

  
const { errorResponse, buildResponse } = require("./utils");
const { RekognitionClient, DetectFacesCommand } = require("@aws-sdk/client-rekognition");

const handler = async (event) => {
  try {
    const { fileName } = JSON.parse(event.body);
    if (!fileName) throw {
      statusCode: 400,
      message: "fileName is required"
    };

    // Point Rekognition at the object the client just uploaded to S3
    const client = new RekognitionClient();
    const command = new DetectFacesCommand({
      Image: {
        S3Object: {
          Bucket: process.env.UPLOAD_BUCKET_NAME,
          Name: fileName
        }
      },
      // "ALL" returns the full attribute set (age, emotions, glasses, etc.)
      Attributes: ["ALL"]
    });

    const response = await client.send(command);
    return buildResponse(200, response);
  } catch (error) {
    return errorResponse(error);
  }
};

module.exports = {
  handler
};
  
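Both handlers import `buildResponse` and `errorResponse` from `utils.js`, which is not shown above. A minimal sketch of what it presumably looks like, assuming API Gateway proxy responses with a CORS header for the PWA's origin:

  
// utils.js - assumed shape of the shared response helpers
const buildResponse = (statusCode, body) => ({
  statusCode,
  headers: {
    // The PWA is served from a different origin, so allow cross-origin calls
    "Access-Control-Allow-Origin": "*",
  },
  body: JSON.stringify(body),
});

const errorResponse = (error) =>
  buildResponse(error.statusCode || 500, { message: error.message || "Internal server error" });

module.exports = { buildResponse, errorResponse };
  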

Deploy the App

As I mentioned at the beginning of this article, a PWA is just a web application. We don’t need Google Play or the App Store to distribute it, but we do need to deploy it somewhere and make it available to our users. The fastest option this time was Netlify: I connected the repository and granted the required permissions. The only important thing to remember is that you must provide a build command so Netlify knows how to build your app. For my version of Angular, that command is

  
ng build --configuration=production
  

The distribution folder where the build artifacts are stored is `www`.
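If you prefer to keep this configuration in the repository instead of the Netlify UI (which is how I set mine up), a `netlify.toml` like the following captures both settings:

  
[build]
  command = "ng build --configuration=production"
  publish = "www"
  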

Of course, you don’t need to use Netlify; you can deploy your app to any web server. It is just a normal web application. For example, you could fork the code and serve it through a CloudFront distribution instead of Netlify, but that is up to you.

Conclusion

Finally, believe it or not, that is the full application, built in less than 2 hours! With the help of the Ionic Framework and AWS services, we have just built a serverless face recognition application. It works in the browser, but for the best experience, open it in the browser on your phone, then add it to your home screen. Start there and give it a try. Here is the link to the app in production:

https://famous-fairy-e0b928.netlify.app

Here is the link to the full source code: 

https://github.com/bind-almir/agnitio

Here is an example of the results for an analyzed image:

I encourage you to experiment with other services as well. There are endless possibilities and use cases where this can be applied. You have the tools, don’t hesitate to use them!
