Getting started with our Facial Recognition API (frAPI)

This post has been updated to include the changes from frAPI version 5.0. You can learn more about
the changes in the Medium post.

Hello everyone! Our facial recognition API, or frAPI for short, was made to be simple and fast, both in performance and in setup time. Note that you can contact us if you want to test it free of charge (even the on-premise version).

First, some references

To provide a complete specification, we made a Swagger definition for the API, which can be used both as a reference and as a tool for your first tests (once you receive your API key):
Swagger specification of API

There is also an IPython notebook that you can download and test right off the bat. A short step-by-step guide is available here:
Testing the API: ipython notebook

If you want something even simpler than our API, we provide an on-premise version with a full user interface, which is described in the following post:
frAPI 5.0 highlights (on-premise)

API overview

The two major routes of the API are /train and /recognize:

  • /train/person: This route is used to add a labeled person to your database. If more than one person is present in the image, the largest detected face is used. It is as simple as sending an image and a label to the system.
  • /recognize/people: This detects all faces present in the image and associates labels with them. The response contains the predicted label and a confidence value ranging from 0 to 100. The confidence value is ideal for assessing the reliability of a recognition and for filtering out people who are not in the database (a threshold of < 20 is usually enough for the latter).
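As a rough sketch of the thresholding idea above (the response structure is taken from the /recognize/people example later in this post; the threshold of 20 follows the suggestion above):

```python
# Sketch: split /recognize/people results into recognized people and
# likely strangers, using the confidence threshold suggested above.
UNKNOWN_THRESHOLD = 20.0

def split_known_unknown(people, threshold=UNKNOWN_THRESHOLD):
    """Separate detected faces into recognized labels and probable unknowns."""
    known, unknown = [], []
    for person in people:
        recognition = person.get('recognition', {})
        if recognition.get('confidence', 0.0) >= threshold:
            known.append(recognition['predictedLabel'])
        else:
            unknown.append(person)  # face detected, but probably not in the database
    return known, unknown

# Example with two detections, one above and one below the threshold:
people = [
    {'recognition': {'confidence': 70.6, 'predictedLabel': 'Arnold_Schwarzenegger'}},
    {'recognition': {'confidence': 12.3, 'predictedLabel': 'Pele'}},
]
known, unknown = split_known_unknown(people)
```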

Your API calls will always be identified by the API key, which must be present in the HTTP request headers. There are two ways to pass an image to the API, depending on the content type of the request:

  • multipart/form-data: Given this Content-Type, the API will look for a form parameter called image. The example below shows how to do this; it is quite simple. For obvious reasons, this only works with POST requests.
  • application/json: In this case, you should provide a URL of the image inside a JSON body. The parameter is named imageUrl, and the URL must be available for direct access.
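A minimal sketch of the application/json variant, assembled as plain Python data (the host is taken from the training example below; the example image URL is only a placeholder):

```python
# Sketch: build the pieces of an application/json recognition call.
# Sending it with the requests package is then a single line (shown commented).
HOST = 'http://demo.meerkat.com.br/frapi'

def build_json_recognize_request(api_key, image_url):
    """Assemble URL, headers, and JSON body for /recognize/people."""
    url = HOST + '/recognize/people'
    headers = {'Content-Type': 'application/json', 'api_key': api_key}
    body = {'imageUrl': image_url}  # must be directly accessible by the API
    return url, headers, body

url, headers, body = build_json_recognize_request(
    'YOUR_API_KEY', 'http://example.com/face.jpg')
# res = requests.post(url, headers=headers, json=body)
```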

These points, together with the Swagger reference, should be enough for your first steps in facial recognition, so let's do it!

Training

Imagine that you have a local path with a couple of subdirectories containing the training images for your database. For instance, a directory called ‘train’ with subdirectories ‘Arnold_Schwarzenegger’, ‘Julianne_Moore’, ‘Luke_Skywalker’, ‘Pele’, etc. The following Python snippet can be used to train on this dataset:


import os

import requests
from requests_toolbelt import MultipartEncoder

train_path = './train'
HOST = 'http://demo.meerkat.com.br/frapi'
api_key = 'YOUR_API_KEY'

# Each subdirectory of ./train is a label; every image inside it is a sample.
for label_name in os.listdir(train_path):
    label_dir = os.path.join(train_path, label_name)
    for image_name in os.listdir(label_dir):
        filename = os.path.join(label_dir, image_name)
        m = MultipartEncoder(fields={'image': (image_name, open(filename, 'rb')),
                                     'label': label_name})
        res = requests.post(HOST + '/train/person', data=m,
                            headers={'Content-Type': m.content_type,
                                     'api_key': api_key})

Isn’t that simple? This code uses the Python packages requests and requests_toolbelt, but porting it to your favorite language/library is straightforward. You will know it worked if the returned status code is 200 and you get back a JSON similar to this:

{
    "personSamples": 3,
    "selectedFace": {
        "bottom_right": {
            "x": 196,
            "y": 162
        },
        "top_left": {
            "x": 156,
            "y": 123
        }
    },
    "trainId": "06752ebc7286633fcc1f31dc29e04037"
}

You might want to keep the trainId if you are interested in removing this training sample later.
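As a sketch of that bookkeeping (pure Python; the removal route itself is documented in the Swagger spec and not reproduced here), you could collect the returned trainIds per label while training:

```python
# Sketch: remember each sample's trainId while training, so individual
# samples can be removed later via the route in the Swagger spec.
train_ids = {}  # label -> list of trainIds returned by /train/person

def record_train_response(label, response_json):
    """Store the trainId from a successful /train/person response."""
    train_ids.setdefault(label, []).append(response_json['trainId'])

# Example using the response shown above:
record_train_response('Arnold_Schwarzenegger',
                      {'personSamples': 3,
                       'trainId': '06752ebc7286633fcc1f31dc29e04037'})
```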

Recognition

Recognition is as simple as it gets: you just need to make a request (GET or POST) to the /recognize/people endpoint:


filename = 'some_image_to_recognize.png'
m = MultipartEncoder(fields={'image': (filename, open(filename, 'rb'))})

res = requests.post(HOST + '/recognize/people', data=m,
                    headers={'Content-Type': m.content_type, 'api_key': api_key})

You should get a JSON like this:

{
    "people": [
        {
            "bottom_right": {
                "x": 277,
                "y": 315
            },
            "top_left": {
                "x": 107,
                "y": 145
            },
            "recognition": {
                "confidence": 70.6480930725795,
                "predictedLabel": "Arnold_Schwarzenegger"
            }
        }
    ]
}

Notice that, together with the face rectangle and the recognized label, we also provide a confidence value from the face recognition algorithm.
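As a sketch, the response above can be unpacked with plain Python, using the field names from the example:

```python
# Sketch: turn a /recognize/people response into simple
# (label, confidence, bounding box) tuples.
def summarize_recognitions(response_json):
    """Extract label, confidence, and (x1, y1, x2, y2) box per detected face."""
    results = []
    for person in response_json.get('people', []):
        rec = person['recognition']
        box = (person['top_left']['x'], person['top_left']['y'],
               person['bottom_right']['x'], person['bottom_right']['y'])
        results.append((rec['predictedLabel'], rec['confidence'], box))
    return results

# Example using the response shown above:
response = {
    'people': [{
        'bottom_right': {'x': 277, 'y': 315},
        'top_left': {'x': 107, 'y': 145},
        'recognition': {'confidence': 70.6480930725795,
                        'predictedLabel': 'Arnold_Schwarzenegger'},
    }]
}
summary = summarize_recognitions(response)
```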

That’s it! This (together with the referenced documents) should be enough for you to start using the API. Enjoy!

9 comments

  1. Liesel Strauss · October 27

    Thanks

  2. Md Amran Hossain · October 27

    please can you send api for java. after Testing success we will buy this service. please replay as soon as possible.
    thanks,
    Md Amran Hossain

    • meerkat · October 27

      Hello Amran, we just finished our Java SDK for face verification and face liveness. Please send us
      an e-mail if you want to test those.

  3. kanagaraj · October 27

    hello hey
    Thanks for allowing me to use the software

  4. Giovanny · October 27

    In another language distinct of phyton? like .NET

    • meerkat · October 27

      It’s a restful HTTP API, so you can just use your favorite lib to make the appropriate calls.

      Also, we provide SDKs for Java, C++, Python, Android and iOS. We also make ports to other languages. Please
      contact us if you’re interested.

  5. Mario Ghersi · October 27

    I would like to test a group of picture (1700) to find a group of people (160). Can you give as some advice how to process this quantity of information and how to do it?
    My preliminary test of the platform with one person and 5 pictures was OK.

    • meerkat · 30 Days Ago

      I recommend you try our on-premise version, since it’s much more suitable for larger datasets.
      Take a look at our Medium post (medium.com/@meerkat.cv/pushing-the-boundaries-of-face-recognition-e34dfdf3b9ad) and contact me at gustavo@meerkat.com.br