Face detection and manipulation in Xamarin using the Azure Face service & SkiaSharp

Have you ever wondered how apps like Snapchat work? In this article you will learn how to create a Xamarin app that can detect faces and manipulate them using the Azure Face API.

The first thing you will need is an Azure subscription so you can create a Face resource. The pricing for a Face resource is quite reasonable: it is free for up to 20 transactions a minute, and the pricing structure changes once demand is greater.

In the Azure portal click “Create a Resource”, then search for “Face”. Create the resource with the defaults set. Once the resource is created you will need two important pieces of information: the Face resource endpoint and a key. The endpoint is available from the Face resource's overview screen, so go ahead and copy that down for later. Directly under the endpoint information click “Manage keys”, copy “Key 1” and jot that down as well. With these two pieces of information you are ready to get started.

In Visual Studio create a new Xamarin.Forms project with Android & iOS capabilities. With the mobile app created we can start adding NuGet packages. Add Microsoft.Azure.CognitiveServices.Vision.Face 2.4.0-preview to your Xamarin.Forms project. You will need to ensure that “Include prereleases” is checked on the manage NuGet screen in order to find this package.

We will use the Xam.Plugin.Media NuGet to take a selfie with the device's camera, so go ahead and add this package to all of your projects. We will paint the selfie onto a SkiaSharp canvas, so you will also need to add the SkiaSharp and SkiaSharp.Views.Forms packages to your Xamarin.Forms project. The final package we need is Acr.UserDialogs, which presents an activity indicator while we await a response from the Face API; add this to all of your projects as well.

In the MainActivity of the Android project you will need to initialize Xam.Plugin.Media as well as Acr.UserDialogs; there are no initialization requirements for iOS.

CrossCurrentActivity.Current.Init(this, savedInstanceState);
UserDialogs.Init(this);


Since we are using the camera you will need to add the correct permissions to your Android and iOS apps. In the AndroidManifest add the file provider inside the application tag. There are two different ways to add a file provider depending on whether you are using AndroidX or not; this is how it's done with AndroidX.

<provider android:name="androidx.core.content.FileProvider"
          android:authorities="${applicationId}.fileprovider"
          android:exported="false"
          android:grantUriPermissions="true">
    <meta-data android:name="android.support.FILE_PROVIDER_PATHS"
               android:resource="@xml/file_paths" />
</provider>

For iOS, add the camera permissions to the Info.plist. Feel free to change the permission descriptions to be more specific to your application's usage of the camera.

<key>NSCameraUsageDescription</key>
<string>The app needs access to the camera to take photos</string>
<key>NSMicrophoneUsageDescription</key>
<string>The app needs access to the microphone for taking videos</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>This app needs access to the photo gallery for picking photos and videos</string>
<key>NSPhotoLibraryAddUsageDescription</key>
<string>This app needs access to the photos gallery for picking photos and videos</string>


Now that you have all the requirements in place, all that's left to do is add the code. We will start by creating a FaceDetector class that is responsible for setting up our FaceClient and sending an image as a Stream to the Face API.

public class FaceDetector
{
    public FaceDetector() => InitializeFaceClient();

    /// <summary>
    /// Initializes Azure Face client
    /// </summary>
    void InitializeFaceClient()
    {
        var faceCredentials = new ApiKeyServiceClientCredentials(FACE_DETECTION_KEY);
        _faceClient = new FaceClient(faceCredentials);
        _faceClient.Endpoint = FACE_DETECTION_ENDPOINT;
    }

    /// <summary>
    /// Gets faces from Azure Face API
    /// </summary>
    /// <param name="image">Captured image to analyze</param>
    /// <returns>List of faces detected in the image</returns>
    public async Task<List<DetectedFace>> GetFaces(MediaFile image) =>
        (await _faceClient.Face.DetectWithStreamAsync(
            image.GetStream(), returnFaceLandmarks: true)).ToList();

    private FaceClient _faceClient;
    private const string FACE_DETECTION_KEY = "YOUR_KEY";
    private const string FACE_DETECTION_ENDPOINT = "https://YOUR_ENDPOINT.cognitiveservices.azure.com/";
}


In order to paint silly faces onto the canvas we will create a FaceCanvas class. This class does not paint the selfie onto the canvas; it only paints the face filters. Normally I would add an OnPaintSurface override to a component like this to enforce single responsibility, but there is an issue when using the camera while painting to an SKCanvas at the same time. When you launch the camera you actually leave your application and navigate to the device's camera application. When you leave your application, the canvas is disposed and you will not be able to write to it. For that reason the painting is done in the ContentPage code-behind instead of the FaceCanvas control.

The DetectedFace object is what the Face API returns to us. It has a vast set of interesting properties; the one we are concerned with today is the FaceLandmarks property.

[Image: a face diagram with all 27 landmarks labeled]

Each of these points is returned to us in the FaceLandmarks object as X,Y coordinates. All we need to do is calculate the offset and scale; then we can draw bitmaps as rectangles over these points. Some padding is added to the calculation of each rectangle to make the face distortion a bit more realistic.
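As a rough sketch of that mapping (ToCanvas is a hypothetical helper, not part of the article's code; Coordinate is the landmark type from the Face SDK's models, and left, top and scale are the letterboxing offsets and scale factor computed when the selfie is painted):

```csharp
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;
using SkiaSharp;

// Hypothetical helper: maps one landmark from image space onto the canvas.
static SKPoint ToCanvas(Coordinate landmark, float left, float top, float scale) =>
    new SKPoint(
        left + scale * (float)landmark.X,  // horizontal letterbox offset plus scaled X
        top + scale * (float)landmark.Y);  // vertical letterbox offset plus scaled Y
```

The padded SKRects below are built from pairs of these mapped points, one for each corner of the filter bitmap.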

You will need to add the images for LeftEye, RightEye, Nose and Mouth to your Xamarin.Forms project as EmbeddedResources. If you want to use mine, feel free to grab them from my GitHub; the link is at the bottom. Although you might have more fun creating your own.

public class FaceCanvas : SKCanvasView
{
    public static readonly BindableProperty FacesProperty = BindableProperty.Create(nameof(Faces),
        typeof(IList<DetectedFace>), typeof(FaceCanvas), new List<DetectedFace>());

    public IList<DetectedFace> Faces
    {
        get => (IList<DetectedFace>)GetValue(FacesProperty);
        set => SetValue(FacesProperty, value);
    }

    /// <summary>
    /// Adds a silly filter to a face
    /// </summary>
    public void ApplyFaceFilter(SKCanvas canvas, DetectedFace face, float left, float top, float scale)
    {
        if (face.FaceLandmarks != null)
        {
            //Draw eyes
            var eyePadding = 50;

            var leftEye = LoadImage("FaceChanger.Images.LeftEye.png");
            canvas.DrawBitmap(leftEye, new SKRect(
                left + (scale * (float)(face.FaceLandmarks.EyeLeftOuter.X - eyePadding)),
                top + (scale * (float)(face.FaceLandmarks.EyeLeftTop.Y - eyePadding)),
                left + (scale * (float)(face.FaceLandmarks.EyeLeftInner.X + eyePadding)),
                top + (scale * (float)(face.FaceLandmarks.EyeLeftBottom.Y + eyePadding))));

            var rightEye = LoadImage("FaceChanger.Images.RightEye.png");
            canvas.DrawBitmap(rightEye, new SKRect(
                left + (scale * (float)(face.FaceLandmarks.EyeRightInner.X - eyePadding)),
                top + (scale * (float)(face.FaceLandmarks.EyeRightTop.Y - eyePadding)),
                left + (scale * (float)(face.FaceLandmarks.EyeRightOuter.X + eyePadding)),
                top + (scale * (float)(face.FaceLandmarks.EyeRightBottom.Y + eyePadding))));

            //Draw nose
            var nosePadding = 30;
            var nose = LoadImage("FaceChanger.Images.Nose.png");
            canvas.DrawBitmap(nose, new SKRect(
                left + (scale * (float)(face.FaceLandmarks.NoseLeftAlarOutTip.X - nosePadding)),
                top + (scale * (float)face.FaceLandmarks.NoseLeftAlarTop.Y),
                left + (scale * (float)(face.FaceLandmarks.NoseRightAlarOutTip.X + nosePadding)),
                top + (scale * (float)(face.FaceLandmarks.NoseRightAlarOutTip.Y + nosePadding + 30))));

            //Draw mouth
            var mouthPadding = 40;
            var mouth = LoadImage("FaceChanger.Images.Mouth.png");
            canvas.DrawBitmap(mouth, new SKRect(
                left + (scale * (float)(face.FaceLandmarks.MouthLeft.X - mouthPadding)),
                top + (scale * (float)(face.FaceLandmarks.UpperLipTop.Y - mouthPadding)),
                left + (scale * (float)(face.FaceLandmarks.MouthRight.X + mouthPadding)),
                top + (scale * (float)(face.FaceLandmarks.UnderLipBottom.Y + mouthPadding))));
        }
    }

    /// <summary>
    /// Loads image from embedded resource
    /// </summary>
    /// <param name="resourceId">Manifest resource id of the image</param>
    /// <returns>Decoded bitmap</returns>
    private SKBitmap LoadImage(string resourceId)
    {
        SKBitmap bitmap;
        var assembly = GetType().GetTypeInfo().Assembly;

        using (var stream = assembly.GetManifestResourceStream(resourceId))
        {
            bitmap = SKBitmap.Decode(stream);
        }

        return bitmap;
    }
}


The next thing we need to do is reference the FaceCanvas control from our XAML.

<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:face="clr-namespace:FaceChanger"
             x:Class="FaceChanger.MainPage">
    <face:FaceCanvas x:Name="FacePaintingCanvas" PaintSurface="PaintFaces"/>
</ContentPage>

Code Behind

As I mentioned previously, because of the issue of the SKCanvas being disposed, my hand was forced to put some of the painting logic in the code-behind. This is not the end of the world. We should always strive to avoid adding logic to the code-behind, but it is not always possible, as in this example.

We start off by calling the InitializeFacePainter method, which immediately launches the camera application and allows the user to take a selfie. Once the picture is taken, we pass it as a Stream to the Face API and a list of DetectedFace objects is returned to us. We set these faces on our FaceCanvas's Faces property and call InvalidateSurface to force the canvas to repaint. When the canvas repaints, the paint logic will see that there are now faces available and paint the face filters onto the canvas.

public partial class MainPage
{
    public MainPage()
    {
        InitializeComponent();
        InitializeFacePainter();
    }

    /// <summary>
    /// Initializes camera, face painter and launches camera
    /// </summary>
    private async void InitializeFacePainter()
    {
        _faceAPI = new FaceDetector();
        await CrossMedia.Current.Initialize();
        CapturedImage = await TakePicture();
        DetectAndPaintFaces();
    }

    /// <summary>
    /// Draws face filters on every face in captured image
    /// </summary>
    /// <param name="sender"></param>
    /// <param name="e"></param>
    private void PaintFaces(object sender, SkiaSharp.Views.Forms.SKPaintSurfaceEventArgs e)
    {
        var info = e.Info;
        var canvas = e.Surface.Canvas;

        if (_capturedImageBitmap != null)
        {
            var scale = Math.Min(info.Width / (float)_capturedImageBitmap.Width, info.Height / (float)_capturedImageBitmap.Height);
            var scaledWidth = scale * _capturedImageBitmap.Width;
            var scaledHeight = scale * _capturedImageBitmap.Height;
            var scaledLeft = (info.Width - scaledWidth) / 2;
            var scaledTop = (info.Height - scaledHeight) / 2;
            //Draws captured image
            canvas.DrawBitmap(_capturedImageBitmap, new SKRect(scaledLeft, scaledTop, scaledLeft + scaledWidth, scaledTop + scaledHeight));

            //Draws face filters over captured image
            foreach (var face in FacePaintingCanvas.Faces)
                FacePaintingCanvas.ApplyFaceFilter(canvas, face, scaledLeft, scaledTop, scale);
        }
    }

    /// <summary>
    /// Takes selfie
    /// </summary>
    /// <returns>captured image</returns>
    public async Task<MediaFile> TakePicture()
    {
        MediaFile mediaFile = null;
        if (CrossMedia.Current.IsCameraAvailable && CrossMedia.Current.IsTakePhotoSupported)
        {
            mediaFile = await CrossMedia.Current.TakePhotoAsync(new StoreCameraMediaOptions
            {
                Name = "selfy.jpg",
                RotateImage = true,
                Directory = "FaceChanger",
                PhotoSize = PhotoSize.Medium,
                DefaultCamera = CameraDevice.Front
            });
        }
        else
        {
            await DisplayAlert("Camera not found", "No Camera Available", "Ok");
        }
        return mediaFile;
    }

    /// <summary>
    /// Uses Azure Face API to detect faces and draws them on SkiaSharp canvas
    /// </summary>
    public async void DetectAndPaintFaces()
    {
        if (CapturedImage != null)
        {
            UserDialogs.Instance.ShowLoading("Loading", MaskType.Black);
            _capturedImageBitmap = SKBitmap.Decode(CapturedImage.GetStreamWithImageRotatedForExternalStorage());

            FacePaintingCanvas.Faces = await _faceAPI.GetFaces(CapturedImage);
            if (FacePaintingCanvas.Faces.Count == 0)
                UserDialogs.Instance.Toast("No faces found");

            FacePaintingCanvas.InvalidateSurface();
            UserDialogs.Instance.HideLoading();
        }
    }

    public static MediaFile CapturedImage;

    private FaceDetector _faceAPI;
    private SKBitmap _capturedImageBitmap;
}


There are many approaches to solving a problem and this is just one of them. Instead of using Azure's Face service, you could use algorithms to detect landmarks on a face. Instead of manipulating an image, you could manipulate video like Snapchat does, using RTSP.

The Azure Face service has so much more to offer than I was able to show you here. You can use it for identification, judging the sentiment of a face, detecting facial hair and gender, and more. It is well worth checking out!
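As a minimal sketch, those extra attributes can be requested through the same DetectWithStreamAsync call the FaceDetector already uses; the attribute list below is only illustrative, and _faceClient and image are the names from the article's code.

```csharp
// Sketch: request face attributes alongside the landmarks.
var faces = await _faceClient.Face.DetectWithStreamAsync(
    image.GetStream(),
    returnFaceLandmarks: true,
    returnFaceAttributes: new List<FaceAttributeType>
    {
        FaceAttributeType.Emotion,    // happiness, anger, surprise, etc.
        FaceAttributeType.FacialHair, // beard, moustache and sideburn scores
        FaceAttributeType.Gender
    });
```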

I hope you found this article enjoyable and that you learned a new skill along the way. Thank you for following along, and happy coding!