Combine Core Image Filters and CIDetector to Build Useful iOS Photo & Video Processing Apps

Eilon Krauthammer
6 min read · May 15, 2022
Photo by h heyerlein on Unsplash

While the CIDetector image processing API is not new by any means, it is still a powerful tool for modern use cases, and it has only improved over the years.
In this article, I will give an overview of what you can do with CIDetector and walk through the steps of building a mini app that automatically blurs faces in a given image, using CIDetector together with Core Image filters.

Automatic face blurring! That’s what we’ll build

What is CIDetector?

A CIDetector object uses image processing to search for and identify notable features in images. Detected features are represented by CIFeature objects that provide more information about each feature.

CIDetector can identify various types of features in an image, such as text, rectangular areas, barcodes, and faces — which is what we will focus on in this article. You can read about the other types in Apple’s documentation.

Apart from choosing a detection type, we can pass specific feature options when asking the detector for results, which refine what it reports back. For example, by providing the CIDetectorEyeBlink key, each detected face also tells us whether its eyes are closed, so we can keep only faces with open eyes.
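As a quick illustration (assuming a detector and a CIImage already exist; we will create both shortly), the key is passed when asking the detector for features, and each resulting CIFaceFeature then reports its eye state:

// Illustrative sketch only: `detector` and `ciImage` are assumed to exist already.
let features = detector.features(in: ciImage, options: [CIDetectorEyeBlink: true])
for case let face as CIFaceFeature in features {
    let eyesOpen = !face.leftEyeClosed && !face.rightEyeClosed
    print("Face at \(face.bounds), eyes open: \(eyesOpen)")
}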

Finally, we can specify configuration options in the CIDetector initializer that help us optimize for the results we expect. For example, CIDetectorAccuracyLow favors performance over accuracy and is better suited for continuous processing such as video. If computing time is not a concern, it is best to use CIDetectorAccuracyHigh.

Diving in

We’ll start by getting familiar with how the CIDetector API behaves: our mini-app will take an image as input, find any faces in it, and mark them with a green box.

If you’d like to follow along, you can clone the “starter” branch in the project repo.

To get started, all we need is a basic UIViewController with an image view connected, and a UIImagePickerController that we can present, hooked up to the relevant delegate methods for handling its output.
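If you’re not following along with the starter branch, a minimal version of that view controller could look roughly like this (outlet and method names here are my own):

import UIKit

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    @IBOutlet private weak var imageView: UIImageView!

    @IBAction private func selectImageTapped(_ sender: UIButton) {
        let picker = UIImagePickerController()
        picker.sourceType = .photoLibrary
        picker.delegate = self
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        picker.dismiss(animated: true)
        guard let image = info[.originalImage] as? UIImage else { return }
        imageView.image = image
        // We'll process `image` in the next steps.
    }
}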

Next, we’ll create a new ImageProcessingService class that will be responsible for interacting with our CIDetector object. Let’s create an instance of both CIContext and CIDetector, which work together.
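A minimal sketch of that class (I use high accuracy here, since we process still images rather than video):

import CoreImage
import UIKit

final class ImageProcessingService {
    // A CIContext is relatively expensive to create, so we keep a single instance around.
    private let context = CIContext()

    // The detector shares that context and is configured to look for faces.
    private lazy var detector = CIDetector(
        ofType: CIDetectorTypeFace,
        context: context,
        options: [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    )
}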

As you can tell, we create a new CIDetector instance with CIDetectorTypeFace as its type, which tells the object what to look for when we ask for results.

Now we can create a function that returns the face bounds as CGRects, normalized to the image view’s frame, so that we can display a bounding box around each face. Here are the steps:
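Here is a sketch of what that function could look like, assuming the picked image exactly fills the image view (the scaling math depends on your content mode):

// Inside ImageProcessingService
func faceRects(in image: UIImage, displaySize: CGSize) -> [CGRect] {
    guard let ciImage = CIImage(image: image) else { return [] }

    // 1. Re-apply the UIImage's orientation, since CIImage(image:) drops it.
    let oriented = ciImage.oriented(CGImagePropertyOrientation(image.imageOrientation))

    // 2. Ask the detector for features. The abstract CIFeature is enough here; we only need `bounds`.
    let features = detector?.features(in: oriented) ?? []

    // 3. Convert each face's bounds from Core Image coordinates (bottom-left origin, pixels)
    //    to the image view's coordinates (top-left origin, points).
    let imageSize = oriented.extent.size
    let scaleX = displaySize.width / imageSize.width
    let scaleY = displaySize.height / imageSize.height

    return features.map { feature in
        var rect = feature.bounds
        rect.origin.y = imageSize.height - rect.origin.y - rect.height
        return CGRect(x: rect.origin.x * scaleX,
                      y: rect.origin.y * scaleY,
                      width: rect.width * scaleX,
                      height: rect.height * scaleY)
    }
}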

Note that we don’t cast the result features to CIFaceFeature and only work with the abstract CIFeature class, because that’s enough for our needs. You can, however, cast the results to an array of CIFaceFeatures and unlock access to interesting features like face angle, eyes & mouth position and more!

The above code will not compile, however, unless you add the following CGImagePropertyOrientation extension, provided by Apple, for initializing it from a UIImage.Orientation. We need to correct the orientation ourselves, because initializing a CIImage from a UIImage loses the orientation information.
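The extension, essentially as it appears in Apple’s sample code:

import ImageIO
import UIKit

extension CGImagePropertyOrientation {
    /// Maps a UIImage.Orientation to its Exif-based CGImagePropertyOrientation counterpart.
    init(_ uiOrientation: UIImage.Orientation) {
        switch uiOrientation {
        case .up: self = .up
        case .upMirrored: self = .upMirrored
        case .down: self = .down
        case .downMirrored: self = .downMirrored
        case .left: self = .left
        case .leftMirrored: self = .leftMirrored
        case .right: self = .right
        case .rightMirrored: self = .rightMirrored
        @unknown default: self = .up
        }
    }
}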

At this point, we can hook up the UI layer to display the faces inside a little green bounding box!
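One way to do that in the view controller, assuming the faceRects(in:displaySize:) sketch above and an imageProcessingService property holding an ImageProcessingService instance:

// In ViewController, call this after the user picks an image.
private func showFaceBoxes(for image: UIImage) {
    // Remove boxes left over from a previous image.
    imageView.layer.sublayers?
        .filter { $0.name == "faceBox" }
        .forEach { $0.removeFromSuperlayer() }

    let rects = imageProcessingService.faceRects(in: image, displaySize: imageView.bounds.size)
    for rect in rects {
        let box = CALayer()
        box.name = "faceBox"
        box.frame = rect
        box.borderColor = UIColor.green.cgColor
        box.borderWidth = 2
        imageView.layer.addSublayer(box)
    }
}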

We can now easily visualize the results of CIDetector’s work. Nice job!

Now that we’ve got a feel for CIDetector, it’s time to get back to our original focus — blurring those faces out.
As with most programming problems, there are plenty of ways to get this done. I’ll share the solution I’m currently comfortable with, but feel free to share any other approach you find!

Here’s roughly what our image processing pipeline is going to look like:

  1. Get the face detection results from CIDetector
  2. Iterate over every CIFeature, extract its bounds and add a mask with the frame to a mask map (more about it soon)
  3. Create a blurred copy of the original image
  4. Mask the blurred image with our mask map, over the original image

Since Core Image doesn’t offer a pre-built way of blurring just some parts of an image, we need to engineer something of our own. In this case we can use masks: we mask a blurred copy of the image with an image that contains a color representation of where the faces are, and finally compose the result over the original image, giving us the original image with blurry patches on top.

To achieve this effect in Core Image, we can use the CIBlendWithMask filter, which takes an input image (in our case, the blurred copy), a mask image (our mask map), and a background image (the original image).

One last thing before bringing it all together in code: in the image above I showed a very basic example of a rectangular mask. It works, but as you can see in the result, the blurred area is rectangular and doesn’t look great. We want a shape that resembles a human head, without sharp corners. Luckily, Core Image provides a useful filter called CIRadialGradient, which accepts two radii and two color values as input and produces a radial gradient. For the colors, we can use two white CIColor objects, one with an alpha of 1.0 and the other with 0.0, to get a smooth falloff. The radii depend on how soft you want the blur’s edge to be; I found that half the face rect’s height for inputRadius0 and the full height for inputRadius1 looks pretty nice. The resulting mask is a soft white circle that fades out to full transparency.
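A sketch of how such a mask can be built for a single face rect, following the radii mentioned above:

// Inside ImageProcessingService: a soft, head-sized mask for one face rect.
private func radialGradientMask(for faceRect: CGRect) -> CIImage? {
    guard let gradient = CIFilter(name: "CIRadialGradient") else { return nil }
    gradient.setValue(CIVector(x: faceRect.midX, y: faceRect.midY), forKey: "inputCenter")
    gradient.setValue(faceRect.height / 2, forKey: "inputRadius0")
    gradient.setValue(faceRect.height, forKey: "inputRadius1")
    // Fully opaque white at the center, fading to fully transparent at the edge.
    gradient.setValue(CIColor(red: 1, green: 1, blue: 1, alpha: 1), forKey: "inputColor0")
    gradient.setValue(CIColor(red: 1, green: 1, blue: 1, alpha: 0), forKey: "inputColor1")
    // The gradient has infinite extent, so crop it to a finite area around the face.
    let cropRect = faceRect.insetBy(dx: -faceRect.width, dy: -faceRect.height)
    return gradient.outputImage?.cropped(to: cropRect)
}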

Used to mask the blurred image, this gradient gives a much more natural-looking blur that follows the shape of the face more closely.

So let’s get our hands dirty. In ImageProcessingService.swift we’ll create a blurFaces(in image: UIImage) -> UIImage? function and fill in the logic.
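Here is a sketch of how that function could be implemented, reusing the radialGradientMask(for:) helper from above (the blur radius and other small details are my own choices; the repo’s version may differ):

func blurFaces(in image: UIImage) -> UIImage? {
    guard let ciImage = CIImage(image: image) else { return nil }

    // Apply the orientation up front so the detector and the filters share one coordinate space.
    let oriented = ciImage.oriented(CGImagePropertyOrientation(image.imageOrientation))

    // 1. Get the face detection results from CIDetector.
    let faces = detector?.features(in: oriented) ?? []
    guard !faces.isEmpty else { return image }

    // 2. Build the mask map: one radial gradient per face, composited together.
    var maskMap: CIImage?
    for face in faces {
        guard let mask = radialGradientMask(for: face.bounds) else { continue }
        maskMap = maskMap.map { mask.composited(over: $0) } ?? mask
    }
    guard let maskImage = maskMap else { return image }

    // 3. Create a blurred copy of the original image.
    let blurred = oriented
        .clampedToExtent() // avoids transparent edges introduced by the blur
        .applyingFilter("CIGaussianBlur", parameters: [kCIInputRadiusKey: 30])
        .cropped(to: oriented.extent)

    // 4. Blend: the blurred copy where the mask is opaque, the original image everywhere else.
    let output = blurred.applyingFilter("CIBlendWithMask", parameters: [
        "inputBackgroundImage": oriented,
        "inputMaskImage": maskImage
    ])

    // 5. Render the result back into a UIImage.
    guard let cgImage = context.createCGImage(output, from: oriented.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}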

All that’s left to do is wire up our view controller to this logic.
In ViewController.swift, add the following method and call it when receiving an image from the image picker.

func blurFaces(in image: UIImage) {
    let resultImage = imageProcessingService.blurFaces(in: image)
    imageView.image = resultImage
}

Note that the processing currently runs on the main thread, which blocks the UI. In an app taken to production, you should run the image processing on a background thread and only update the UI back on the main thread.
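For example, the method above could be rewritten like this (just a sketch; the queue and QoS choice is up to you):

func blurFaces(in image: UIImage) {
    DispatchQueue.global(qos: .userInitiated).async { [weak self] in
        guard let self = self else { return }
        let resultImage = self.imageProcessingService.blurFaces(in: image)
        // UIKit must only be touched on the main thread.
        DispatchQueue.main.async {
            self.imageView.image = resultImage
        }
    }
}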

And that’s it!

Of course, from here the possibilities are endless. You might want to let the user pick their preferred blur style, choose who gets blurred and who doesn’t, or even use emojis instead of a blur effect 🤷‍♂️.

You are welcome to download the source code for this project from the project’s repo.

I truly hope that you’ve learned something from this article! If you did, feel free to share it in the comments or reach out to me privately. Any questions are welcome, too.

The main purpose of this article is to show how (relatively) simple it is to craft an implementation for ideas that sound really complex at first. Luckily for us iOS developers, Apple provides countless AI- and ML-backed APIs for image and video processing — from Core Image and Vision to ARKit and more. So if you have an idea for an app, even if it sounds complex, there’s likely an API to help you with that!

Thank you for reading. See you in the next one!

Photo by Kristopher Roller on Unsplash
