Building A Camera App With SwiftUI and Combine
kodeco.com/26244793-building-a-camera-app-with-swiftui-and-combine
Nov 10 2021
Swift 5.5, iOS 15, Xcode 13
SwiftUI lets you build user interfaces with a surprisingly small amount of effort. How cool is that?
Add Combine as a data pipeline to the mix, and you’ve got the
Batman and Robin of the programming world. You can decide
which one is Batman and which one is Robin. :]
Some people may complain that SwiftUI and Combine aren’t ready
for prime time, but do you really want to tell Batman he can’t go out and fight crime? In
fact, would you believe you can write a camera app completely in SwiftUI without even
touching UIViewRepresentable ?
Creating a camera app using SwiftUI and Combine makes processing real-time video easy
and delightful. Video processing can already be thought of as a data pipeline. Since
Combine manages the flow of data like a pipeline, there are many similarities between
these patterns. Integrating them allows you to create powerful effects, and these pipelines
are easy to expand when future features demand.
In this tutorial, you’ll learn how to use this dynamic duo to:

- Set up an AVCaptureSession and manage camera permissions.
- Receive captured frames through Combine and display them in SwiftUI.
- Surface camera errors to the user.
- Apply Core Image filters to the live camera feed.
You’ll do these with an app called Filter the World. So, get ready to filter the world
through your iPhone — even more so than you already do!
Note: Because this app requires access to the camera, you’ll need to run it on a real device.
The simulator just won’t do.
Getting Started
Click the Download Materials button at the top or bottom of this tutorial. There’s
currently not a lot there, aside from a custom Error , a helpful extension to convert from
a CVPixelBuffer to a CGImage , and some basic SwiftUI views that you’ll use to build
up the UI.
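To give you an idea of what that conversion helper does, here’s a minimal sketch of a CVPixelBuffer-to-CGImage extension. The real one is in the starter materials; the method name create(from:) matches how it’s called later in this tutorial, but treat the body below as illustrative rather than the starter’s exact code:

import CoreImage
import CoreVideo

// A sketch of the kind of CVPixelBuffer-to-CGImage helper the starter provides.
// This version renders through a CIContext, which is the simplest route.
extension CGImage {
  static func create(from pixelBuffer: CVPixelBuffer?) -> CGImage? {
    guard let pixelBuffer = pixelBuffer else { return nil }
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    // Creating a CIContext per call is fine for a sketch; reuse one in production.
    let context = CIContext()
    return context.createCGImage(ciImage, from: ciImage.extent)
  }
}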
If you build and run the starter now, you’ll just see a blank screen with the name of the app in the center.
If you want to make a camera-based app, what’s the most important thing you need?
Aside from having a cool name, being able to display the camera feed is probably a close
second.
Displaying Captured Frames
If you were going to use a UIViewRepresentable, you’d probably opt for attaching an AVCaptureVideoPreviewLayer to your UIView, but that’s not what you’re going to do! In SwiftUI, you’ll display the captured frames as Image views.
Since the data you get from the camera will be a CVPixelBuffer , you’ll need some way
to convert it to an Image . You can initialize an Image from a UIImage or a CGImage ,
and the second route is the one you’ll take.
Inside the Views group, create an iOS SwiftUI View file and call it FrameView.swift.
When you add FrameView to ContentView in a little bit, you’ll pass in the image it should display. The label property is there to make the Image initializer a little bit cleaner. Replace the contents of FrameView.swift with the following:
import SwiftUI

struct FrameView: View {
  var image: CGImage?
  private let label = Text("frame")

  var body: some View {
    // 1
    if let image = image {
      // 2
      GeometryReader { geometry in
        // 3
        Image(image, scale: 1.0, orientation: .up, label: label)
          .resizable()
          .scaledToFill()
          .frame(
            width: geometry.size.width,
            height: geometry.size.height,
            alignment: .center)
          .clipped()
      }
    } else {
      // 4
      Color.black
    }
  }
}

In this view, you:

1. Check whether you actually have an image to display.
2. Wrap the content in a GeometryReader so you can size the image to the full available space.
3. Create an Image from the CGImage, make it resizable, scale it to fill the available frame and clip anything that spills over.
4. Fall back to a plain black background while there’s no image.
Next, open ContentView.swift and add the following to the ZStack inside body:

FrameView(image: nil)
  .edgesIgnoringSafeArea(.all)
This adds the newly created FrameView and ignores the safe area, so the frames will flow
edge to edge. For now, you’re passing in nil , as you don’t have a CGImage , yet.
There’s no need to build and run now. If you did, it would show up black.
To display the frames now, you’ll need to add some code to set up the camera and receive
the captured output.
Managing the Camera
You’ll start by creating a manager for your camera — a CameraManager , if you will.
First, add a new Swift file named CameraManager.swift to the Camera group.
Now, replace the contents Xcode provides with the following code:
import AVFoundation

// 1
class CameraManager: ObservableObject {
  // 2
  enum Status {
    case unconfigured
    case configured
    case unauthorized
    case failed
  }

  // 3
  static let shared = CameraManager()

  // 4
  private init() {
    configure()
  }

  // 5
  private func configure() {
  }
}
So far, you’ve set up a basic structure for CameraManager. More specifically, you:

1. Created CameraManager as a class that conforms to ObservableObject, so other parts of the app can observe its published properties.
2. Defined an enum to represent the four states the camera can be in.
3. Turned CameraManager into a singleton with a shared instance.
4. Made the initializer private, so shared stays the only instance, and called configure() from it.
5. Stubbed out configure(), which you’ll fill in shortly.
Configuring the camera requires two steps. First, check for permission to use the camera
and request it, if necessary. Second, configure AVCaptureSession .
Before tackling those two steps, add the following properties to CameraManager, below the Status enum. CameraError is the custom error type included with the starter materials:

// 1
@Published var error: CameraError?
// 2
let session = AVCaptureSession()
// 3
private let sessionQueue = DispatchQueue(label: "com.raywenderlich.SessionQ")
// 4
private let videoOutput = AVCaptureVideoDataOutput()
// 5
private var status = Status.unconfigured
Here, you define:

1. An error to represent any camera-related problems. It’s published so other objects can subscribe to it and, in this app, show it in the UI.
2. The AVCaptureSession, which coordinates sending the camera data to its outputs.
3. A serial session queue, so all changes to the session configuration happen off the main thread and in a predictable order.
4. The video data output, which will later hand captured frames to a delegate.
5. The current status of the camera, starting as unconfigured.

Next, add the following method to CameraManager:
private func set(error: CameraError?) {
  DispatchQueue.main.async {
    self.error = error
  }
}
Here, you set the published error to whatever error is passed in. You do this on the
main thread, because any published properties should be set on the main thread.
Next, to check for camera permissions, add the following method to CameraManager :
private func checkPermissions() {
  // 1
  switch AVCaptureDevice.authorizationStatus(for: .video) {
  case .notDetermined:
    // 2
    sessionQueue.suspend()
    AVCaptureDevice.requestAccess(for: .video) { authorized in
      // 3
      if !authorized {
        self.status = .unauthorized
        self.set(error: .deniedAuthorization)
      }
      self.sessionQueue.resume()
    }
  // 4
  case .restricted:
    status = .unauthorized
    set(error: .restrictedAuthorization)
  case .denied:
    status = .unauthorized
    set(error: .deniedAuthorization)
  // 5
  case .authorized:
    break
  // 6
  @unknown default:
    status = .unauthorized
    set(error: .unknownAuthorization)
  }
}
In this method, you:

1. Switch on the camera’s authorization status for video.
2. If the status hasn’t been determined yet, suspend the session queue before asking the user for permission, so configuration can’t start until you have an answer.
3. In the completion handler, mark the camera as unauthorized and set an error if the user denied access, then resume the session queue either way.
4. For the restricted and denied cases, set the status to unauthorized along with the matching error.
5. If access was already granted, there’s nothing more to do.
6. Treat any authorization case Apple might add in the future as unauthorized.
Note: For any app that needs to request camera access, you need to include a usage string
in Info.plist. The starter project already included this usage string, which you’ll find under
the key Privacy – Camera Usage Description or the raw key
NSCameraUsageDescription. If you don’t set this key, then the app will crash as soon as
your code tries to access the camera. Fortunately, the message in the debugger is fairly
clear and lets you know you forgot to set this string.
Now, you’ll move on to the second step needed to use the camera: configuring it. But
before that, a quick explanation!
At a high level, an AVCaptureSession takes data from an input, such as the camera, and delivers it to one or more outputs, such as an AVCaptureVideoDataOutput. The camera itself is an AVCaptureDevice, which gets wrapped in an AVCaptureDeviceInput before it’s added to the session. In your app, you’ll be using these parts to configure the capture session.
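To make that concrete, here’s a compressed sketch of the wiring you’re about to build, with threading and error handling stripped out. It’s an overview under simplified assumptions, not the CameraManager you’ll write below:

import AVFoundation

// Overview only: the real CameraManager adds status tracking, error handling
// and a dedicated session queue.
func makeSessionSketch() -> AVCaptureSession? {
  let session = AVCaptureSession()

  // The camera is an AVCaptureDevice, wrapped in an AVCaptureDeviceInput.
  guard
    let camera = AVCaptureDevice.default(
      .builtInWideAngleCamera, for: .video, position: .front),
    let input = try? AVCaptureDeviceInput(device: camera)
  else { return nil }

  // The session links inputs to outputs; configuration changes are batched.
  let output = AVCaptureVideoDataOutput()
  session.beginConfiguration()
  if session.canAddInput(input) { session.addInput(input) }
  if session.canAddOutput(output) { session.addOutput(output) }
  session.commitConfiguration()

  return session
}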
Add the following method to CameraManager:

private func configureCaptureSession() {
  guard status == .unconfigured else {
    return
  }
  session.beginConfiguration()
  defer {
    session.commitConfiguration()
  }
}

The guard ensures the session is only configured once. The code you add in the next few steps goes inside this method, right before its closing brace.
So far, this is pretty straightforward. But it’s worth noting that any time you want to
change something about an AVCaptureSession configuration, you need to enclose that
code between a beginConfiguration and a commitConfiguration .
Next, add the following inside configureCaptureSession(), after the defer block:

let device = AVCaptureDevice.default(
  .builtInWideAngleCamera,
  for: .video,
  position: .front)
guard let camera = device else {
  set(error: .cameraUnavailable)
  status = .failed
  return
}
This code gets your capture device. In this app, you’re getting the front camera. If you
want the back camera, you can change position . Since
AVCaptureDevice.default(_:_:_:) returns an optional, which will be nil if the
requested device doesn’t exist, you need to unwrap it. If for some reason it is nil , set the
error and return early.
After the code above, add the following code to add the device input to
AVCaptureSession :
do {
  // 1
  let cameraInput = try AVCaptureDeviceInput(device: camera)
  // 2
  if session.canAddInput(cameraInput) {
    session.addInput(cameraInput)
  } else {
    // 3
    set(error: .cannotAddInput)
    status = .failed
    return
  }
} catch {
  // 4
  set(error: .createCaptureInput(error))
  status = .failed
  return
}
Here, you:
1. Try to create an AVCaptureDeviceInput based on the camera. Since this call can
throw, you wrap the code in a do-catch block.
2. Add the camera input to AVCaptureSession , if possible. It’s always a good idea to
check if it can be added before adding it. :]
3. Otherwise, set the error and the status and return early.
4. If an error was thrown, set the error based on this thrown error to help with
debugging and return.
Have you noticed that camera management involves a lot of error management? When
there are so many potential points of failure, having good error management will help you
debug any problems much more quickly! Plus, it’s a significantly better user experience.
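Speaking of errors: the custom Error included with the starter materials is what all of these set(error:) calls pass around. Its exact definition is in the starter project, but based on the cases used in this tutorial it looks roughly like the sketch below. The name CameraError and the case list are reconstructed from usage, so check the starter for the real thing:

// A rough reconstruction of the starter's custom error type, based on the
// cases CameraManager uses. The starter materials contain the actual definition.
enum CameraError: Error {
  case cameraUnavailable
  case cannotAddInput
  case cannotAddOutput
  case createCaptureInput(Error)
  case deniedAuthorization
  case restrictedAuthorization
  case unknownAuthorization
}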
Next up, you need to connect the capture output to the AVCaptureSession !
Add the following code right after the code you just added:
// 1
if session.canAddOutput(videoOutput) {
  session.addOutput(videoOutput)
  // 2
  videoOutput.videoSettings =
    [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
  // 3
  let videoConnection = videoOutput.connection(with: .video)
  videoConnection?.videoOrientation = .portrait
} else {
  // 4
  set(error: .cannotAddOutput)
  status = .failed
  return
}
1. You check to see if you can add AVCaptureVideoDataOutput to the session before
adding it. This pattern is similar to when you added the input.
2. Then, you set the format type for the video output.
3. And force the orientation to be in portrait.
4. If something fails, you set the error and status and return.
Finally, there’s one last thing you need to add to this method before it’s finished — right
before the closing brace, add:
status = .configured
Camera Manager Final Touches
There are a couple of small things you need to take care of to hook all the camera logic
together.
Remember that configure() you initially added with the class definition? It’s time to
fill that in. In CameraManager , add the following code to configure() :
checkPermissions()
sessionQueue.async {
  self.configureCaptureSession()
  self.session.startRunning()
}
Here, you check for permissions, then configure the capture session and start it on the session queue. All of this happens when CameraManager is initialized. Perfect!
The only question is: How do you get captured frames from this thing?
Add the following method to CameraManager:

func set(
  _ delegate: AVCaptureVideoDataOutputSampleBufferDelegate,
  queue: DispatchQueue
) {
  sessionQueue.async {
    self.videoOutput.setSampleBufferDelegate(delegate, queue: queue)
  }
}
Using this method, your upcoming frame manager will be able to set itself as the delegate
that receives that camera data.
Pat yourself on the back and take a quick break! You just completed the longest and most
complicated class in this project. It’s all smooth sailing from now on!
Next, you’ll write a class that can receive this camera data.
Add a new Swift file named FrameManager.swift in the Camera group. Replace the
contents of the file with the following:
import AVFoundation

// 1
class FrameManager: NSObject, ObservableObject {
  // 2
  static let shared = FrameManager()
  // 3
  @Published var current: CVPixelBuffer?
  // 4
  let videoOutputQueue = DispatchQueue(
    label: "com.raywenderlich.VideoOutputQ",
    qos: .userInitiated,
    attributes: [],
    autoreleaseFrequency: .workItem)
  // 5
  private override init() {
    super.init()
    CameraManager.shared.set(self, queue: videoOutputQueue)
  }
}
1. Define the class and have it inherit from NSObject and conform to ObservableObject. FrameManager needs to inherit from NSObject because it will act as the delegate for AVCaptureSession's video output, and that delegate protocol requires NSObject conformance. So you're just getting a head start on it.
2. Make the frame manager a singleton.
3. Add a published property for the current frame received from the camera. This is
what other classes will subscribe to to get the camera data.
4. Create a queue on which to receive the capture data.
5. Set FrameManager as the delegate to AVCaptureVideoDataOutput .
Right about now, Xcode is probably complaining that FrameManager doesn’t conform to
AVCaptureVideoDataOutputSampleBufferDelegate . That’s kind of the point!
To fix this, add the following extension below the closing brace of FrameManager :
extension FrameManager: AVCaptureVideoDataOutputSampleBufferDelegate {
  func captureOutput(
    _ output: AVCaptureOutput,
    didOutput sampleBuffer: CMSampleBuffer,
    from connection: AVCaptureConnection
  ) {
    if let buffer = sampleBuffer.imageBuffer {
      DispatchQueue.main.async { self.current = buffer }
    }
  }
}
In this app, you check whether the received CMSampleBuffer contains an image buffer and, if so, set it as the current frame. Once again, since current is a published property, it needs to be set on the main thread. That’s that. Short and simple.
You’re close to being able to see the fruits of your oh-so-hard labor. You just need to hook
this FrameManager up to your FrameView somehow. But to do that, you’ll need to
create the most basic form of view model first.
Create a new Swift file named ContentViewModel.swift in the ViewModels group. Then,
replace the contents of that file with the following code:
import CoreImage

class ContentViewModel: ObservableObject {
  // 1
  @Published var frame: CGImage?
  // 2
  private let frameManager = FrameManager.shared

  init() {
    setupSubscriptions()
  }

  // 3
  func setupSubscriptions() {
  }
}
In this initial implementation, you set up some properties and methods you need:

1. A published frame, which holds the CGImage that FrameView will display.
2. A reference to the shared FrameManager, the source of the camera frames.
3. A setupSubscriptions() method, called from the initializer, where your Combine pipelines will live.

Now, add the following Combine pipeline to setupSubscriptions():
// 1
frameManager.$current
  // 2
  .receive(on: RunLoop.main)
  // 3
  .compactMap { buffer in
    return CGImage.create(from: buffer)
  }
  // 4
  .assign(to: &$frame)
1. Tap into the Publisher that was automatically created for you when you used
@Published .
2. Receive the data on the main run loop. It should already be on main, but just in
case, it doesn’t hurt to be sure.
3. Convert CVPixelBuffer to CGImage and filter out all nil values through
compactMap .
4. Assign the output of the pipeline — which is, itself, a publisher — to your published
frame .
Excellent work!
Now, open ContentView.swift to hook this up. Add the following property to ContentView:

@StateObject private var model = ContentViewModel()

Then, replace FrameView(image: nil) with:

FrameView(image: model.frame)
Do you know what time it is? No, it’s not 9:41 AM. It’s time to build and run!
Finally, you can display the frames captured by the camera in your UI. Pretty nifty.
But what happens if there’s an error with the camera or capture session?
Error Handling
Before you can move on to even more fun, take care of any potential errors
CameraManager encounters. For this app, you’ll display them to the user in an
ErrorView . However, just like the capture frames, you’re going to route the errors
through your view model.
Next, you’ll add a new Combine pipeline to setupSubscriptions() . Add the following
code to the beginning of setupSubscriptions() :
// 1
cameraManager.$error
  // 2
  .receive(on: RunLoop.main)
  // 3
  .map { $0 }
  // 4
  .assign(to: &$error)

This pipeline mirrors the frame pipeline: it taps into the camera manager’s published error and receives it on the main run loop. The only new operator is map { $0 }, which converts the CameraError? values CameraManager publishes into the plain Error? that the view model’s property, and ErrorView, expect.
Now, to hook it up to your UI, open ContentView.swift and add the following line inside
your ZStack , below FrameView :
ErrorView(error: model.error)
If you build and run now, you won’t see any difference if you previously gave the app
access to the camera. If you want to see this new error view in action, open the Settings
app and tap Privacy ▸ Camera. Turn off the camera permissions for FilterTheWorld.
The app correctly informs you that camera access has been denied. Success! Or, um,
error!
Now you have a very basic, working camera app, which also displays any encountered
errors to the user. Nice. However, the point of this app isn’t to just show the world as it is.
After all, the app is called Filter the World…
Creating Filters With Core Image
It’s time to have a little fun. Well, even more fun! You’ll add some Core Image filters to the
data pipeline, and you can turn them on and off via some toggle buttons. These will let
you add some cool effects to the live camera feed.
First, you’ll add the business logic to your view model. So, open ContentViewModel.swift
and add the following properties to ContentViewModel :
These will tell your code which filters to apply to the camera feed. These particular filters
are easily composable, so they work with each other nicely.
Since CIContext s are expensive to create, you also create a private property to reuse the
context instead of recreating it every frame.
Next, still in setupSubscriptions(), find the frame pipeline you added earlier:

// 1
frameManager.$current
  // 2
  .receive(on: RunLoop.main)
  // 3
  .compactMap { buffer in
    return CGImage.create(from: buffer)
  }
  // 4
  .assign(to: &$frame)
Replace it with the following:

frameManager.$current
  .receive(on: RunLoop.main)
  .compactMap { $0 }
  .compactMap { buffer in
    // 1
    guard let image = CGImage.create(from: buffer) else {
      return nil
    }
    // 2
    var ciImage = CIImage(cgImage: image)
    // 3
    if self.comicFilter {
      ciImage = ciImage.applyingFilter("CIComicEffect")
    }
    if self.monoFilter {
      ciImage = ciImage.applyingFilter("CIPhotoEffectNoir")
    }
    if self.crystalFilter {
      ciImage = ciImage.applyingFilter("CICrystallize")
    }
    // 4
    return self.context.createCGImage(ciImage, from: ciImage.extent)
  }
  .assign(to: &$frame)
Here, you:

1. Convert the CVPixelBuffer to a CGImage and bail out of the closure if the conversion fails.
2. Wrap the CGImage in a CIImage so you can apply Core Image filters to it.
3. Apply each filter whose toggle is currently switched on.
4. Render the filtered CIImage back into a CGImage using the shared CIContext and publish it as the new frame.
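As a small optional variation, applyingFilter(_:parameters:) also accepts a parameters dictionary if you want more control than the defaults give you. For example, you could adjust the crystallize effect like this; the radius value here is just an arbitrary example:

if self.crystalFilter {
  // Optional tweak: kCIInputRadiusKey controls the size of the crystals.
  ciImage = ciImage.applyingFilter(
    "CICrystallize",
    parameters: [kCIInputRadiusKey: 30])
}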
Now, to connect this to the UI, open ContentView.swift and add the following code within
the ZStack after the ErrorView :
ControlView(
comicSelected: $model.comicFilter,
monoSelected: $model.monoFilter,
crystalSelected: $model.crystalFilter)
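If you’re curious what’s inside ControlView, it’s one of the SwiftUI views that ships with the starter materials. A rough sketch of something equivalent is below; the bindings match the ones you just passed in, but the name, layout and toggle style are illustrative, not the starter’s exact code:

import SwiftUI

// Illustrative only: the starter project provides its own ControlView.
struct ControlViewSketch: View {
  @Binding var comicSelected: Bool
  @Binding var monoSelected: Bool
  @Binding var crystalSelected: Bool

  var body: some View {
    VStack {
      Spacer()
      HStack(spacing: 12) {
        Toggle("Comic", isOn: $comicSelected)
        Toggle("Mono", isOn: $monoSelected)
        Toggle("Crystal", isOn: $crystalSelected)
      }
      .toggleStyle(.button)
      .padding()
    }
  }
}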
Build and run. Toggle the filters and watch the effects stack on top of the live camera feed.
Where to Go From Here?
Download the completed project files by clicking the Download Materials button at the
top or bottom of the tutorial.
You wrote a lot of code and now have a well organized and extendable start for a camera-
based app. Well done!
We hope you enjoyed this tutorial. If you have any questions or comments, please join the
forum discussion below!