
Combining Concurrent Work [02.10.2017]

When we have concurrent asynchronous work, we need a way to bring it back together and wait for all of it to finish.

Previously, we mentioned that an iOS or Mac app has a main queue which will loop over user events.

//main.swift
DispatchQueue.main.async {
    print("this will never get executed")
}
print("completed main")

A command-line app also has a main queue, and begins execution on it, but once main.swift runs to the end, the app returns, even if more closures have been enqueued on the main queue. So, if our app uses multiple queues, we'll need a way for the main queue to wait for other work to finish.

Similarly, we may write code on a background queue which itself dispatches many work closures concurrently to other queues, and which needs to wait for all of those closures to complete before continuing.
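For example, here's a sketch of the problem (the queue label and chunked work are hypothetical):

import Dispatch

let workerQueue = DispatchQueue(label: "workers", attributes: [.concurrent])
for chunk in 0..<8 {
    workerQueue.async {
        // pretend this is an expensive piece of the overall job
        print("finished chunk \(chunk)")
    }
}
// at this point we have no way to know when all eight closures have finished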

Dispatch Groups

DispatchGroups support waiting until all of the closures in the group have completed.

let group:DispatchGroup = DispatchGroup()

We create a group with its empty .init() initializer. Fundamentally, a DispatchGroup maintains a count of uncompleted pieces of work. When we wait, the current queue will block until that count reaches zero.

group.enter()

One way to increment the group's uncompleted count is to call .enter().

concurrentQueue.async {
    group.leave()
}

And the corresponding way to decrement the count is to call .leave().

group.wait()

To block further execution of the current queue until all work in the group has completed, call .wait(). This example will block until concurrentQueue has completed the closure we gave it.

//main.swift
let group: DispatchGroup = DispatchGroup()
group.enter()
DispatchQueue.global(qos: .default).async {
    print("this will get executed before the app returns")
    group.leave()
}
group.wait()
print("completed main")

With DispatchGroups, we now have a way to manage a synchronous flow using other concurrent queues to maximize work. Our command-line example finishes all the work before returning.
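For instance, here's a sketch (with a hypothetical queue label and placeholder work) of fanning out several closures and waiting for all of them:

import Dispatch

let group = DispatchGroup()
let concurrentQueue = DispatchQueue(label: "fan-out", attributes: [.concurrent])

for chunk in 0..<4 {
    group.enter()
    concurrentQueue.async {
        // expensive work for this chunk would go here
        print("finished chunk \(chunk)")
        group.leave()
    }
}

group.wait()
print("all chunks finished")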

DispatchGroup wait timeouts

Often, we'll queue large amounts of background work to get things prepared for the user. If the user makes a change, we may decide to cancel the work and start on the new task. Alternatively, the user may decide they've waited too long, and cancel manually. In either case, we'll need a way to stop waiting and escape the flow.

.wait() doesn't have a variant which checks for cancellation, but it does have a way to time out when waiting too long.

if group.wait(wallTimeout: DispatchWallTime.now() + .seconds(1)) == .timedOut {
    return // trigger some failure
}

Calling .wait with a wallTimeout blocks until either all the work items have completed or the timeout expires, whichever comes first. If the work completed, it returns .success; if the timeout expired, it returns .timedOut.
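That return value is a DispatchTimeoutResult, so if the two outcomes need different handling we can also switch over it; a minimal sketch, assuming the group from above:

switch group.wait(wallTimeout: .now() + .seconds(1)) {
case .success:
    print("all work completed")
case .timedOut:
    print("gave up waiting")
}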

Math with DispatchWallTime

DispatchWallTime has no initializer which takes a number of seconds as a literal value. However, the Dispatch overlay defines math operators to add a Float64 number of seconds to a wall time.

if group.wait(wallTimeout: .now()) == .timedOut {
    return // trigger some failure
}

First, we get the current wall-clock time with the .now() static method.

if group.wait(wallTimeout: .now() + 0.75) == .timedOut {
    return // trigger some failure
}

Next, we add a floating point number of seconds. Here, I'm adding three quarters of a second. So this timeout will happen three quarters of a second after now.

Next, we realize that we'll never really know how long the user is willing to wait, so we'll actually have to check back more often.

var canceled: Bool = false
while group.wait(wallTimeout: .now() + 1.0) == .timedOut {
    if canceled {
        throw //some error
    }
}

Instead of failing whenever the wait times out, we'll simply check whether our work has been canceled, and only bail out under those circumstances. Now we're free to set our timeout to the lowest value the user might wait, like 1 second. Once a second, the wait will time out and we'll check for cancellation. If we were canceled, we escape the flow; if we weren't, we wait again.

Let's wrap all that up in a class.

class CancelableGroup {
    /// private encapsulation prevents un-canceling
    private var canceled: Bool = false
    var isCanceled: Bool {
        return canceled
    }
    func cancel() {
        canceled = true
    }

    let group: DispatchGroup
    let periodicWait: TimeInterval

    init(group: DispatchGroup = DispatchGroup(), periodicWait: TimeInterval = 1.0) {
        self.group = group
        self.periodicWait = periodicWait
    }

    enum WaitResult {
        case completed, canceled
    }

    func wait() -> WaitResult {
        while group.wait(wallTimeout: .now() + periodicWait) == .timedOut {
            if canceled {
                return .canceled
            }
        }
        return .completed
    }
}

extension DispatchQueue {
    func async(cancelable: CancelableGroup, execute work: @escaping (CancelableGroup) -> ()) {
        self.async(group: cancelable.group, execute: {
            work(cancelable)
        })
    }
}

This recipe wraps a group together with waiting for a cancellation. Instead of returning after one timeout period, it loops while timeouts are still happening, only returning .canceled if cancel() is called before all the work has completed.

The extension on DispatchQueue calls the variant of async which takes a group as an argument. Dispatch will call enter() on the group when we enqueue the closure, and leave() when it completes, alleviating us of that cumbersome bookkeeping.
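In other words, for the group's bookkeeping these two forms are equivalent; a sketch assuming a queue and group already exist:

// manual bookkeeping
group.enter()
queue.async {
    // work
    group.leave()
}

// Dispatch does the enter/leave for us
queue.async(group: group) {
    // work
}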

Together, now our long running code can look like this:

class BackgroundWorkController {
    let backgroundQueue: DispatchQueue = DispatchQueue(label: "BackgroundWorkController", attributes: [.concurrent])
    let cancelable: CancelableGroup = CancelableGroup()

    func cancel() {
        cancelable.cancel()
    }

    func doWork() {
        //dispatch many asynchronous closures
        backgroundQueue.async(cancelable: cancelable, execute: { cancelable in
            if cancelable.isCanceled { return }
            //long running work here
        })
        if cancelable.wait() == .canceled { return }

        //dispatch many more asynchronous closures
        backgroundQueue.async(cancelable: cancelable, execute: { cancelable in
            if cancelable.isCanceled { return }
            //more long running work here
        })
        if cancelable.wait() == .canceled { return }

        //complete
    }
}

Here's what our app's code might look like: repeatedly enqueueing multiple asynchronous closures on the queue using our cancelable group. Between each major phase of concurrent work, at the points where results need to be tied back together, we wait for the closures to finish, returning early only if cancel was triggered. Each cancellation check happens at most periodicWait seconds apart, so the latency of leaving execution after a cancel stays low.

Keep in mind that unlike OperationQueue in Foundation, which will skip canceled operations, DispatchQueue will execute every closure given to it (assuming the app doesn't crash before it gets around to it). So each closure that's already been enqueued needs to check for cancellation itself. This is one reason we wrap the canceled flag up in a class: so its state can be shared amongst many closures.
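For contrast, a rough sketch of the Foundation behavior: operations canceled before they start executing are skipped, though an operation that is already running still has to check for cancellation on its own.

import Foundation

let operationQueue = OperationQueue()
operationQueue.addOperation {
    // long running work; a running operation still has to check
    // for cancellation itself if it wants to stop early
}
operationQueue.cancelAllOperations() // not-yet-started operations won't run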

Quality of Service

Since there are typically many more queues than processors, and potentially many closures enqueued on each one at a time, Dispatch needs a way to determine which queues need to go first. It does this with a Quality of Service (QoS) attribute on the queue.

let queue = DispatchQueue(label: "Daily", qos: DispatchQoS.default, ...

When we create a queue, we can set its QoS. If we need a queue to cycle through work as fast as possible, because we need new data during a drag of the mouse or a finger across the screen, we'll pick the .userInteractive QoS. In other cases it may be fine to let work get done whenever the system has time, so we'd use the .background QoS.
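For example, a sketch (the queue labels are hypothetical) of creating one queue for each end of that spectrum:

// work the user is actively waiting on, e.g. tracking a drag
let dragTrackingQueue = DispatchQueue(label: "drag-tracking", qos: .userInteractive)

// work that can happen whenever the system has spare time
let prefetchQueue = DispatchQueue(label: "prefetch", qos: .background)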

async(group: DispatchGroup? = default, qos: DispatchQoS = default,...

DispatchQueue has a version of async that takes a qos as an argument. This doesn't make our closure skip to the front of the queue; instead, it temporarily raises the quality of service of the entire queue to match the value we provided.
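A sketch of that, using a hypothetical background queue:

let backgroundQueue = DispatchQueue(label: "prefetch", qos: .background)

// asking for .userInitiated here raises the queue's service level
// until this closure has run
backgroundQueue.async(qos: .userInitiated) {
    // work the user is now waiting on
}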

Summary

Today we learned to create DispatchGroups to wait for many concurrent work closures to complete. We learned to do math with wall times, and to avoid deadlocking by periodically checking for cancellation. Lastly, we learned how Quality of Service affects which work gets done when.

Synchronizing asynchronous execution, now that's Swift!

Day 5 - Practicing

Come up with an app which does three things: something expensive but inherently serial, something inherently parallel, and something that wants synchronous access to a shared resource. Pick the right Dispatch design pattern for each part of the app.