It’s a concurrent world, and, increasingly, it’s an asynchronous world too. Many things are going on at the same time, and it’s impossible to determine exactly when each is starting or ending. In other words, everything is fast and out of control.
For a software developer, concurrency and asynchronicity are more important concepts than ever. But both are hard concepts, and they're also hard to embrace in practice: the sequential model of programming is so ingrained in my head that new constructs for concurrency and asynchronicity in familiar languages, and less familiar languages with built-in support for these concepts, both present a steep learning curve. Fortunately, there's been a lot of buzz, from separate origins, about two programming concepts that may, um, promise to deliver a brighter future to me and other befuddled developers.
Disclaimer: work in progress
In preparing this post, I violated a critical rule I have for blogging: I bit off way more than I could chew in one post. I try to limit my blogging time to an hour or two per post. The time limit helps keep me from spending all day on a post, and it keeps the post focused and short enough that I might expect someone else to read it. Well, there's so much out there about futures and promises that I spent hours following links and taking notes, and when I put everything together, I had 2000 words. That's just too much.
So this is actually the first of a few posts I hope to do about these topics this winter. I suspect they'll lead into more posts on concurrency: my interest in the topic seems to be getting stronger. I hope that getting started with what I know now will help me get my thoughts together and maybe encourage more knowledgeable people to chime in.
The game is afoot
Have you ever reflected on the way you become aware of new or newly important things? New technologies, concepts, writers, trends — at some point you realize, “This is new, at least to me.” Sometimes there’s a lot of inescapable hype, but usually something new comes along with a mention here and then another one there, and your brain says, “I’ve seen or heard about this before.” I like identifying these moments and looking deeper into the new arrival.
Last week, I saw a tweet from JavaScript stalwart Chris Williams pointing me to a post by Kris Zyp about CommonJS. The post was mostly about the concurrency strategy that CommonJS looks to be embracing. Zyp mentioned "promises" as a concept. When I read that, I remembered Michael Fogus's talk at the Capital Area Clojure Users Group earlier this month, in which he mentioned the arrival of "futures and promises" in Clojure 1.1, also in the context of concurrency. Then my January issue of Dr. Dobb's came online and, voilà, there was an article about asynchronous concurrency that encourages programmers in Java, .Net, and C++ to "[u]se futures to manage the asynchronous results." Three recent examples.
This is new, at least to me.
Asynchronous concurrency in JavaScript
I’ll start with JavaScript and look at Clojure and other languages in a later post.
CommonJS, according to Zyp's post, is moving towards embracing "shared-nothing event-loop concurrency." That's a mouthful, and it's not easy to find a good definition or discussion of it online. From what I can tell, it is based on the Actor model, and it was well defined in the programming language E, which used event-loop concurrency to bring concurrency to sequential imperative programming without locks or deadlock, and with less onerous safety-versus-liveness trade-offs.
Roughly speaking, then, event-loop concurrency imagines workers, objects with state and behavior, that are designed to receive and queue up messages, or events, sent from other workers or from some central object or thread. Each worker has its own event loop, waiting for and then responding to events it finds in its own queue. The workers don't share state or references to state with any other objects (hence, "shared-nothing"), so they can only send messages back and forth to each other.
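To make that concrete, here's a toy sketch of the idea. All the names and wiring here are my own illustration, not code from CommonJS or the Web Workers API; a real event loop would also drain each queue asynchronously rather than on demand:

```javascript
// A worker owns a private queue; other code can only post messages to it.
function makeWorker(handler) {
  const queue = [];
  return {
    post(msg) { queue.push(msg); },
    // In a real runtime the event loop calls this; here we drain by hand.
    runTurn() { while (queue.length > 0) handler(queue.shift()); },
  };
}

const observed = [];
const logger = makeWorker((msg) => observed.push(msg.value));

const adder = (() => {
  let sum = 0; // private state: reachable only through messages
  return makeWorker((msg) => {
    if (msg.type === "add") sum += msg.value;
    if (msg.type === "report") msg.replyTo.post({ type: "sum", value: sum });
  });
})();

adder.post({ type: "add", value: 2 });
adder.post({ type: "add", value: 3 });
adder.post({ type: "report", replyTo: logger });
adder.runTurn();  // adder drains its queue: sum becomes 5, reply is queued
logger.runTurn(); // logger drains its queue
console.log(observed[0]); // prints 5
```

Notice that `sum` never leaks: the only way to learn its value is to send a message and wait for a reply, which is exactly what makes the coordination asynchronous.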
The notion of workers isn’t unique to CommonJS. It’s taken from the W3C’s Web Workers API for HTML 5:
This specification defines an API that allows Web application authors to spawn background workers running scripts in parallel to their main page. This allows for thread-like operation with message-passing as the coordination mechanism.
This coordination between workers will be asynchronous: there can be no way of knowing when an event is going to be processed by a worker’s event loop. The promise is introduced as an abstraction to make it easier for workers to coordinate. Zyp explains the concept this way:
A promise can be thought of as representing the result of an asynchronous computation. The promise will receive the value returned by the call when it is finished, and acts as a placeholder for the returned value.
So a promise can be treated as an object that may or may not have a value at any point in time. When the asynchronous activity that determines its value is completed, the promise allows access to that value.
Promises are described as immutable: once a promise's value is set, it must not be changed. It appears that enforcing that immutability is left to each promise implementation in CommonJS.
The promise itself must expose a function ‘then’ (or, explicitly, must return a function as the value of the property ‘then’). The ‘then’ function takes as arguments functions that will act as handlers:
then(fulfilledHandler, errorHandler, progressHandler)
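Here's a minimal sketch of that placeholder idea, hand-rolled by me rather than taken from any CommonJS implementation (and simplified: the real 'then' must also return a new promise, which I omit here and come back to below):

```javascript
function makePromise() {
  let state = "pending";
  let value;
  const handlers = []; // callbacks registered before the value arrived
  return {
    then(fulfilledHandler, errorHandler) {
      if (state === "pending") handlers.push({ fulfilledHandler, errorHandler });
      else if (state === "fulfilled") fulfilledHandler(value);
      else errorHandler(value);
    },
    resolve(v) {
      if (state !== "pending") return; // immutability: the first value wins
      state = "fulfilled";
      value = v;
      handlers.forEach((h) => h.fulfilledHandler(v));
    },
    reject(err) {
      if (state !== "pending") return;
      state = "rejected";
      value = err;
      handlers.forEach((h) => h.errorHandler(err));
    },
  };
}

const seen = [];
const p = makePromise();
p.then((v) => seen.push(v), (e) => seen.push("error: " + e));
p.resolve(42); // the queued handler fires now that the value is known
p.resolve(99); // ignored: a settled promise must not change
console.log(seen); // prints [ 42 ]
```

The handler queue is what lets you register interest before the asynchronous work finishes; the `state` check is what makes the promise behave immutably.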
Interestingly, a promise’s ‘then’ function returns another promise. This allows a chaining of operations that can pleasingly resemble English instructions:
var be_patient = do_something().then(do_something_else).then(do_x_on_success, do_y_on_failure)
The fictional do_something function returns a promise, promise 1. The first 'then' returns another promise, which stands in for the result of applying do_something_else to the successful outcome of promise 1. The second 'then' returns a third promise, which stands in for the result of applying do_x_on_success or do_y_on_failure to the result of promise 2. This third promise is assigned to the var be_patient. When promise 3 has a value, the execution is complete, and be_patient has a value.
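A hand-rolled sketch, mine rather than any CommonJS implementation, of why returning a new promise from 'then' makes chains like that possible (for simplicity, do_something here resolves immediately; in real code the value would arrive later):

```javascript
function makePromise() {
  let state = "pending";
  let value;
  const pending = []; // callbacks waiting for this promise to settle
  const settle = (nextState, v) => {
    if (state !== "pending") return; // a promise's value never changes
    state = nextState;
    value = v;
    pending.forEach((run) => run());
  };
  return {
    resolve: (v) => settle("fulfilled", v),
    reject: (e) => settle("rejected", e),
    then(onSuccess, onFailure) {
      const next = makePromise(); // each 'then' yields a fresh promise
      const run = () => {
        const handler = state === "fulfilled" ? onSuccess : onFailure;
        try {
          next.resolve(handler(value)); // next stands in for the handler's result
        } catch (e) {
          next.reject(e);
        }
      };
      if (state === "pending") pending.push(run);
      else run();
      return next;
    },
  };
}

function do_something() {
  const p = makePromise();
  p.resolve(10); // stand-in for an asynchronous result
  return p;
}

const results = [];
const be_patient = do_something()
  .then((n) => n + 1)                         // do_something_else
  .then((n) => results.push(n),               // do_x_on_success
        (e) => results.push("failed: " + e)); // do_y_on_failure
console.log(results[0]); // prints 11
```

Each 'then' creates the next link before the previous value even exists; the chain describes the whole computation up front and fills in values as they arrive.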
Beyond this requirement, a promise may also be extended to respond to get(propertyName) and call(functionName, *args) messages. These functions return promises — for the property values or the results of function calls.
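A sketch of how those extensions could be wired up; the plumbing is my own invention, using a toy, already-resolved promise whose 'then' runs synchronously so the example stays short:

```javascript
// fulfilled() builds a toy promise whose then() runs its handler
// immediately and wraps the result in another toy promise.
function fulfilled(value) {
  return {
    then(onSuccess) { return fulfilled(onSuccess(value)); },
    get(propertyName) {
      return this.then((obj) => obj[propertyName]); // promise for a property
    },
    call(functionName, ...args) {
      return this.then((obj) => obj[functionName](...args)); // promise for a result
    },
  };
}

const seen = [];
const user = fulfilled({ name: "Ada", greet: (who) => "hi " + who });
user.get("name").then((v) => seen.push(v));
user.call("greet", "Bob").then((v) => seen.push(v));
console.log(seen); // prints [ 'Ada', 'hi Bob' ]
```

The appeal is that you can ask a promise for a property or a method result without ever waiting for the underlying object yourself; you just get back another promise.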
CommonJS’s promises are inspired in part by the Dojo toolkit’s Deferreds. The Dojo documentation reads:
Deferreds are a way of abstracting non-blocking events, such as the final response to an XMLHttpRequest. Deferreds create a promise to return a response at some point in the future and an easy way to register your interest in receiving that response.
This feels very familiar — it feels lazy. Not lazy as in "wait until I tell you to do this, then do it right away", but as in "do this as soon as you can, I can wait."
All the code examples I've seen are fairly trivial: they end up printing a string to the console, or whatever standard out is inside a server-side JS instance. They read deceptively like sequential lines of code. They also risk confusing the reader about what exactly happens when a promise is fulfilled: as far as I can tell, all that happens is an evaluation, whose outcome may or may not be captured. You only see a result printed, e.g., if a callback includes a print command as a side effect.
I know one of the JavaScript projects planning to adhere to the CommonJS API is node.js, and I'm looking forward to Paul Barry's upcoming presentation on node.js at the Baltimore JavaScript Users Group. I'm sure he's going to talk about event-based programming, but I plan to ask him to go into detail about event-loop concurrency and the use of promises in node.js.