We’re just getting started on the MIT OpenCourseWare course Structure and Interpretation of Computer Programs. I got a lot of responses from interested people, which is exciting. So far we haven’t encountered much worth chatting about, but I’m thinking the material’s difficulty is going to increase quickly. In the meantime, I’ve jotted down a few notes about 1) why recursion is important, 2) the role of abstraction as a learning tool, and 3) the geekiest videos imaginable.
The notes for the introductory lecture open with the claim that SICP isn’t really about Computer Science, or even computers, which are just the tools used to harness the power of computation, but rather about how to reason about imperative knowledge and how to put computation to use to solve problems. Early on, the lecture presents an algorithm and then introduces the notion of recursion. This led to my first moment of enlightenment:
Why Recursion Matters
I had occasionally wondered why programming in Scheme involved so much emphasis on recursion. After all, for loops and while loops in other languages seemed to do pretty much the same thing, so what’s so great about recursion?
Scheme is described, e.g. in the recent announcement of R6RS, as “The Algorithmic Language Scheme.” Scheme favors recursion because algorithms employ it. And Scheme is more interested in modeling algorithms than in modeling execution flow, memory pointers, etc. So the function is the algorithm, or at least one cycle in the algorithm’s application, and it handles the “apply the steps again” part simply by calling itself. Why would a language devoted to modeling algorithm execution this cleanly bother with special keywords and structures for looping?
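To make that concrete for myself, here’s a toy sketch (mine, not the book’s or the lecture’s): summing the integers from 1 through n with no loop keyword at all, just a procedure that “applies the steps again” by calling itself.

(define (sum-to n)               ; add 1 + 2 + ... + n
  (if (= n 0)                    ; base case: nothing left to add
      0
      (+ n (sum-to (- n 1)))))   ; apply the steps again to a smaller n

(sum-to 5)                       ; => 15

The procedure is the whole algorithm; the recursive call is the loop.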
Now I can’t wait until I piece together why lists are so important that they’re essentially the only native data structure in Lisp.
Abstraction’s Role in Learning
I’ve been familiar with the notion of abstraction since I started programming. But when I heard “abstraction” I usually thought about classes and interfaces — about code.
SICP uses the idea of abstraction from the very start of the course, and not just to describe code but almost everything we’re learning. Although I don’t think the lectures or the book (or the videos, see below) come out and say this, everything presented by the course is to be understood as an abstraction. The substitution model, for instance, is offered as merely the first way we’re going to think about how the computer evaluates code.
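For example (my own toy trace, not one from the book), the substitution model says you evaluate a call by substituting the arguments into the body of the procedure and carrying on:

(define (square x) (* x x))

; Substitution-model view of evaluating (+ (square 3) (square 4)):
;   (+ (square 3) (square 4))
;   (+ (* 3 3) (* 4 4))
;   (+ 9 16)
;   25

It’s admittedly a fiction about what the machine actually does, but it’s a useful one, which is exactly the point.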
Our grasp on the functioning of the language, and of computers, is just an approximation, an interpretation whose value is pragmatic. When actual pieces of code like (define ...) and (lambda ...) are introduced, they are not listed as keywords or functions, but presented as abstractions, for definition and for procedures respectively. Don’t focus on the magic words. Focus on the concepts to which they give you access.
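A quick illustration of what I mean (again, my own example rather than the course’s): define attaches a name to a value, and lambda builds a procedure, which define can then name.

(define pi 3.14159)                      ; define: give a value a name
(define area (lambda (r) (* pi r r)))    ; lambda: build the procedure; define names it

(area 2)                                 ; => 12.56636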
I really like this broad use of abstraction. It makes me more comfortable starting out, because I know that what I’ve learned so far is imperfect, but it’s enough for now. In a way, it’s a step on a ladder. It doesn’t take me to the top, but it takes me up to the next step.
The Videos
I’ve also been watching the videos that HP recorded of Abelson and Sussman lecturing on the course contents of SICP back in 1986. The first videos drive home both of the points I made above. They also prove to be oddly entertaining in an “I’m watching just about the geekiest thing imaginable” way.
Along for the SICP ride, although he isn’t promising he’ll keep up, is Lispy, who had some thoughts on the video lectures. I thought the videos were interesting too, although the detail I chose to point out is how often Sussman had to double-check the number of closing parens on his simple programs.
It’s good to know even the grand masters have to watch those parens.