Sometimes you're looking at a snippet of code, and want to change it.

my_data
my_data.some_attr

When going from the first line to the second, I needed to put my cursor near the end, and then add the attribute access. Especially when working in a REPL where the previous line was just above me, this is pretty nice!

my_data.some_attr
json.loads(my_data.some_attr)

This is less nice. I have to jump backwards, add json.loads, and wrap the thing I care about! And here I'm assuming we had import json.

This sort of targeting can get annoying in a larger expression, especially if you're not using a fancy structural editor like all the LISPers seem to have access to.

do_a_thing(
   one_argument=some_stuff,
   another_argument=json.loads(
     my_data.some_attr
   )
)

This idea of forward progress can be a huge contributor to feeling quick in a language. This is why pipelining in shell scripts can feel so satisfying.

ls *.md  # look at all the markdown files here
ls *.md | xargs wc -l  # now let's count the lines on each file

Of course one can just do wc -l *.md, but just appending to the end of program text maintains a flow.

Fluent APIs also help you get this feeling. You see something and want to do something with it. It's very likely you then will want to do a second thing with it.

$('.warning-message')
     .color('red')
     .size(2, 'em')
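The same effect is easy to sketch in Python by having each method return self so that calls chain. The class and method names below are made up for illustration, not any real library:

```python
class Element:
    """A toy fluent wrapper: every method returns self, so calls chain."""

    def __init__(self, selector):
        self.selector = selector
        self.styles = {}

    def color(self, value):
        self.styles["color"] = value
        return self  # returning self is what enables the forward motion

    def size(self, value, unit):
        self.styles["font-size"] = f"{value}{unit}"
        return self


styled = Element(".warning-message").color("red").size(2, "em")
```

Each call hands you back the object, so "doing a second thing with it" is just more typing at the end of the line.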

Some language features are not amenable to forward progress, even though at first they seem to lean into velocity generation.

Python comprehensions can be helpful at combining map and filter operations, but leave a bit to be desired when changing up the expression.

results_on_scaled_diagonal = [
   do_operation(10 * i, 10 * i)
   for i in range(10)
   if 10 * i not in indices_to_skip
]

I would like to be able to do j = 10 * i here, but instead I end up needing to do something like:

results_on_scaled_diagonal = [
   do_operation(j, j)
   for j in [10 * i for i in range(10)]
   if j not in indices_to_skip
]

range(0, 100, 10) also gives me what I want here, since this transformation happens to be a range, of course. But sometimes I want something a bit subtler than a linear transformation.

While conceptually I'm looking to just capture some computation and reuse it, comprehensions don't really get me there without doing some extra work.
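Since Python 3.8, assignment expressions do let you capture the computation inline without the nested comprehension. A sketch, with a stand-in body for do_operation and a made-up indices_to_skip:

```python
def do_operation(x, y):
    # stand-in for whatever pairwise operation you care about
    return x + y

indices_to_skip = {20, 40}

# (j := 10 * i) binds j in the filter clause,
# and j is then reusable in the output expression
results_on_scaled_diagonal = [
    do_operation(j, j)
    for i in range(10)
    if (j := 10 * i) not in indices_to_skip
]
```

It works because the if clause is evaluated before the output expression for each element, so j is already bound by the time do_operation sees it. Still, you had to jump back into the middle of the comprehension to add it.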

Forward progress is sometimes about being able to continue typing from an existing expression to dig deeper. In a sense, typing "from left to right".

But at a deeper level, I think forward progress is about being able to take an existing intermediate expression and easily re-use it, without having to factor out the value in some way or another.

Many of Rust's standard library types are well versed in forward progress. I particularly like Option because it tries to imagine constant forward motion:

// provide a default
Some(x).unwrap_or(y) = x
None.unwrap_or(y) = y
// provide a default (computed by a function call)
// useful if calculating the default is expensive
Some(x).unwrap_or_else(f) = x
None.unwrap_or_else(f) = f()
// apply a function "inside" the Option
Some(x).map(f) = Some(f(x))
None.map(f) = None

// compute a default function result (if None)
// or apply a different function to the contained value!
None.map_or_else(f, g) = f()
Some(x).map_or_else(f, g) = g(x)

map_or_else in particular feels downstream of realizing that even very specific transformations are worth giving a name, even if the name feels fairly awkward. People who need it often will reach for it.

Another place where I believe Rust got things right here is the await keyword.

# in Python land... 
foo().bar()["baz"]
# oops! the return value of bar needs to be awaited
(await foo().bar())["baz"]
# actually the result of foo does as well!
(await (await foo()).bar())["baz"]
// Over in Rust land
foo().await.bar().await["baz"]

One of the top players for forward progress is the family of APL-likes, which generally make it extremely hard to give names to things but easy to work with intermediate values despite their having no names.

Consider taking an average of the elements of a list.

# In Python land...
avg = sum(data)/len(data)

This is all well and good when data has a name, but what if it's an intermediate value and you're just in the middle of an expression?

# first you look at the sum...
sum([doc.total for doc in get_documents()])
# now you want the average... you're forced to factor out to make this clean!
data = [doc.total for doc in get_documents()]
avg = sum(data) / len(data)

One might consider this a feature of Python. And of course in this sort of circumstance you can write out a helper function.

my_avg = lambda elts: sum(elts) / len(elts)
avg = my_avg(data)

But now you have a helper function that might be underspecified, on top of having a name. This average blows up on empty lists, of course, but perhaps you know that in your case the list length is always positive.
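For what it's worth, Python's standard library does give this particular helper a name: statistics.fmean (and statistics.mean), which also fails loudly on empty input with a StatisticsError rather than a bare ZeroDivisionError:

```python
import statistics

data = [3, 4, 5, 10, 23]
avg = statistics.fmean(data)  # 45 / 5 == 9.0

# empty input raises statistics.StatisticsError,
# making the "empty list" case explicit
try:
    statistics.fmean([])
except statistics.StatisticsError:
    empty_input_rejected = True
```

But that only underscores the point: someone had to decide this operation deserved a name, and you had to know to reach for it.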

There's a core tension with intermediate values, in that you can usually only consume them once.

In Uiua, a stack-based APL-like, there are a bunch of helpers around this. One operator I like a lot is on.

# this puts some data on the stack
data
# this calls some_operation on the
# top of the stack (i.e. data)
some_operation
# --> stack is [some_operation(data)]

So in practice I can do something like the following

# put this list of numbers on the stack
[3 4 5 10 23]
# --> stack is [[3 4 5 10 23]]
# sum the numbers in the list
# on the top of the stack
/+ # (this is "reduce add")
# ---> stack is [45]

So by default we have values on the stack and are consuming them. But what if I wanted to keep my data around? That's where on comes in.

on f will call f with the top of the stack x, but then put f(x) and x onto the stack.

This means you don't consume your value, and can use it in other calculations.
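The idea translates directly into Python. Here's a hypothetical helper (the name on and the tuple convention are my own, just mirroring the Uiua operator):

```python
def on(f, x):
    """Apply f to x, but keep x around too, like Uiua's `on`."""
    return f(x), x

data = [3, 4, 5, 10, 23]

# "consume" data with sum, but keep the list as well
total, data = on(sum, data)

avg = total / len(data)  # 45 / 5 == 9.0
```

The point is that f(x) doesn't have to be the end of the line for x.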

For our average

# put this list of numbers on the stack
[3 4 5 10 23]
# --> stack is [[3 4 5 10 23]]
# sum the numbers in the list
# on the top of the stack
# but also copy the top of the stack
on /+ # (this is "reduce add")
# ---> stack is [[3 4 5 10 23] 45]
length
# ---> stack is [5 45]
divide
# --> stack is now [9]
# we have our average

By adding on into your script you can keep on working with values you would otherwise consume.

By the way, without comments, you're looking at something much terser.

[3 4 5 10 23]
/+

[3 4 5 10 23]
divide length on /+

Whether this is more readable or not is a question some people will argue about. But I think that when you're working in an immediate environment, and you can hold all of the conditions of your data in your head, then this forward movement is very helpful.

And on a larger scale, if all of your code is a bit terser, the end result is a codebase where much more fits on the screen, and in your head, at once.

In a sense the forward progress idea is linked to very old discussions of top-down and bottom-up procedural programming.

data = foo(bar(baz(some_input)))
data = some_input | baz | bar | foo

In one model, you might be able to quickly say "my data is (return type of foo)". In another, you might be able to quickly say "the computer takes some input, then runs baz, then bar, then foo".

There are circumstances and contexts where one works better than the other. But it feels very true that having good options to do both can lead to people choosing the "clearer" option as needed.

In particular, think about trying to do the following things on intermediate values in your favorite language, and wonder whether you wouldn't mind something a bit terser, or that doesn't involve moving your code around as much.

  • I have a list of Foos, give me a list of their bars

  • I have a list of Foos, give me a list of their bars, but by filtering out any items where baz matches some predicate

  • I have a bunch of pairs, I want an associative map built from these pairs

  • I have an associative map, I want a new one but without certain keys

  • I have an associative map, I want a new one including only certain keys

  • I have an associative map, but I want to merge it with another associative map

  • I have x, I want the result (f(x), g(x))

  • I have x, and I want the result g(f(x), x)

  • I have (x, y), and I want to do f(x, y)
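In Python, most of these fall out of comprehensions and a couple of operators. A quick sketch with made-up sample data (and hypothetical names like fanout and spread, since these have no standard ones):

```python
foos = [{"bar": 1, "baz": "x"}, {"bar": 2, "baz": "skip"}]
pairs = [("a", 1), ("b", 2)]
m = {"a": 1, "b": 2, "c": 3}

bars = [foo["bar"] for foo in foos]                        # list of bars
kept = [foo["bar"] for foo in foos if foo["baz"] != "skip"]  # with a filter
as_map = dict(pairs)                                       # pairs -> map
without = {k: v for k, v in m.items() if k != "c"}         # drop keys
only = {k: m[k] for k in {"a"} if k in m}                  # keep keys
merged = m | {"d": 4}                                      # merge (3.9+)
fanout = lambda f, g, x: (f(x), g(x))                      # x -> (f(x), g(x))
spread = lambda f, pair: f(*pair)                          # (x, y) -> f(x, y)
```

None of these is hard to write, but notice how many of them require wrapping or restructuring rather than simply typing onward from the value you already have.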

Does it make sense for all of these operations to have their own names, and for you to commit those to memory? Probably not. But I do feel like it's nice if you are able to quickly write them up from first principles when you need them. Because then operations that are conceptually simple can look just as simple on the page.