Recently I posted an article titled Curry for dummies. I have to admit the article possibly exceeded the dummy level, so here's a little code-based wrap-up before extending upon it:

int Add(int a, int b) { return a + b; }

//
// Without curry support = all or nothing
//

var add = Add; // a delegate (no args applied = nothing)
var oops = add(1); // an error (not enough args applied)
var three = add(1, 2); // an int (all args applied = all)

//
// With curry support = allow partial application
//

var add = Add; // a delegate
var addOne = add(1); // a function with one more int to go
var three = addOne(2); // an int - all args of Add have been "eaten"

I pointed out there are two ways to make currying work. Actually, I'd say there's only one "right" way of doing it (at least from a theoretical point of view): formal substitution as specified in the lambda calculus. Let's redo the last part of the sample above, but more formally:

//
// With curry support = allow partial application
//
add = λ a b . a + b

addOne = add 1
       = (λ a b . a + b) 1
       = [a := 1](λ b . a + b)
       = λ b . [a := 1](a + b)
       = λ b . 1 + b

three = addOne 2
      = (λ b . 1 + b) 2
      = [b := 2](1 + b)
      = 1 + 2
      = 3

But we have to keep ourselves honest. This approach only works in a purist functional world. I'll explain why in just a bit. The alternative I've discussed is by wrapping the original function in a new function, reducing the number of arguments (the "InvocationExpression" way):
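A minimal sketch of that wrapping approach (the names curry, addOne, and three are mine, for illustration):

```csharp
using System;

Func<int, int, int> add = (a, b) => a + b;

// Wrap instead of substitute: curry captures `a` in a closure and
// returns a new one-argument function; add's body is left untouched.
Func<int, Func<int, int>> curry = a => b => add(a, b);

var addOne = curry(1); // a function with one more int to go
var three = addOne(2); // only now is add actually invoked
Console.WriteLine(three); // 3
```

Note that the original function is only ever called once all arguments have been collected, which is exactly what keeps its call semantics intact.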

But why is this "impure" way preferable in some sense? Because side-effects come and bite us if we don't do it this way. Essentially, this approach preserves the "call-by-value" semantics of function calls throughout the currying process (which we can't really call currying anymore, because we're losing some of the essential lambda purity, but that's nitpicking). As we don't touch the inner workings of the original function, its behavior when called isn't changed. Why this matters becomes apparent in the following sample:

Func<int,int,int> f = (a, b) => (a + b) / a;

I hear my readers already screaming the word "simplify", but can we really do that, like this?

Func<int,int,int> f = (a, b) => 1 + b / a;
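To make the danger concrete, here's a quick check of my own, pitting the original against the "simplified" version; with C#'s truncating integer division and checked arithmetic the two really can disagree:

```csharp
using System;

Func<int, int, int> f = (a, b) => (a + b) / a; // original
Func<int, int, int> g = (a, b) => 1 + b / a;   // "simplified"

// Truncation toward zero breaks the rewrite for negative operands:
Console.WriteLine(f(-3, 2)); // (-3 + 2) / -3 = -1 / -3 = 0
Console.WriteLine(g(-3, 2)); // 1 + 2 / -3    = 1 + 0  = 1

// And (a + b) can overflow where b / a doesn't:
// checked { f(int.MaxValue, 1); } // would throw OverflowException
```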

That's exactly the problem, and I want you to think about the assumptions you've made as to why this would hold true. Just a few questions/hints. Did you make assumptions about the type of the parameters (or: did you really look at the variable's type first; if not, what made you believe the distributive law holds)? What about overflow conditions? Although all of those questions need to be answered, a more important question concerns the call semantics. You'll have to admit you just assumed (correctly for this particular language) that call-by-value semantics are in effect. For example, calling f like this:

f(e, 2)

is executed as:

f_arg_0 = e;
f_arg_1 = 2;
f(a := f_arg_0, b := f_arg_1) // using := for parameter binding
{
return (a + b) / a; // could we optimize here? why? why not? assumptions?
}

What are the implications of this strategy? The expressions that act as the parameter values are evaluated exactly once (I've assumed a left-to-right order, but even that might differ from language to language). But maybe the body of f doesn't use the value of the first parameter at all (this behavior might be conditioned on the second argument). So if expression e didn't terminate, we'd be hanging the application for no good reason. To look at it another way: the parameter evaluation stays outside the function, which acts as a black box from the caller's point of view. An alternative is call-by-name, where the parameters to the function are "propagated down into" the function's body by a formal substitution mechanism:

f(e, 2)

becomes

return (e + 2) / e;
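Call-by-name can be simulated in C# by passing thunks (Func<T>) instead of values, so that every use of a parameter re-evaluates its argument. A sketch (the thunk-taking F mirrors f above and is my own construction):

```csharp
using System;

// Call-by-name via thunks: parameters are functions, evaluated at each use.
int F(Func<int> a, Func<int> b) => (a() + b()) / a(); // a is used twice

int evaluations = 0;
Func<int> e = () => { evaluations++; return 4; };

Console.WriteLine(F(e, () => 2)); // (4 + 2) / 4 = 1
Console.WriteLine(evaluations);   // 2 - e was evaluated twice
```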

This time, e is evaluated twice, which is essentially what the substitution implementation in Curry for dummies does. Does this matter? Most people will say: yes, because of performance. While that's true, there's more to it. Let's assume our compiler is smart and can eliminate common subexpressions as a performance optimization. Assume e = (some * giant + expression):

return ((some * giant + expression) + 2) / (some * giant + expression);

So, it would claim the right to say: hey, this thing is computed twice (and that needn't be caused by substitution of parameters), let's do it only once and reuse the produced value:

var dummy = (some * giant + expression);
return (dummy + 2) / dummy;

But can we really do this? What about side-effects? Here's a sample where such an optimization would hurt:

Console.WriteLine("Current time: " + DateTime.Now);
Console.WriteLine("Current time: " + DateTime.Now);

The compiler would rewrite this as:

string dummy = "Current time: " + DateTime.Now;
Console.WriteLine(dummy);
Console.WriteLine(dummy);

Does it have the right to do this? No. For what reason? Side-effects are not made explicit in the framework (or, in more general terms, in imperative languages). The problem is not with the string concatenation; that's a pure function (is it really?). Instead, the problem lies in DateTime.Now's type (remember that the property Now is really a call to a "function" called get_Now) being just DateTime. From the type we can't know whether or not the result will be constant, and clearly it isn't. In Haskell the type would be IO DateTime, indicating there's a side-effect involved, meaning this optimization couldn't be carried out. Because we don't know - and because we don't have the right to gamble - we can't be smart and need to be conservative.

A slightly modified version of call-by-name is called call-by-need where memoization (a specific form of caching) is employed. Again, parameter evaluation is percolated down into the function being called (to avoid evaluating any parameters that are not used on the code path taken inside the function during this particular call) but when the parameter is used multiple times, it's only evaluated the first time it's used:

f(e, 2)

becomes

var cell_a = new Cell<...>(() => e);
var cell_b = new Cell<...>(() => 2);
return (cell_a.Value + cell_b.Value) / cell_a.Value;

The generic Cell<T> introduced above takes in a function producing a value (Func<T>) that's evaluated at most once, when the Value property getter is called for the first time. On subsequent calls, the result is returned straight away without calling the function again. Here's what a Cell<T> implementation could look like:

class Cell<T>
{
    private Func<T> _func;
    private T _value;
    private bool _evaluated;

    public Cell(Func<T> func)
    {
        _func = func;
    }

    public T Value
    {
        get
        {
            if (!_evaluated)
            {
                _value = _func();
                _evaluated = true;
            }

            return _value;
        }
    }
}
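A small usage example showing the evaluate-at-most-once behavior (the counter is mine, purely to make the evaluations observable):

```csharp
using System;

int evaluations = 0;
var cell = new Cell<int>(() => { evaluations++; return 42; });

Console.WriteLine(evaluations); // 0 - nothing evaluated yet (laziness)
Console.WriteLine(cell.Value);  // 42 - the wrapped function runs here
Console.WriteLine(cell.Value);  // 42 - cached, no re-evaluation (memoization)
Console.WriteLine(evaluations); // 1
```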

Readers looking for a challenge are invited to think about how the original (substitution-driven) Curry for dummies code could be adapted to work with call-by-need semantics; in other words, how can a lambda expression be applied (the number of arguments provided is not really relevant in this discussion) with arguments that are evaluated lazily (at most once)? I'll post the answer some time soon.

Exercise: Based on the discussion of these three strategies, point out what the (differences in the) results of the following function call will be when each of the strategies is applied:

twice(DateTime.Now.Ticks, div(1, 0));

where

long twice(long a, long @default)
{
    return a > 0 ? a + a : @default; // a + a != 2 * a
}

long div(long a, long b)
{
    return a / b;
}

For people who want to know more, Philip Wadler's site contains a bunch of interesting papers on calling strategies: http://homepages.inf.ed.ac.uk/wadler/topics/call-by-need.html. How Haskell benefits from call-by-need is something the reader is invited to think about a bit more and if you really want to dig deeper, I strongly recommend Wadler's paper on Monads for functional programming.

