Wednesday, January 16, 2008 7:44 PM bart

Foundations of Functional Programming - Part 0


It's been a long time since I last posted on my blog (there are certainly good reasons for it) but finally I'm back. Guess the Holiday season has had some positive influence :-). So, what's up? In this series I want to talk a little about functional programming, more specifically what it is, why you should care and how it's been lurking around the corner for a long time. Although this series covers quite some theory, you should find it amazingly simple to absorb and apply (already). For now, we won't dive into "exotic languages" (how exotic they are depends on whose opinion you ask) such as Haskell or F#; rather we'll stick with our good old friend C#. Of course, if there's a large demand for F# stuff, I'll address that as well :-).


Functions - why should I care?

There are lots of good reasons. Maybe you love maths (I do), who knows? But you don't need to be a math freak to apply functional programming in real-world scenarios. A very important trend in computing nowadays is the introduction (well, we're already way beyond that; maybe omnipresence is a better word for 2008) of multi- and many-core computing (not to mention NUMA and the steep climb of virtualization, which put things in yet other dimensions). The fact I'm writing this post on a 1.5-year-old laptop with a sticker "Centrino Duo" illustrates this. As I started to write this post I left Task Manager running in the background and here's the result:


Not much going on in Multi-Core Wonderland, is it? Okay, I didn't put much stress on my machine, but guess what, here's a similar screenshot taken while running some "busy" program:


However, only one half of my investment is working actively on the job: processor (or better: core) number two seems to be damn busy while the other one is wasting its time doing nothing really useful. So, why would I buy a multi-core machine anyway if software isn't taking advantage of it? Why is the following application only wasting time on one core?

class Busy
{
    static void Main()
    {
        while (true)
            ;
    }
}

In this case it's a good thing obviously (no-one wants a busy loop to bring down the entire system), but what if that inner loop was doing something really useful? Some programs out there today (just to mention a few of the big rocks: SQL Server, MSBuild 3.5 with the /m[axcpucount] flag) take advantage of the available computing power, but the inner workings of those beasts are (incredibly) complex, which makes testing, debugging, etc. so much harder than it should be. By the way, isn't it funny (or remarkable) that the samples I mentioned are not really "desktop software", but rather machinery that's typically used in environments where lots of computing power is available (database servers, build machines, developer workstations)? What about all other sorts of applications? Agreed, we have wonderful APIs to make things "easier", and yet more high-level concepts (think of the BackgroundWorker), but let's step back from this for a while...

So, how do we write software that can leverage the power of all cores while keeping developer concerns relatively low (read: not having to worry about locks, semaphores, race conditions, deadlocks and other cumbersome things)? Believe it or not, functional programming can help us answer this question - and yes, it's available to all of you today :-). Beyond this, functional programming can help solve many other problems, for example how to reason about programs in a more formal (and mathematically sound) way. If you think I've taken on a marketing role, don't worry - we'll get into technical details in the next parts of this series. For now, I just want to set the scene in a more philosophical manner.
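To give a little taste of where this is heading, here's a sketch in C# using PLINQ from the Parallel Extensions CTP (which I'll come back to further down). Treat it as an illustration under the assumption that library is installed, not as a finished recipe: because IsPrime below has no side-effects, spreading the work over all cores is literally a one-word change.

```csharp
using System;
using System.Linq;

class ParallelTaste
{
    // A pure function: same input, same output, and nothing outside
    // the function is touched. That's what makes it safe to run on
    // any core, at any time, in any order.
    static bool IsPrime(int n)
    {
        if (n < 2) return false;
        for (int i = 2; i * i <= n; i++)
            if (n % i == 0) return false;
        return true;
    }

    static void Main()
    {
        // AsParallel() tells the query provider it may fan the work
        // out over all available cores - no locks, no semaphores,
        // no race conditions for us to worry about.
        int count = Enumerable.Range(2, 1000000)
                              .AsParallel()
                              .Count(IsPrime);
        Console.WriteLine(count);
    }
}
```

Remove the AsParallel() call and the query still produces exactly the same answer, just on one core - purity is what buys us that freedom.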


Partitioning the world

If functions represent some kind of heaven on earth (admitted, I like to exaggerate quite often), why didn't we hear about them so much earlier? Well, the answer is that the theory behind functional programming has been around for ages (long before computers as we know them today appeared on the globe's surface). Let's give (part of) the theory a name: lambda calculus.

So, given the theory existed at the time computers were invented, why wasn't this promising paradigm chosen to pave the road ahead? Simply (or oversimplified?) because of the Hhugse (functional joke) gap between this theorist-friendly world on steroids (or should I say monoids?) and the machine architectures of those early computers (architectural pieces that live on all the way to the world of x86 and x64, by the way; feel free to go back to real basics such as the von Neumann architecture). If you were asked to invent a higher-level programming language given a book on assembler - what route would you take? I guess you'd like to control the machine as tightly as you can, while making some typical constructs (such as loops and conditionals - something like recursion simply wouldn't be on the radar yet) easier to express, hence abstracting away (= higher-level language) from the bare metal. Hooray - you've just re-invented imperative programming, just by climbing one step higher than ASM.

On the other side of the universe there were (and luckily still are) these mathematicians (whom no-one should be afraid of; all of them I know are friendly men and women) who'd rather descend the stairs from the functional cloud to implement those principles on computers whilst keeping their familiar, safe and secure home of lambda calculus in reach. Obviously their descent to map those higher-level constructs onto machine-friendly instructions is way longer than the easy step the imperative language "designer" took (I'd rather call someone at this early stage an "artist", drawing lines from one high-level construct onto n low-level instructions).

After all, it's all about mapping the What (what you want the program to do) onto the How (how the machine should produce the right output for the given input).


I/O in a different perspective

Re-read the last sentence of the previous paragraph, more specifically the last part on the how. The easiest definition when explaining your job as a programmer to a four year young/old (approaching asymptotically to zero, end of philosophical note) or an eighty year old/young (approaching asymptotically to the maximum age for humans, end of philosophical note) is to settle on some definition that implies you "tell a computer what it should do". However, if you step back and think about that statement you'll discover it's a big lie: you're not really telling it (at least directly) what the program should do, you're telling it how the machine should do it. That being said, when you have to explain your job again to the now five year old (four year olds don't remember things that well yet, exceptions set apart) or the now eighty-one year old (eighty year olds don't remember things that well anymore, exceptions set apart) you might come to the conclusion your statement is a little more true (entering fuzzy logic) than it was one year ago: things have abstracted away a little more from the plumbing level to the say-what-you-want or express yourself level (intentional italics). Think of SQL (do you tell it how to execute a query? not unless you're following the "curse of the cursor" - cursor in database-sense - or if you still live in a preDated Codd-free world), *ML types of languages (as in X*, HT*, XHT*... but also as in OCa* - anyone?) and more recently LINQ (tell it what data you want, regardless of the store or query languages behind the scenes).
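The what-versus-how distinction is easy to spot in C# itself. A small sketch (using nothing beyond C# 3.0 and LINQ to Objects) that computes the same result both ways:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class WhatVersusHow
{
    static void Main()
    {
        int[] numbers = { 5, 8, 1, 9, 4 };

        // How: spell out every step the machine has to take -
        // allocate a list, walk the array, test, append.
        var evensHow = new List<int>();
        foreach (int n in numbers)
            if (n % 2 == 0)
                evensHow.Add(n);

        // What: state the output you want; the query provider
        // decides how (and, as we saw, potentially where) to run it.
        var evensWhat = from n in numbers
                        where n % 2 == 0
                        select n;

        foreach (int n in evensWhat)
            Console.WriteLine(n);   // 8, then 4
    }
}
```

Both fragments yield the same evens, but only the second one leaves the how up to the machinery underneath.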

So, what about another timeless (?) invariant (?) definition of programming: telling how a piece of software should transform a given input into a desired output (tip: don't define software in terms of programming - the four year old will catch the cyclic definition). Think of it as I/O in the broadest sense of the word and forget about buses, devices or peripherals. I did put the words input and output in the How-camp above, where most developers see them as a familiar construct. However, the place they really belong is the What-camp: you simply want to state what output you expect a program to yield for a given input. And that's what functions are all about: transforming some input into one single output, however that might happen. Read the previous sentence carefully: one single output.
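In code, such a function looks deceptively ordinary; the key property is that its single output depends on nothing but its input:

```csharp
using System;

class Functions
{
    // A function in the mathematical sense: it maps every input to
    // exactly one output, and the same input always yields the same
    // output - no hidden state, no side-effects.
    static int Square(int x)
    {
        return x * x;
    }

    static void Main()
    {
        Console.WriteLine(Square(5));   // 25 - today, tomorrow, on any machine
    }
}
```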

Now, what's a bug? Using your first definition of programming you'd likely tell the kid and your grandpa that "a bug occurs when the computer doesn't do what it should do". That's a handy definition, because you project bugs onto the machine and you don't mention that you could have told the computer to do the wrong thing (strictly speaking, you did!). In most cases it's somewhere in between: you might be telling it the right thing but you didn't get the plumbing right. To be more precise, you couldn't tame the side-effects: exceptions appearing out of the blue, some kind of interrupt (IRQ, event, ...) you didn't treat correctly, synchronization that isn't set up correctly, etc. One slogan that fits it all: "side-effects come and get you".
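To make that slogan concrete, here's a tiny (admittedly contrived) example of a side-effect sneaking into something that looks like a function:

```csharp
using System;

class SideEffects
{
    static int calls = 0;

    // Not a function in the mathematical sense: the answer depends
    // on hidden state, so the same input can produce different
    // outputs on different calls.
    static int AddImpure(int x)
    {
        calls++;            // side-effect: mutates shared state
        return x + calls;   // output depends on how often we were called
    }

    static void Main()
    {
        Console.WriteLine(AddImpure(1));   // 2
        Console.WriteLine(AddImpure(1));   // 3 - same input, different output!
    }
}
```

Exactly this kind of shared mutable state is also what makes multi-threaded code so hard: two cores incrementing calls at once is a race condition waiting to happen.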

What about a better definition of a bug that fits the functional picture: if one observes incorrect output for a given input, it's a bug. It's up to the reader to figure out whom to blame, but since you simply state what you want (that's what a function is all about) - at least in a pure world - it will be very hard to ignore that you simply defined the wrong function or defined the function wrong. I agree, I'm glossing over quite a few details here, but you get the big picture. The only unfortunate thing about all of this: you might lose your comfortable rescue net of blaming the computer for your bugs :-).



Functional programming = love (riddle: replace the lhs, a TLA or three-letter acronym for left-hand side, with a TLA that caused a "TLA-hell" and you get an (old?) slogan of someone with a TLS or three-letter surname, who did co-invent a FLA or four-letter acronym technology - substitute all TL* occurrences). You might be unaware of it, but you've definitely seen applications of the theory already: LINQ is one of them, and the Parallel Extensions (I will blog about these in a separate series) are another encounter; with some goodwill you could promote parts of SQL (relational algebra) to the functional domain (lambda calculus) as well (I'm not going to defend nor defeat that statement).

In the next part of this series, we'll finally see some real C# code again, introducing functional programming step-by-step.



# re: Foundations of Functional Programming - Part 0

Thursday, January 17, 2008 1:43 AM by tibo

Great article. I'm all for diving into F# also :)

# re: Foundations of Functional Programming - Part 0

Friday, January 18, 2008 2:34 AM by BarfieldMV

The more programs I write, the more I think of programming as big foreach loops that simply do something for every line/node/item there is in the original data. The rest of the program is only there either to load/input or save that data. So programming is: getting the data that you want (input); searching through, editing and/or modifying the data (loop); saving the data (output). A bug is anything that gets the wrong output somehow.

# re: Foundations of Functional Programming - Part 0

Wednesday, January 23, 2008 2:20 PM by Eber Irigoyen

great introduction can't wait to see some code

# Foundations of Functional Programming - Part 1

Sunday, January 27, 2008 5:29 PM by B# .NET Blog

Welcome back to this blogging series on functional programming. After some scene-setting in the previous

# Continuing Adventures in F#

Tuesday, January 29, 2008 8:44 AM by Matthew Podwysocki's Blog

Update: Added more F# samples and the foundations of functional programming In a previous post , I've

# Adventures in F# - F# 101 Part 2

Thursday, February 21, 2008 4:52 PM by Matthew Podwysocki's Blog

I know it's been a little bit too long since I began this series from when I continued, but plenty of
