
Functional Programming


Backus's Idea of Functional Programming. In my earlier post about John Backus, I promised to write something about his later work on functional programming languages. While I was in a doctor's office getting treated for an awful cough, I re-read his 1977 Turing Award lecture. Even 30 years later, it remains a fascinating read; far from being dated, it's positively astonishing to see both how far-sighted Backus was and how little progress we've actually made. Backus started from a pretty solid perspective: almost all of the common programming languages that we see – ranging from Backus's own 1950s version of Fortran, to the most modern languages in wide use today (Java/C/C++), to the various languages that have been proposed as alternatives to those (Modula-3, Oberon, Ada), even to most of the supposedly functional programming languages (Lisp, ML) – are ultimately based quite strongly on the von Neumann model of computing, with its division between a processing unit and a store that the processor reads and writes one word at a time. That division stinks for a lot of reasons. Backus's alternative, the function-level language FP from his lecture, instead builds programs by composing functions; his inner-product example reads:

    Def InnerProduct ≡ (/+)∘(α×)∘Transpose
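
Read right to left: Transpose turns a pair of vectors into a list of pairs, α× multiplies within each pair, and /+ folds the results with addition. A minimal Haskell sketch of the same pipeline (my rendering, not from the post):

    import Data.List (transpose)

    -- Backus-style inner product: transpose the two rows into pairs,
    -- multiply within each pair, then fold the products with (+).
    innerProduct :: Num a => [[a]] -> a
    innerProduct = sum . map product . transpose

    -- innerProduct [[1,2,3],[4,5,6]] == 1*4 + 2*5 + 3*6 == 32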

Teaching Functional Programming. Why functional programming? The canonical answer to that question is probably "Why Functional Programming Matters", but here's a specific example that makes the case nicely. Neil Mitchell is working on Supero, an optimizing compiler for Haskell that includes some ideas from supercompilation. But that's not important right now. What is important is the technique Mitchell uses in the blog post at the second link above.

Algebra. Functional programming is based on defining functions:

    def square(i: Int) = i * i

In pure functional programming, where side effects are not allowed (or at least kept securely behind bars), you can treat that definition as an equation: wherever you see square(foo), you can rewrite it as (foo * foo), and vice versa.
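
That rewriting guarantee is easy to check in a pure language. A minimal Haskell sketch of the same point (the definitions and names here are mine, for illustration):

    -- A pure function's definition is an equation we can rewrite with freely.
    square :: Int -> Int
    square i = i * i

    lhs, rhs :: Int
    lhs = square 7   -- 49
    rhs = 7 * 7      -- 49: substituting the body for the call changes nothing

    main :: IO ()
    main = print (lhs == rhs)   -- True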

The point is that you can't do that with confidence in impure (i.e., side-effecting) languages:

    n = square(++i);

…which clearly is not the same as…

    n = ++i * ++i;

…at least in most cases. If you even had to think about it for one millisecond (or, like me, thought "That's ugly!"), the point is made.
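
Haskell makes that distinction visible in the types: an effectful "increment and read" is an action, not a value, so the invalid rewrite above cannot happen by accident. A small sketch of the difference (the code and names are mine, not from the post):

    import Data.IORef

    main :: IO ()
    main = do
      r <- newIORef (0 :: Int)
      -- bump plays the role of ++i: increment the counter, then read it.
      let bump = modifyIORef r (+ 1) >> readIORef r
      a <- bump
      let n1 = a * a    -- "square(++i)": one evaluation, squared: 1 * 1 = 1
      b <- bump
      c <- bump
      let n2 = b * c    -- "++i * ++i": the effect runs twice: 2 * 3 = 6
      print (n1, n2)    -- (1,6): the two programs really do differ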

Haskell and Scheme: Which One and Why? While I was waiting for stuff to install on my new machine, I was doing some browsing around the web, and came across an interesting article at a blog called "The Only Winning Move", titled "Scheme Death Knell?" It's not a bad article at all, but I disagree with its conclusions, and I thought it would be interesting to discuss why. The article was brought on by *another* blog post, this one at Lambda the Ultimate, suggesting that someone translate Douglas Hofstadter's columns introducing Scheme so that they use Haskell instead. Josh over at TOWM was slightly irritated by this idea, and concludes that the only reason people are suggesting it is that Scheme is getting too old to be cool and trendy, so they want to switch to something newer. I disagree. Syntax is in many ways a minor issue; still, I think that, syntactically, Haskell is a better language for beginners.

Functional Reactive Programming in F# (1). Hello. This message is the first of a series describing my exploration of Functional Reactive Programming (FRP).

Although I already had quite a bit of experience with imperative reactive programming, I discovered FRP while reading Paul Hudak's book "The Haskell School of Expression". Articles about FRP have also been published by the Yale Haskell Group. Most of the content of this message and the following ones is largely inspired by those sources. I started my own implementation of an FRP library with two goals in mind: to gain a better understanding of this very peculiar domain.

The obvious representation of a behavior is a function of time:

    #light
    type Time = float
    type 'a Behavior = Time -> 'a

Although this choice is correct, we will instead use the following one:

    type 'a Behavior = Beh of (Time -> 'a)

With this simple type, it is already easy to define some behaviors. To ease their creation, we can write some useful combinators, such as constB, which transforms a constant into a Behavior (a sketch follows below).
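
The excerpt stops before constB's definition. Since these F# types mirror the Haskell formulation in Hudak's "The Haskell School of Expression", here is a minimal Haskell sketch of what such a combinator looks like (the rendering, and the sampling helper at, are mine):

    type Time = Double

    -- A behavior is a time-varying value, wrapped in a constructor as above.
    newtype Behavior a = Beh (Time -> a)

    -- constB: lift a constant into a Behavior that ignores the sample time.
    constB :: a -> Behavior a
    constB v = Beh (\_ -> v)

    -- Sample a behavior at a given time.
    at :: Behavior a -> Time -> a
    at (Beh f) t = f t

    -- at (constB 5) 42.0 == 5, at every time.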

Functional reactive programming in C# for WPF. I've been working on a library that wraps WPF dependency properties with signals, so that data binding is easier to do in WPF when using C#. Dependency properties by themselves are like signals (time-varying values), but they don't support direct transformation (i.e., map) or composition the way signals do. Instead, you have to define verbose value converters and jump through a lot of hoops. To solve this problem, the WPF signal library creates a signal-like wrapper around dependency properties so that they support direct transformation and composition; e.g., we can directly express relationships like

    rectangle.top().bind = canvas.height() - rectangle.height()

meaning that the rectangle is stuck to the bottom of the canvas, even as the size of the canvas or the rectangle changes. This is currently a source-code-only release.

Functional Programming For The Rest of Us.

Memoizing recursive functions via the fixed-point Y combinator. Recursion as fixed points: students of algebra are already familiar with recursion and fixed points. They just don't realize it. Consider an equation like x = x² − 2. If asked to solve for the value of x, a student might rearrange the equation and apply the quadratic formula. A fixed point of a function f is an input that is equal to its output; that is, x is a fixed point of f if x = f(x).

Define the function f such that f(v) = v² − 2. Then −1 and 2 are both fixed points of f: f(−1) = (−1)² − 2 = 1 − 2 = −1, and f(2) = (2)² − 2 = 4 − 2 = 2. You can find them by rearranging x = x² − 2 into x² − x − 2 = 0, which factors as (x − 2)(x + 1) = 0, or by graphing y = x and y = f(x) and looking for the intersections. These are exactly the solutions to x = x² − 2 given by Wolfram Alpha. The insight that powers the upcoming technique is the observation that any time we have a recursive definition of the form "x = f(x)", the meaning of x is going to be defined in terms of fixed points.

What we need, then, is a trick for computing the fixed point of a functional, so that a recursive function can be recovered from a non-recursive definition. The Y combinator is that trick. The Y combinator, in theory, works like this: find a functional whose fixed point is the recursive function we seek.
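
To make that concrete, here is a minimal Haskell sketch (my example, not the article's code) of a functional whose fixed point is factorial, together with a simple fixed-point combinator:

    -- A fixed-point combinator: fix f is a value x satisfying x = f x.
    fix :: (a -> a) -> a
    fix f = f (fix f)

    -- A non-recursive functional: given any approximation of factorial,
    -- it produces a better one. Factorial itself is its fixed point.
    factF :: (Integer -> Integer) -> (Integer -> Integer)
    factF rec n = if n == 0 then 1 else n * rec (n - 1)

    factorial :: Integer -> Integer
    factorial = fix factF

    -- factorial 5 == 120

Because every recursive call is routed through the combinator, a memoizing variant of fix can intercept and cache those calls, which is the technique the article goes on to develop.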

Functional Data Structures Out Of Nowhere. I've been watching the Structure and Interpretation of Computer Programs videos recently while riding the Caltrain, and I've enjoyed them quite a bit. For some reason I can't quite grasp, I find these fun. Maybe it's the fact that they are 20 years old now and still terribly relevant (especially for functional programming), maybe it's the very-eighties look of the audience, or maybe it's the obvious delight Hal Abelson and Gerald Jay Sussman take in teaching. Anyway, pretty interesting stuff. One of the things they emphasize a lot during their lessons is how the line between data structures and functions blurs when you program extensively in a functional style. The JavaScript accessor-overriding technique in my last post was a pretty good example of it, but there's a much better one in the video: constructing a Pair data structure out of pure nothingness, with implementations in JavaScript, Ruby, and Arc. It's a very good illustration of how to introduce data structures in any functional language (a sketch follows below).
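
The construction fits in a few lines. A minimal Haskell sketch of the same idea (my rendering; SICP presents it in Scheme, and the post shows JavaScript, Ruby, and Arc versions): a pair is just a closure that remembers its two arguments, and the accessors merely ask for them back:

    -- cons builds a "pair" out of nothing but a closure: the result,
    -- given a selector function, applies it to the two stored values.
    cons :: a -> b -> ((a -> b -> c) -> c)
    cons x y = \selector -> selector x y

    -- car recovers the first element by passing a selector that keeps it.
    car :: ((a -> b -> a) -> a) -> a
    car p = p (\x _ -> x)

    -- cdr recovers the second element the same way.
    cdr :: ((a -> b -> b) -> b) -> b
    cdr p = p (\_ y -> y)

    -- car (cons 1 2) == 1 and cdr (cons 1 2) == 2:
    -- a working data structure built from closures alone.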