
Programming for a culture approaching singularity


In 2001, Ray Kurzweil published his inspiring, controversial essay The Law of Accelerating Returns. I acknowledge that the essay sometimes seems far-fetched, and perhaps some details are overly (or underly) idealistic, but in essence I find it very convincing. I will take it as an assumption for this article, and I also assume my readers are familiar with the gist of the essay.

I read Kurzweil’s essay about six weeks ago, and it has set off a chain reaction in my consciousness, causing me to reconsider my perceptions of technology. At first, it seemed to murder my denotational idealism and destroy my hopes for Dana (my purely functional operating system). It filled me with an excited fear, as if I were being carried down the rapids of a great river, trying not to hit the next rock.

Now the ideas have settled and my idealism is born again with a new perspective. The spirit of Dana still has a place, and I will describe what I think that is.

We are at a unique inflection point in the process of technological acceleration. In the last 50-100 years, orders of magnitude of progress began to occur in a single human lifetime. Technology is folding upon itself, each new invention making use of more and more recent knowledge and tools.

Dana was originally envisaged to facilitate this growth, starting from the observation that the vast majority of software out there is not compositional, so we keep re-inventing wheels. Dana’s goal was to provide a framework in which everything must be compositional. But my early execution of the idea was misguided. It may have had a noble goal, but I was pretending that I was at the beginning of an acceleration of software, not the middle. Dana’s implementation did not make use of modern technology, and thus its development was an order of magnitude or two slower than the rate of modern software’s development. Further, its despotic purism would never catch on: it was trying to change the direction of an avalanche by throwing a snowball.

A truly modern compositional environment must embrace the flow of knowledge. Those who try to reverse the paradigm will end up reinventing all the technology that the mainstream is already using, and by the time they are finished the mainstream will be lightyears ahead. The state of the art may be ugly, but it exists, which is more than the purists can say. One person — one team — cannot develop software anymore; to do something new, we must employ the labor of the entire world.

Currently, the most constraining property of software is that for it to be reusable, it must be designed for reuse. It is hard to reuse libraries in a way for which they were not intended. And in fact most of the coding that goes on is not in libraries at all — people aren’t intending their code to be reused by anyone. They just want to get their product out the door and get some cash flowing in. Below is a reified mental picture of this.

[Figure: The state of progress in reusable software]

The bulb on the bottom is the software designed for reuse. It grows slowly and uniformly, mostly occurring in the open-source world, created by people who like to write good software. The rays at the top are individual, result-oriented projects. This code is not typically designed to be reused. These projects grow quickly in a single direction, but if someone else wants to go in a similar direction, they have to start at the bottom. This is an economic phenomenon, and the economic goals of developers are not likely to change.

To accelerate progress, we must find a way to re-use those chutes, the goal being to transform the picture into something more like this:

In this picture, each results-oriented chute carries with it a net of all the support code that was written for it, upon which anyone can build. Of course, this happens only with the consent of the creator (people will find a way to protect their information), but if the code is useful, people will buy it.

I have thus become a supporter of Donald Knuth’s mantra:

I also must confess to a strong bias against the fashion for reusable code. To me, “re-editable code” is much, much better than an untouchable black box or toolkit.

Forking a project in order to use one of its features in another project is no simple task. Depending on the complexity of the feature, you might gain some time by doing things this way, but many competent software engineers prefer to simply rewrite from scratch. One reason is to understand the code, but another is simply economics. Saying that I rewrite features because it is fun is missing half of the equation. I rewrite features because it is more fun than spending a week fighting integration issues (very much not fun). I claim that to support this new picture, our primary goal should be to vastly reduce integration issues.

Here is where my purist ideals kick back in, having found a new purpose. Pure functions have no integration issues, at least semantically speaking (performance is always a bit more subtle). A pure function makes explicit all of its inputs and outputs, so to use it, you can just rip it out of its context, provide the inputs, and transform the outputs. Depending on the situation that may be a lot of work, but compare it to finding all the places a singleton was modified in order to recreate its state at some point in the execution of the program. What if the singleton returned a reference to an object that was stored away and modified by an obscure corner of the program, upon which your feature later depended? This sounds like a terribly inflexible design, but most software is terribly designed, and we still want to reuse it.
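To make the contrast concrete, here is a tiny Haskell sketch; the discount example and all of its names are invented purely for illustration. The pure version declares everything it needs in its type, while the singleton version depends on a hidden global whose history you would have to reconstruct before reusing it.

    import Data.IORef
    import System.IO.Unsafe (unsafePerformIO)

    -- Pure: every input and output is visible in the type, so the function
    -- can be lifted out of its context by just supplying the arguments.
    applyDiscount :: Double -> Double -> Double
    applyDiscount rate price = price * (1 - rate)

    -- Singleton style: a hidden mutable global that any corner of the
    -- program may modify before this code runs.
    {-# NOINLINE discountRate #-}
    discountRate :: IORef Double
    discountRate = unsafePerformIO (newIORef 0.10)

    -- Reusing this means recreating the state of discountRate at the right
    -- moment in the original program's execution.
    applyDiscount' :: Double -> IO Double
    applyDiscount' price = do
      rate <- readIORef discountRate
      return (price * (1 - rate))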

However, just saying “use pure functions, voilà!” is no solution. The trend of the world is gradually shifting toward more functional idioms, but asking everyday programmers (whom we care about, because they are doing our work for us) to switch to purity is asking a lot. Retraining brains is too hard.

So that is my open research question. How do we introduce the immense advantage of purity from this perspective into programming pop culture? Perhaps it is a new virtual machine that does dynamic dataflow analysis. Perhaps it is a static analysis tool. Perhaps it is a new language, which can interface with many popular existing languages. This would be slow to take off because it requires migration (and thus risks never taking off at all), but because of interop and the good reuse properties (with tool support) it might be faster to achieve critical mass. Clojure and Scala are doing kind of okay.
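As a small illustration of what “interface with existing languages while tracking purity” might look like, Haskell’s existing foreign function interface already records the assumption at the boundary; rand and sqrt below are just stand-ins for whatever mainstream code one wants to reuse.

    {-# LANGUAGE ForeignFunctionInterface #-}

    import Foreign.C.Types (CDouble (..), CInt (..))

    -- rand() mutates hidden state, so the import is honest about it: IO.
    foreign import ccall "stdlib.h rand"
      c_rand :: IO CInt

    -- sqrt() really is a function of its argument, so it can be imported
    -- as a pure, context-free building block.
    foreign import ccall "math.h sqrt"
      c_sqrt :: CDouble -> CDouble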

Once you have a way to track assumptions, the rest follows easily. It would be possible to assemble a database of every function ever written, with the ability to just drag and drop it into your program (ahem, Haskellers, why don’t we have this?). The natural version control system, another step in the direction of the DVCSes, tracks changes between individual objects, and how those changes induce changes in the objects that reference them (this is the part I have thought through the most, but I will forgo describing it in full, for it is a small detail and this post is already very long).
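Even without the full design, here is a minimal sketch of the dependency-tracking part; all of the names and types are placeholders of mine. Each definition is stored under a stable identifier along with the identifiers it references, so editing one object tells you exactly which dependents need an induced change.

    import           Data.Map (Map)
    import qualified Data.Map as Map

    type Id = String          -- in practice, a hash of the definition

    data Object = Object
      { source     :: String  -- the code of this definition
      , references :: [Id]    -- the objects it depends on
      }

    type Store = Map Id Object

    -- The objects whose revisions are directly induced by editing the
    -- given object.
    directDependents :: Store -> Id -> [Id]
    directDependents store changed =
      [ i | (i, obj) <- Map.toList store, changed `elem` references obj ]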

A singularity-aware culture should keep its eye out for ways to bring about the next inevitable order of magnitude of progress. I believe the elimination of integration issues is one such way, and Haskell, for the most part, has already eliminated them. The first step is to bring this property to the popular languages in any way we can (this includes making Haskell a popular language… LOL).

Did you enjoy reading this article? Let me know! :-)


