In this post I want to make this kind of simplicity more precise and talk about some reasons it’s important. I propose five key ideas for simple programming languages: ready-at-hand features, fast iteration cycles, a single way of doing things, first-order reasoning principles, and simple static type systems. I discuss each of these at length below.
Aside: simplicity in languages is interesting. I’d say most popular languages, from Rust and Haskell to Python and JavaScript, are not simple. Popular PL research topics, such as linear types and effect systems, are also not simple (I suppose all the simple concepts have already been done over and over).
Making a simple language which is also practical requires a careful selection of features: powerful enough to cover all of the language's possible use-cases, but not so powerful that they encourage over-engineered or unnecessarily clever (hard-to-understand) solutions (e.g. metaprogramming). The simplest languages tend to be DSLs with very specific use-cases, and the least simple ones tend to have so much complexity that people write simpler DSLs in them (see the sketch below). But then, many simple DSLs become complex in aggregate, both to implement and to learn…so once again, it's a balance: which features have the broadest use-cases while remaining easy to reason about?
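To make that concrete, here is a minimal sketch of what an embedded DSL might look like in Haskell: three constructors and an evaluator cover one narrow use-case, while the host language's full power stays out of sight. The `Expr` type and `eval` function are illustrative inventions, not from the original post.

```haskell
-- A tiny hypothetical embedded DSL for arithmetic: the whole language
-- is three constructors, so it is easy to reason about in isolation.
data Expr
  = Lit Int
  | Add Expr Expr
  | Mul Expr Expr
  deriving Show

-- The DSL's entire semantics in three equations.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

main :: IO ()
main = print (eval (Add (Lit 1) (Mul (Lit 2) (Lit 3))))  -- prints 7
```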
You mention that Haskell isn't simple. That may be so. However, all of Haskell compiles down to GHC Core, which is incredibly simple. It's similar with many languages: the PureScript back end is being rewritten in Chez Scheme, which is also incredibly simple.
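As a hedged illustration (not output from an actual compiler run), here is roughly how surface Haskell with pattern matching and guards desugars into Core, where everything becomes lambdas, applications, and case expressions. The Core rendering in the comment is simplified and approximate.

```haskell
-- Surface Haskell: pattern matching plus guards.
classify :: Int -> String
classify 0 = "zero"
classify n
  | n > 0     = "positive"
  | otherwise = "negative"

-- After desugaring, GHC Core expresses the same function using only a
-- handful of constructs (lambda, application, case, let). A simplified,
-- approximate rendering:
--
--   classify = \ (n :: Int) ->
--     case n of
--       0 -> "zero"
--       _ -> case > @Int $fOrdInt n 0 of
--              True  -> "positive"
--              False -> "negative"

main :: IO ()
main = putStrLn (classify 5)  -- prints "positive"
```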
My point is, if you stack enough simple layers on simple layers, things get very complex.
That’s sort of obvious and seems to kind of miss the point of a programming language. A language is an abstraction over the capabilities of a (possibly virtual) machine. The machine itself can generally only do relatively simple things; but writing assembly code is usually more difficult than writing the same functionality in a higher level language, because individual machine instructions are such a small building block for designing higher-level behaviors. So it’s hardly surprising that simple layers stacked on each other result in complexity. The point of the article (and of language design in general) is about how to balance expressive power versus simplicity of language concepts.
I disagree. Assembly languages for modern architectures are a complexity hell. You need books with thousands of pages to explain how they work. In comparison, the lambda calculus is much simpler.
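For a sense of scale: the entire grammar of the untyped lambda calculus fits in three constructors, written here as a standard Haskell datatype (the constructor names are conventional choices, not from the comment above).

```haskell
-- The whole untyped lambda calculus: variables, abstraction, application.
data Term
  = Var String        -- x
  | Lam String Term   -- \x. body
  | App Term Term     -- f x
  deriving Show

-- Example term: the identity function applied to itself, (\x. x) (\x. x).
example :: Term
example = App (Lam "x" (Var "x")) (Lam "x" (Var "x"))

main :: IO ()
main = print example
```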
I should have said “relatively simple”, not “very simple”.
That really hasn’t been true for at least two decades. And nowadays assembly code is no more than another abstraction layer, as microcode in the processor becomes increasingly complex. It’s as out-of-date an idea as the idea that C code is ‘close to the metal’.
I should have said “relatively simple”, not “very simple”. Yes, modern assembly instructions can often be relatively complex (though not on all architectures). But the point is that every abstraction layer presents a simpler API compared to what’s below, but must be implemented in terms of complex combinations of the fundamentally simple units of functionality in the layer below it. This is true of assembly, yes, but that doesn’t make it less true of higher level languages.
Haskell is simple in some ways and complicated in others.
It doesn’t have optional or named parameters. There are no objects or methods. No constructors. It doesn’t distinguish syntactically between procedures and functions. There are no for loops or while loops. `&&` and `||` aren’t treated specially. It doesn’t even have functions with more than one argument. Every function takes one argument and returns one result.
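That last point is currying, and a short example makes it concrete (the function names here are illustrative, not from the comment).

```haskell
-- Every Haskell function takes exactly one argument; a "two-argument"
-- function is really a function that returns another function.
add :: Int -> Int -> Int   -- i.e. Int -> (Int -> Int)
add x y = x + y

-- Partially applying `add` to one argument yields a new one-argument function.
addTwo :: Int -> Int
addTwo = add 2

main :: IO ()
main = print (addTwo 40)   -- prints 42
```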