Having had some experience with the actor model and Akka.NET, I had been reading up on, and watching the odd video segment about, the Pony programming language, which is currently under active development. Pony is described as
"an open-source, object-oriented, actor-model, capabilities-secure, high performance programming language."
Most of that description will be familiar to mainstream developers. "Actor-model" means it supports the actor model, which many mainstream developers will not be familiar with. "Capabilities-secure" is something a bit more mysterious, which I will get to in a minute. "High performance" in this context means performance comparable to C/C++: Pony compiles ahead of time to native code, as contrasted with languages that target the .NET Common Language Runtime (CLR) or the Java Virtual Machine (JVM).
The actor model can be implemented in a number of programming languages, via frameworks. So, for example, on the JVM and .NET there is the Akka toolkit, which can be consumed by JVM- or .NET-supported languages, e.g., Java, Scala, C#, F#, etc. And you can create an actor framework in C++ if you want.
But, in the same way that you can create an object-oriented framework in C while it's far easier to use a language that supports the concepts natively, e.g., C++, Pony is a language that supports actors natively. In that way, it is conceptually similar to Erlang, the difference being that the latter is a dynamically typed functional language while Pony is a statically typed, native-code OO language that supports a functional style if desired.
As a simple illustration of supporting actors natively, Pony has both class and actor types as language elements. Pony syntax looks like a cross between Eiffel and Scala, but closer to the former. As such, Pony code is very readable but the language itself is conceptually quite tough. (See "capabilities security," aka "reference capabilities," below.)
What is Pony trying to achieve? Well, firstly, the appeal of the actor model is that it provides a higher-level abstraction over concurrent and parallel computation and is the increasingly preferred approach for programming distributed applications in a multi-core world. So far, so good. Why not use something like Akka on the JVM? Pony originated in the financial sector, where the developers were working on trading applications. These typically require both high performance and low latency. While the likes of Akka are used in that sector, concurrent C++ is still used too: the stronger the real-time requirements, the more something like C++ becomes necessary, because the JVM relies on garbage collection, and GC pauses can be a hindrance or even unacceptable in such systems. It would be nicer and less error-prone to use an actor-model framework in C++, but no mature ones are currently available.
The main advantage of the actor model compared to traditional concurrency approaches is its avoidance of deadlocks and (easier) avoidance of race conditions.
But Pony wanted to do better than this. It definitely wanted memory management but wanted to improve on traditional GC, so as to avoid the latency issue. It makes GC fine-grained, per actor, so collection is not a stop-the-world affair.
It also wanted to banish deadlocks and data races completely, and to ensure this at compile time. This is what "capabilities-secure" is all about: it guarantees the absence of deadlocks and data races via some subtle extensions to the type system. This is the main innovation of Pony.
As such, Pony makes a number of bold claims.
Here are a few...
- It's type safe. Really type safe. There's a mathematical proof and everything.
- It's memory safe. Ok, this comes with type safe, but it's still interesting. There are no dangling pointers, no buffer overruns, heck, the language doesn't even have the concept of null!
- It's exception safe. There are no runtime exceptions. All exceptions have defined semantics, and they are always handled.
- It's data-race free. Pony doesn't have locks or atomic operations or anything like that. Instead, the type system ensures at compile time that your concurrent program can never have data races. So you can write highly concurrent code and never get it wrong.
- It's deadlock free. This one is easy, because Pony has no locks at all! So they definitely don't deadlock, because they don't exist.
The capabilities-security is by far the hardest feature of Pony for newcomers to grasp. It's rather like the difficulty of transitioning from procedural to object-oriented code, or from OO to functional. The Rust programming language, which I've barely looked at, has some similarly difficult concepts, partially addressing the same issues, I think.
Pony is still at a very early stage of development, but there is a very readable tutorial. It is also usable fairly easily via Docker; in fact, Pony was my initial motivation for installing Docker a while back. There is also a Visual Studio Code extension for basic syntax highlighting; it's not completely up to date, but it's better than nothing.
I don't know how far away from 1.0 Pony is at the moment but it's something to keep an eye on. It has some interesting ideas that I'm sure will gain some traction either with Pony or via adoption in other languages.
From the Go FAQ:
"Go is an attempt to combine the ease of programming of an interpreted, dynamically typed language with the efficiency and safety of a statically typed, compiled language."
Other goals were fast compilation and easy (or easier) concurrency for a distributed applications world. It was positioned originally as a possible alternative to C and C++, at least for certain tasks, but in practice it has been picked up more by the dynamic languages crowd. So it has turned out to be an extra string to the bow for Python developers who want more performance and scalability combined with concurrency.
Go shuns object orientation and generics, although the former is not quite true: it has objects but no formal inheritance. The modern philosophy is to favour composition over inheritance anyway, keeping any inheritance hierarchy shallow. The Go team say they are open to adding generics at a later stage.
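To make the composition point concrete, here is a minimal sketch (the `Logger` and `Server` types are my own invented illustration, not from any particular library) of Go's struct embedding, which forwards methods without any inheritance:

```go
package main

import "fmt"

// Logger provides behaviour that other types can reuse.
type Logger struct {
	prefix string
}

func (l Logger) Log(msg string) {
	fmt.Println(l.prefix + msg)
}

// Server gains Log via embedding: composition, not inheritance.
// There is no "is-a" relationship; Logger's methods are promoted.
type Server struct {
	Logger // embedded field
	addr   string
}

func main() {
	s := Server{Logger: Logger{prefix: "[srv] "}, addr: ":8080"}
	s.Log("listening on " + s.addr) // prints "[srv] listening on :8080"
}
```

The embedded `Logger` is still an ordinary field, so there is no fragile base class: `Server` can be given its own `Log` method at any time without breaking callers.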
Go does not have exception handling (although there is a program-terminating "panic," intended for when the application really can't proceed).
The normal way of handling errors is via return values. This is achieved quite conveniently via Go's multiple return values feature.
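As a sketch of that convention (the `divide` function is my own toy example), a Go function returns its result alongside an `error` value, and the caller checks the error explicitly:

```go
package main

import (
	"errors"
	"fmt"
)

// divide returns a result and an error; there is no exception to catch.
func divide(a, b float64) (float64, error) {
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

func main() {
	q, err := divide(10, 4)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("quotient:", q) // prints "quotient: 2.5"
}
```

The `if err != nil` check is verbose but makes every failure path visible at the call site, which is the trade-off Go deliberately makes.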
Go is way easier to learn than Pony. This is not a slight against Pony; its goals are different. Go's approach to concurrency is similar to Pony's: it is based on message passing, as in the actor model, but is less formalised. You can, however, build an actor model on top of it; in fact, there is at least one such framework in development as I write.
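A minimal sketch of that less formalised message passing (my own example, not from any framework): a goroutine that owns its state and is driven only by messages on a channel behaves much like an actor with a mailbox.

```go
package main

import "fmt"

// counter acts like a tiny actor: it owns total exclusively and
// changes it only in response to messages received on inc.
func counter(inc <-chan int, done chan<- int) {
	total := 0
	for n := range inc { // process one message at a time
		total += n
	}
	done <- total // report final state once the "mailbox" closes
}

func main() {
	inc := make(chan int)
	done := make(chan int)
	go counter(inc, done)

	for i := 1; i <= 5; i++ {
		inc <- i // send messages; no shared memory, no locks
	}
	close(inc)
	fmt.Println(<-done) // prints 15
}
```

Unlike Pony, nothing stops another goroutine from also holding a pointer into that state; the discipline here is by convention ("share memory by communicating"), not enforced by the type system.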
However, it is still possible to create deadlocks in Go, unlike in Pony. But Go is able to detect deadlocks at runtime and terminate the program, explaining why.
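To illustrate (a toy example of my own): a receive on an unbuffered channel blocks until some goroutine sends, and if no goroutine ever can, the Go runtime aborts with "fatal error: all goroutines are asleep - deadlock!" rather than hanging silently.

```go
package main

import "fmt"

func main() {
	ch := make(chan string) // unbuffered: send and receive must rendezvous

	// Delete this goroutine and the receive below blocks every goroutine,
	// at which point the runtime detects the deadlock and terminates the
	// program with a fatal error and a stack trace of each goroutine.
	go func() {
		ch <- "hello"
	}()

	fmt.Println(<-ch) // prints "hello"
}
```

Note the detector only fires when *all* goroutines are blocked; a partial deadlock among some goroutines while others keep running goes unnoticed, which is exactly the class of bug Pony rules out statically.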
Go uses allegedly very efficient garbage collection but it is not as fine-grained as Pony's per-actor GC.
Unlike with the other two languages, I've yet to even dabble in Rust, though it keeps popping up in the tech press. Rust seems on the surface to occupy the same space as Go. It has some of the same concerns, e.g., safe, concurrent programming. But Rust is aimed much more squarely at systems programming, manages memory at compile time via its ownership and borrowing system rather than a garbage collector, and appears to be a worthy alternative to systems-oriented C/C++. It appears to have something similar to Pony's reference capabilities, specifically its idea of reference ownership. But, at the time of writing, I know little about it.
Summing up: conceptually, Go and Rust intersect in some areas, but Rust has a different rationale; Rust and Pony intersect in some areas, but Pony has a different rationale. E.g., Rust and Pony both aim at eliminating data races via safe referencing. Go isn't quite as thoroughgoing in this respect, although it does make data races easier to tame than traditional approaches do. But Go is aimed at fast compile times and simplicity; Rust and Pony aren't. All three compile to native code and are comparable to C/C++ in raw performance.